
Implementing a Hand-Rolled Neural Network in C with Dynamic Arrays (Weight-Matrix Visualization) 230902

This article walks through a hand-rolled neural network implemented in C with dynamically allocated arrays, including a console visualization of the weight matrices. If anything here is wrong or incomplete, corrections are welcome.

變量即內(nèi)存、指針使用的架構(gòu)原理:

1、用結(jié)構(gòu)struct記錄 網(wǎng)絡(luò)架構(gòu),如 float*** ws 為權(quán)重矩陣的指針(指針地址);

2、用 = (float*)malloc (Num * sizeof(float)) 給 具體變量分配內(nèi)存;

3、用 = (float**)malloc( Num* sizeof(float*) ) 給 指向 具體變量(一維數(shù)組)的指針…… 給分配 存放指針的變量……
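A minimal sketch of the three allocation levels for a single weight matrix (the 2x3 shape and variable names here are illustrative assumptions, not taken from the program below):

// Minimal sketch: the three allocation levels for ONE 2x3 weight matrix
float** w = (float**)malloc(2 * sizeof(float*));   // level 2: the array of row pointers
for (int j = 0; j < 2; ++j) {
    w[j] = (float*)malloc(3 * sizeof(float));      // level 1: the actual float values
    for (int k = 0; k < 3; ++k) w[j][k] = 0.0f;    // using the memory
}
// Level 3 wraps one such matrix per layer transition:
// float*** ws = (float***)malloc((num_layers - 1) * sizeof(float**));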

...see the full program below.

// test22 dynamic arrays 22, multidimensional arrays 23, 3-D random numbers 230101.cpp : This file contains the "main" function. Program execution begins and ends there.

#include <iostream>
#include <cstdlib>   // malloc, free, rand, srand, RAND_MAX
#include <ctime>     // time, to seed rand
using namespace std;

typedef struct {
    float*** ws;
    int num1;
    float** layer_outputs;

}NeuralN;

//Initialize the neural network's weights (the weight matrices, etc.)
NeuralN init(int* t01, int num02) {
    NeuralN nn;
    nn.num1 = num02;

    nn.ws = (float***)malloc((num02 - 1) * sizeof(float**) );

    srand((unsigned)time(NULL));

    cout << " [num02:" << num02 << "]" << endl;

    for (int i = 0; i <(num02 - 1); ++i) {
        nn.ws[i] = (float**)malloc( t01[i] * sizeof(float*) );  //allocate memory for the row pointers
        for (int j = 0; j < t01[i]; ++j) {
            nn.ws[i][j] = (float*)malloc( t01[i + 1] * sizeof(float) ); //allocate memory for the actual values
            for (int k = 0; k < t01[i + 1]; k++) {
                //The next line uses the variable, i.e. uses its memory!
                nn.ws[i][j][k] = (float)rand() / RAND_MAX;
            }//for330k
        }//for220j

    }//for110i

    return nn;

}//init

int main()
{
    int t001[] = { 2, 8, 7, 6, 1, 2, 1 };

//#define Num4    4
    //Count out the length of the array with a range-based for loop
    //(the idiomatic alternative is sizeof(t001) / sizeof(t001[0]))
    int Len_t001 = 0; for (int ii : t001) { ++Len_t001; }

    int Numm = Len_t001;
    cout << "Numm:"<<Numm << endl;

    NeuralN nn = init(t001, Numm);

    //Print the 3-D tensor (the contents of the 3-D array)
    for (int i = 0; i < Numm - 1; ++i) {
//        nn.layer_outputs[i + 1] = (float*)malloc(t001[i + 1] * sizeof(float));
        printf("_{ i%d_", i);
        for (int j = 0; j < t001[i + 1]; ++j) {
//            nn.layer_outputs[i + 1][j] = 0;
            printf("[j%d", j);
            for (int k = 0; k < t001[i]; ++k) {

                printf("(k%d(%.1f,", k, nn.ws[i][k][j]);
            }//
            printf("_} \n");

        }//for220j
        printf("\n");
    }//for110i

    std::cout << "Hello World!\n";
}//main
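The program above allocates its weight tensor but never frees it. A minimal teardown sketch (free_nn is a hypothetical helper, not in the original; it mirrors init's loop structure, freeing in the reverse order of allocation):

//Sketch of the matching teardown for init() (assumed helper, not in the original)
void free_nn(NeuralN* nn, int* t01) {
    for (int i = 0; i < nn->num1 - 1; ++i) {
        for (int j = 0; j < t01[i]; ++j) {
            free(nn->ws[i][j]);   //the float values of one weight row
        }
        free(nn->ws[i]);          //layer i's array of row pointers
    }
    free(nn->ws);                 //the per-layer pointer array
}

Calling free_nn(&nn, t001); just before main returns would release everything init() allocated.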

Second version (231001)

#include <stdio.h>
#include <stdlib.h>   // malloc, free, rand, srand
#include <windows.h>  // SetConsoleTextAttribute, for colored console output
#include <math.h>
#include <time.h>

#define LEARNING_RATE  0.05

// Sigmoid and its derivative
float sigmoid(float x) { return 1 / (1 + exp(-x));}

// Derivative of the sigmoid: for sig = sigmoid(x), d/dx sigmoid(x) = sig * (1 - sig)
float sigmoid_derivative(float x) {
    float sig = 1.0f / (exp(-x) + 1);   // same value as sigmoid(x)
    return sig * (1 - sig);
}
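One subtlety worth flagging: backpropagate() below calls sigmoid_derivative on layer_outputs values that have already been passed through sigmoid, so the factor actually computed is sigmoid'(sigmoid(x)) rather than the exact sigmoid'(x) = y * (1 - y). Because that factor is still positive and small, gradient descent still converges on XOR (as the sample output at the end shows), just not with the textbook-exact gradient; the exact version would simply return x * (1 - x) when handed a post-activation value.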

typedef struct {
    float*** weights;
    int num_layers;
    int* layer_sizes;
    float** layer_outputs;
    float** deltas;
} NeuralNetwork;

NeuralNetwork initialize_nn(int* topology, int num_layers) {
    NeuralNetwork nn;
    nn.num_layers = num_layers;
    nn.layer_sizes = topology;

    // Allocate memory for weights, layer outputs, and deltas
    nn.weights = (float***)malloc((num_layers - 1) * sizeof(float**));
    nn.layer_outputs = (float**)malloc(num_layers * sizeof(float*));
    nn.deltas = (float**)malloc((num_layers - 1) * sizeof(float*));

    srand(time(NULL));
    for (int i = 0; i < num_layers - 1; i++) {
        nn.weights[i] = (float**)malloc(topology[i] * sizeof(float*));
        nn.deltas[i] = (float*)malloc(topology[i + 1] * sizeof(float));
        for (int j = 0; j < topology[i]; j++) {
            nn.weights[i][j] = (float*)malloc(topology[i + 1] * sizeof(float));
            for (int k = 0; k < topology[i + 1]; k++) {
                nn.weights[i][j][k] = ((float)rand() / RAND_MAX) * 2.0f - 1.0f;  // Random weights between -1 and 1
            }
        }//for220j
    }//for110i
    return nn;
}//NeuralNetwork initialize_nn

float* feedforward(NeuralNetwork* nn, float* input) {
    nn->layer_outputs[0] = input;
    for (int i = 0; i < nn->num_layers - 1; i++) {
        //NOTE: a fresh buffer is malloc'ed here on every call and never freed during training (see the cleanup sketch after this function)
        nn->layer_outputs[i + 1] = (float*)malloc(nn->layer_sizes[i + 1] * sizeof(float));
        for (int j = 0; j < nn->layer_sizes[i + 1]; j++) {
            nn->layer_outputs[i + 1][j] = 0;
            for (int k = 0; k < nn->layer_sizes[i]; k++) {
                nn->layer_outputs[i + 1][j] += nn->layer_outputs[i][k] * nn->weights[i][k][j];
            }//for330k
            nn->layer_outputs[i + 1][j] = sigmoid(nn->layer_outputs[i + 1][j]);
        }//for220j
    }//for110i
    return nn->layer_outputs[nn->num_layers - 1];
}//feedforward
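feedforward() mallocs a fresh buffer for every non-input layer on each call, and nothing frees those buffers during training, so a long run leaks memory. A cleanup sketch (free_layer_outputs is a hypothetical helper, not part of the original code):

//Sketch: release the per-call output buffers (assumed helper, not in the original);
//layer_outputs[0] points at the caller's input array, so it is skipped
void free_layer_outputs(NeuralNetwork* nn) {
    for (int i = 1; i < nn->num_layers; i++) {
        free(nn->layer_outputs[i]);
        nn->layer_outputs[i] = NULL;
    }
}

If such a helper were called after each sample, the free(output) in main() below would have to be dropped, since output aliases layer_outputs[num_layers - 1].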


void feedLoss(NeuralNetwork* nn, float* target) {

    //Display the weight matrices (positive weights in blue, negative in red):
    for (int i = 0; i < nn->num_layers - 1; i++) {
        nn->layer_outputs[i + 1] = (float*)malloc(nn->layer_sizes[i + 1] * sizeof(float));
        for (int j = 0; j < nn->layer_sizes[i + 1]; j++) {
            nn->layer_outputs[i + 1][j] = 0;   //the accumulation is commented out in this function, so this stays 0
            for (int k = 0; k < nn->layer_sizes[i]; k++) {
                if (0 < nn->weights[i][k][j]) {
                    SetConsoleTextAttribute(GetStdHandle(STD_OUTPUT_HANDLE), FOREGROUND_BLUE);
                }
                else {
                    SetConsoleTextAttribute(GetStdHandle(STD_OUTPUT_HANDLE), FOREGROUND_RED);
                }
                printf("(%.4f,", nn->weights[i][k][j]);
            }
            printf("] \n");
            //sigmoid(0) = 0.5, so the errors printed below are always +/-0.5;
            //this also clobbers the outputs that feedforward just computed
            nn->layer_outputs[i + 1][j] = sigmoid(nn->layer_outputs[i + 1][j]);
        }//for220j
        SetConsoleTextAttribute(GetStdHandle(STD_OUTPUT_HANDLE), FOREGROUND_RED | FOREGROUND_GREEN | FOREGROUND_BLUE);
        printf("};\n");
    }//for110i
    printf("_]};\n \n");
    //

    int Last01 = nn->num_layers - 1;
    // Calculate output layer deltas
    for (int i = 0; i < nn->layer_sizes[Last01]; ++i ) {
        float error = target[i] - nn->layer_outputs[Last01][i];
            printf("[i%d:%f]  ", i, error);
//        nn->deltas[Last01 - 1][i] = error * sigmoid_derivative(nn->layer_outputs[Last01][i]);
    }


}//feedLoss(NeuralNetwork* nn, float* target
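SetConsoleTextAttribute ties the coloring to windows.h. On ANSI-capable terminals the same blue/red scheme could be had with escape sequences instead (a sketch, not the author's code; print_weight_colored is a hypothetical helper):

//Sketch: ANSI-escape alternative to SetConsoleTextAttribute for one weight
void print_weight_colored(float w) {
    printf(w > 0 ? "\x1b[34m" : "\x1b[31m");  //blue for positive, red for negative
    printf("(%.4f,", w);
    printf("\x1b[0m");                        //reset to the default color
}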


void backpropagate(NeuralNetwork* nn, float* target) {
    int Last01 = nn->num_layers - 1;

    // Calculate output layer deltas
    for (int i = 0; i < nn->layer_sizes[Last01]; i++) {
        float error = target[i] - nn->layer_outputs[Last01][i];
        nn->deltas[Last01 - 1][i] = error * sigmoid_derivative(nn->layer_outputs[Last01][i]);
    }

    // Calculate hidden layer deltas
    for (int i = Last01 - 1; i > 0; i--) {
        for (int j = 0; j < nn->layer_sizes[i]; j++) {
            float sum = 0;
            for (int k = 0; k < nn->layer_sizes[i + 1]; k++) {
                sum += nn->weights[i][j][k] * nn->deltas[i][k];
            }
            nn->deltas[i - 1][j] = sum * sigmoid_derivative(nn->layer_outputs[i][j]);
        }
    }

    // Adjust weights
    for (int i = 0; i < Last01; i++) {
        for (int j = 0; j < nn->layer_sizes[i]; j++) {
            for (int k = 0; k < nn->layer_sizes[i + 1]; k++) {
                nn->weights[i][j][k] += LEARNING_RATE * nn->deltas[i][k] * nn->layer_outputs[i][j];
            }
        }
    }//
}//backpropagate(NeuralNetwork* nn, float* target
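In symbols, backpropagate() is the standard delta rule. Output layer: delta_i = (target_i - y_i) * sigmoid_derivative(y_i). Hidden layer: delta_j = sigmoid_derivative(o_j) * sum over k of (w[i][j][k] * delta_k), propagating the deltas backwards layer by layer. Weight update: w[i][j][k] += LEARNING_RATE * delta_k * o_j, so each weight moves in proportion to the downstream delta and the upstream activation. (As noted after sigmoid_derivative above, the derivative is applied to post-activation values, so the factor is approximate rather than textbook-exact.)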

void train(NeuralNetwork* nn, float inputs[][2], float* targets, int num_samples, int num_epochs) {
    bool whetherOutputLoss = false;
#define Num10000 100000   //sampling interval in epochs (the macro name is historical)
    for (int epoch = 0; epoch < num_epochs; epoch++) {
        if (0 == (epoch % Num10000)) { whetherOutputLoss = true; }
        for (int i = 0; i < num_samples; i++) {
            feedforward(nn, inputs[i]);
            //
            if (whetherOutputLoss) { feedLoss(nn, &targets[i]); } //display only when the sampling moment arrives
            //
            backpropagate(nn, &targets[i]);
        }//
        if (whetherOutputLoss) {
            printf("\n");
            whetherOutputLoss = false;
        }

    }//for110i
}//void train

int main() {
#define numLayer5   4   //must match the number of entries in topology[]
    int topology[] = { 2, 3, 2, 1 };   //earlier experiments also tried wider hidden layers (8..128) and 5 to 9 layers
    NeuralNetwork nn = initialize_nn(topology, numLayer5);

#define Num4 4
    float inputs[Num4][2] = { {1, 1}, {0, 0}, {1, 0}, {0, 1} };
    float targets[Num4] = { 0, 0, 1, 1 };

#define Num200000 200000   //number of training epochs
    train(&nn, inputs, targets, Num4, Num200000);

    float test_inputs[Num4][2] = { {0,0}, {1, 0}, {1, 1}, {0, 1} };
    for (int i = 0; i < Num4; i++) {
        float* output = feedforward(&nn, test_inputs[i]);
        printf("Output for [%f, %f]: %f\n", test_inputs[i][0], test_inputs[i][1], output[0]);
        free(output);   //frees only the last layer's buffer; the hidden-layer buffers remain leaked
    }

    // Free memory
    for (int i = 0; i < nn.num_layers - 1; i++) {
        for (int j = 0; j < nn.layer_sizes[i]; j++) {
            free(nn.weights[i][j]);
        }
        free(nn.weights[i]);
        free(nn.deltas[i]);
    }
    free(nn.weights);
    free(nn.deltas);
    free(nn.layer_outputs);   //frees the pointer array only; the per-layer buffers from feedforward/feedLoss are not tracked here

    return 0;
}//main
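Because of the windows.h console calls, this version builds as written only on Windows. Hypothetical build lines (assuming the file is saved as nn.cpp) would be cl /EHsc nn.cpp with MSVC, or g++ nn.cpp -o nn with MinGW.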

First version (230901)

The first version differs from the second version above only in its display interval: train() samples every 50,000 epochs (#define Num10000 50000) instead of every 100,000, plus a few leftover commented-out lines (for example a commented #include <stdlib.h>). All of the functions and main() are otherwise the same code.
Sample console output from training. At each sampling interval feedLoss prints every weight matrix (positive weights in blue, negative in red) followed by the output-layer error; the displayed error is always +/-0.5 because feedLoss zeroes the layer outputs before printing (sigmoid(0) = 0.5). Early in training the weights are still small:

(-0.1291,(0.7803,]
(-0.6326,(0.5078,]
};
(-0.1854,(-0.5262,(0.8464,]
(0.4913,(0.0774,(0.1000,]
};
(0.7582,(-0.7756,]
};
_]};
[i0:-0.500000]

By the last sampling interval the first-layer weights have grown to the order of +/-100:

(114.9971,(-113.4876,]
(-112.8603,(114.3747,]
(0.6990,(0.7116,]
};
(-31.6319,(-31.7725,(45.2379,]
(11.9645,(11.6226,(-25.5372,]
};
(22.2722,(-15.6809,]
};
_]};
[i0:-0.500000]

Final XOR test results:

Output for [0.000000, 0.000000]: 0.005787
Output for [1.000000, 0.000000]: 0.993864
Output for [1.000000, 1.000000]: 0.011066
Output for [0.000000, 1.000000]: 0.993822

This concludes the article on implementing a hand-rolled neural network in C with dynamic arrays and weight-matrix visualization (230902).

本文來自互聯(lián)網(wǎng)用戶投稿,該文觀點(diǎn)僅代表作者本人,不代表本站立場(chǎng)。本站僅提供信息存儲(chǔ)空間服務(wù),不擁有所有權(quán),不承擔(dān)相關(guān)法律責(zé)任。如若轉(zhuǎn)載,請(qǐng)注明出處: 如若內(nèi)容造成侵權(quán)/違法違規(guī)/事實(shí)不符,請(qǐng)點(diǎn)擊違法舉報(bào)進(jìn)行投訴反饋,一經(jīng)查實(shí),立即刪除!

領(lǐng)支付寶紅包贊助服務(wù)器費(fèi)用

相關(guān)文章

  • 使用JavaScript實(shí)現(xiàn)復(fù)雜功能:動(dòng)態(tài)數(shù)據(jù)可視化的構(gòu)建

    在前端開發(fā)中,JavaScript無疑是最核心的技術(shù)之一。它能夠處理各種交互邏輯,實(shí)現(xiàn)復(fù)雜的功能。本文將通過一個(gè)動(dòng)態(tài)數(shù)據(jù)可視化的案例,展示如何使用JavaScript實(shí)現(xiàn)復(fù)雜功能。動(dòng)態(tài)數(shù)據(jù)可視化能夠?qū)⒋罅繑?shù)據(jù)以直觀、生動(dòng)的方式呈現(xiàn),幫助用戶更好地理解和分析數(shù)據(jù)。 準(zhǔn)備工

    2024年02月20日
    瀏覽(32)
  • 玩轉(zhuǎn)視圖變量,輕松實(shí)現(xiàn)動(dòng)態(tài)可視化數(shù)據(jù)分析

    玩轉(zhuǎn)視圖變量,輕松實(shí)現(xiàn)動(dòng)態(tài)可視化數(shù)據(jù)分析

    在當(dāng)今數(shù)據(jù)驅(qū)動(dòng)的世界中,數(shù)據(jù)分析已經(jīng)成為了企業(yè)和組織中不可或缺的一部分。傳統(tǒng)的靜態(tài)數(shù)據(jù)分析方法往往無法滿足快速變化的業(yè)務(wù)需求和實(shí)時(shí)決策的要求。為了更好地應(yīng)對(duì)這些挑戰(zhàn),觀測(cè)云的動(dòng)態(tài)可視化數(shù)據(jù)分析應(yīng)運(yùn)而生。 在動(dòng)態(tài)可視化數(shù)據(jù)分析中,聯(lián)動(dòng)視圖變量起到

    2024年02月08日
    瀏覽(18)
  • 簡(jiǎn)單的用Python抓取動(dòng)態(tài)網(wǎng)頁數(shù)據(jù),實(shí)現(xiàn)可視化數(shù)據(jù)分析

    簡(jiǎn)單的用Python抓取動(dòng)態(tài)網(wǎng)頁數(shù)據(jù),實(shí)現(xiàn)可視化數(shù)據(jù)分析

    一眨眼明天就周末了,一周過的真快! 今天咱們用Python來實(shí)現(xiàn)一下動(dòng)態(tài)網(wǎng)頁數(shù)據(jù)的抓取 最近不是有消息說世界首富馬上要變成中國人了嗎,這要真成了,可就是歷史上首位中國世界首富了! 那我們就以富豪排行榜為例,爬取一下2023年國內(nèi)富豪五百強(qiáng),最后實(shí)現(xiàn)一下可視化分

    2024年02月05日
    瀏覽(24)
  • 關(guān)于微信小程序中如何實(shí)現(xiàn)數(shù)據(jù)可視化-echarts動(dòng)態(tài)渲染

    關(guān)于微信小程序中如何實(shí)現(xiàn)數(shù)據(jù)可視化-echarts動(dòng)態(tài)渲染

    移動(dòng)端設(shè)備中,難免會(huì)涉及到數(shù)據(jù)的可視化展示、數(shù)據(jù)統(tǒng)計(jì)等等,本篇主要講解原生微信小程序中嵌入 echarts 并進(jìn)行動(dòng)態(tài)渲染,實(shí)現(xiàn)數(shù)據(jù)可視化功能。 基礎(chǔ)使用 首先在 GitHub 上下載 echarts 包 地址:https://github.com/ecomfe/echarts-for-weixin/tree/master 下載項(xiàng)目 解壓壓縮包,將 ec-canva

    2024年01月25日
    瀏覽(220)
  • 混淆矩陣——矩陣可視化

    混淆矩陣——矩陣可視化

    相關(guān)文章 混淆矩陣——評(píng)估指標(biāo)計(jì)算 混淆矩陣——評(píng)估指標(biāo)可視化 正例是指在分類問題中,被標(biāo)記為目標(biāo)類別的樣本。在二分類問題中, 正例(Positive) 代表我們感興趣的目標(biāo),而另一個(gè)類別定義為 反例(Negative) 舉個(gè)栗子??,我們要區(qū)分蘋果??和鳳梨??。我們 想要

    2024年02月04日
    瀏覽(52)
  • 基于 matplotlib 實(shí)現(xiàn)的基本排序算法的動(dòng)態(tài)可視化項(xiàng)目源碼,通過 pyaudio 增加音效,冒泡、選擇、插入、快速等排序

    基于 matplotlib 實(shí)現(xiàn)的基本排序算法的動(dòng)態(tài)可視化項(xiàng)目源碼,通過 pyaudio 增加音效,冒泡、選擇、插入、快速等排序

    依托 matplotlib 實(shí)現(xiàn)的基本排序算法的動(dòng)態(tài)可視化,并通過 pyaudio 增加音效。 安裝 在使用之前請(qǐng)先檢查本地是否存在以下庫: matplotlib pyaudio fire requirements.txt 中包含了上述的庫 使用 目前本項(xiàng)目僅提供了以下排序算法 冒泡排序 選擇排序 插入排序 快排 歸并排序 命令行工具 命

    2024年02月08日
    瀏覽(39)
  • 積跬步至千里 || 矩陣可視化

    積跬步至千里 || 矩陣可視化

    矩陣可以很方面地展示事物兩兩之間的關(guān)系,這種關(guān)系可以通過矩陣可視化的方式進(jìn)行簡(jiǎn)單監(jiān)控。 定義一個(gè)通用類 調(diào)用類 結(jié)果展示 另一種方法

    2024年02月12日
    瀏覽(16)
  • 圖像中部分RGB矩陣可視化

    圖像中部分RGB矩陣可視化

    今天室友有個(gè)需求就是模仿下面這張圖畫個(gè)示意圖: 大致就是把圖像中的一小部分區(qū)域的RGB值可視化了一下。他居然不知道該怎么畫,我尋思這不直接秒了。 其實(shí)就是先畫三個(gè)主圖,一個(gè)全部的,一個(gè)小范圍內(nèi)的,一個(gè)RGB值的表,然后畫四根線就完事了。效果如下: 唯一要

    2024年01月16日
    瀏覽(23)
  • 數(shù)據(jù)可視化 - 動(dòng)態(tài)柱狀圖

    數(shù)據(jù)可視化 - 動(dòng)態(tài)柱狀圖

    通過Bar構(gòu)建基礎(chǔ)柱狀圖 1. 通過Bar()構(gòu)建一個(gè)柱狀圖對(duì)象 2. 和折線圖一樣,通過add_xaxis()和add_yaxis()添加x和y軸數(shù)據(jù) 3. 通過柱狀圖對(duì)象的:reversal_axis(),反轉(zhuǎn)x和y軸 4. 通過label_opts=LabelOpts(position=\\\"right\\\")設(shè)置數(shù)值標(biāo)簽在右側(cè)顯示 Timeline()-時(shí)間線 柱狀圖描述的是分類數(shù)據(jù),回答的是

    2024年02月15日
    瀏覽(20)
  • 基于Python的疫情數(shù)據(jù)可視化(matplotlib,pyecharts動(dòng)態(tài)地圖,大屏可視化)

    基于Python的疫情數(shù)據(jù)可視化(matplotlib,pyecharts動(dòng)態(tài)地圖,大屏可視化)

    有任何學(xué)習(xí)問題可以加我微信交流哦!bmt1014 1、項(xiàng)目需求分析 1.1背景 2020年,新冠肺炎疫情在全球范圍內(nèi)爆發(fā),給人們的健康和生命帶來了嚴(yán)重威脅,不同國家和地區(qū)的疫情形勢(shì)也引起了廣泛的關(guān)注。疫情數(shù)據(jù)的監(jiān)測(cè)和分析對(duì)疫情防控和科學(xué)防治至關(guān)重要。本報(bào)告以疫情數(shù)據(jù)

    2024年02月05日
    瀏覽(41)

覺得文章有用就打賞一下文章作者

支付寶掃一掃打賞

博客贊助

微信掃一掃打賞

請(qǐng)作者喝杯咖啡吧~博客贊助

支付寶掃一掃領(lǐng)取紅包,優(yōu)惠每天領(lǐng)

二維碼1

領(lǐng)取紅包

二維碼2

領(lǐng)紅包