
Machine Learning with MATLAB -- IWOA_BILSTM (a BiLSTM prediction algorithm optimized by an improved Whale Optimization Algorithm) (Part 16)

This article introduces the MATLAB code for IWOA_BILSTM, a BiLSTM prediction algorithm whose hyperparameters are tuned by an improved Whale Optimization Algorithm (IWOA). I hope it is a useful reference; corrections and suggestions for anything wrong or incomplete are welcome.

Code

1. Main script

%% BiLSTM prediction optimized by an improved Whale Optimization Algorithm (IWOA)
clear;close all; 
clc
rng('default')
%% Load the load-demand data
load('QLD1.mat')
data = QLD1(1:2000);
% Use the first 90% of the series for training and the last 10% for testing
numTimeStepsTrain = floor(0.9*numel(data));
dataTrain = data(1:numTimeStepsTrain+1)';
dataTest = data(numTimeStepsTrain+1:end)';
% Standardize the training data to zero mean and unit variance
mu = mean(dataTrain);
sig = std(dataTrain);
dataTrainStandardized = (dataTrain - mu) / sig;
% The BiLSTM inputs and targets are the series shifted by one time step
XTrain = dataTrainStandardized(1:end-1);
YTrain = dataTrainStandardized(2:end);
% Standardize the test data to zero mean and unit variance
% (note: the test set is standardized with its own statistics here)
mu = mean(dataTest);
sig = std(dataTest);

dataTestStandardized = (dataTest - mu) / sig;
XTest = dataTestStandardized(1:end-1);
YTest = dataTestStandardized(2:end);
%%
% Create the BiLSTM regression network; the hidden-unit count is set by the optimizer below
% Univariate one-step-ahead prediction: one input feature, one response
numFeatures = 1;
numResponses = 1;
%% Define the IWOA optimization settings
pop=5;            % population size
Max_iteration=10; % maximum number of iterations
dim = 4;          % dimensions: hidden units, max epochs, initial learning rate, L2 regularization
lb = [20,50,10E-5,10E-6]; % lower bounds (note: 10E-5 == 1e-4, 10E-6 == 1e-5)
ub = [200,300,0.1,0.1];   % upper bounds
fobj = @(x) fun(x,numFeatures,numResponses,XTrain,YTrain,XTest,YTest);

[Best_score,Best_pos,IWOA_curve,netIWOA,pos_curve]=IWOA(pop,Max_iteration,lb,ub,dim,fobj); % run the optimization

figure
plot(IWOA_curve,'linewidth',1.5);
grid on
xlabel('Iteration')
ylabel('Fitness value')
title('IWOA-BiLSTM fitness curve')

figure
subplot(221)
plot(pos_curve(:,1),'linewidth',1.5);
grid on
xlabel('Iteration')
ylabel('Hidden units')
title('Hidden-unit count over iterations')
subplot(222)
plot(pos_curve(:,2),'linewidth',1.5);
grid on
xlabel('Iteration')
ylabel('Epochs')
title('Max training epochs over iterations')
subplot(223)
plot(pos_curve(:,3),'linewidth',1.5);
grid on
xlabel('Iteration')
ylabel('Learning rate')
title('Learning rate over iterations')
subplot(224)
plot(pos_curve(:,4),'linewidth',1.5);
grid on
xlabel('Iteration')
ylabel('L2 regularization')
title('L2 regularization over iterations')
% Predict on the training set
PredictTrainIWOA = predict(netIWOA,XTrain, 'ExecutionEnvironment','gpu');
% Predict on the test set
PredictTestIWOA = predict(netIWOA,XTest, 'ExecutionEnvironment','gpu');
% Training-set MSE
mseTrainIWOA= mse(YTrain-PredictTrainIWOA);
% Test-set MSE
mseTestIWOA = mse(YTest-PredictTestIWOA);
%% Hyperparameters found by IWOA
numHiddenUnits = round(Best_pos(1)); % number of hidden units in the BiLSTM layer
maxEpochs = round(Best_pos(2));      % maximum number of training epochs
InitialLearnRate = Best_pos(3);      % initial learning rate
L2Regularization = Best_pos(4);      % L2 regularization factor
% Define the network
layers = [ ...
    sequenceInputLayer(numFeatures)
    bilstmLayer(numHiddenUnits)
    fullyConnectedLayer(numResponses)
    regressionLayer];
% Specify the training options
options = trainingOptions('adam', ...
    'MaxEpochs',maxEpochs, ...
    'ExecutionEnvironment' ,'gpu',...
    'InitialLearnRate',InitialLearnRate,...
    'GradientThreshold',1, ...
    'L2Regularization',L2Regularization, ...
    'Plots','training-progress',...
    'Verbose',0);
% Train the BiLSTM
[net,info] = trainNetwork(XTrain,YTrain,layers,options);
%% RMSE curve during training
figure;
plot(info.TrainingRMSE,'Color',[0 0.5 1] );
ylabel('Training RMSE')
xlabel('Training Step');
title('Training-set RMSE');
%% Loss curve during training
figure;
plot(info.TrainingLoss,'Color',[1 0.5 0] );
ylabel('Training Loss')
xlabel('Training Step');
title('Training loss');
%% Baseline BiLSTM for comparison
numHiddenUnits = 50;
layers = [ ...
    sequenceInputLayer(numFeatures)
    bilstmLayer(numHiddenUnits)
    fullyConnectedLayer(numResponses)
    regressionLayer];
% Specify the training options
options = trainingOptions('adam', ...
    'MaxEpochs',50, ...
    'ExecutionEnvironment' ,'gpu',...
    'GradientThreshold',1, ...
    'InitialLearnRate',0.001, ...
    'L2Regularization',0.0001,...
    'Plots','training-progress',...
    'Verbose',1);
% Train the BiLSTM
net = trainNetwork(XTrain,YTrain,layers,options);
% Predict on the training set
PredictTrain = predict(net,XTrain, 'ExecutionEnvironment','gpu');
% Predict on the test set
PredictTest = predict(net,XTest, 'ExecutionEnvironment','gpu');
% Training-set MSE
mseTrain = mse(YTrain-PredictTrain);
% Test-set MSE
mseTest = mse(YTest-PredictTest);

disp('-------------------------------------------------------------')
disp('Optimal hyperparameters found by IWOA-BiLSTM:')
disp(['Number of hidden units: ',num2str(round(Best_pos(1)))]);
disp(['Maximum training epochs: ',num2str(round(Best_pos(2)))]);
disp(['InitialLearnRate: ',num2str(Best_pos(3))]);
disp(['L2Regularization: ',num2str(Best_pos(4))]);
disp('-------------------------------------------------------------')
disp('IWOA-BiLSTM results:')
disp(['IWOA-BiLSTM training-set MSE: ',num2str(mseTrainIWOA)]);
disp(['IWOA-BiLSTM test-set MSE: ',num2str(mseTestIWOA)]);
disp('BiLSTM results:')
disp(['BiLSTM training-set MSE: ',num2str(mseTrain)]);
disp(['BiLSTM test-set MSE: ',num2str(mseTest)]);

%% Plot training-set results
errors=YTrain-PredictTrain;
errorsIWOA=YTrain-PredictTrainIWOA;

MSE=mean(errors.^2);
RMSE=sqrt(MSE);
MSEIWOA=mean(errorsIWOA.^2);
RMSEIWOA=sqrt(MSEIWOA);

error_mean=mean(errors);
error_std=std(errors);

error_meanIWOA=mean(errorsIWOA);
error_stdIWOA=std(errorsIWOA);

figure;
plot(YTrain,'k');
hold on;
plot(PredictTrain,'b');
plot(PredictTrainIWOA,'r');
legend('Target','BiLSTM','IWOA-BiLSTM');
title('Training-set results');
xlabel('Sample Index');
grid on;

figure;
plot(errors);
hold on
plot(errorsIWOA);
legend('BiLSTM error','IWOA-BiLSTM error');
title({'Training-set prediction error comparison';['MSE = ' num2str(MSE), ', IWOA-MSE = ' num2str(MSEIWOA)]});
grid on;

figure;
histfit(errorsIWOA, 50); % histogram of the IWOA-BiLSTM errors
title(['IWOA Error Mean = ' num2str(error_meanIWOA) ', Error St.D. = ' num2str(error_stdIWOA)]);
%% Plot test-set results
errors=YTest-PredictTest;
errorsIWOA=YTest-PredictTestIWOA;

MSE=mean(errors.^2);
RMSE=sqrt(MSE);
MSEIWOA=mean(errorsIWOA.^2);
RMSEIWOA=sqrt(MSEIWOA);

error_mean=mean(errors);
error_std=std(errors);

error_meanIWOA=mean(errorsIWOA);
error_stdIWOA=std(errorsIWOA);

figure;
plot(YTest,'k');
hold on;
plot(PredictTest,'b');
plot(PredictTestIWOA,'r');
legend('Target','BiLSTM','IWOA-BiLSTM');
title('Test-set results');
xlabel('Sample Index');
grid on;

figure;
plot(errors);
hold on
plot(errorsIWOA);
legend('BiLSTM error','IWOA-BiLSTM error');
title({'Test-set prediction error comparison';['MSE = ' num2str(MSE), ', IWOA-MSE = ' num2str(MSEIWOA)]});
grid on;

figure;
histfit(errorsIWOA, 50); % histogram of the IWOA-BiLSTM errors
title(['IWOA Error Mean = ' num2str(error_meanIWOA) ', Error St.D. = ' num2str(error_stdIWOA)]);
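The preprocessing at the top of the script (90/10 split, z-score standardization, one-step-ahead input/target pairs) can be sketched in NumPy. This is an illustrative translation, not part of the original MATLAB code, and `prepare_series` is a hypothetical helper name. Note that the script standardizes the test set with its own mean and standard deviation; reusing the training statistics is the more common convention.

```python
import numpy as np

def prepare_series(data, train_frac=0.9):
    """Split a 1-D series 90/10, z-score each part, build one-step-ahead pairs."""
    data = np.asarray(data, dtype=float)
    n_train = int(np.floor(train_frac * data.size))
    train = data[:n_train + 1]   # first 90% (plus one overlapping point)
    test = data[n_train:]        # last 10%, mirroring dataTest in the script

    def zscore(x):
        return (x - x.mean()) / x.std(ddof=1)   # ddof=1 matches MATLAB's std

    z_train, z_test = zscore(train), zscore(test)
    # inputs are the series values; targets are the same series one step ahead
    return z_train[:-1], z_train[1:], z_test[:-1], z_test[1:]

XTrain, YTrain, XTest, YTest = prepare_series(np.sin(np.linspace(0, 20, 2000)))
```

With 2000 samples this yields 1800 training pairs and 199 test pairs, matching the index arithmetic in the script.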

2. The IWOA function

%% [1] Wu Zequan, Mou Yongmin. An improved whale optimization algorithm [J]. Application Research of Computers, 2020, 37(12): 3618-3621.
function [Leader_score,Leader_pos,Convergence_curve,BestNet,pos_curve]=IWOA(SearchAgents_no,Max_iter,lb,ub,dim,fobj)
% Initialize the position vector and score for the leader
net = {};
Leader_pos=zeros(1,dim);
Leader_score=inf; % change this to -inf for maximization problems
%% Improvement: quasi-opposition-based initialization
Positions=initializationNew(SearchAgents_no,dim,ub,lb,fobj);
Convergence_curve=zeros(1,Max_iter);
t=0; % loop counter
% Main loop
while t<Max_iter
    for i=1:size(Positions,1)
        % Return back the search agents that go beyond the boundaries of the search space
        Flag4ub=Positions(i,:)>ub;
        Flag4lb=Positions(i,:)<lb;
        Positions(i,:)=(Positions(i,:).*(~(Flag4ub+Flag4lb)))+ub.*Flag4ub+lb.*Flag4lb;
        % Calculate objective function for each search agent
        
        [fitness,net] =  fobj(Positions(i,:));
        % Update the leader
        if fitness<Leader_score % change this to > for maximization problems
            Leader_score=fitness; % update the leader score
            Leader_pos=Positions(i,:);
            BestNet = net; % keep the network trained by the best agent so far
        end
    end
    %% Improvement: nonlinear convergence factor (decays from 2 to 1)
    a=2 - sin(t*pi/(2*Max_iter));
    % a2 linearly decreases from -1 to -2 to calculate l in Eq. (3.12)
    a2=-1+t*((-1)/Max_iter);
    %% Improvement: adaptive weight (decays from 1 to 0)
    w = 1 - (exp(t/Max_iter) - 1)/(exp(1) -1);
    % Update the Position of search agents
    for i=1:size(Positions,1)
        r1=rand(); % r1 is a random number in [0,1]
        r2=rand(); % r2 is a random number in [0,1]
        
        A=2*a*r1-a;  % Eq. (2.3) in the paper
        C=2*r2;      % Eq. (2.4) in the paper
        
        b=1;               %  parameters in Eq. (2.5)
        l=(a2-1)*rand+1;   %  parameters in Eq. (2.5)
        
        p = rand();        % p in Eq. (2.6)
        for j=1:size(Positions,2)
            if p<0.5
                if abs(A)>=1
                    rand_leader_index = floor(SearchAgents_no*rand()+1);
                    X_rand = Positions(rand_leader_index, :);
                    D_X_rand=abs(C*X_rand(j)-Positions(i,j)); % Eq. (2.7)
                    Positions(i,j)=w*X_rand(j)-A*D_X_rand;      % adaptive weight applied
                elseif abs(A)<1
                    D_Leader=abs(C*Leader_pos(j)-Positions(i,j)); % Eq. (2.1)
                    Positions(i,j)=w*Leader_pos(j)-A*D_Leader;       % adaptive weight applied
                end
            elseif p>=0.5
                distance2Leader=abs(Leader_pos(j)-Positions(i,j));
                % Eq. (2.5)
                Positions(i,j)=distance2Leader*exp(b.*l).*cos(l.*2*pi)+w*Leader_pos(j);  % adaptive weight applied
            end
        end
        % Boundary handling
        Flag4ub=Positions(i,:)>ub;
        Flag4lb=Positions(i,:)<lb;
        Positions(i,:)=(Positions(i,:).*(~(Flag4ub+Flag4lb)))+ub.*Flag4ub+lb.*Flag4lb;
        %% Improvement: random differential mutation
        Rindex = randi(SearchAgents_no); % randomly select an individual
        r1 = rand; r2 = rand;
        Temp = r1.*(Leader_pos - Positions(i,:)) + r2.*(Positions(Rindex,:) -  Positions(i,:));
        Flag4ub=Temp>ub;
        Flag4lb=Temp<lb;
        Temp=(Temp.*(~(Flag4ub+Flag4lb)))+ub.*Flag4ub+lb.*Flag4lb;
        if fobj(Temp) < fobj(Positions(i,:)) % note: this comparison trains two extra networks per agent
            Positions(i,:) = Temp;
        end
    end
    t=t+1;
    Convergence_curve(t)=Leader_score;
    pos_curve(t,:)=Leader_pos;
    fprintf(1,'%g\n',t);
end
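For readers without MATLAB, the loop above can be condensed into a short Python sketch: the sine-based convergence factor, the exponential adaptive weight, and the WOA position updates with the random differential mutation step. The expensive BiLSTM-training objective is replaced by a toy sphere function here, and all function names are illustrative, not from the original code.

```python
import numpy as np

def nonlinear_a(t, max_iter):
    # improved convergence factor: decays nonlinearly from 2 to 1
    return 2 - np.sin(t * np.pi / (2 * max_iter))

def adaptive_w(t, max_iter):
    # adaptive inertia weight: decays from 1 to 0 along an exponential curve
    return 1 - (np.exp(t / max_iter) - 1) / (np.e - 1)

def iwoa(fobj, lb, ub, pop=8, max_iter=30, seed=0):
    """Minimal IWOA loop on a cheap objective (sketch, not the MATLAB original)."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    X = rng.uniform(lb, ub, size=(pop, lb.size))
    leader, leader_score = X[0].copy(), np.inf
    for t in range(max_iter):
        for i in range(pop):
            X[i] = np.clip(X[i], lb, ub)
            f = fobj(X[i])
            if f < leader_score:           # update the leader
                leader_score, leader = f, X[i].copy()
        a = nonlinear_a(t, max_iter)
        a2 = -1 + t * (-1 / max_iter)      # used for the spiral parameter l
        w = adaptive_w(t, max_iter)
        for i in range(pop):
            r1, r2, p = rng.random(), rng.random(), rng.random()
            A, C = 2 * a * r1 - a, 2 * r2
            l = (a2 - 1) * rng.random() + 1
            if p < 0.5:
                if abs(A) >= 1:            # exploration around a random agent
                    x_rand = X[rng.integers(pop)]
                    X[i] = w * x_rand - A * np.abs(C * x_rand - X[i])
                else:                      # encircling the leader
                    X[i] = w * leader - A * np.abs(C * leader - X[i])
            else:                          # spiral update toward the leader
                d = np.abs(leader - X[i])
                X[i] = d * np.exp(l) * np.cos(2 * np.pi * l) + w * leader
            X[i] = np.clip(X[i], lb, ub)
            # improvement: random differential mutation
            r = X[rng.integers(pop)]
            trial = np.clip(rng.random() * (leader - X[i])
                            + rng.random() * (r - X[i]), lb, ub)
            if fobj(trial) < fobj(X[i]):
                X[i] = trial
    return leader_score, leader

best, pos = iwoa(lambda x: float(np.sum(x ** 2)), [-5, -5], [5, 5])
```

The leader score is monotone non-increasing by construction, since the leader only changes when a strictly better agent is found.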



3. Quasi-opposition initialization

%% Population initialization using a quasi-opposition strategy
function Positions=initializationNew(SearchAgents_no,dim,ub,lb,fun)
Boundary_no= size(ub,2); % number of boundaries
BackPositions = zeros(SearchAgents_no,dim);
if Boundary_no==1
    PositionsF=rand(SearchAgents_no,dim).*(ub-lb)+lb;
    % Compute the opposite population
    BackPositions = ub + lb - PositionsF;
end

% If each variable has a different lb and ub
if Boundary_no>1
    for i=1:dim
        ub_i=ub(i);
        lb_i=lb(i);
        PositionsF(:,i)=rand(SearchAgents_no,1).*(ub_i-lb_i)+lb_i;
        % Compute the opposite population
        BackPositions(:,i) =  (ub_i+lb_i) - PositionsF(:,i);
    end
end
%% Quasi-opposition step: sample uniformly between the interval midpoint and the opposite point
for i = 1:SearchAgents_no
    for j = 1:dim
        if Boundary_no==1
            if (ub + lb)/2 <BackPositions(i,j)
                Lb = (ub + lb)/2;
                Ub = BackPositions(i,j);
                PBackPositions(i,j) = (Ub - Lb)*rand + Lb;
            else
                Lb = BackPositions(i,j);
                Ub =  (ub + lb)/2;
                PBackPositions(i,j) = (Ub - Lb)*rand + Lb;
            end
        else
            if (ub(j) + lb(j))/2 <BackPositions(i,j)
                Lb = (ub(j) + lb(j))/2;
                Ub = BackPositions(i,j);
                PBackPositions(i,j) = (Ub - Lb)*rand + Lb;
            else
                Lb = BackPositions(i,j);
                Ub = (ub(j) + lb(j))/2;
                PBackPositions(i,j) = (Ub - Lb)*rand + Lb;
            end
        end
    end
end
% Merge the two populations
AllPositionsTemp = [PositionsF;PBackPositions];
AllPositions = AllPositionsTemp;
for i = 1:size(AllPositionsTemp,1)
    [fitness(i),net{i}] =  fun(AllPositionsTemp(i,:));
    fprintf(1,'%g\n',i);
end
[fitness, index]= sort(fitness); % sort by fitness (ascending)
for i = 1:2*SearchAgents_no
    AllPositions(i,:) = AllPositionsTemp(index(i),:);
end
% Keep the fittest half as the initial population
Positions = AllPositions(1:SearchAgents_no,:);

end
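The same quasi-opposition idea, in plain Python with the network-training fitness replaced by an arbitrary objective (`quasi_opposition_init` is an illustrative name). Each random point `x` gets a quasi-opposite drawn uniformly between the search-interval midpoint and the full opposite point `ub + lb - x`, and the fittest half of the combined population is kept.

```python
import numpy as np

def quasi_opposition_init(pop, lb, ub, fobj, seed=0):
    """Quasi-opposition-based initialization (sketch with an arbitrary objective)."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    X = rng.uniform(lb, ub, size=(pop, lb.size))
    opposite = ub + lb - X                        # fully opposite points
    mid = (ub + lb) / 2
    lo = np.minimum(mid, opposite)
    hi = np.maximum(mid, opposite)
    quasi = lo + rng.random(X.shape) * (hi - lo)  # uniform between midpoint and opposite
    both = np.vstack([X, quasi])                  # 2*pop candidates
    fitness = np.array([fobj(row) for row in both])
    return both[np.argsort(fitness)][:pop]        # keep the fittest half

P = quasi_opposition_init(8, [-1, 0], [1, 2], lambda x: float(np.sum(x ** 2)))
```

Because both the random points and their quasi-opposites lie inside the box, the returned population never needs re-clipping.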





4. The fitness function

% Fitness function: the sum of training-set and test-set MSE is the fitness value
function [fitness,net] = fun(x,numFeatures,numResponses,XTrain,YTrain,XTest,YTest)

disp('Training one candidate network...')
%% Decode the optimized parameters
numHiddenUnits = round(x(1)); % number of hidden units in the BiLSTM layer
maxEpochs = round(x(2));      % maximum number of training epochs
InitialLearnRate = x(3);      % initial learning rate
L2Regularization = x(4);      % L2 regularization factor

% Define the network
layers = [ ...
    sequenceInputLayer(numFeatures)
    bilstmLayer(numHiddenUnits)
    fullyConnectedLayer(numResponses)
    regressionLayer];

% Specify the training options. The execution environment is set to 'gpu';
% change it to 'cpu' if no CUDA-capable GPU is available.
options = trainingOptions('adam', ...
    'MaxEpochs',maxEpochs, ...
    'ExecutionEnvironment' ,'gpu',...
    'InitialLearnRate',InitialLearnRate,...
    'GradientThreshold',1, ...
    'L2Regularization',L2Regularization, ...
    'Verbose',0);
% Add 'Plots','training-progress' here to visualize each run
% Train the BiLSTM
net = trainNetwork(XTrain,YTrain,layers,options);
% Predict on the training set
PredictTrain = predict(net,XTrain, 'ExecutionEnvironment','gpu');
% Predict on the test set
PredictTest = predict(net,XTest, 'ExecutionEnvironment','gpu');

% Training-set MSE
mseTrain = mse(YTrain-PredictTrain);
% Test-set MSE
mseTest = mse(YTest-PredictTest);

%% Fitness value
fitness = mseTrain+mseTest;
disp('Training finished.')
end
end
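The decoding step at the top of `fun` (round the integer-valued dimensions, keep the continuous ones) can be expressed as a small Python helper. `decode_position` and the explicit clipping to the bounds are illustrative additions; the original relies on IWOA's own boundary handling. Note that the MATLAB bounds `10E-5` and `10E-6` evaluate to 1e-4 and 1e-5, respectively.

```python
def decode_position(x, lb=(20, 50, 1e-4, 1e-5), ub=(200, 300, 0.1, 0.1)):
    """Map a 4-D IWOA position vector to BiLSTM hyperparameters (illustrative)."""
    clipped = [min(max(v, lo), hi) for v, lo, hi in zip(x, lb, ub)]
    return {
        "num_hidden_units": round(clipped[0]),   # integer-valued: rounded
        "max_epochs": round(clipped[1]),         # integer-valued: rounded
        "initial_learn_rate": clipped[2],        # continuous
        "l2_regularization": clipped[3],         # continuous
    }

params = decode_position([87.6, 123.2, 0.005, 0.02])
```

Treating the integer dimensions as continuous during the search and rounding only at evaluation time keeps the optimizer's update equations unchanged.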

Data


Results

