
Algorithm Introduction and Implementation: A BP Neural Network Improved by a Genetic Algorithm (with a Complete Python Implementation)

This article introduces the GA-BP algorithm, a BP neural network improved by a genetic algorithm, together with a complete Python implementation. I hope it is helpful; if you find mistakes or points I have not considered, corrections and suggestions are very welcome.


Contents

1. Algorithm Introduction

1.1 The Genetic Algorithm

1.2 Why Use a Genetic Algorithm for the Improvement

2. Algorithm Principle

3. Algorithm Implementation

3.1 Operator Selection

3.2 Code Implementation


1. Algorithm Introduction

1.1 The Genetic Algorithm

The genetic algorithm is inspired by the powerful adaptability of living organisms to their environment, summed up as "survival of the fittest". By simulating and abstracting the process of biological evolution, it builds an optimization method whose logic follows natural evolution. It includes the main steps of that process, namely selection, (gene) mutation, and (gene) crossover, which correspond to the algorithm's three operators. For a concrete optimization problem, the genetic algorithm generates a number of feasible solutions as a population and then subjects that population to simulated selection, mutation, and crossover. After the population has reproduced (iterated) a given number of times, the fitness of its members is computed and the best individual in the final population is taken as an approximate optimum of the problem. That is the main idea of the genetic algorithm; its flowchart is shown below:

[Figure: flowchart of the genetic algorithm]
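To make the loop above concrete, here is a minimal, self-contained Python sketch of a genetic algorithm (a toy example, not the GA-BP code presented later): it maximizes f(x) = -(x - 3)^2 with real-valued, single-gene individuals, using simple truncation selection, arithmetic crossover, and a small random mutation.

import random

def fitness(x):
    return -(x - 3.0) ** 2

def evolve(pop_size=20, generations=50, p_cross=0.7, p_mutate=0.1, bound=(-10.0, 10.0)):
    # initial population: random candidate solutions inside the allowed range
    pop = [random.uniform(*bound) for _ in range(pop_size)]
    for _ in range(generations):
        # selection: keep the fitter half of the population
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            # arithmetic crossover with probability p_cross
            child = (a + b) / 2 if random.random() < p_cross else a
            # mutation: small random perturbation, clipped to the gene range
            if random.random() < p_mutate:
                child += random.uniform(-1.0, 1.0)
                child = min(max(child, bound[0]), bound[1])
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

print(evolve())   # should converge near x = 3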

1.2 Why Use a Genetic Algorithm for the Improvement

The principle of the BP algorithm is not repeated here; see my earlier post introducing BP. During BP training the network can easily get stuck in a local minimum, which is why a genetic algorithm is introduced for optimization. As a global search method that simulates biological evolution, the genetic algorithm has excellent global optimization ability: starting from a population, it evolves iteration after iteration until it reaches an optimal or near-optimal solution. Both BP and the genetic algorithm are widely used, and their strengths are clearly complementary, so many researchers have explored ways of combining the two in order to improve performance and accuracy.

2. Algorithm Principle

The main idea of the GA-BP algorithm (a BP neural network improved by a genetic algorithm) is to use the global search ability of the genetic algorithm to obtain good initial weights and thresholds for the BP network, take them as the network's initial weights and thresholds, and then train the network so that it is less likely to fall into a local minimum. In GA-BP the initial weights are therefore not generated at random but produced by the genetic-algorithm search module. The BP network's initial weights and thresholds serve as the gene values of a GA individual: the individual's length equals the number of weights and thresholds in the network, each gene represents one weight or threshold, and the value carried by the gene is the real value of that connection weight or threshold; together these genes form one chromosome. A certain number of chromosomes make up the initial population, which is then evolved through selection, crossover, and mutation; the best individual obtained after the iterations is used as the initial parameters of the BP network, which is trained as usual. This is the principle of GA-BP; its flowchart is shown below:

[Figure: flowchart of the GA-BP algorithm]
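In this scheme the chromosome length is simply the total number of connection weights and biases (thresholds) of the network. For the single-hidden-layer structure used later (n_feature inputs, n_hidden hidden nodes, n_output outputs) it can be computed as follows; the result 73 matches the chrom_len formula used in the main program.

def chromosome_length(n_feature, n_hidden, n_output):
    # hidden-layer weights + hidden biases + output-layer weights + output biases
    return n_feature * n_hidden + n_hidden + n_hidden * n_output + n_output

print(chromosome_length(6, 9, 1))  # 54 + 9 + 9 + 1 = 73, the setting used in the main program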

3. Algorithm Implementation

3.1 Operator Selection

[Figure: candidate operators and data-organization schemes, including scheme (e) referenced below]

Organization scheme (e) above applies when the influencing-factor data are not strongly correlated with the target data: the values from the preceding time window are used as the influencing factors of the current time step during training, as in the sketch below.
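A minimal sketch of this sliding-window organization, assuming a one-dimensional series in which the previous n_feature values predict the next one (the main program builds its training pairs the same way; the toy series here is only for illustration):

import numpy as np

def make_windows(series, n_feature):
    """Turn a 1-D series into (inputs, targets): each input row is the previous
    n_feature values, each target is the value that follows them."""
    X, y = [], []
    for i in range(len(series) - n_feature):
        X.append(series[i:i + n_feature])
        y.append(series[i + n_feature])
    return np.array(X), np.array(y)

series = np.arange(10, dtype=float)       # toy series 0..9
X, y = make_windows(series, n_feature=3)
print(X[0], y[0])                         # [0. 1. 2.] 3.0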

3.2 Code Implementation

The example is landslide displacement prediction from a monitored time series. The monitored influencing factors are temperature, rainfall, wind, irrigation, and so on, and the monitored target is the crack width of the slope. Experiments showed that these influencing factors are not strongly correlated with the target, so the target series itself is used as the input data.

The whole algorithm is split into the following modules:

chrom_code  # gene encoding module
chrom_mutate  # mutation operator module
chrom_cross  # crossover operator module
chrom_select  # selection operator module
chrom_fitness  # chromosome fitness module
data_prepare  # data preparation module
BP_network  # BP neural network module
chrom_test  # chromosome check module

new_GA-BP   # main program of the improved algorithm

chrom_test.py checks whether any gene of a generated chromosome is out of range.

# Chromosome check
# Verify that no gene in the chromosome falls outside its allowed range

def test(code_list, bound):
    """
    :param code_list: a chromosome (list of gene values)
    :param bound: allowed value range of each gene
    :return: bool, True only if every gene is within its range
    """
    for i in range(len(code_list)):
        if code_list[i] < bound[i][0] or code_list[i] > bound[i][1]:
            return False
    return True
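A quick sanity check of test(), using the same [-1, 1] gene range the main program adopts (the sample chromosomes are made-up values):

# Example usage (appended to chrom_test.py for a quick check):
if __name__ == "__main__":
    bound = [(-1, 1), (-1, 1), (-1, 1)]
    print(test([0.2, -0.5, 0.9], bound))   # True: every gene lies inside its range
    print(test([0.2, -1.5, 0.9], bound))   # False: the second gene is out of range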

chrom_code.py performs the gene encoding.

# Gene encoding module

import random
import numpy as np
import chrom_test

def code(chrom_len, bound):
    """
    :param chrom_len: chromosome length; with real-valued encoding this equals the number of genes
    :param bound: 2-D array giving the allowed value range of each gene
    :return: an encoded chromosome of the given length
    """
    code_list = []
    count = 0
    while True:
        pick = random.uniform(0, 1)
        if pick == 0:
            continue
        else:
            pick = round(pick, 3)
            # map the random number into the allowed range of the current gene
            temp = bound[count][0] + (bound[count][1] - bound[count][0]) * pick
            temp = round(temp, 3)
            code_list.append(temp)
            count = count + 1
        if count == chrom_len:
            if chrom_test.test(code_list, bound):
                break
            else:
                # start over with an empty chromosome if any gene is out of range
                code_list = []
                count = 0
    return code_list
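A brief usage sketch, appended to chrom_code.py for a quick check: it builds an initial population of 15 chromosomes with every gene in [-1, 1], mirroring the settings of the main program.

# Example usage (requires chrom_test.py next to this file):
if __name__ == "__main__":
    chrom_len = 73                                  # 6*9 + 9 + 9*1 + 1, see the main program
    bound = np.array([[-1.0, 1.0]] * chrom_len)
    population = [code(chrom_len, bound) for _ in range(15)]
    print(len(population), len(population[0]))      # 15 73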

BP_network.py builds the network structure.

# BP module, implemented with PyTorch

import torch

# BP model initialized with the parameters found by the genetic algorithm
class BP_net(torch.nn.Module):

    def __init__(self, n_feature, n_hidden, n_output, GA_parameter):
        super(BP_net, self).__init__()
        # build the hidden layer and the output layer
        self.hidden = torch.nn.Linear(n_feature, n_hidden)
        self.output = torch.nn.Linear(n_hidden, n_output)
        # set the initial weights and biases obtained from the GA
        self.hidden.weight = torch.nn.Parameter(GA_parameter[0])
        self.hidden.bias = torch.nn.Parameter(GA_parameter[1])
        self.output.weight = torch.nn.Parameter(GA_parameter[2])
        self.output.bias = torch.nn.Parameter(GA_parameter[3])

    def forward(self, x):
        # forward pass
        hid = torch.tanh(self.hidden(x))
        out = torch.tanh(self.output(hid))
        return out

# plain BP model with random initialization
class ini_BP_net(torch.nn.Module):

    def __init__(self, n_feature, n_hidden, n_output):
        super(ini_BP_net, self).__init__()
        # build the hidden layer and the output layer
        self.hidden = torch.nn.Linear(n_feature, n_hidden)
        self.output = torch.nn.Linear(n_hidden, n_output)

    def forward(self, x):
        # forward pass
        hid = torch.tanh(self.hidden(x))
        out = torch.tanh(self.output(hid))
        return out

def train(model, epochs, learning_rate, x_train, y_train):
    """
    :param model: the model to train
    :param epochs: maximum number of iterations
    :param learning_rate: learning rate
    :param x_train: training inputs
    :param y_train: training targets
    :return: list of loss values (summed MSE), one per epoch
    """
    loss_fc = torch.nn.MSELoss(reduction="sum")
    optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
    loss_list = []
    for i in range(epochs):
        model.train()
        # forward pass
        data = model(x_train)
        # compute the loss
        loss = loss_fc(data, y_train)
        loss_list.append(loss)
        # clear old gradients
        optimizer.zero_grad()
        # backpropagation
        loss.backward()
        # update the parameters
        optimizer.step()
        # print("This is the {}-th iteration, MSE is {}.".format(i + 1, loss))
    loss_ls = [loss_list[i].detach().numpy() for i in range(len(loss_list))]
    return loss_ls
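A small smoke test of the plain ini_BP_net with random toy data (the 6-9-1 structure matches the main program; the data here is random and purely illustrative):

# Example usage with random toy data (appended to BP_network.py):
if __name__ == "__main__":
    x = torch.rand(20, 6)          # 20 samples with 6 input features
    y = torch.rand(20, 1)          # 20 target values in [0, 1)
    model = ini_BP_net(n_feature=6, n_hidden=9, n_output=1)
    losses = train(model, epochs=100, learning_rate=0.01, x_train=x, y_train=y)
    print(losses[0], losses[-1])   # the summed MSE should generally decrease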

chrom_fitness.py computes the fitness.

# Fitness module
# Given one chromosome, return its fitness value
from torchvision.transforms import transforms
import torch
import BP_network
import numpy as np

# sum of squared errors between two data series (least-squares style)
def zxec_PC(X, Y):
    X = np.array(X, dtype=float).flatten()
    Y = np.array(Y, dtype=float).flatten()
    if len(X) != len(Y):
        print("Wrong!")
    n = len(X)
    Wc = 0
    for i in range(n):
        Wc = Wc + (X[i] - Y[i]) * (X[i] - Y[i])
    return Wc

def calculate_fitness(code, n_feature, n_hidden, n_output, epochs,
                      learning_rate, x_train, y_train):
    """
    :param code: chromosome encoding
    :param n_feature: number of input nodes
    :param n_hidden: number of hidden nodes
    :param n_output: number of output nodes
    :param epochs: maximum number of iterations
    :param learning_rate: learning rate
    :param x_train: training inputs
    :param y_train: training targets
    :return: fitness value
    """
    Parameter = code[:]
    # extract the parameters from the chromosome
    hidden_weight = Parameter[0:n_feature * n_hidden]
    hidden_bias = Parameter[n_feature * n_hidden:
                  n_feature * n_hidden + n_hidden]
    output_weight = Parameter[n_feature * n_hidden + n_hidden:
                  n_feature * n_hidden + n_hidden + n_hidden * n_output]
    output_bias = Parameter[n_feature * n_hidden + n_hidden + n_hidden * n_output:
                  n_feature * n_hidden + n_hidden + n_hidden * n_output + n_output]

    # convert to tensors
    tensor_tran = transforms.ToTensor()
    hidden_weight = tensor_tran(np.array(hidden_weight).reshape((n_hidden, n_feature))).to(torch.float32)
    hidden_bias = tensor_tran(np.array(hidden_bias).reshape((1, n_hidden))).to(torch.float32)
    output_weight = tensor_tran(np.array(output_weight).reshape((n_output, n_hidden))).to(torch.float32)
    output_bias = tensor_tran(np.array(output_bias).reshape((1, n_output))).to(torch.float32)
    # reshape to the shapes expected by torch.nn.Linear
    hidden_weight = hidden_weight.reshape((n_hidden, n_feature))
    hidden_bias = hidden_bias.reshape(n_hidden)
    output_weight = output_weight.reshape((n_output, n_hidden))
    output_bias = output_bias.reshape(n_output)
    # plug the parameters into the model and train it
    GA = [hidden_weight, hidden_bias, output_weight, output_bias]
    BP_model = BP_network.BP_net(n_feature, n_hidden, n_output, GA)
    loss = BP_network.train(BP_model, epochs, learning_rate, x_train, y_train)
    # compute the fitness: larger is better, so subtract the SSE from a constant
    prediction = BP_model(x_train)
    fitness = 10 - zxec_PC(prediction.detach().numpy(), y_train.detach().numpy())
    return round(fitness, 4)
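As a side note, the element-wise loop in zxec_PC can be replaced by one vectorized NumPy expression with the same result; a minimal equivalent that can be dropped into chrom_fitness.py (which already imports numpy as np):

# Vectorized equivalent of zxec_PC (same sum of squared errors, fewer lines)
def sse(X, Y):
    X = np.asarray(X, dtype=float).flatten()
    Y = np.asarray(Y, dtype=float).flatten()
    assert len(X) == len(Y), "the two series must have the same length"
    return float(np.sum((X - Y) ** 2))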

chrom_mutate.py implements the mutation operator.

# Mutation operator
import random

def mutate(chrom_sum, size, p_mutate, chrom_len, bound, maxgen, nowgen):
    """
    :param chrom_sum: the population, a 2-D list of chromosomes
    :param size: population size, i.e. the number of chromosomes
    :param p_mutate: mutation probability (float)
    :param chrom_len: chromosome length, i.e. the number of genes
    :param bound: allowed value range of each gene
    :param maxgen: maximum number of GA iterations
    :param nowgen: current iteration number
    :return: the population after mutation
    """
    count = 0
    while True:
        # randomly pick a chromosome that may mutate
        seek = random.uniform(0, 1)
        while seek == 1:
            seek = random.uniform(0, 1)
        index = int(seek * size)
        # decide whether it actually mutates
        flag = random.uniform(0, 1)
        if p_mutate >= flag:
            # pick the gene position to mutate
            seek1 = random.uniform(0, 1)
            while seek1 == 1:
                seek1 = random.uniform(0, 1)
            pos = int(seek1 * chrom_len)
            # perform the (non-uniform) mutation
            seek3 = random.uniform(0, 1)
            # the later the iteration, the closer fg gets to 0, so the mutation step shrinks
            fg = pow(seek3 * (1 - nowgen / maxgen), 2)
            if seek3 > 0.5:
                chrom_sum[index][pos] = round(chrom_sum[index][pos] +
                                              (bound[pos][1] - chrom_sum[index][pos]) * fg, 3)
            else:
                chrom_sum[index][pos] = round(chrom_sum[index][pos] -
                                              (chrom_sum[index][pos] - bound[pos][0]) * fg, 3)
            count = count + 1
        else:
            count = count + 1
        if count == size:
            break
    return chrom_sum
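The step factor fg = (seek3 * (1 - nowgen / maxgen)) ** 2 is a simple non-uniform mutation: early generations may move a gene a long way toward its bound, while late generations barely perturb it. A quick illustration with seek3 fixed at 0.8 (the real code draws it at random):

# How the mutation step factor shrinks over the generations (seek3 fixed at 0.8 for illustration)
maxgen = 30
seek3 = 0.8
for nowgen in (1, 10, 20, 30):
    fg = (seek3 * (1 - nowgen / maxgen)) ** 2
    print(nowgen, round(fg, 4))   # roughly 0.598, 0.2844, 0.0711, 0.0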

chrom_cross.py implements the crossover operator.

# Crossover operator
import random
import chrom_test

def cross(chrom_sum, size, p_cross, chrom_len, bound):
    """
    :param chrom_sum: the population, a 2-D list of chromosomes
    :param size: population size, i.e. the number of chromosomes
    :param p_cross: crossover probability
    :param chrom_len: chromosome length, i.e. the number of genes per chromosome
    :param bound: allowed value range of each gene
    :return: the population after crossover
    """
    count = 0
    while True:
        # step 1: pick the two chromosomes that may cross over
        seek1 = random.uniform(0, 1)
        seek2 = random.uniform(0, 1)
        while seek1 == 0 or seek2 == 0 or seek1 == 1 or seek2 == 1:
            seek1 = random.uniform(0, 1)
            seek2 = random.uniform(0, 1)
        # index_1 and index_2 are the indices of the selected individuals
        index_1 = int(seek1 * size)
        index_2 = int(seek2 * size)
        if index_1 == index_2:
            if index_2 == size - 1:
                index_2 = index_2 - 1
            else:
                index_2 = index_2 + 1
        # step 2: decide whether crossover happens
        flag = random.uniform(0, 1)
        while flag == 0:
            flag = random.uniform(0, 1)
        if p_cross >= flag:
            # step 3: arithmetic crossover at a random gene position
            p_pos = random.uniform(0, 1)
            while p_pos == 0 or p_pos == 1:
                p_pos = random.uniform(0, 1)
            pos = int(p_pos * chrom_len)
            var1 = chrom_sum[index_1][pos]
            var2 = chrom_sum[index_2][pos]
            pick = random.uniform(0, 1)
            chrom_sum[index_1][pos] = round((1 - pick) * var1 + pick * var2, 3)
            chrom_sum[index_2][pos] = round(pick * var1 + (1 - pick) * var2, 3)
            # only count the pair if both offspring stay within the gene bounds
            if chrom_test.test(chrom_sum[index_1], bound) and chrom_test.test(chrom_sum[index_2], bound):
                count = count + 1
            else:
                continue
        else:
            count = count + 1
        if count == size:
            break
    return chrom_sum

chrom_select.py implements the selection operator.

# Selection operator
import numpy as np
import random

def select(chrom_sum, fitness_ls):
    """
    :param chrom_sum: the population
    :param fitness_ls: fitness value of each chromosome
    :return: the updated population
    """
    fitness_ls = np.array(fitness_ls, dtype=np.float64)
    sum_fitness_ls = np.sum(fitness_ls, dtype=np.float64)
    P_inh = []
    M = len(fitness_ls)
    # selection probability of each chromosome, proportional to its fitness
    for i in range(M):
        P_inh.append(fitness_ls[i] / sum_fitness_ls)
    # accumulate the probabilities into a cumulative distribution
    for i in range(len(P_inh) - 1):
        P_temp = P_inh[i] + P_inh[i + 1]
        P_inh[i + 1] = round(P_temp, 2)
    P_inh[-1] = 1
    # roulette-wheel selection
    account = []
    for i in range(M):
        rand = random.random()
        for j in range(len(P_inh)):
            if rand <= P_inh[j]:
                account.append(j)
                break
            else:
                continue
    # build the new population from the selected indices
    new_chrom_sum = []
    for i in account:
        new_chrom_sum.append(chrom_sum[i])
    return new_chrom_sum
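For reference, the same fitness-proportional (roulette-wheel) selection can be written with np.random.choice, which builds the cumulative distribution internally; a sketch under the assumption that every fitness value is positive (as intended here, since fitness = 10 - SSE), reusing the numpy import already in chrom_select.py:

# Alternative roulette-wheel selection using np.random.choice (assumes all fitness values are positive)
def select_np(chrom_sum, fitness_ls):
    fitness = np.asarray(fitness_ls, dtype=np.float64)
    probs = fitness / fitness.sum()
    idx = np.random.choice(len(chrom_sum), size=len(chrom_sum), p=probs)
    return [chrom_sum[i] for i in idx]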

data_prepare.py prepares the data.

# Data preparation
import numpy as np
import pandas as pd

def Data_loader():
    # file paths
    ENU_measure_path = "18-10-25至19-3-25三方向位移數(shù)據(jù).xlsx"
    t_path = "天氣數(shù)據(jù).xls"
    M_path = "data.csv"
    # three-direction displacement data
    df_1 = pd.read_excel(ENU_measure_path)
    ENU_df = pd.DataFrame(df_1)
    ENU_E = ENU_df["E/m"]
    ENU_E = np.array(ENU_E)
    ENU_N = ENU_df["N/m"]
    ENU_N = np.array(ENU_N)
    ENU_U = ENU_df["U/m"]
    ENU_U = np.array(ENU_U)
    ENU_R = ENU_df['R/m']
    ENU_R = np.array(ENU_R)

    df_2 = pd.read_excel(t_path)
    t_df = pd.DataFrame(df_2)
    # daily maximum temperature
    max_tem = t_df["bWendu"]
    max_tem_ls = []
    for i in range(len(max_tem)):
        temp = str(max_tem[i])
        temp = temp.replace("℃", "")
        max_tem_ls.append(float(temp))
    max_tem = np.array(max_tem_ls)
    # daily minimum temperature
    min_tem = t_df["yWendu"]
    min_tem_ls = []
    for i in range(len(min_tem)):
        temp = str(min_tem[i])
        temp = temp.replace("℃", "")
        min_tem_ls.append(float(temp))
    min_tem = np.array(min_tem_ls)
    # weather data
    tianqi = t_df["Tian_Qi"]
    tianqi = np.array(tianqi)
    # wind data
    Feng = t_df["Feng"]
    Feng = np.array(Feng)
    # rainfall data
    rain = t_df["rainfall"]
    rain = np.array(rain)
    # irrigation data
    guangai = t_df["guangai"]
    guangai = np.array(guangai)
    # date strings
    namels = t_df["ymd"]
    name_ls = []
    for i in range(len(namels)):
        temp = str(namels[i])
        temp = temp.replace(" 00:00:00", "")
        name_ls.append(str(temp))
    # read the other file: crack-gauge and GNSS monitoring data
    df_3 = pd.read_csv(M_path)
    M_df = pd.DataFrame(df_3)
    M_data = M_df["Measurerel"]
    R_data = M_df["R"]
    M_data = np.array(M_data)
    R_data = np.array(R_data)

    return [ENU_R, M_data, R_data, ENU_U, ENU_E, ENU_N, max_tem, min_tem, name_ls]

And finally, the main program:

# Main program of the improved algorithm
import sys
import chrom_code  # gene encoding module
import chrom_mutate  # mutation operator module
import chrom_cross  # crossover operator module
import chrom_select  # selection operator module
import chrom_fitness  # chromosome fitness module
import data_prepare  # data preparation module
import BP_network  # BP neural network module
import torch
import torch.nn.functional as F
from torchvision.transforms import transforms
import numpy as np
import matplotlib.pyplot as plt
import time

plt.rcParams['font.sans-serif'] = ['SimHei']
plt.rcParams['axes.unicode_minus'] = False

# ----- parameter settings -----
epochs = 300  # maximum number of neural-network iterations
learning_rate = 0.01  # learning rate
n_feature = 6  # number of input nodes
n_hidden = 9  # number of hidden nodes
n_output = 1  # number of output nodes

chrom_len = n_feature * n_hidden + n_hidden + n_hidden * n_output + n_output  # chromosome length
size = 15  # population size
bound = np.ones((chrom_len, 2))
sz = np.array([[-1, 0], [0, 1]])
bound = np.dot(bound, sz)  # allowed range of each gene: [-1, 1]
p_cross = 0.4  # crossover probability
p_mutate = 0.01  # mutation probability
maxgen = 30  # maximum number of GA iterations

# Data preparation
# ========================================= #
data_set = data_prepare.Data_loader()
displace = data_set[1]
name_ls = data_set[-1]
in_train_data = []
in_test_data = []

# sample allocation
train_num = 120
test_num = len(displace) - train_num - n_feature

for i in range(len(displace)):
    temp = []
    if i <= train_num - 1:  # split the samples into training and prediction data
        temp = [round(displace[i + j], 5) for j in range(n_feature)]
        in_train_data.append(temp)
    else:
        temp = [round(displace[i + j], 5) for j in range(n_feature)]
        in_test_data.append(temp)
    if i == len(displace) - n_feature - 1:
        break

# format conversion
in_train_data = np.array(in_train_data)
in_test_data = np.array(in_test_data)
# data split for modelling and prediction
out_train_data = displace[n_feature:train_num + n_feature]
out_test_data = displace[train_num + n_feature:len(displace)]

# test output
# print(in_train_data)
# print(out_train_data)
# print(in_test_data)
# print(out_test_data)
# print(train_num)
# print(test_num)

# tensor conversion and normalization
tensor_tran = transforms.ToTensor()
# input data for training
in_train_data = tensor_tran(in_train_data).to(torch.float)
in_train_data = F.normalize(in_train_data)
in_train_data = in_train_data.reshape(train_num, n_feature)
# input data for prediction
in_test_data = tensor_tran(in_test_data).to(torch.float)
in_test_data = F.normalize(in_test_data)
in_test_data = in_test_data.reshape(test_num, n_feature)
# output data for training
out_train_data = out_train_data.reshape(len(out_train_data), 1)
out_train_data = tensor_tran(out_train_data).to(torch.float)
un_norm1 = out_train_data[0][0]
out_train_data = F.normalize(out_train_data)
norm1 = out_train_data[0][0]
out_train_data = out_train_data.reshape(train_num, n_output)
fanshu_train = round(float(un_norm1 / norm1), 4)  # factor used to undo the normalization of the training targets
# output data used to check the predictions
out_test_data = out_test_data.reshape(len(out_test_data), 1)
out_test_data = tensor_tran(out_test_data).to(torch.float)
un_norm = out_test_data[0][0]   # before normalization
out_test_data = F.normalize(out_test_data)
norm = out_test_data[0][0]  # after normalization
out_test_data = out_test_data.reshape(test_num, n_output)
fanshu = round(float(un_norm / norm), 4)  # factor used to undo the normalization of the test targets

# data for modelling and prediction
x_train = in_train_data
y_train = out_train_data
x_test = in_test_data
y_label = out_test_data
# ========================================== #

chrom_sum = []  # the population, a collection of chromosomes
for i in range(size):
    chrom_sum.append(chrom_code.code(chrom_len, bound))
account = 0  # GA iteration counter
best_fitness_ls = []  # best fitness of each generation
ave_fitness_ls = []  # average fitness of each generation
best_code = []  # chromosome with the highest fitness after the iterations

# fitness evaluation
fitness_ls = []
for i in range(size):
    fitness = chrom_fitness.calculate_fitness(chrom_sum[i], n_feature, n_hidden, n_output,
                                              epochs, learning_rate, x_train, y_train)
    fitness_ls.append(fitness)
# record the best and average fitness of this generation
fitness_array = np.array(fitness_ls).flatten()
fitness_array_sort = fitness_array.copy()
fitness_array_sort.sort()
best_fitness = fitness_array_sort[-1]
best_fitness_ls.append(best_fitness)
ave_fitness_ls.append(fitness_array.sum() / size)

while True:
    # selection operator
    chrom_sum = chrom_select.select(chrom_sum, fitness_ls)
    # crossover operator
    chrom_sum = chrom_cross.cross(chrom_sum, size, p_cross, chrom_len, bound)
    # mutation operator
    chrom_sum = chrom_mutate.mutate(chrom_sum, size, p_mutate, chrom_len, bound, maxgen, account + 1)
    # fitness evaluation
    fitness_ls = []
    for i in range(size):
        fitness = chrom_fitness.calculate_fitness(chrom_sum[i], n_feature, n_hidden, n_output,
                                                  epochs, learning_rate, x_train, y_train)
        fitness_ls.append(fitness)
    # record the best and average fitness of this generation
    fitness_array = np.array(fitness_ls).flatten()
    fitness_array_sort = fitness_array.copy()
    fitness_array_sort.sort()
    best_fitness = fitness_array_sort[-1]  # best fitness value of this generation
    best_fitness_ls.append(best_fitness)
    ave_fitness_ls.append(fitness_array.sum() / size)
    # increment the counter
    account = account + 1
    if account == maxgen:
        index = fitness_ls.index(max(fitness_ls))  # index of the maximum fitness
        best_code = chrom_sum[index]  # the corresponding chromosome
        break

# extract the parameters from the best chromosome
hidden_weight = best_code[0:n_feature * n_hidden]
hidden_bias = best_code[n_feature * n_hidden:
                        n_feature * n_hidden + n_hidden]
output_weight = best_code[n_feature * n_hidden + n_hidden:
                          n_feature * n_hidden + n_hidden + n_hidden * n_output]
output_bias = best_code[n_feature * n_hidden + n_hidden + n_hidden * n_output:
                        n_feature * n_hidden + n_hidden + n_hidden * n_output + n_output]
# convert to tensors
tensor_tran = transforms.ToTensor()
hidden_weight = tensor_tran(np.array(hidden_weight).reshape((n_hidden, n_feature))).to(torch.float32)
hidden_bias = tensor_tran(np.array(hidden_bias).reshape((1, n_hidden))).to(torch.float32)
output_weight = tensor_tran(np.array(output_weight).reshape((n_output, n_hidden))).to(torch.float32)
output_bias = tensor_tran(np.array(output_bias).reshape((1, n_output))).to(torch.float32)
# reshape to the shapes expected by torch.nn.Linear
hidden_weight = hidden_weight.reshape((n_hidden, n_feature))
hidden_bias = hidden_bias.reshape(n_hidden)
output_weight = output_weight.reshape((n_output, n_hidden))
output_bias = output_bias.reshape(n_output)
GA = [hidden_weight, hidden_bias, output_weight, output_bias]

# build the models
BP_model = BP_network.BP_net(n_feature, n_hidden, n_output, GA)
ini_BP_model = BP_network.ini_BP_net(n_feature, n_hidden, n_output)
# network training
loss = BP_network.train(BP_model, epochs, learning_rate, x_train, y_train)
ini_loss = BP_network.train(ini_BP_model, epochs, learning_rate, x_train, y_train)
# fit on the training data
model_x = BP_model(x_train)
ini_model_x = ini_BP_model(x_train)
# network prediction
prediction = BP_model(x_test)
ini_prediction = ini_BP_model(x_test)

# undo the normalization of the modelling data
y_train = y_train.detach().numpy() * fanshu_train
model_x = model_x.detach().numpy() * fanshu_train
ini_model_x = ini_model_x.detach().numpy() * fanshu_train
# plot the modelling results
train_name_ls = name_ls[6:126]
xlabel = [i for i in range(0, 120, 14)]
plt.plot(y_train, markersize=4, marker='.', label="measured", c='r')
plt.plot(model_x, markersize=4, marker='.', label="GA-BP fit", c='b')
plt.title("GA-BP modelling result")
plt.ylabel("cumulative crack width (mm)")
plt.xticks(xlabel, [train_name_ls[i] for i in xlabel], rotation=25)
plt.grid(linestyle='-.')  # dash-dot grid
plt.legend()

f2 = plt.figure()
plt.plot(y_train, markersize=4, marker='.', label="measured", c='r')
plt.plot(ini_model_x, markersize=4, marker='.', label="BP fit", c='g')
plt.title("BP modelling result")
plt.ylabel("cumulative crack width (mm)")
plt.xticks(xlabel, [train_name_ls[i] for i in xlabel], rotation=25)
plt.grid(linestyle='-.')
plt.legend()

# convert the prediction tensors to NumPy arrays
GABP_prediction = prediction.detach().numpy()
BP_prediction = ini_prediction.detach().numpy()
y_label = y_label.detach().numpy()
# undo the normalization of the prediction data
GABP_prediction = GABP_prediction * fanshu
BP_prediction = BP_prediction * fanshu
y_label = y_label * fanshu

# sum of squared errors of the predictions (the function name is kept from the original)
def get_MSE(argu1, argu2):
    if len(argu1) != len(argu2):
        return 0
    error = 0
    for i in range(len(argu1)):
        error = error + pow((argu1[i] - argu2[i]), 2)
    error = float(error[0])
    return round(error, 5)


error_BP = get_MSE(y_label, BP_prediction)
error_GA_BP = get_MSE(y_label, GABP_prediction)
print("Prediction error (SSE) of the BP algorithm:", error_BP)
print("Prediction error (SSE) of the GA-BP algorithm:", error_GA_BP)

# write the run settings and results to a log file
f = open("log.txt", 'a', encoding='UTF-8')     # open the file in append mode
f.write("Run time: " + str(time.ctime()) + '\n')
f.write("Training data length: " + str(train_num) + '\n'
        + "Test data length: " + str(test_num) + '\n')
f.write("Network structure (input-hidden-output): {}-{}-{}\n".format(n_feature, n_hidden, n_output))
f.write("Best weights found by the GA iterations: " + str(best_code) + "\n")
f.write("====== Prediction results ======\nMeasured values: " + str(y_label.flatten()) + '\n')
f.write("BP predictions: " + str(BP_prediction.flatten()) + "\n"
        + "GA-BP predictions: " + str(GABP_prediction.flatten()) + '\n')
f.write("-->> BP prediction error (SSE): " + str(error_BP) + '\n'
        + "-->> GA-BP prediction error (SSE): " + str(error_GA_BP) + '\n\n')
f.close()

# plot the predictions
test_name_ls = name_ls[126:152]
xlabel2 = [i for i in range(0, 26, 4)]

f3 = plt.figure()
plt.plot(y_label, markersize=4, marker='.', label="measured", c='r')
plt.plot(GABP_prediction, markersize=4, marker='*', label="GA-BP prediction", c='b')
plt.plot(BP_prediction, markersize=4, marker='^', label="BP prediction", c='g')
plt.title("Prediction comparison")
plt.ylabel("cumulative crack width (mm)")
plt.xticks(xlabel2, [test_name_ls[i] for i in xlabel2], rotation=20)
plt.legend()
plt.grid(linestyle='-.')

f4 = plt.figure()
plt.plot(y_label, markersize=4, marker='.', label="measured", c='r')
plt.plot(BP_prediction, markersize=4, marker='^', label="BP prediction", c='g')
plt.title("BP prediction")
plt.ylabel("cumulative crack width (mm)")
plt.xticks(xlabel2, [test_name_ls[i] for i in xlabel2], rotation=20)
plt.legend()
plt.grid(linestyle='-.')

f5 = plt.figure()
plt.plot(y_label, markersize=4, marker='.', label="measured", c='r')
plt.plot(GABP_prediction, markersize=4, marker='*', label="GA-BP prediction", c='b')
plt.title("GA-BP prediction")
plt.ylabel("cumulative crack width (mm)")
plt.xticks(xlabel2, [test_name_ls[i] for i in xlabel2], rotation=20)
plt.legend()
plt.grid(linestyle='-.')

plt.show()

The comparison does show an improvement:

[Figure: comparison of BP and GA-BP prediction results]

Resources:

Link: https://pan.baidu.com/s/1ZiqgN98bhnyEdoQxuDB3SQ?pwd=ervf
Extraction code: ervf


My knowledge and ability are limited; criticism and corrections are welcome!

Let's keep learning together!

