Concepts
L1 Regularization (Lasso): L1 regularization adds the sum of the absolute values of the parameters to the loss function as a penalty term, driving some parameters exactly to zero and thereby performing feature selection. It is well suited to problems that call for sparse features.
L2 Regularization (Ridge): L2 regularization adds the sum of the squared parameters to the loss function as a penalty term, keeping parameter values small. It is useful for shrinking parameter magnitudes and mitigating correlations between parameters.
Elastic Net Regularization: The elastic net combines L1 and L2 regularization and inherits the strengths of both. It is suitable when feature selection and parameter shrinkage are wanted at the same time. All three penalties are written out just below and demonstrated in the code section.
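Concretely, writing $\lambda$ (and $\lambda_1$, $\lambda_2$ for the elastic net) for the regularization strength and $w_i$ for the model weights, the three penalty terms added to the loss are:

$$\Omega_{\mathrm{L1}}(w)=\lambda\sum_i |w_i|,\qquad \Omega_{\mathrm{L2}}(w)=\lambda\sum_i w_i^2,\qquad \Omega_{\mathrm{EN}}(w)=\lambda_1\sum_i |w_i|+\lambda_2\sum_i w_i^2$$

These correspond directly to `keras.regularizers.l1`, `l2`, and `l1_l2` used in the code below.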
Data Augmentation: Data augmentation expands the training set by applying random transformations to the training data, providing more samples. This helps the model generalize better across variations in the data (a minimal sketch follows below).
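As an illustration, here is a minimal sketch of data augmentation using Keras preprocessing layers for image data. Note that the iris dataset used later in this article is tabular, so this block is purely illustrative:

import tensorflow as tf
from tensorflow.keras import layers

# Augmentation pipeline: random horizontal flips and small rotations.
# These layers are active only when called with training=True; at
# inference time they act as identity ops.
data_augmentation = tf.keras.Sequential([
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.1),  # rotate by up to ±10% of a full turn
])

# Example usage on a batch of images shaped (batch, height, width, channels):
# augmented = data_augmentation(images, training=True)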
Early Stopping: Early stopping is a simple regularization method that monitors performance on a validation set during training and stops training once performance stops improving, preventing the model from overfitting the training data (demonstrated in the code section below).
Batch Normalization: Batch normalization standardizes activations within each mini-batch. It helps stabilize network training and reduces internal covariate shift, and it can also be viewed as a form of regularization (a placement sketch follows below).
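A minimal sketch of where a BatchNormalization layer would sit in a small Dense network; the input shape of 4 is chosen to match the iris features used in the code section:

from tensorflow import keras
from tensorflow.keras import layers

model_bn = keras.Sequential([
    layers.Input(shape=(4,)),
    layers.Dense(64),
    layers.BatchNormalization(),  # normalize pre-activations per mini-batch
    layers.Activation('relu'),
    layers.Dense(3, activation='softmax'),
])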
Weight Decay: Weight decay adds the sum of the squared weights (or the sum of their absolute values) to the loss function to limit the size of the parameters. With plain SGD this is equivalent to the L2 kernel regularizer used below.
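Besides a kernel regularizer, weight decay can also be applied directly through the optimizer. A minimal sketch, assuming a TensorFlow version (2.11 or later) in which keras.optimizers.AdamW is available; the decay value here is an arbitrary illustration:

from tensorflow import keras

# AdamW applies decoupled weight decay to the parameters at each update
# step instead of adding a penalty term to the loss.
optimizer = keras.optimizers.AdamW(learning_rate=1e-3, weight_decay=1e-4)
# model.compile(optimizer=optimizer, loss='categorical_crossentropy',
#               metrics=['accuracy'])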
DropConnect: Similar to Dropout, DropConnect randomly severs the connections between neurons and their inputs (i.e., it zeroes individual weights) rather than zeroing neuron outputs (a sketch follows below).
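DropConnect is not built into Keras. Below is a minimal, hypothetical sketch of a Dense-like layer that drops individual connections during training; the class name and drop_rate parameter are illustrative, not a standard API:

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

class DropConnectDense(layers.Layer):
    """Dense layer that randomly zeroes individual weights while training."""
    def __init__(self, units, drop_rate=0.5, activation=None, **kwargs):
        super().__init__(**kwargs)
        self.units = units
        self.drop_rate = drop_rate
        self.activation = keras.activations.get(activation)

    def build(self, input_shape):
        self.w = self.add_weight(shape=(input_shape[-1], self.units),
                                 initializer='glorot_uniform', trainable=True)
        self.b = self.add_weight(shape=(self.units,),
                                 initializer='zeros', trainable=True)

    def call(self, inputs, training=False):
        w = self.w
        if training:
            # Random binary mask over the weight matrix: each connection is
            # kept with probability 1 - drop_rate, then rescaled (inverted
            # dropout style) so expected activations are unchanged.
            keep = tf.random.uniform(tf.shape(w)) >= self.drop_rate
            w = w * tf.cast(keep, w.dtype) / (1.0 - self.drop_rate)
        return self.activation(tf.matmul(inputs, w) + self.b)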
Code Implementation
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from sklearn.model_selection import train_test_split
from sklearn.datasets import load_iris
from sklearn.preprocessing import StandardScaler
# Load the iris dataset
data = load_iris()
X = data.data
y = data.target
# Preprocess: standardize features and one-hot encode the labels
scaler = StandardScaler()
X = scaler.fit_transform(X)
y = keras.utils.to_categorical(y, num_classes=3)
# Split into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
# Define the model; `regularization` is an optional kernel regularizer
# applied to the two hidden layers
def build_model(regularization=None):
    model = keras.Sequential([
        layers.Input(shape=(X_train.shape[1],)),
        layers.Dense(64, activation='relu', kernel_regularizer=regularization),
        layers.Dense(32, activation='relu', kernel_regularizer=regularization),
        layers.Dense(3, activation='softmax')
    ])
    return model
# L1 regularization
model_l1 = build_model(keras.regularizers.l1(0.01))
model_l1.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model_l1.fit(X_train, y_train, epochs=50, batch_size=8, validation_split=0.1)
# L2 regularization
model_l2 = build_model(keras.regularizers.l2(0.01))
model_l2.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model_l2.fit(X_train, y_train, epochs=50, batch_size=8, validation_split=0.1)
# Elastic net regularization
model_elastic = build_model(keras.regularizers.l1_l2(l1=0.01, l2=0.01))
model_elastic.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model_elastic.fit(X_train, y_train, epochs=50, batch_size=8, validation_split=0.1)
# Early stopping
early_stopping = keras.callbacks.EarlyStopping(monitor='val_loss', patience=10, restore_best_weights=True)
model_early = build_model()
model_early.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model_early.fit(X_train, y_train, epochs=100, batch_size=8, validation_split=0.1, callbacks=[early_stopping])
# Evaluate the models on the held-out test set
print("L1 Regularization:")
model_l1.evaluate(X_test, y_test)
print("L2 Regularization:")
model_l2.evaluate(X_test, y_test)
print("Elastic Net Regularization:")
model_elastic.evaluate(X_test, y_test)
print("Early Stopping:")
model_early.evaluate(X_test, y_test)