1. Convolutional Neural Networks (CNN)
A Convolutional Neural Network (CNN) is a deep learning architecture used mainly for image recognition, image classification, and other computer vision tasks. It is a multi-layer neural network built from components such as convolutional layers, pooling layers, and fully connected layers.
The design of CNNs was inspired by the biological visual system, and their most important component is the convolutional layer. A convolutional layer convolves the input image with a set of small matrices called convolution kernels (or filters). The convolution can be pictured as a window sliding across the input image: at each position, the image patch under the window is multiplied element-wise with the kernel and the products are summed, producing one value of the output feature map. This process effectively extracts local features of the input image, such as edges and textures.
A pooling layer is then typically applied to reduce the spatial dimensions of the feature maps, cut the number of parameters in the model, and extract more abstract features. Common pooling operations include max pooling and average pooling, which take the maximum or the average of each local region, respectively.
Finally, one or more fully connected layers process the pooled features and map them to the output classes. Fully connected layers follow the traditional neural network structure, and their output is used for classification, regression, or other tasks.
CNNs perform very well on image processing tasks because they learn features automatically from raw pixels and can exploit large amounts of data to reach high accuracy. Over the past years, CNNs have produced remarkable breakthroughs on many tasks in computer vision and other fields, and have become a core part of deep learning.
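The sliding-window convolution and the pooling step described above can be sketched in plain NumPy. This is a minimal illustration of the mechanics, not how deep learning frameworks actually implement these layers, and the kernel values are chosen arbitrarily:

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Slide the kernel over the image ('valid' padding): multiply the patch
    under the window element-wise with the kernel and sum the products."""
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool2d(fmap, size=2):
    """Non-overlapping max pooling: keep the maximum of each size x size block."""
    h, w = fmap.shape
    fmap = fmap[:h - h % size, :w - w % size]
    return fmap.reshape(h // size, size, w // size, size).max(axis=(1, 3))

image = np.arange(36, dtype=float).reshape(6, 6)
kernel = np.array([[1.0, -1.0],
                   [1.0, -1.0]])       # a simple vertical-edge detector
fmap = conv2d_valid(image, kernel)     # shape (5, 5)
pooled = max_pool2d(fmap, size=2)      # shape (2, 2)
print(fmap.shape, pooled.shape)
```

Note how each stage shrinks the spatial size: the 2x2 kernel with 'valid' padding turns 6x6 into 5x5, and 2x2 max pooling halves that again.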
2. tf.keras.layers.Conv1D
tf.keras.layers.Conv1D(
    filters,
    kernel_size,
    strides=1,
    padding="valid",
    data_format="channels_last",
    dilation_rate=1,
    groups=1,
    activation=None,
    use_bias=True,
    kernel_initializer="glorot_uniform",
    bias_initializer="zeros",
    kernel_regularizer=None,
    bias_regularizer=None,
    activity_regularizer=None,
    kernel_constraint=None,
    bias_constraint=None,
    **kwargs
)
A 1D convolution layer (e.g. temporal convolution).
This layer creates a convolution kernel that is convolved with the layer input over a single spatial (or temporal) dimension to produce a tensor of outputs. If use_bias is True, a bias vector is created and added to the outputs. Finally, if activation is not None, it is applied to the outputs as well.
When using this layer as the first layer in a model, provide an input_shape argument (a tuple of integers or None), e.g. (10, 128) for a sequence of 10 vectors of 128 dimensions, or (None, 128) for a variable-length sequence of 128-dimensional vectors.
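As a quick check of these shape conventions, a small (arbitrarily chosen) batch of 10-step, 128-dimensional sequences passed through a Conv1D with filters=32, kernel_size=3, and the default "valid" padding comes out two steps shorter, with one channel per filter:

```python
import tensorflow as tf

layer = tf.keras.layers.Conv1D(filters=32, kernel_size=3)  # 'valid' padding by default
x = tf.random.normal((4, 10, 128))  # batch of 4 sequences, 10 steps, 128 features each
y = layer(x)
print(y.shape)  # (4, 8, 32): 10 - 3 + 1 = 8 output steps, 32 output channels
```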
3. Examples
3.1 A simple single-layer convolutional network
Define a one-dimensional convolution whose kernel has shape (2,) and whose input has shape (None, 1); there is no bias and filters is 1.
Define the input data and the convolution kernel, feed the input through the network, and print the result.
import numpy as np
import tensorflow as tf


def case1():
    # Create a Conv1D model
    model = tf.keras.Sequential([
        tf.keras.layers.Conv1D(filters=1, kernel_size=2, activation='linear', use_bias=False,
                               input_shape=(None, 1)),
    ])
    model.summary()

    # Input sequence and filter
    input_sequence = np.array([1, 2, 3, 4, 5, 6])
    filter_kernel = np.array([2, -1])

    # Reshape the input sequence and filter to fit Conv1D
    input_sequence = input_sequence.reshape(1, -1, 1)
    filter_kernel = filter_kernel.reshape(-1, 1, 1)

    # Set the weights of the Conv1D layer to the filter_kernel
    model.layers[0].set_weights([filter_kernel])

    # Perform 1D convolution
    output_sequence = model.predict(input_sequence).flatten()
    print("Input Sequence:", input_sequence.flatten(), "shape:", input_sequence.shape)
    print("Filter:", filter_kernel.flatten(), "shape:", filter_kernel.shape)
    print("Output Sequence:", output_sequence)


if __name__ == '__main__':
    case1()
Output
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv1d (Conv1D) (None, None, 1) 2
=================================================================
Total params: 2
Trainable params: 2
Non-trainable params: 0
_________________________________________________________________
1/1 [==============================] - 0s 121ms/step
Input Sequence: [1 2 3 4 5 6] shape: (1, 6, 1)
Filter: [ 2 -1] shape : (2, 1, 1)
Output Sequence: [0. 1. 2. 3. 4.]
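The output above can be checked by hand: Keras Conv1D computes a cross-correlation, so with kernel [2, -1] each output step is 2*x[t] - 1*x[t+1]. A plain-NumPy check:

```python
import numpy as np

x = np.array([1, 2, 3, 4, 5, 6])
w = np.array([2, -1])

# 'valid' cross-correlation: one dot product per window position
out = np.array([np.dot(x[t:t + 2], w) for t in range(len(x) - len(w) + 1)])
print(out)  # [0 1 2 3 4], matching the model output
```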
3.2 Custom activation function
To verify that the activation function is applied after the convolution, the following code was written. You can check this for yourself against the input and output.
import numpy as np
import tensorflow as tf
from tensorflow import keras


def case_custom_activation():
    # Input sequence and filter
    input_sequence = np.array([1, 2, 3, 4, 5, 6])
    filter_kernel = np.array([2, -1])

    # Reshape the input sequence and filter to fit Conv1D
    input_sequence = input_sequence.reshape(1, -1, 1)
    filter_kernel = filter_kernel.reshape(-1, 1, 1)

    def custom_activation(x):
        # return tf.square(tf.nn.tanh(x))
        return tf.square(x)

    # Create a Conv1D model
    model = keras.Sequential([
        keras.layers.Conv1D(filters=1, kernel_size=2, activation=custom_activation, use_bias=False,
                            input_shape=(None, 1)),
    ])
    model.summary()

    # Set the weights of the Conv1D layer to the filter_kernel
    model.layers[0].set_weights([filter_kernel])

    # Perform 1D convolution
    output_sequence = model.predict(input_sequence).flatten()
    print("Input Sequence:", input_sequence.flatten(), "shape:", input_sequence.shape)
    print("Filter:", filter_kernel.flatten(), "shape:", filter_kernel.shape)
    print("Output Sequence:", output_sequence)


if __name__ == '__main__':
    case_custom_activation()
Output
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv1d (Conv1D) (None, None, 1) 2
=================================================================
Total params: 2
Trainable params: 2
Non-trainable params: 0
_________________________________________________________________
1/1 [==============================] - 0s 57ms/step
Input Sequence: [1 2 3 4 5 6] shape: (1, 6, 1)
Filter: [ 2 -1] shape : (2, 1, 1)
Output Sequence: [ 0. 1. 4. 9. 16.]
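Consistent with the claim that the activation runs after the convolution, squaring the linear convolution outputs of example 3.1 reproduces this result:

```python
import numpy as np

x = np.array([1, 2, 3, 4, 5, 6])
w = np.array([2, -1])
conv = np.array([np.dot(x[t:t + 2], w) for t in range(5)])  # [0 1 2 3 4]
print(conv ** 2)  # [ 0  1  4  9 16], the model's output
```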
3.3 Verifying the bias
The only difference from the code above is that a bias is defined.
import numpy as np
import tensorflow as tf
from tensorflow import keras


def cnn1d_bias():
    # Input sequence and filter
    input_sequence = np.array([1, 2, 3, 4, 5, 6])
    filter_kernel = np.array([2, -1])
    bias = np.array([2])

    # Reshape the input sequence and filter to fit Conv1D
    input_sequence = input_sequence.reshape(1, -1, 1)
    filter_kernel = filter_kernel.reshape(-1, 1, 1)

    def custom_activation(x):
        # return tf.square(tf.nn.tanh(x))
        return tf.square(x)

    # Create a Conv1D model (use_bias defaults to True)
    model = keras.Sequential([
        keras.layers.Conv1D(filters=1, kernel_size=2, activation=custom_activation,
                            input_shape=(None, 1)),
    ])
    model.summary()
    print(model.layers[0].get_weights()[0].shape)  # kernel shape
    print(model.layers[0].get_weights()[1].shape)  # bias shape

    # Set the weights of the Conv1D layer to the filter_kernel and bias
    model.layers[0].set_weights([filter_kernel, bias])

    # Perform 1D convolution
    output_sequence = model.predict(input_sequence).flatten()
    print("Input Sequence:", input_sequence.flatten(), "shape:", input_sequence.shape)
    print("Filter:", filter_kernel.flatten(), "shape:", filter_kernel.shape)
    print("Output Sequence:", output_sequence)


if __name__ == '__main__':
    cnn1d_bias()
Output
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv1d (Conv1D) (None, None, 1) 3
=================================================================
Total params: 3
Trainable params: 3
Non-trainable params: 0
_________________________________________________________________
(2, 1, 1)
(1,)
1/1 [==============================] - 0s 60ms/step
Input Sequence: [1 2 3 4 5 6] shape: (1, 6, 1)
Filter: [ 2 -1] shape : (2, 1, 1)
Output Sequence: [ 4. 9. 16. 25. 36.]
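This confirms the order bias-then-activation: the bias is added to each convolution output before the custom squaring, i.e. each output step is (2*x[t] - x[t+1] + 2)^2. A NumPy check:

```python
import numpy as np

x = np.array([1, 2, 3, 4, 5, 6])
w = np.array([2, -1])
b = 2
conv = np.array([np.dot(x[t:t + 2], w) for t in range(5)])  # [0 1 2 3 4]
print((conv + b) ** 2)  # [ 4  9 16 25 36], matching the model output
```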