Contents
Source: http://www.zghlxwxcb.cn/news/detail-599918.html
1. ResNet Residual Networks
1.1 What Is ResNet
1.2 ResNet Network Configurations
1.3 The ResNet-50 Architecture
1.3.1 Initial Convolution and Pooling Layers
1.3.2 Residual Blocks: Building a Deep Residual Network
1.3.3 ResNet Body: Stacking Residual Blocks
1.4 Transfer Learning in Practice: Cat-vs-Dog Classification
1.4.1 Transfer Learning
1.4.2 Model Training
1.4.3 Model Prediction
1. ResNet Residual Networks
1.1 What Is ResNet
Deep learning has achieved major breakthroughs in image classification, object detection, speech recognition, and other fields, but as networks grow deeper, the problems of vanishing and exploding gradients become severe. With more layers, the gradient signal shrinks during backpropagation, making the network hard to train to convergence; exploding gradients, conversely, produce oversized parameter updates and likewise prevent convergence.
To address this, ResNet introduced an innovative idea: the residual block. A residual block lets the network learn a residual mapping rather than a direct mapping, which alleviates the vanishing-gradient problem and makes very deep networks much easier to train.
The figure below shows a basic residual block. A skip connection carries a layer's input past one or more layers and adds it to the output of a deeper layer just before the activation function, so both are passed through the activation together.
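Written as code, the idea is simply out = activation(F(x) + x). Below is a minimal sketch of such a block (a toy single-convolution residual branch for illustration, not the full ResNet-50 bottleneck shown later):

```python
import torch
import torch.nn as nn

class TinyResidualBlock(nn.Module):
    """Toy residual block: output = ReLU(F(x) + x)."""
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.bn(self.conv(x))  # residual branch F(x)
        out = out + x                # skip connection: add the input back
        return self.relu(out)        # activation applied after the addition

x = torch.randn(2, 16, 8, 8)
y = TinyResidualBlock(16)(x)
print(y.shape)  # torch.Size([2, 16, 8, 8])
```

Because the output shape equals the input shape, such blocks can be stacked freely, and the identity path gives gradients a direct route back through the network.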
1.2 ResNet Network Configurations
The common network configurations are shown in the figure below:
1.3 The ResNet-50 Architecture
ResNet-50 is a deep residual network with 50 weight layers (49 convolutions plus one final fully connected layer). The full architecture looks complex, but it breaks down into the following modules:
1.3.1 Initial Convolution and Pooling Layers
import torch
import torch.nn as nn

class ResNet50(nn.Module):
    def __init__(self, num_classes=1000):
        super(ResNet50, self).__init__()
        # Stem: 7x7/2 convolution, batch norm, ReLU, then 3x3/2 max-pooling
        self.conv1 = nn.Conv2d(in_channels=3, out_channels=64, kernel_size=7, stride=2, padding=3, bias=False)
        self.bn1 = nn.BatchNorm2d(64)
        self.relu = nn.ReLU(inplace=True)
        self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
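These stem layers shrink a 224x224 input to 56x56 while raising the channel count to 64. A quick standalone sketch, using the same layer parameters, verifies the shapes:

```python
import torch
import torch.nn as nn

# Stem of ResNet-50: 7x7 conv with stride 2, then 3x3 max-pool with stride 2.
stem = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3, bias=False),
    nn.BatchNorm2d(64),
    nn.ReLU(inplace=True),
    nn.MaxPool2d(kernel_size=3, stride=2, padding=1),
)

x = torch.randn(1, 3, 224, 224)  # one 224x224 RGB image
print(stem(x).shape)             # torch.Size([1, 64, 56, 56])
```

Each stride-2 layer halves the spatial size: 224 → 112 after the convolution, 112 → 56 after the pooling.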
1.3.2 Residual Blocks: Building a Deep Residual Network
class ResidualBlock(nn.Module):
    expansion = 4  # the bottleneck's last 1x1 conv widens channels 4x (used by _make_layer)

    def __init__(self, in_channels, out_channels, stride=1, downsample=None):
        super(ResidualBlock, self).__init__()
        self.conv1 = nn.Conv2d(in_channels, out_channels, kernel_size=1, stride=stride, bias=False)
        self.bn1 = nn.BatchNorm2d(out_channels)
        self.conv2 = nn.Conv2d(out_channels, out_channels, kernel_size=3, stride=1, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_channels)
        self.conv3 = nn.Conv2d(out_channels, out_channels * 4, kernel_size=1, stride=1, bias=False)
        self.bn3 = nn.BatchNorm2d(out_channels * 4)
        self.relu = nn.ReLU(inplace=True)
        self.downsample = downsample

    def forward(self, x):
        identity = x
        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu(out)
        out = self.conv2(out)
        out = self.bn2(out)
        out = self.relu(out)
        out = self.conv3(out)
        out = self.bn3(out)
        if self.downsample is not None:
            # project the shortcut so its shape matches the main branch
            identity = self.downsample(x)
        out += identity
        out = self.relu(out)
        return out
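To see that the shapes work out, here is a small standalone check of the bottleneck design: when a stage's first block halves H and W and widens the channels 4x, a 1x1 projection on the shortcut keeps the two branches addable. (The compact Bottleneck below re-implements the same design with the branch collapsed into an nn.Sequential; its shapes match the ResidualBlock above.)

```python
import torch
import torch.nn as nn

class Bottleneck(nn.Module):
    """Compact bottleneck block, for shape checking."""
    expansion = 4  # the last 1x1 conv widens channels 4x

    def __init__(self, in_channels, out_channels, stride=1, downsample=None):
        super().__init__()
        self.branch = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, 1, stride=stride, bias=False),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_channels, out_channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_channels, out_channels * 4, 1, bias=False),
            nn.BatchNorm2d(out_channels * 4),
        )
        self.downsample = downsample
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        identity = x if self.downsample is None else self.downsample(x)
        return self.relu(self.branch(x) + identity)

# First block of a stage that halves H,W (stride=2) and goes 256 -> 512 channels:
# the shortcut needs a matching 1x1 projection.
down = nn.Sequential(
    nn.Conv2d(256, 512, 1, stride=2, bias=False),
    nn.BatchNorm2d(512),
)
block = Bottleneck(256, 128, stride=2, downsample=down)
x = torch.randn(1, 256, 56, 56)
print(block(x).shape)  # torch.Size([1, 512, 28, 28])
```

Without the projection, the 256-channel 56x56 identity could not be added to the 512-channel 28x28 branch output.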
1.3.3 ResNet Body: Stacking Residual Blocks
In ResNet-50, the full network is built by stacking many residual blocks. Each block transforms its input feature maps into richer ones, and stacking blocks lets the network extract information layer by layer along the depth dimension, yielding increasingly high-level semantic features. The code is as follows:
class ResNet50(nn.Module):
    def __init__(self, num_classes=1000):
        # ... stem layers from above ...
        self.in_channel = 64  # channel count entering the first stage (set by the stem)
        # stage conv2_x: 3 residual blocks
        self.layer1 = self._make_layer(ResidualBlock, 64, 3, stride=1)
        # stage conv3_x: 4 residual blocks
        self.layer2 = self._make_layer(ResidualBlock, 128, 4, stride=2)
        # stage conv4_x: 6 residual blocks
        self.layer3 = self._make_layer(ResidualBlock, 256, 6, stride=2)
        # stage conv5_x: 3 residual blocks
        self.layer4 = self._make_layer(ResidualBlock, 512, 3, stride=2)
The _make_layer method stacks the basic bottleneck residual blocks. The code is as follows:
    def _make_layer(self, block, channel, block_num, stride=1):
        """
        block: the basic module to stack
        channel: filter count of the first conv in each stacked block; for ResNet-50 these are 64, 128, 256, 512
        block_num: number of blocks stacked in the current stage
        stride: stride of the stage's first block
        """
        downsample = None  # projection for the shortcut path
        # For ResNet-50's conv2_x, H and W need no /2 downsampling, but the channel count
        # grows 4x, so the shortcut channels must also grow 4x. For conv3_x, conv4_x, and
        # conv5_x, H and W are halved and the shortcut is widened 4x.
        if stride != 1 or self.in_channel != channel * block.expansion:
            downsample = nn.Sequential(
                # out_channels widens the shortcut 4x; stride halves H,W when needed
                nn.Conv2d(in_channels=self.in_channel, out_channels=channel * block.expansion,
                          kernel_size=1, stride=stride, bias=False),
                nn.BatchNorm2d(num_features=channel * block.expansion))
        layers = []  # the blocks of each convi_x stage (i = 2..5) are collected here
        # only the stage's first block needs downsample and a non-default stride
        layers.append(block(self.in_channel, channel, stride=stride, downsample=downsample))
        self.in_channel = channel * block.expansion  # the next _make_layer call sees the 4x-widened channels
        for _ in range(1, block_num):  # stack the remaining block_num - 1 blocks
            layers.append(block(self.in_channel, channel))
        return nn.Sequential(*layers)  # '*' unpacks the list into positional arguments
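The channel bookkeeping of _make_layer can be traced without building any layers. It also shows where the name ResNet-50 comes from: 3 + 4 + 6 + 3 = 16 bottlenecks of 3 convolutions each is 48 conv layers; adding the stem convolution and the final fully connected layer makes 50.

```python
# Trace the in/out channels of the four stages, mirroring the
# stride/channel condition used by _make_layer above.
expansion = 4    # ResidualBlock.expansion
in_channel = 64  # channels entering layer1, right after the stem

for name, channel, block_num, stride in [
        ("layer1", 64, 3, 1), ("layer2", 128, 4, 2),
        ("layer3", 256, 6, 2), ("layer4", 512, 3, 2)]:
    needs_projection = stride != 1 or in_channel != channel * expansion
    out_channel = channel * expansion
    print(f"{name}: {in_channel} -> {out_channel} channels, "
          f"{block_num} blocks, shortcut projection: {needs_projection}")
    in_channel = out_channel

print(in_channel)  # 2048, the channel count fed to the final classifier
```

Every stage's first block needs a projection shortcut: layer1 because the channels change (64 to 256) even though the stride is 1, and the later stages because of both the stride and the channel change.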
1.4 Transfer Learning in Practice: Cat-vs-Dog Classification
1.4.1 Transfer Learning
Transfer learning is a machine learning and deep learning technique that carries knowledge or features learned on one task over to a related task, speeding up training and often improving performance. In practice we usually take a model pretrained on a large-scale dataset (the source-task model) and reuse its weights for the new task (the target task), rather than training a brand-new model from scratch.
The core idea is that, before tackling a new task, we can borrow generic features or knowledge from tasks that have already been learned. The source-task model, having been trained thoroughly on a large dataset, has learned many generic features such as edge detectors and texture filters, and these are useful for a wide range of tasks.
1.4.2 Model Training
First we need a dataset for cat-vs-dog binary classification. One can be downloaded from Kaggle; it contains a large number of cat and dog images.
After downloading, split the data into a training set and a test set. Name the training folder train and create two subfolders inside it, cat and dog, each holding images of that class; the test folder, named test, is organized the same way. We then train a ResNet-50 model on the GPU, save it, and after training measure its prediction accuracy on the test set.
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision.transforms as transforms
from torch.utils.data import DataLoader
from torchvision.datasets import ImageFolder
from torchvision.models import resnet50

# Fix the random seed for reproducibility
torch.manual_seed(42)

# Hyperparameters
batch_size = 32
learning_rate = 0.001
num_epochs = 10

# Data transforms
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])

# Load the datasets
train_dataset = ImageFolder("train", transform=transform)
test_dataset = ImageFolder("test", transform=transform)
train_loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
test_loader = DataLoader(test_dataset, batch_size=batch_size)

# Load a pretrained ResNet-50
model = resnet50(pretrained=True)
num_ftrs = model.fc.in_features
model.fc = nn.Linear(num_ftrs, 2)  # replace the final fully connected layer for binary classification

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)

# Loss function and optimizer
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=learning_rate, momentum=0.9)

# Train the model
total_step = len(train_loader)
for epoch in range(num_epochs):
    for i, (images, labels) in enumerate(train_loader):
        images = images.to(device)
        labels = labels.to(device)
        # Forward pass
        outputs = model(images)
        loss = criterion(outputs, labels)
        # Backward pass and optimization
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        if (i + 1) % 100 == 0:
            print(f"Epoch [{epoch+1}/{num_epochs}], Step [{i+1}/{total_step}], Loss: {loss.item()}")

torch.save(model, 'model/c.pth')

# Evaluate on the test set
model.eval()
with torch.no_grad():
    correct = 0
    total = 0
    for images, labels in test_loader:
        images = images.to(device)
        labels = labels.to(device)
        outputs = model(images)
        _, predicted = torch.max(outputs.data, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()
print(f"Accuracy on test images: {(correct / total) * 100}%")
1.4.3 Model Prediction
First we load the saved model, then run a prediction on a single image and print the result.
import matplotlib.pyplot as plt
import torchvision.transforms as transforms
import torch
from PIL import Image
import os

os.environ['KMP_DUPLICATE_LIB_OK'] = 'TRUE'

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = torch.load('model/c.pth')
print(model)
model.to(device)

test_image_path = 'test/dogs/dog.4001.jpg'  # replace with your test image path
image = Image.open(test_image_path)

# Same preprocessing as during training
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
input_tensor = transform(image).unsqueeze(0).to(device)  # add a batch dimension and move to the device

# Set the model to evaluation mode
model.eval()
with torch.no_grad():
    outputs = model(input_tensor)
    _, predicted = torch.max(outputs, 1)
    predicted_label = predicted.item()

label = ['cat', 'dog']  # ImageFolder assigns indices in alphabetical folder order
print(label[predicted_label])
plt.axis('off')
plt.imshow(image)
plt.show()
運(yùn)行截圖
至此這篇文章到此結(jié)束。