Preface
This post walks through CIFAR-10 image classification with PyTorch. It covers the CIFAR-10 dataset, environment setup, the experiment code, the results, and the problems I ran into. Two models are used: a basic network and a deeper VGG-style network, with the VGG-style network achieving noticeably better accuracy than the basic one.
I. The CIFAR-10 Dataset
CIFAR-10 consists of 60,000 color images at 32x32 resolution, split into 10 classes of 6,000 images each: 50,000 training images and 10,000 test images.
The dataset is divided into five training batches and one test batch, each containing 10,000 images. The test batch contains exactly 1,000 randomly selected images from each class. The training batches contain the remaining images in random order; an individual training batch may hold more images from one class than another, but taken together the training batches contain exactly 5,000 images from each class.
Below are the classes in the dataset, along with 10 random images from each.
As the figure above shows, CIFAR-10 covers ten classes: airplane, automobile, bird, cat, deer, dog, frog, horse, ship, and truck. The classes are completely mutually exclusive, so there is no overlap between automobiles and trucks: "automobile" includes sedans, SUVs, and the like, while "truck" includes only big trucks; neither includes pickup trucks.
The dataset can be downloaded from https://www.cs.toronto.edu/~kriz/cifar.html; after downloading and extracting, it contains the following files.
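Each extracted data_batch file is a Python pickle of a dict whose b"data" entry holds one flattened 3072-value image per row (1024 red, then 1024 green, then 1024 blue). A minimal sketch of loading such a batch; the demonstration below runs on a tiny synthetic file with the same layout, since the real files are large (for the real dataset the path would be something like ./cifar-10-batches-py/data_batch_1):

```python
import pickle
import numpy as np

def load_batch(path):
    # CIFAR-10 batch files are pickled dicts with byte-string keys
    with open(path, "rb") as f:
        batch = pickle.load(f, encoding="bytes")
    # Each row of b"data" is 3072 uint8 values: 1024 red, 1024 green, 1024 blue
    data = batch[b"data"].reshape(-1, 3, 32, 32)
    labels = batch[b"labels"]
    return data, labels

# Demonstrate on a tiny synthetic batch that mimics the real layout
fake = {b"data": np.zeros((10, 3072), dtype=np.uint8), b"labels": list(range(10))}
with open("fake_batch.pkl", "wb") as f:
    pickle.dump(fake, f)

data, labels = load_batch("fake_batch.pkl")
print(data.shape)  # (10, 3, 32, 32) -- for a real batch, (10000, 3, 32, 32)
```

This is also exactly what `torchvision.datasets.CIFAR10` does internally, so in practice you rarely need to unpickle the files by hand.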
II. Environment Setup
First install Anaconda to create the required environment; for the installation, see: Installing and Using Anaconda.
Then, inside Anaconda, install Python, PyTorch, and the packages the code needs; see: Installing PyTorch with Anaconda.
In PyCharm, click File -> Settings to open the window shown below, find Project Interpreter under Project, click the gear icon on the right, and choose Add.
In the dialog that pops up, choose Conda Environment, then Existing environment, point Interpreter at the Python inside your Anaconda pytorch environment, and click OK.
As you can see, the Project Interpreter has changed; click OK.
The two screenshots above show the environment after Python, PyTorch, and the packages used in this experiment have been installed.
III. Experiment Code
Two scripts are used in this experiment: one based on a simple network and one based on a deeper VGG-style network.
1. Simple network code
# Note: this code was not written by me; it was provided by others.
import torch
import torchvision
import torchvision.transforms as transforms
import ssl
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import matplotlib.pyplot as plt
import numpy as np
import time

# Work around SSL certificate errors when downloading the dataset
ssl._create_default_https_context = ssl._create_unverified_context

# Data augmentation plus normalization to [-1, 1]
transform = transforms.Compose(
    [transforms.RandomHorizontalFlip(),
     transforms.RandomGrayscale(),
     transforms.ToTensor(),
     transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))]
)

trainset = torchvision.datasets.CIFAR10(root='./cifar10', train=True, download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=100, shuffle=True, num_workers=2)
testset = torchvision.datasets.CIFAR10(root='./cifar10', train=False, download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=4, shuffle=False, num_workers=2)
classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck')


class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(-1, 16 * 5 * 5)  # flatten to (batch, 400)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x


net = Net()
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)


def imshow(img):
    img = img / 2 + 0.5  # undo the normalization before displaying
    npimg = img.numpy()
    plt.imshow(np.transpose(npimg, (1, 2, 0)))
    plt.show()


if __name__ == '__main__':
    for epoch in range(20):
        timestart = time.time()
        running_loss = 0.0
        for i, data in enumerate(trainloader, 0):
            inputs, labels = data
            optimizer.zero_grad()
            outputs = net(inputs)
            loss = criterion(outputs, labels)
            loss.backward()
            optimizer.step()
            running_loss += loss.item()
            if i % 500 == 499:  # print the average loss every 500 mini-batches
                print('[%d, %5d] loss: %.3f' % (epoch + 1, i + 1, running_loss / 500))
                running_loss = 0.0
        print('epoch %d cost %3f sec' % (epoch + 1, time.time() - timestart))
    print('Finished Training')

    # Show a few test images with their ground-truth and predicted labels
    dataiter = iter(testloader)
    images, labels = next(dataiter)
    imshow(torchvision.utils.make_grid(images))
    print('GroundTruth:', ' '.join('%5s' % classes[labels[j]] for j in range(4)))
    outputs = net(images)
    _, predicted = torch.max(outputs.data, 1)
    print('Predicted:', ' '.join('%5s' % classes[predicted[j]] for j in range(4)))

    # Overall accuracy on the test set
    correct = 0
    total = 0
    for data in testloader:
        images, labels = data
        outputs = net(images)
        _, predicted = torch.max(outputs.data, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()
    print('Accuracy of the network on the 10000 test images: %d %%' % (100 * correct / total))

    # Per-class accuracy
    class_correct = list(0. for i in range(10))
    class_total = list(0. for i in range(10))
    for data in testloader:
        images, labels = data
        outputs = net(images)
        _, predicted = torch.max(outputs.data, 1)
        c = (predicted == labels).squeeze()
        for i in range(4):  # testloader batch size is 4
            label = labels[i]
            class_correct[label] += c[i].item()
            class_total[label] += 1
    for i in range(10):
        print('Accuracy of %5s : %2d %%' % (classes[i], 100 * class_correct[i] / class_total[i]))
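The `16 * 5 * 5` in fc1 comes from the conv/pool size arithmetic: a 32x32 input shrinks to 28x28 after conv1 (5x5 kernel, no padding), 14x14 after pooling, 10x10 after conv2, and 5x5 after the second pool. A small sketch applying the standard output-size formula floor((n + 2p - k) / s) + 1 to each layer:

```python
def out_size(n, k, s=1, p=0):
    # Output size of a conv or pool layer: floor((n + 2p - k) / s) + 1
    return (n + 2 * p - k) // s + 1

n = 32
n = out_size(n, k=5)       # conv1: 5x5 kernel, no padding -> 28
n = out_size(n, k=2, s=2)  # pool:  2x2, stride 2          -> 14
n = out_size(n, k=5)       # conv2: 5x5 kernel, no padding -> 10
n = out_size(n, k=2, s=2)  # pool:  2x2, stride 2          -> 5
print(n)  # 5, so the flattened feature vector has 16 channels * 5 * 5 = 400 values
```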
2. Deeper VGG-style network code
# Note: this code was not written by me; it was provided by others.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision
import torchvision.transforms as transforms
import torch.optim as optim
import time
import os
import ssl

# Work around SSL certificate errors when downloading the dataset
ssl._create_default_https_context = ssl._create_unverified_context

# Augmented transform for training; plain normalization for testing
transform = transforms.Compose(
    [transforms.RandomHorizontalFlip(),
     transforms.RandomGrayscale(),
     transforms.ToTensor(),
     transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
transform1 = transforms.Compose(
    [transforms.ToTensor(),
     transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])

trainset = torchvision.datasets.CIFAR10(root='./cifar10_vgg', train=True, download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=100, shuffle=True, num_workers=2)
testset = torchvision.datasets.CIFAR10(root='./cifar10_vgg', train=False, download=True, transform=transform1)
testloader = torch.utils.data.DataLoader(testset, batch_size=50, shuffle=False, num_workers=2)
classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck')


class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, 64, 3, padding=1)
        self.conv2 = nn.Conv2d(64, 64, 3, padding=1)
        self.pool1 = nn.MaxPool2d(2, 2)
        self.bn1 = nn.BatchNorm2d(64)
        self.relu1 = nn.ReLU()
        self.conv3 = nn.Conv2d(64, 128, 3, padding=1)
        self.conv4 = nn.Conv2d(128, 128, 3, padding=1)
        self.pool2 = nn.MaxPool2d(2, 2, padding=1)
        self.bn2 = nn.BatchNorm2d(128)
        self.relu2 = nn.ReLU()
        self.conv5 = nn.Conv2d(128, 128, 3, padding=1)
        self.conv6 = nn.Conv2d(128, 128, 3, padding=1)
        self.conv7 = nn.Conv2d(128, 128, 1, padding=1)
        self.pool3 = nn.MaxPool2d(2, 2, padding=1)
        self.bn3 = nn.BatchNorm2d(128)
        self.relu3 = nn.ReLU()
        self.conv8 = nn.Conv2d(128, 256, 3, padding=1)
        self.conv9 = nn.Conv2d(256, 256, 3, padding=1)
        self.conv10 = nn.Conv2d(256, 256, 1, padding=1)
        self.pool4 = nn.MaxPool2d(2, 2, padding=1)
        self.bn4 = nn.BatchNorm2d(256)
        self.relu4 = nn.ReLU()
        self.conv11 = nn.Conv2d(256, 512, 3, padding=1)
        self.conv12 = nn.Conv2d(512, 512, 3, padding=1)
        self.conv13 = nn.Conv2d(512, 512, 1, padding=1)
        self.pool5 = nn.MaxPool2d(2, 2, padding=1)
        self.bn5 = nn.BatchNorm2d(512)
        self.relu5 = nn.ReLU()
        self.fc14 = nn.Linear(512 * 4 * 4, 1024)
        self.drop1 = nn.Dropout()
        self.fc15 = nn.Linear(1024, 1024)
        self.drop2 = nn.Dropout()
        self.fc16 = nn.Linear(1024, 10)

    def forward(self, x):
        x = self.conv1(x)
        x = self.conv2(x)
        x = self.pool1(x)
        x = self.bn1(x)
        x = self.relu1(x)
        x = self.conv3(x)
        x = self.conv4(x)
        x = self.pool2(x)
        x = self.bn2(x)
        x = self.relu2(x)
        x = self.conv5(x)
        x = self.conv6(x)
        x = self.conv7(x)
        x = self.pool3(x)
        x = self.bn3(x)
        x = self.relu3(x)
        x = self.conv8(x)
        x = self.conv9(x)
        x = self.conv10(x)
        x = self.pool4(x)
        x = self.bn4(x)
        x = self.relu4(x)
        x = self.conv11(x)
        x = self.conv12(x)
        x = self.conv13(x)
        x = self.pool5(x)
        x = self.bn5(x)
        x = self.relu5(x)
        # print(" x shape ", x.size())
        x = x.view(-1, 512 * 4 * 4)
        x = F.relu(self.fc14(x))
        x = self.drop1(x)
        x = F.relu(self.fc15(x))
        x = self.drop2(x)
        x = self.fc16(x)
        return x

    def train_sgd(self, device):
        optimizer = optim.SGD(self.parameters(), lr=0.01)
        path = 'weights.tar'
        initepoch = 0
        if not os.path.exists(path):
            loss = nn.CrossEntropyLoss()
        else:
            # Resume from a saved checkpoint if one exists
            checkpoint = torch.load(path, map_location=device)
            self.load_state_dict(checkpoint['model_state_dict'])
            optimizer.load_state_dict(checkpoint['optimizer_state_dict'])
            initepoch = checkpoint['epoch']
            loss = checkpoint['loss']

        for epoch in range(initepoch, 20):  # loop over the dataset multiple times
            timestart = time.time()
            running_loss = 0.0
            total = 0
            correct = 0
            for i, data in enumerate(trainloader, 0):
                # get the inputs
                inputs, labels = data
                inputs, labels = inputs.to(device), labels.to(device)
                optimizer.zero_grad()
                outputs = self(inputs)
                l = loss(outputs, labels)
                l.backward()
                optimizer.step()
                running_loss += l.item()
                if i % 500 == 499:  # report loss and running train accuracy periodically
                    print('[%d, %5d] loss: %.4f' %
                          (epoch, i, running_loss / 500))
                    running_loss = 0.0
                    _, predicted = torch.max(outputs.data, 1)
                    total += labels.size(0)
                    correct += (predicted == labels).sum().item()
                    print('Accuracy of the network on the %d train images: %.3f %%' % (total,
                          100.0 * correct / total))
                    total = 0
                    correct = 0

            # Save a checkpoint at the end of every epoch
            torch.save({'epoch': epoch,
                        'model_state_dict': self.state_dict(),
                        'optimizer_state_dict': optimizer.state_dict(),
                        'loss': loss
                        }, path)
            print('epoch %d cost %3f sec' % (epoch, time.time() - timestart))
        print('Finished Training')

    def test(self, device):
        correct = 0
        total = 0
        with torch.no_grad():
            for data in testloader:
                images, labels = data
                images, labels = images.to(device), labels.to(device)
                outputs = self(images)
                _, predicted = torch.max(outputs.data, 1)
                total += labels.size(0)
                correct += (predicted == labels).sum().item()
        print('Accuracy of the network on the 10000 test images: %.3f %%' % (100.0 * correct / total))

    def classify(self, device):
        class_correct = list(0. for i in range(10))
        class_total = list(0. for i in range(10))
        with torch.no_grad():
            for data in testloader:
                images, labels = data
                images, labels = images.to(device), labels.to(device)
                outputs = self(images)
                _, predicted = torch.max(outputs.data, 1)
                c = (predicted == labels).squeeze()
                for i in range(labels.size(0)):  # cover the whole batch, not just 4 samples
                    label = labels[i]
                    class_correct[label] += c[i].item()
                    class_total[label] += 1
        for i in range(10):
            print('Accuracy of %5s : %2d %%' % (classes[i], 100 * class_correct[i] / class_total[i]))


if __name__ == '__main__':
    device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
    net = Net()
    net = net.to(device)
    net.train_sgd(device)
    net.test(device)
    net.classify(device)
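The `512 * 4 * 4` in fc14 can be verified the same way as in the simple network, but here the padded pools matter, as does the padding=1 on the 1x1 convolutions (conv7, conv10, conv13), each of which grows the feature map by 2. A sketch tracing the spatial size through the layers that change it (the 3x3 convolutions with padding=1 leave it unchanged):

```python
def out_size(n, k, s=1, p=0):
    # Output size of a conv or pool layer: floor((n + 2p - k) / s) + 1
    return (n + 2 * p - k) // s + 1

n = 32                        # 32x32 input
n = out_size(n, 2, s=2)       # pool1 -> 16
n = out_size(n, 2, s=2, p=1)  # pool2 (padded) -> 9
n = out_size(n, 1, p=1)       # conv7: 1x1 kernel, padding=1 -> 11
n = out_size(n, 2, s=2, p=1)  # pool3 (padded) -> 6
n = out_size(n, 1, p=1)       # conv10 -> 8
n = out_size(n, 2, s=2, p=1)  # pool4 (padded) -> 5
n = out_size(n, 1, p=1)       # conv13 -> 7
n = out_size(n, 2, s=2, p=1)  # pool5 (padded) -> 4
print(n)  # 4, matching the 512 * 4 * 4 flatten before fc14
```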
IV. Results
The simple-network script runs as follows.
After starting, the script downloads the cifar-10-python.tar.gz archive from the official CIFAR-10 site.
Once the download succeeds, it trains for 20 epochs.
After the 20 epochs finish, the figure below pops up; the images are quite blurry.
After closing the figure, the per-class accuracies are printed.
The VGG-style script runs as follows; the whole process is quite time-consuming.
Finally, it prints the per-class accuracies.
Plotting the two side by side, the VGG-style network's overall accuracy is clearly much better than the simple network's.
V. Problems Encountered
original error was: DLL load failed: The specified module could not be found.
There are many suggested fixes for this online. I tried quite a few, and in the end I'm not sure which step actually did the trick, but the program eventually ran. I list everything I tried below in the hope that it helps!
1. Install Python 3.6 under Anaconda (3.7 and 3.8 didn't work well for me, though the version may not have been the real issue).
2. Install matplotlib first, then PyTorch (this experiment uses matplotlib, so I installed it first).
3. Uninstall numpy and reinstall it (many people solved the problem this way).
4. Uninstall any previously installed system Python and delete its environment variables (it may conflict with the Python under Anaconda).
5. Add the Anaconda Python to your environment variables.
Set the environment variables above according to your own install path.
6. In PyCharm's Settings, change every place that sets the Project Interpreter (the four boxed on the left in the screenshot below) to the Anaconda Python path and save.
7. Check whether the folder holding your Python modules still contains files from an earlier Python version; mine had a folder named __pycache__, which I deleted.
Summary
That's everything for CIFAR-10 image classification. I spent more time setting up the environment than actually running the code, so be patient when you hit problems along the way; I'm sure you can work through them and get the code running!
References:
Alex Krizhevsky's homepage
https://www.kaggle.com/c/cifar-10