一、Configuring and Installing the PyTorch Environment
1. Download the latest Anaconda from the official website. After installation, open Anaconda Prompt; if the prompt shows (base), the installation succeeded.
2.conda create -n pytorch python=3.6
Creates an environment named pytorch whose Python version is 3.6.
3.conda activate pytorch
Activates and enters the pytorch environment; on Linux: source activate pytorch
4. pip list
Lists the packages installed in the environment; you will see that pytorch is not there yet.
5. Open the PyTorch official website and copy the install command for the latest release, e.g. conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch
(Just go with the latest version; I fiddled with an older release for a long time and it still failed.) Activate the pytorch environment and run the command to download and install.
6. Verify the installation. Run python, then import torch; if no error is raised, PyTorch is installed. Next run torch.cuda.is_available(); if it returns True, the machine's GPU can be used by PyTorch (if it fails, update the driver from the NVIDIA website, delete the environment, and reinstall with the latest versions of everything).
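A minimal check script along these lines can be run inside the pytorch environment (the printed versions naturally depend on what you installed):
import torch

print(torch.__version__)             # PyTorch version installed in this environment
print(torch.cuda.is_available())     # True if the driver and the CUDA build of PyTorch match
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))   # name of the first visible GPU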
7. On a Linux server, if new environments end up under ~/.conda instead of conda/envs, use the commands below.
Other: conda info -e  (list all virtual environments)
Deleting an environment:
Step 1: deactivate it first
conda deactivate
Step 2: remove it
conda remove -n <name of the environment to delete> --all
rm -rf <directory>  # delete a directory
df -h  # show the usage of each partition on the Linux system
nohup <command> > <file> 2>&1 &  # train the model in the background; exit closes the terminal
1. > rewrites the file: existing content is overwritten; if the file does not exist, it is created and written.
2. >> appends to the file: new content is added at the end; if the file does not exist, it is created.
kill -9 PID  # kill a specific process
tar -xvf  # extract a tar archive
Size of everything under the current directory: du -ah
Size of each subdirectory of the current directory: du -ah --max-depth=1
Clean up Anaconda's pkgs cache: conda clean -a
ps u <PID>  # check which user/command is the process using the GPU
sudo chmod -R 777 myResources  # give all users full permissions on the files
pip install *** -i https://pypi.tuna.tsinghua.edu.cn/simple  # install through a mirror for faster downloads
ps -f -p 26359  # shows that process 26359 is running the training job
cp -r /TEST/test1 /TEST/test2  # copy a directory
"Defaulting to user installation because normal site-packages is not writeable": run python3 -m pip install requests
fuser -v /dev/nvidia*  # find the processes when nvidia-smi shows none but GPU memory is still occupied
二、Installing PyCharm and Jupyter
1. PyCharm
1. Download and install PyCharm from the official website.
2. Create a new project (lean_pytorch),
choose "Existing interpreter" and browse to the environment we just created.
The interpreter is imported successfully.
2. Jupyter
- Jupyter ships with Anaconda, so no separate download is needed.
- By default Jupyter is installed only in the base environment, so we need to install it in the pytorch environment.
- Activate the pytorch environment and run conda install nb_conda to install Jupyter.
- After installation, run jupyter notebook to open it.
- Create a new notebook that uses the pytorch environment.
- Enter import torch and torch.cuda.is_available(); if it returns True, the setup works.
三、Two Handy Functions for Exploring Python (help, dir)
Open the Python console in PyCharm and try dir(torch), dir(torch.cuda), dir(torch.cuda.is_available()), and help(torch.cuda.is_available).
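Roughly, dir() tells you what lives inside a package and help() tells you how to use a specific member; a small sketch:
import torch

print(dir(torch))                 # list the names inside the torch package
print(dir(torch.cuda))            # drill down into a sub-package
help(torch.cuda.is_available)     # show the docstring of a specific function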
四、Loading Data (Dataset)
from torch.utils.data import Dataset, DataLoader
import numpy as np
from PIL import Image
import os
from torchvision import transforms
from torch.utils.tensorboard import SummaryWriter
from torchvision.utils import make_grid
writer = SummaryWriter("logs")
class MyData(Dataset):
    def __init__(self, root_dir, image_dir, label_dir, transform):
        self.root_dir = root_dir
        self.image_dir = image_dir
        self.label_dir = label_dir
        self.label_path = os.path.join(self.root_dir, self.label_dir)
        self.image_path = os.path.join(self.root_dir, self.image_dir)
        self.image_list = os.listdir(self.image_path)
        self.label_list = os.listdir(self.label_path)
        self.transform = transform
        # The label files and image files share the same names, so sorting both lists the same way
        # keeps each image aligned with its label.
        self.image_list.sort()
        self.label_list.sort()

    def __getitem__(self, idx):
        img_name = self.image_list[idx]
        label_name = self.label_list[idx]
        img_item_path = os.path.join(self.root_dir, self.image_dir, img_name)
        label_item_path = os.path.join(self.root_dir, self.label_dir, label_name)
        img = Image.open(img_item_path)
        with open(label_item_path, 'r') as f:
            label = f.readline()
        # img = np.array(img)
        img = self.transform(img)
        sample = {'img': img, 'label': label}
        return sample

    def __len__(self):
        assert len(self.image_list) == len(self.label_list)
        return len(self.image_list)
if __name__ == '__main__':
    transform = transforms.Compose([transforms.Resize((256, 256)), transforms.ToTensor()])
    root_dir = "dataset/train"
    image_ants = "ants_image"
    label_ants = "ants_label"
    ants_dataset = MyData(root_dir, image_ants, label_ants, transform)
    image_bees = "bees_image"
    label_bees = "bees_label"
    bees_dataset = MyData(root_dir, image_bees, label_bees, transform)
    train_dataset = ants_dataset + bees_dataset

    # transforms = transforms.Compose([transforms.Resize(256, 256)])
    dataloader = DataLoader(train_dataset, batch_size=1, num_workers=2)

    writer.add_image('error', train_dataset[119]['img'])
    writer.close()

    # for i, j in enumerate(dataloader):
    #     # imgs, labels = j
    #     print(type(j))
    #     print(i, j['img'].shape)
    #     # writer.add_image("train_data_b2", make_grid(j['img']), i)
    #
    # writer.close()
五、Using TensorBoard
Install TensorBoard: pip install tensorboard
Change the port: tensorboard --logdir=logs --port=6007
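A minimal usage sketch (the "logs" directory name is an assumption matching the SummaryWriter("logs") call used earlier):
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter("logs")                    # event files are written under ./logs
for step in range(100):
    writer.add_scalar("y=2x", 2 * step, step)     # tag, value, global step
writer.close()
# then view the curves with: tensorboard --logdir=logs --port=6007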
六、Transforms
Open the Structure panel in PyCharm to browse the classes defined in transforms.py.
1. Compose
Chains several transform steps into a single one.
2. ToTensor
Converts a PIL or numpy image into a Tensor (the form usable for training).
Using __call__: a transform instance is called like a function; Ctrl+P in PyCharm shows the expected parameters.
3. Normalize
Normalizes a tensor image with the given per-channel means and standard deviations.
4. Resize
Tips: a single int resizes the shorter edge to that length (keeping the aspect ratio), while an (h, w) pair resizes to exactly that shape. A combined sketch follows below.
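A small sketch that chains these transforms (the image path is a hypothetical example following the earlier dataset layout):
from PIL import Image
from torchvision import transforms

img = Image.open("dataset/train/ants_image/0013035.jpg")   # placeholder path

trans = transforms.Compose([
    transforms.Resize((256, 256)),                                # resize to 256x256
    transforms.ToTensor(),                                        # PIL image -> tensor in [0, 1], shape (C, H, W)
    transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]),       # (x - mean) / std per channel
])
img_tensor = trans(img)        # Compose calls each transform's __call__ in turn
print(img_tensor.shape)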
七、Using the Datasets in torchvision
torchvision is PyTorch's library for working with images. The package has four main sub-packages:
torchvision.datasets
torchvision.models
torchvision.transforms
torchvision.utils
Here we mainly cover the first three.
1. torchvision.datasets
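For example, downloading CIFAR10 with a ToTensor transform (the "../data" path follows the convention used in the later sections):
import torchvision

dataset_transform = torchvision.transforms.Compose([torchvision.transforms.ToTensor()])

train_set = torchvision.datasets.CIFAR10(root="../data", train=True, transform=dataset_transform, download=True)
test_set = torchvision.datasets.CIFAR10(root="../data", train=False, transform=dataset_transform, download=True)

img, target = test_set[0]     # each item is an (image_tensor, class_index) pair
print(img.shape, target, test_set.classes[target])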
八、DataLoader
drop_last=True discards the final, incomplete batch when the dataset size is not evenly divisible by batch_size; with drop_last=False that smaller leftover batch is kept. See the sketch below.
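A minimal sketch of the DataLoader arguments discussed here, reusing the CIFAR10 test set from the previous section:
import torchvision
from torch.utils.data import DataLoader

test_set = torchvision.datasets.CIFAR10("../data", train=False,
                                        transform=torchvision.transforms.ToTensor(), download=True)

# shuffle reorders the samples each epoch; drop_last=True throws away the last, incomplete batch
test_loader = DataLoader(test_set, batch_size=64, shuffle=True, num_workers=0, drop_last=True)

for imgs, targets in test_loader:
    print(imgs.shape)      # torch.Size([64, 3, 32, 32]) for every batch, since drop_last=True
    print(targets.shape)   # torch.Size([64])
    break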
九、nn.Module
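nn.Module is the base class every model subclasses; you define the layers in __init__ and the computation in forward(). A minimal sketch (the toy forward that just adds 1 is purely illustrative):
import torch
from torch import nn

class Tudui(nn.Module):
    def __init__(self):
        super(Tudui, self).__init__()

    def forward(self, input):
        # forward() defines what happens when the module is called
        return input + 1

tudui = Tudui()
x = torch.tensor(1.0)
print(tudui(x))    # calling the module invokes forward() -> tensor(2.)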
十、The Convolution Operation
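This part is about the low-level convolution via torch.nn.functional.conv2d; a sketch along those lines (the small 5x5 input and 3x3 kernel are illustrative values):
import torch
import torch.nn.functional as F

input = torch.tensor([[1, 2, 0, 3, 1],
                      [0, 1, 2, 3, 1],
                      [1, 2, 1, 0, 0],
                      [5, 2, 3, 1, 1],
                      [2, 1, 0, 1, 1]], dtype=torch.float32)
kernel = torch.tensor([[1, 2, 1],
                       [0, 1, 0],
                       [2, 1, 0]], dtype=torch.float32)

# conv2d expects (batch, channels, height, width)
input = torch.reshape(input, (1, 1, 5, 5))
kernel = torch.reshape(kernel, (1, 1, 3, 3))

print(F.conv2d(input, kernel, stride=1))              # 3x3 output
print(F.conv2d(input, kernel, stride=2))              # 2x2 output
print(F.conv2d(input, kernel, stride=1, padding=1))   # 5x5 output, zero-padded border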
十一、Convolutional Layers
import torch
import torchvision
from torch import nn
from torch.nn import Conv2d
from torch.utils.data import DataLoader
from torch.utils.tensorboard import SummaryWriter
dataset = torchvision.datasets.CIFAR10("../data", train=False, transform=torchvision.transforms.ToTensor(),
download=True)
dataloader = DataLoader(dataset, batch_size=64)
class Tudui(nn.Module):
    def __init__(self):
        super(Tudui, self).__init__()
        self.conv1 = Conv2d(in_channels=3, out_channels=6, kernel_size=3, stride=1, padding=0)

    def forward(self, x):
        x = self.conv1(x)
        return x

tudui = Tudui()

writer = SummaryWriter("../logs")
step = 0
for data in dataloader:
    imgs, targets = data
    output = tudui(imgs)
    print(imgs.shape)
    print(output.shape)
    # torch.Size([64, 3, 32, 32])
    writer.add_images("input", imgs, step)
    # torch.Size([64, 6, 30, 30]) -> [xxx, 3, 30, 30]
    output = torch.reshape(output, (-1, 3, 30, 30))
    writer.add_images("output", output, step)
    step = step + 1

writer.close()
十二、Pooling Layers
import torch
import torchvision
from torch import nn
from torch.nn import MaxPool2d
from torch.utils.data import DataLoader
from torch.utils.tensorboard import SummaryWriter
dataset = torchvision.datasets.CIFAR10("../data", train=False, download=True,
transform=torchvision.transforms.ToTensor())
dataloader = DataLoader(dataset, batch_size=64)
class Tudui(nn.Module):
    def __init__(self):
        super(Tudui, self).__init__()
        self.maxpool1 = MaxPool2d(kernel_size=3, ceil_mode=False)

    def forward(self, input):
        output = self.maxpool1(input)
        return output

tudui = Tudui()

writer = SummaryWriter("../logs_maxpool")
step = 0
for data in dataloader:
    imgs, targets = data
    writer.add_images("input", imgs, step)
    output = tudui(imgs)
    writer.add_images("output", output, step)
    step = step + 1
writer.close()
十三、Non-linear Activations
import torch
import torchvision
from torch import nn
from torch.nn import ReLU, Sigmoid
from torch.utils.data import DataLoader
from torch.utils.tensorboard import SummaryWriter

input = torch.tensor([[1, -0.5],
                      [-1, 3]])
input = torch.reshape(input, (-1, 1, 2, 2))
print(input.shape)

dataset = torchvision.datasets.CIFAR10("../data", train=False, download=True,
                                       transform=torchvision.transforms.ToTensor())
dataloader = DataLoader(dataset, batch_size=64)

class Tudui(nn.Module):
    def __init__(self):
        super(Tudui, self).__init__()
        self.relu1 = ReLU()
        self.sigmoid1 = Sigmoid()

    def forward(self, input):
        output = self.sigmoid1(input)
        return output

tudui = Tudui()

writer = SummaryWriter("../logs_relu")
step = 0
for data in dataloader:
    imgs, targets = data
    writer.add_images("input", imgs, global_step=step)
    output = tudui(imgs)
    writer.add_images("output", output, step)
    step += 1

writer.close()
十四、Linear Layers
import torch
import torchvision
from torch import nn
from torch.nn import Linear
from torch.utils.data import DataLoader
dataset = torchvision.datasets.CIFAR10("../data", train=False, transform=torchvision.transforms.ToTensor(),
download=True)
dataloader = DataLoader(dataset, batch_size=64, drop_last=True)  # drop_last avoids a size mismatch on the final, smaller batch
class Tudui(nn.Module):
    def __init__(self):
        super(Tudui, self).__init__()
        self.linear1 = Linear(196608, 10)

    def forward(self, input):
        output = self.linear1(input)
        return output

tudui = Tudui()

for data in dataloader:
    imgs, targets = data
    print(imgs.shape)
    output = torch.flatten(imgs)
    print(output.shape)
    output = tudui(output)
    print(output.shape)
十五、Sequential
import torch
from torch import nn
from torch.nn import Conv2d, MaxPool2d, Flatten, Linear, Sequential
from torch.utils.tensorboard import SummaryWriter
class Tudui(nn.Module):
    def __init__(self):
        super(Tudui, self).__init__()
        self.model1 = Sequential(
            Conv2d(3, 32, 5, padding=2),
            MaxPool2d(2),
            Conv2d(32, 32, 5, padding=2),
            MaxPool2d(2),
            Conv2d(32, 64, 5, padding=2),
            MaxPool2d(2),
            Flatten(),
            Linear(1024, 64),
            Linear(64, 10)
        )

    def forward(self, x):
        x = self.model1(x)
        return x
tudui = Tudui()
print(tudui)
input = torch.ones((64, 3, 32, 32))
output = tudui(input)
print(output.shape)
writer = SummaryWriter("../logs_seq")
writer.add_graph(tudui, input)
writer.close()
十六、Loss Functions and Backpropagation
1. Loss functions
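Before the CIFAR10 example below, it helps to compare the common loss functions on tiny tensors; a sketch with illustrative values:
import torch
from torch import nn

inputs = torch.tensor([1, 2, 3], dtype=torch.float32)
targets = torch.tensor([1, 2, 5], dtype=torch.float32)

print(nn.L1Loss()(inputs, targets))    # mean absolute error: (0 + 0 + 2) / 3
print(nn.MSELoss()(inputs, targets))   # mean squared error: (0 + 0 + 4) / 3

# CrossEntropyLoss takes raw scores of shape (N, C) and class indices of shape (N,)
x = torch.tensor([[0.1, 0.2, 0.3]])
y = torch.tensor([1])
print(nn.CrossEntropyLoss()(x, y))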
import torchvision
from torch import nn
from torch.nn import Sequential, Conv2d, MaxPool2d, Flatten, Linear
from torch.utils.data import DataLoader
dataset = torchvision.datasets.CIFAR10("../data", train=False, transform=torchvision.transforms.ToTensor(),
download=True)
dataloader = DataLoader(dataset, batch_size=1)
class Tudui(nn.Module):
    def __init__(self):
        super(Tudui, self).__init__()
        self.model1 = Sequential(
            Conv2d(3, 32, 5, padding=2),
            MaxPool2d(2),
            Conv2d(32, 32, 5, padding=2),
            MaxPool2d(2),
            Conv2d(32, 64, 5, padding=2),
            MaxPool2d(2),
            Flatten(),
            Linear(1024, 64),
            Linear(64, 10)
        )

    def forward(self, x):
        x = self.model1(x)
        return x

loss = nn.CrossEntropyLoss()
tudui = Tudui()
for data in dataloader:
    imgs, targets = data
    outputs = tudui(imgs)
    result_loss = loss(outputs, targets)
    print("ok")
2. Backpropagation and the optimizer
import torch
import torchvision
from torch import nn
from torch.nn import Sequential, Conv2d, MaxPool2d, Flatten, Linear
from torch.optim.lr_scheduler import StepLR
from torch.utils.data import DataLoader
dataset = torchvision.datasets.CIFAR10("../data", train=False, transform=torchvision.transforms.ToTensor(),
download=True)
dataloader = DataLoader(dataset, batch_size=1)
class Tudui(nn.Module):
    def __init__(self):
        super(Tudui, self).__init__()
        self.model1 = Sequential(
            Conv2d(3, 32, 5, padding=2),
            MaxPool2d(2),
            Conv2d(32, 32, 5, padding=2),
            MaxPool2d(2),
            Conv2d(32, 64, 5, padding=2),
            MaxPool2d(2),
            Flatten(),
            Linear(1024, 64),
            Linear(64, 10)
        )

    def forward(self, x):
        x = self.model1(x)
        return x

loss = nn.CrossEntropyLoss()
tudui = Tudui()
optim = torch.optim.SGD(tudui.parameters(), lr=0.01)
scheduler = StepLR(optim, step_size=5, gamma=0.1)
for epoch in range(20):
    running_loss = 0.0
    for data in dataloader:
        imgs, targets = data
        outputs = tudui(imgs)
        result_loss = loss(outputs, targets)
        optim.zero_grad()
        result_loss.backward()
        optim.step()                      # update the parameters with the computed gradients
        running_loss = running_loss + result_loss.item()
    scheduler.step()                      # decay the learning rate once per epoch
    print(running_loss)
十七、Using and Modifying Existing Models
import torchvision
# train_data = torchvision.datasets.ImageNet("../data_image_net", split='train', download=True,
# transform=torchvision.transforms.ToTensor())
from torch import nn
vgg16_false = torchvision.models.vgg16(pretrained=False)
vgg16_true = torchvision.models.vgg16(pretrained=True)
print(vgg16_true)
train_data = torchvision.datasets.CIFAR10('../data', train=True, transform=torchvision.transforms.ToTensor(),
download=True)
vgg16_true.classifier.add_module('add_linear', nn.Linear(1000, 10))  # append an extra layer: 1000 ImageNet classes -> 10 CIFAR10 classes
print(vgg16_true)

print(vgg16_false)
vgg16_false.classifier[6] = nn.Linear(4096, 10)  # alternatively, replace the last classifier layer directly
print(vgg16_false)
十八、Saving and Loading Network Models
1. Saving
import torch
import torchvision
from torch import nn
vgg16 = torchvision.models.vgg16(pretrained=False)
# Save method 1: model structure + parameters
torch.save(vgg16, "vgg16_method1.pth")

# Save method 2: parameters only (officially recommended)
torch.save(vgg16.state_dict(), "vgg16_method2.pth")

# Pitfall
class Tudui(nn.Module):
    def __init__(self):
        super(Tudui, self).__init__()
        self.conv1 = nn.Conv2d(3, 64, kernel_size=3)

    def forward(self, x):
        x = self.conv1(x)
        return x

tudui = Tudui()
torch.save(tudui, "tudui_method1.pth")
2. Loading
import torch
from model_save import *
# Method 1 (matches save method 1): load the whole model
import torchvision
from torch import nn
model = torch.load("vgg16_method1.pth")
# print(model)
# Method 2: load the parameters into a freshly created model
vgg16 = torchvision.models.vgg16(pretrained=False)
vgg16.load_state_dict(torch.load("vgg16_method2.pth"))
# model = torch.load("vgg16_method2.pth")
# print(vgg16)
# Pitfall 1: the Tudui class definition must be available here (hence from model_save import *)
# class Tudui(nn.Module):
# def __init__(self):
# super(Tudui, self).__init__()
# self.conv1 = nn.Conv2d(3, 64, kernel_size=3)
#
# def forward(self, x):
# x = self.conv1(x)
# return x
model = torch.load('tudui_method1.pth')
print(model)
Only use method 2!!!!
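A short sketch of the state_dict round trip (method 2) for the custom model above, assuming the Tudui class from the saving script is importable:
import torch
from model_save import Tudui   # the class definition must be in scope, as in the loading script above

# save only the parameters
tudui = Tudui()
torch.save(tudui.state_dict(), "tudui_method2.pth")

# load: recreate the model, then copy the parameters in
model = Tudui()
model.load_state_dict(torch.load("tudui_method2.pth"))
model.eval()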
十九、A Complete Model Training Workflow
import torch
import torchvision
from my_model import *
from torch.utils.tensorboard import SummaryWriter

# Prepare the dataset
from torch import nn
from torch.utils.data import DataLoader
train_data = torchvision.datasets.CIFAR10(root="../data",train=True,transform=torchvision.transforms.ToTensor(),download=True)
test_data = torchvision.datasets.CIFAR10(root="../data",train=False,transform=torchvision.transforms.ToTensor(),download=True)
# length of the datasets
train_data_size = len(train_data)
test_data_size = len(test_data)
# e.g. if train_data_size were 10, this would print "Training set size: 10"
print("Training set size: {}".format(train_data_size))
print("Test set size: {}".format(test_data_size))
# Load the datasets with DataLoader
train_dataloader = DataLoader(train_data,batch_size=64)
test_dataloader = DataLoader(test_data,batch_size=64)
# Create the network model (Tudui is defined in my_model.py)
tudui = Tudui()
# Loss function
loss_fn = nn.CrossEntropyLoss()
# Optimizer
learning_rate = 1e-2
optimizer = torch.optim.SGD(tudui.parameters(),lr=learning_rate)
# Some bookkeeping for training
# number of training steps so far
total_train_step = 0
# number of evaluation rounds so far
total_test_step = 0
# number of epochs
epoch = 10
# Add TensorBoard logging
writer = SummaryWriter("../logs_train")
for i in range(epoch):
    print("----------- Epoch {} training begins -----------".format(i + 1))

    # Training phase
    tudui.train()
    for data in train_dataloader:
        imgs, targets = data
        outputs = tudui(imgs)
        loss = loss_fn(outputs, targets)

        # Optimize the model
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        total_train_step += 1
        if total_train_step % 100 == 0:
            print("Training step: {}, Loss: {}".format(total_train_step, loss.item()))
            writer.add_scalar("train_loss", loss.item(), total_train_step)

    # Evaluation phase
    tudui.eval()
    total_test_loss = 0
    total_accuracy = 0
    with torch.no_grad():
        for data in test_dataloader:
            imgs, targets = data
            outputs = tudui(imgs)
            loss = loss_fn(outputs, targets)
            total_test_loss += loss.item()
            accuracy = (outputs.argmax(1) == targets).sum()
            total_accuracy += accuracy
    print("Loss on the whole test set: {}".format(total_test_loss))
    print("Accuracy on the whole test set: {}".format(total_accuracy / test_data_size))
    writer.add_scalar("test_loss", total_test_loss, total_test_step)
    writer.add_scalar("test_accuracy", total_accuracy / test_data_size, total_test_step)
    total_test_step += 1

    torch.save(tudui, "tudui_{}.pth".format(i))
    # torch.save(tudui.state_dict(), "tudui_{}".format(i))
    print("Model saved")

writer.close()
二十、Training on the GPU
import torchvision
from torch.utils.tensorboard import SummaryWriter
import torch
import time
# Prepare the dataset
from torch import nn
from torch.utils.data import DataLoader
device = torch.device("cuda")
train_data = torchvision.datasets.CIFAR10(root="../data",train=True,transform=torchvision.transforms.ToTensor(),download=True)
test_data = torchvision.datasets.CIFAR10(root="../data",train=False,transform=torchvision.transforms.ToTensor(),download=True)
# length of the datasets
train_data_size = len(train_data)
test_data_size = len(test_data)
# e.g. if train_data_size were 10, this would print "Training set size: 10"
print("Training set size: {}".format(train_data_size))
print("Test set size: {}".format(test_data_size))
# Load the datasets with DataLoader
train_dataloader = DataLoader(train_data,batch_size=64)
test_dataloader = DataLoader(test_data,batch_size=64)
# Create the network model
class Tudui(nn.Module):
    def __init__(self):
        super(Tudui, self).__init__()
        self.model = nn.Sequential(
            nn.Conv2d(3, 32, 5, 1, 2),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 32, 5, 1, 2),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 5, 1, 2),
            nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(64 * 4 * 4, 64),
            nn.Linear(64, 10)
        )

    def forward(self, x):
        x = self.model(x)
        return x
tudui = Tudui()
tudui=tudui.to(device)
# Loss function
loss_fn = nn.CrossEntropyLoss()
loss_fn = loss_fn.to(device)
# Optimizer
learning_rate = 1e-2
optimizer = torch.optim.SGD(tudui.parameters(),lr=learning_rate)
# Some bookkeeping for training
# number of training steps so far
total_train_step = 0
# number of evaluation rounds so far
total_test_step = 0
# number of epochs
epoch = 10
# Add TensorBoard logging
writer = SummaryWriter("../logs_train")
start_time=time.time()
for i in range(epoch):
    print("----------- Epoch {} training begins -----------".format(i + 1))

    # Training phase
    tudui.train()
    for data in train_dataloader:
        imgs, targets = data
        imgs = imgs.to(device)
        targets = targets.to(device)
        outputs = tudui(imgs)
        loss = loss_fn(outputs, targets)

        # Optimize the model
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        total_train_step += 1
        if total_train_step % 100 == 0:
            end_time = time.time()
            print(end_time - start_time)
            print("Training step: {}, Loss: {}".format(total_train_step, loss.item()))
            writer.add_scalar("train_loss", loss.item(), total_train_step)

    # Evaluation phase
    tudui.eval()
    total_test_loss = 0
    total_accuracy = 0
    with torch.no_grad():
        for data in test_dataloader:
            imgs, targets = data
            imgs = imgs.to(device)
            targets = targets.to(device)
            outputs = tudui(imgs)
            loss = loss_fn(outputs, targets)
            total_test_loss += loss.item()
            accuracy = (outputs.argmax(1) == targets).sum()
            total_accuracy += accuracy
    print("Loss on the whole test set: {}".format(total_test_loss))
    print("Accuracy on the whole test set: {}".format(total_accuracy / test_data_size))
    writer.add_scalar("test_loss", total_test_loss, total_test_step)
    writer.add_scalar("test_accuracy", total_accuracy / test_data_size, total_test_step)
    total_test_step += 1

    torch.save(tudui, "tudui_{}.pth".format(i))
    # torch.save(tudui.state_dict(), "tudui_{}".format(i))
    print("Model saved")

writer.close()
二十一、A Complete Model Validation (Inference) Workflow
# -*- coding: utf-8 -*-
# Author: 小土堆
# WeChat official account: 土堆碎念
import torch
import torchvision
from PIL import Image
from torch import nn
image_path = "../imgs/airplane.png"
image = Image.open(image_path)
print(image)
image = image.convert('RGB')  # PNG images have four channels (RGB plus alpha); convert() keeps only the
# three color channels. If the image is already RGB this changes nothing, so the script handles both png and jpg.
transform = torchvision.transforms.Compose([torchvision.transforms.Resize((32, 32)),
torchvision.transforms.ToTensor()])
image = transform(image)
print(image.shape)
class Tudui(nn.Module):
    def __init__(self):
        super(Tudui, self).__init__()
        self.model = nn.Sequential(
            nn.Conv2d(3, 32, 5, 1, 2),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 32, 5, 1, 2),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 5, 1, 2),
            nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(64 * 4 * 4, 64),
            nn.Linear(64, 10)
        )

    def forward(self, x):
        x = self.model(x)
        return x
model = torch.load("tudui_29_gpu.pth", map_location=torch.device('cpu'))
print(model)
image = torch.reshape(image, (1, 3, 32, 32))
model.eval()
with torch.no_grad():
    output = model(image)
print(output)
print(output.argmax(1))
Summary