
PyTorch Deep Learning Quick Start Tutorial【小土堆】: Study Notes


Course video (Bilibili):

PyTorch深度學(xué)習(xí)快速入門教程(絕對(duì)通俗易懂?。拘⊥炼选?

1 PyTorch Installation and Configuration

Completely uninstalling Anaconda

conda install anaconda-clean
anaconda-clean --yes
Then finish removing it from the Windows Control Panel.

Removing a conda environment: conda uninstall -n yyy --all

  • Anaconda install path: D:\anaconda3
  • Create an environment: conda create -n pytorch python=3.9
  • Activate the environment: conda activate pytorch
  • List the packages currently installed: pip list

Q: How do I install PyTorch?

  1. Go to the PyTorch homepage and scroll down: https://pytorch.org/
  2. Check the GPU model: Task Manager → Performance → GPU
  3. First switch the download channels to a mirror, then install with the command from the official site; installing torch directly kept failing to download.

Tested: very slow, about 30 kB/s.

 conda create -n pytorch python=3.9
conda activate pytorch

conda config --remove-key channels  
conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/r
conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/msys2
conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud

Copyright notice: the mirror configuration above is from the CSDN blogger 「StarryHuangx」, under the CC 4.0 BY-SA license.
Original post: https://blog.csdn.net/weixin_46017950/article/details/123280653

pip3 install torch torchvision torchaudio

Tested: the most effective approach, and the fastest at about 3 MB/s.

conda create -n pytorch python=3.9
conda activate pytorch


conda config --set show_channel_urls yes
conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free/
conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main/
conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/pytorch/ 


# tried this one (did not succeed)
conda install pytorch==1.9.1 torchvision==0.10.1 torchaudio==0.9.1 -c pytorch

# tried this (did not succeed)
pip install torch==1.10.1+cu111 torchvision==0.11.2+cu111 torchaudio==0.10.1 -f https://download.pytorch.org/whl/torch_stable.html


# this one did not succeed
pip install torch==1.10.1+cu111 torchvision==0.11.2+cu111 torchaudio==0.10.1 -f https://download.pytorch.org/whl/cu111/torch_stable.html

pip install torch==1.10.1+cu102 torchvision==0.11.2+cu102 torchaudio==0.10.1 -f https://download.pytorch.org/whl/cu102/torch_stable.html


# this one works, but the GPU is too weak for this CUDA version, so training on the GPU is not possible
pip install torch==1.10.2+cu113 torchvision==0.11.3+cu113 torchaudio===0.10.2+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html  

# 11.1
pip install torch==1.10.1+cu111 torchvision==0.11.2+cu111 torchaudio==0.10.1 -f https://download.pytorch.org/whl/torch_stable.html

pip install torch==1.10.1+cu111 torchvision==0.11.2+cu111 torchaudio==0.10.1 -f https://download.pytorch.org/whl/cu111/torch_stable.html

Author: insightya, https://www.bilibili.com/read/cv15186754/ (source: bilibili)

2 Installing a Python Editor

PyCharm installation notes

  • In Installation Options, check the .py file association

PyCharm configuration

  1. Open PyCharm.
  2. While creating a new project,
  3. select Existing interpreter,
  4. locate the interpreter manually: choose Conda Environment ->
  5. and enter: D:/anaconda3/envs/pytorch/python.exe

Check that everything runs

  1. Open the Python Console
import torch
torch.cuda.is_available()

Jupyter is installed in the base environment by default, so it has to be installed again in the new environment.

Installing Jupyter in the new environment

# open the Anaconda Prompt
conda activate pytorch
conda list
conda install nb_conda
jupyter notebook

Jupyter usage notes

import torch
torch.cuda.is_available()

Shift + Enter: runs the current cell and creates (or moves to) the next one.

3 Why Does It Return False?

Mine returns True, so I skipped this part.

4 Two Handy Helper Functions for Exploring Python

dir(): "opens up" a package and lists what is inside
help(): the "manual", i.e. the documentation for an object

dir(torch)
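
For example, a quick interactive check (a minimal sketch of how the two functions are used):

import torch

# dir() lists the names inside a module or object
print(dir(torch))        # top-level contents of the torch package
print(dir(torch.cuda))   # contents of torch.cuda, e.g. is_available

# help() prints the documentation ("the manual") for an object
help(torch.cuda.is_available)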

5 Using and Comparing PyCharm and Jupyter


6 Loading Data in PyTorch

Dataset
Provides a way to get each data sample and its label.
Tells you how many samples there are in total.
Lets you fetch any single sample together with its label.

DataLoader
Packs the data into batches so it is convenient to use later (see the sketch below).
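
A minimal sketch of the two interfaces, using a toy in-memory dataset rather than the ant/bee images from the course:

from torch.utils.data import Dataset, DataLoader

class ToyDataset(Dataset):
    """A tiny Dataset over an in-memory list of (value, label) pairs."""
    def __init__(self, samples):
        self.samples = samples

    def __getitem__(self, idx):   # return one sample and its label
        return self.samples[idx]

    def __len__(self):            # total number of samples
        return len(self.samples)

toy = ToyDataset([(0.0, 0), (1.0, 1), (2.0, 0), (3.0, 1)])
loader = DataLoader(toy, batch_size=2, shuffle=True)
for values, labels in loader:     # the DataLoader yields batched tensors
    print(values, labels)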

Open Jupyter

conda activate pytorch
jupyter notebook

7 Dataset in Practice

Install OpenCV:

conda activate pytorch
pip install opencv-python

from torch.utils.data import Dataset
from PIL import Image
import os


class MyData(Dataset):

    def __init__(self, root_dir, label_dir):
        self.root_dir = root_dir
        self.label_dir = label_dir
        self.path = os.path.join(self.root_dir, self.label_dir)
        self.img_path = os.listdir(self.path)

    def __getitem__(self, idx):
        img_name = self.img_path[idx]
        img_item_path = os.path.join( self.root_dir, self.label_dir, img_name )
        img = Image.open(img_item_path)
        label = self.label_dir
        return img, label

    def __len__(self):
        return len(self.img_path)

root_dir = "hymenoptera_data/hymenoptera_data/train"
ants_label_dir = "ants"
bees_label_dir = "bees"
ants_dataset = MyData(root_dir, ants_label_dir)
bees_dataset = MyData(root_dir, bees_label_dir)

train_dataset = ants_dataset + bees_dataset
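
A quick check (assuming the hymenoptera_data folder layout used above):

img, label = ants_dataset[0]   # a PIL Image and its label string ("ants")
print(label, img.size)
print(len(ants_dataset), len(bees_dataset), len(train_dataset))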

8 Using TensorBoard for Data Visualization

Using SummaryWriter

  1. Install TensorBoard:
pip install tensorboard
  2. If it errors with mismatched packages, the following fixed it:
pip install setuptools==59.5.0
conda install tensorboard

After running, an extra logs folder appears.

To visualize the data:
make sure to switch into the directory that contains the logs folder first.

conda activate pytorch

D:
cd D:\pycharm-workplace\pythonProject

# run from the parent directory of logs
tensorboard --logdir=logs
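
TensorBoard then prints a local address, by default http://localhost:6006, which you open in a browser to view the logged data.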

9 Using TensorBoard, Part 2

Read the image file and convert it to a NumPy array, which add_image can accept:

pip install opencv-python

from torch.utils.tensorboard import SummaryWriter
import numpy as np
from PIL import Image

writer = SummaryWriter("logs")
image_path = "hymenoptera_data/hymenoptera_data/train/ants/0013035.jpg"
img_PIL = Image.open(image_path)
img_array = np.array(img_PIL)


writer.add_image("lyy", img_array, 1, dataformats='HWC')

for i in range(100):
    writer.add_scalar("y=2x", 3*i, i)

writer.close()

print("end")

10 Using Transforms, Part 1: transforming images

Using ToTensor(): converts a PIL Image or numpy array to a tensor

from PIL import Image
from torchvision import transforms

img_path = "hymenoptera_data/hymenoptera_data/train/ants/0013035.jpg"
img = Image.open(img_path)

# transforms.ToTensor is a class; tensor_trans is a callable instance of it
tensor_trans = transforms.ToTensor() # converts a PIL Image or numpy array to a tensor
tensor_img = tensor_trans(img)

print(tensor_img)

11 Using Transforms, Part 2: transforming images

from PIL import Image
from torch.utils.tensorboard import SummaryWriter
from torchvision import transforms

img_path = "hymenoptera_data/hymenoptera_data/train/ants/0013035.jpg"
img = Image.open(img_path)


writer = SummaryWriter("logs")


tensor_trans = transforms.ToTensor()
tensor_img = tensor_trans(img)

# print(tensor_img)

writer.add_image("lyy", tensor_img)
writer.close()


12 Common Transforms, Part 1

Normalize
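
Normalize works per channel: output[c] = (input[c] - mean[c]) / std[c]. For example, with mean 0.5 and std 0.5, a tensor with values in [0, 1] is mapped to [-1, 1].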

from PIL import Image
from torch.utils.tensorboard import SummaryWriter
from torchvision import transforms

img_path = "hymenoptera_data/16.jpg"
img = Image.open(img_path)

writer = SummaryWriter("logs")

# using ToTensor
tensor_trans = transforms.ToTensor()
tensor_img = tensor_trans(img)
writer.add_image("lyy", tensor_img)

# using Normalize
trans_norm = transforms.Normalize([111,111,111],[10,10,10])
img_norm = trans_norm(tensor_img)
writer.add_image("normalize", img_norm, 2)

writer.close()

print("end")

13 Common Transforms, Part 2

Notes

Pay attention to each transform's input and output types, read the official documentation, and check the parameters of the method.

from PIL import Image
from torch.utils.tensorboard import SummaryWriter
from torchvision import transforms

img_path = "hymenoptera_data/16.jpg"
img = Image.open(img_path)


writer = SummaryWriter("logs")


# using ToTensor
tensor_trans = transforms.ToTensor()
tensor_img = tensor_trans(img)
writer.add_image("lyy", tensor_img)

# using Normalize
trans_norm = transforms.Normalize([111,111,111],[10,10,10])
img_norm = trans_norm(tensor_img)
writer.add_image("normalize", img_norm, 2)

# Resize
trans_resize = transforms.Resize((512, 512))
img_resize = trans_resize(img)  # the input here is a PIL Image
tensor_img = tensor_trans(img_resize)  # convert the resized PIL Image to a tensor
writer.add_image("resize", tensor_img, 0)

# using Compose
trans_resize_2 = transforms.Resize(123)
trans_compose = transforms.Compose([trans_resize_2, tensor_trans])
img_resize_2 = trans_compose(img)
writer.add_image("resize2", img_resize_2,1)

# RandomCrop
trans_random = transforms.RandomCrop((20,50))
trans_compose_2 = transforms.Compose([trans_random, tensor_trans])
for i in range(10):
    img_crop = trans_compose_2(img)
    writer.add_image("randomCrop", img_crop, i)


writer.close()

print("end")

14 Using torchvision Datasets

Downloading a public dataset and using it

import torchvision
from torch.utils.tensorboard import SummaryWriter

dataset_transform = torchvision.transforms.Compose([
    torchvision.transforms.ToTensor()
])

train_set = torchvision.datasets.CIFAR10(root="./cifar10", train=True, transform=dataset_transform, download=True)
test_set = torchvision.datasets.CIFAR10(root="./cifar10", train=False, transform=dataset_transform, download=True)

writer = SummaryWriter("p10")
for i in range(10):
    img, target = test_set[i]
    writer.add_image("test_set", img, i)

writer.close()

Using DataLoader

A Dataset holds the data and its labels.
A DataLoader feeds the data, in batches, into the neural network.

import torchvision
from torch.utils.data import DataLoader
from torch.utils.tensorboard import SummaryWriter

test_data = torchvision.datasets.CIFAR10("./cifar10", train=False, transform=torchvision.transforms.ToTensor())
test_loader = DataLoader(dataset=test_data, batch_size=64, shuffle=True, num_workers=0, drop_last=False)

writer = SummaryWriter("dataloader")

step = 0
for data in test_loader:
    imgs, targets = data
    writer.add_images("dataloader", imgs, step)
    step = step + 1

writer.close()

17 The Convolution Operation

import torch
import torch.nn.functional as F

input = torch.tensor([[1, 2, 0, 3, 1],
                      [0, 1, 2, 3, 1],
                      [1, 2, 1, 0, 0],
                      [5, 2, 3, 1, 1],
                      [2, 1, 0, 1, 1]])
kernal = torch.tensor([[1, 2, 1],
                       [0, 1, 0],
                       [2, 1, 0]])

input = torch.reshape(input, (1,1,5,5))    # conv2d expects input of shape (N, C, H, W)
kernal = torch.reshape(kernal, (1,1,3,3))  # and a weight of shape (out_channels, in_channels, kH, kW)

print(input.shape)
print(kernal.shape)

output = F.conv2d(input, kernal, stride=2, padding=1)
print(output)

18 Neural Networks: Convolutional Layers

import torch
import torchvision
from torch.utils.data import DataLoader
from torch.nn import Conv2d
from torch import nn
from torch.utils.tensorboard import SummaryWriter

dataset = torchvision.datasets.CIFAR10("./cifar10",train=False,transform=torchvision.transforms.ToTensor(), download=False)
dataloader = DataLoader(dataset, batch_size=64)

class Lyy(nn.Module):
    def __init__(self):
        super(Lyy, self).__init__()
        self.conv1 = Conv2d(3, 6, 3, 1)

    def forward(self, x):
        x = self.conv1(x)
        return x

lyy = Lyy()

writer = SummaryWriter("logs")
step = 0
for data in dataloader:
    imgs, targets = data
    output = lyy(imgs)
    writer.add_images("input", imgs, step)

    output = torch.reshape(output, (-1,3,30,30))  # 6 channels cannot be shown as RGB, so fold the extra channels into the batch dimension
    writer.add_images("output", output, step)

    step += 1

writer.close()

19 Max Pooling Layers

For pooling, the stride defaults to the kernel size.

import torch
from torch.nn import Conv2d, MaxPool2d
from torch import nn
import torchvision
from torch.utils.data import DataLoader
from torch.utils.tensorboard import SummaryWriter

dataset = torchvision.datasets.CIFAR10('./cifar10', train=False, transform=torchvision.transforms.ToTensor(),download=False)
dataloader = DataLoader(dataset, 64, False)


input = torch.tensor([[1, 2, 0, 3, 1],
                      [0, 1, 2, 3, 1],
                      [1, 2, 1, 0, 0],
                      [5, 2, 3, 1, 1],
                      [2, 1, 0, 1, 1]], dtype=torch.float32)
kernal = torch.tensor([[1, 2, 1],
                       [0, 1, 0],
                       [2, 1, 0]])

input = torch.reshape(input, (1,1,5,5))
# kernal = torch.reshape(kernal, (1,1,3,3))

class Lyy(nn.Module):
    def __init__(self):
        super(Lyy, self).__init__()
         # self.conv1 = Conv2d(3, 3, 3, 1,padding=1)
        self.maxpool = MaxPool2d(kernel_size=3, ceil_mode=True)

    def forward(self, x):
        # x = self.conv1(x)
        x = self.maxpool(x)
        return x

lyy = Lyy()
# output = lyy(input)

writer = SummaryWriter("logs")
step = 0
for data in dataloader:
    imgs, targets = data
    writer.add_images("input", imgs, step)
    output = lyy(imgs)
    writer.add_images("output", output, step)
    step += 1

writer.close()

20 Non-linear Activations

import torch
import torchvision.datasets
from torch import nn
from torch.nn import ReLU, Sigmoid
from torch.utils.data import DataLoader
from torch.utils.tensorboard import SummaryWriter

input = torch.tensor([[1, 2, 0, 3, 1],
                      [0, 1, 2, 3, 1],
                      [1, 2, 1, 0, 0],
                      [5, 2, 3, 1, 1],
                      [2, 1, 0, 1, 1]], dtype=torch.float32)

input = torch.reshape(input, (-1,1, 5, 5))


dataset = torchvision.datasets.CIFAR10("./cifar10", False, transform=torchvision.transforms.ToTensor())
dataloader = DataLoader(dataset, 64)


class Lyy(nn.Module):
    def __init__(self):
        super(Lyy, self).__init__()
        self.relu1 = ReLU()
        self.sigmoid = Sigmoid()

    def forward(self, input):
        return self.sigmoid(input)

lyy = Lyy()


step = 0
writer = SummaryWriter("logs")

for data in dataloader:
    imgs, targets = data
    writer.add_images("input", imgs, step)
    output = lyy(imgs)
    writer.add_images("output", output, step)
    step += 1

writer.close()

print("end")

21 Linear Layers and Other Layers

Getting it to run correctly:

import torch
import torchvision
from torch.nn import Linear
from torch.utils.data import DataLoader
from torch import nn
from torch.utils.tensorboard import SummaryWriter

dataset = torchvision.datasets.CIFAR10('./cifar10', False, transform=torchvision.transforms.ToTensor())
dataloader = DataLoader(dataset, batch_size=64, drop_last=True)  # drop the last incomplete batch so every batch flattens to 196608 features


class Lyy(nn.Module):
    def __init__(self):
        super(Lyy, self).__init__()
        self.linear1 = Linear(196608, 10)

    def forward(self, input):
        output = self.linear1(input)
        return output

lyy = Lyy()

write = SummaryWriter("logs")

step = 0
for data in dataloader:
    imgs, targets = data
    print(imgs.shape)                            # torch.Size([64, 3, 32, 32])
    write.add_images("input", imgs, step)
    output = torch.reshape(imgs, (1, 1, 1, -1))  # flatten the whole batch: 64*3*32*32 = 196608
    print(output.shape)
    output = lyy(output)                         # apply the linear layer: 196608 -> 10
    print(output.shape)
    step += 1
write.close()

22 A Small Hands-on Network Build

Work out what the padding and stride should be.

Dilation defaults to 1.
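
The standard Conv2d output-size formula from the PyTorch docs is out = floor((in + 2*padding - dilation*(kernel_size - 1) - 1) / stride) + 1. A small sketch of the check for the CIFAR10 network below (my own helper function, not from the course):

# Conv2d / MaxPool2d output size: out = (in + 2*p - d*(k - 1) - 1) // s + 1
def conv_out(size, kernel, stride=1, padding=0, dilation=1):
    return (size + 2 * padding - dilation * (kernel - 1) - 1) // stride + 1

print(conv_out(32, kernel=5, stride=1, padding=2))  # 32 -> 32, so padding=2 and stride=1 keep the size
print(conv_out(32, kernel=2, stride=2))             # MaxPool2d(2): 32 -> 16
# conv/pool three times: 32 -> 16 -> 8 -> 4, so the Flatten output is 64 * 4 * 4 = 1024 features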

Why you get "RuntimeError: mat1 and mat2 shapes cannot be multiplied (64x1024 and 10240x64)"

Cause: the network parameters were set incorrectly. The error means the flattened feature size is 1024 per sample, while the first Linear layer was created with in_features=10240 instead of 1024.

from torch import nn
from torch.nn import Conv2d, MaxPool2d, Flatten, Linear, Sequential
import torch

class Lyy(nn.Module):
    def __init__(self):
        super(Lyy, self).__init__()
        self.conv1 = Conv2d(3, 32, 5, padding=2)
        self.maxpool1 = MaxPool2d(2)
        self.conv2 = Conv2d(32, 32, 5, padding=2)
        self.maxpool2 = MaxPool2d(2)
        self.conv3 = Conv2d(32, 64, 5, padding=2)
        self.maxpool3 = MaxPool2d(2)
        self.flatten = Flatten()
        self.linear1 = Linear(1024, 64)
        self.linear2 = Linear(64, 10)

    def forward(self, x):
        x = self.conv1(x)
        x = self.maxpool1(x)
        x = self.conv2(x)
        x = self.maxpool2(x)
        x = self.conv3(x)
        x = self.maxpool3(x)
        x = self.flatten(x)
        x = self.linear1(x)
        x = self.linear2(x)
        return x

lyy = Lyy()
input = torch.ones((64,3,32,32))
print(input.shape)
output = lyy(input)
print(output.shape)

An equivalent version of the code above, using Sequential:

from torch import nn
from torch.nn import Conv2d, MaxPool2d, Flatten, Linear, Sequential
import torch

class Lyy(nn.Module):
    def __init__(self):
        super(Lyy, self).__init__()
        self.model1 = Sequential(
            Conv2d(3, 32, 5, padding=2),
            MaxPool2d(2),
            Conv2d(32, 32, 5, padding=2),
            MaxPool2d(2),
            Conv2d(32, 64, 5, padding=2),
            MaxPool2d(2),
            Flatten(),
            Linear(1024, 64),
            Linear(64, 10)
        )

    def forward(self, x):
        x = self.model1(x)
        return x

lyy = Lyy()
input = torch.ones((64,3,32,32))
print(input.shape)
output = lyy(input)
print(output.shape)

TensorBoard can also draw the computation graph of the model:

from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter('logs')
writer.add_graph(lyy, input)
writer.close()

23 Loss Functions and Backpropagation

Classification problems usually use the cross-entropy loss.

import torch
from torch.nn import L1Loss
from torch import nn

inputs = torch.tensor([1,2,3], dtype=float)
outputs = torch.tensor([1,2,5], dtype=float)

inputs = torch.reshape(inputs, [1,1,1,3])
outputs = torch.reshape(outputs, (1,1,1,3))

loss = L1Loss(reduction='sum')
result = loss(inputs, outputs)
print(result)


loss_mse = nn.MSELoss()
result = loss_mse(inputs, outputs)
print(result)


# cross-entropy loss
x = torch.tensor([0.1, 0.2, 0.3])
y = torch.tensor([1])
x = torch.reshape(x, (1, 3))
loss_cross = nn.CrossEntropyLoss()
result_cross = loss_cross(x, y)
print(result_cross)

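As a quick sanity check (my own worked example): for a single sample, CrossEntropyLoss computes -x[class] + log(sum over j of exp(x[j])), so for x = [0.1, 0.2, 0.3] and target class 1 the result is -0.2 + log(e^0.1 + e^0.2 + e^0.3) ≈ 1.1019.

Below, the same cross-entropy loss is applied to the outputs of the CIFAR10 network:
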
import torchvision
from torch import nn
from torch.nn import Sequential, Conv2d, MaxPool2d, Flatten, Linear
from torch.utils.data import DataLoader

dataset = torchvision.datasets.CIFAR10('./cifar10', False, transform=torchvision.transforms.ToTensor())
dataloader = DataLoader(dataset, batch_size=64)

class Lyy(nn.Module):
    def __init__(self):
        super(Lyy, self).__init__()
        self.model1 = Sequential(
            Conv2d(3, 32, 5, padding=2),
            MaxPool2d(2),
            Conv2d(32, 32, 5, padding=2),
            MaxPool2d(2),
            Conv2d(32, 64, 5, padding=2),
            MaxPool2d(2),
            Flatten(),
            Linear(1024, 64),
            Linear(64, 10)
        )

    def forward(self, x):
        x = self.model1(x)
        return x

loss = nn.CrossEntropyLoss()
lyy=Lyy()
for data in dataloader:
    imgs, targets = data
    outputs = lyy(imgs)
    result_loss = loss(outputs, targets)
    result_loss.backward()
    print(result_loss)

24 Optimizers

import torchvision
from torch import nn
from torch.nn import Sequential, Conv2d, MaxPool2d, Flatten, Linear
from torch.utils.data import DataLoader
import torch


dataset = torchvision.datasets.CIFAR10('./cifar10', False, transform=torchvision.transforms.ToTensor())
dataloader = DataLoader(dataset, batch_size=64)


class Lyy(nn.Module):
    def __init__(self):
        super(Lyy, self).__init__()
        self.model1 = Sequential(
            Conv2d(3, 32, 5, padding=2),
            MaxPool2d(2),
            Conv2d(32, 32, 5, padding=2),
            MaxPool2d(2),
            Conv2d(32, 64, 5, padding=2),
            MaxPool2d(2),
            Flatten(),
            Linear(1024, 64),
            Linear(64, 10)
        )

    def forward(self, x):
        x = self.model1(x)
        return x

loss = nn.CrossEntropyLoss()
lyy=Lyy()
optis = torch.optim.SGD(lyy.parameters(), lr=0.1)

for epoch in range(10):
    running_loss = 0
    for data in dataloader:
        imgs, targets = data
        outputs = lyy(imgs)
        result_loss = loss(outputs, targets)

        optis.zero_grad()
        result_loss.backward()  # compute the gradients
        optis.step()
        running_loss += result_loss.item()  # .item() so the autograd graph is not kept around
    print(running_loss)


25 Using and Modifying Existing Network Models

VGG model + the ImageNet dataset

1. Install the scipy package.

2. The pretrained parameter controls whether the model comes with weights already trained on ImageNet, or only the architecture.

import torchvision.models
from torch import nn
from torch.nn import Conv2d, MaxPool2d, Flatten, Linear, Sequential
import torch

dataset = torchvision.datasets.CIFAR10('./cifar10', False, transform=torchvision.transforms.ToTensor())
#dataloader = DataLoader(dataset, batch_size=64)

vgg16_true = torchvision.models.vgg16(pretrained=True)
vgg16_false = torchvision.models.vgg16(pretrained=False)
# modifying vgg16

print(vgg16_false)
# modify vgg16: append an extra layer to the classifier
vgg16_true.classifier.add_module('add_linear', nn.Linear(1000, 10))
print(vgg16_true)

print(vgg16_false)
vgg16_false.classifier[6] = nn.Linear(4096, 10)
print(vgg16_false)

26 Saving and Loading Network Models

Method 1 and Method 2 are both fine; Method 2 is the recommended one.

model_save

import torchvision
import torch

vgg16 = torchvision.models.vgg16(pretrained=False)

# Save method 1: saves both the model structure and the parameters
torch.save(vgg16, "vgg16_model1.pth")

# Save method 2: saves only the parameters as a state dict, not the structure (officially recommended)
torch.save(vgg16.state_dict(), "vgg16_model2.pth")

print("end")

model_load

import torch
import torchvision

# Load method 1 (matches save method 1)
# model = torch.load("vgg16_model1.pth")
# print(model)


# Load method 2 (matches save method 2)
vgg16 = torchvision.models.vgg16(pretrained=False)
vgg16.load_state_dict(torch.load("vgg16_model2.pth"))
print(vgg16)

27, 28, 29 A Complete Training Routine

argmax() is very useful for classification problems (see the short example below).

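A minimal sketch of how argmax turns raw network outputs into predicted classes and a count of correct predictions (toy numbers, not from the course):

import torch

outputs = torch.tensor([[0.1, 0.9],   # predicted class 1
                        [0.8, 0.2]])  # predicted class 0
targets = torch.tensor([1, 0])

preds = outputs.argmax(1)             # index of the largest value in each row
correct = (preds == targets).sum()    # how many predictions match the targets
print(preds, correct.item())          # tensor([1, 0]) 2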

import torch
from torch.utils.tensorboard import SummaryWriter

from model_self import *
import torchvision
from torch.nn import Conv2d
from torch.optim import SGD
from torch.utils.data import DataLoader
from torch import nn
from torch.nn import Sequential, Conv2d, MaxPool2d, Flatten, Linear

# prepare the datasets
train_data = torchvision.datasets.CIFAR10('./cifar10', True, transform=torchvision.transforms.ToTensor(),download=False)
test_data = torchvision.datasets.CIFAR10('./cifar10', False, transform=torchvision.transforms.ToTensor(),download=False)


# dataset sizes
train_data_size = len(train_data)
test_data_size = len(test_data)
print("Training set size: {}".format(train_data_size))
print("Test set size: {}".format(test_data_size))

# load the datasets
train_dataloader = DataLoader(train_data, batch_size=64)
test_dataloader = DataLoader(test_data, batch_size=64)


# build the network (Lyy is defined in model_self.py and imported above)


# create the network model
lyy = Lyy()

# create the loss function
loss_fn = nn.CrossEntropyLoss()

# optimizer
learning_rate = 1e-2
optimizer = torch.optim.SGD(lyy.parameters(),lr=learning_rate)


# training bookkeeping
total_train_step = 0
total_test_step = 0
epoch = 10


# add tensorboard
writer = SummaryWriter("logs")


for i in range(epoch):
    print("-----第{}輪訓(xùn)練開始了-----".format(i+1))

    # training steps
    for data in train_dataloader:
        imgs, targets = data
        output = lyy(imgs)
        loss = loss_fn(output, targets)

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        total_train_step += 1
        if total_train_step % 100 == 0:
            print("訓(xùn)練次數(shù):{},Loss:{}".format(total_train_step, loss.item()))
            writer.add_scalar("train_loss", loss.item(), total_train_step)

    # evaluation steps
    total_test_loss = 0
    total_accuracy = 0

    with torch.no_grad():
        for data in test_dataloader:
            imgs, targets = data
            output = lyy(imgs)
            loss = loss_fn(output, targets)
            total_test_loss += loss.item()

            accuracy = (output.argmax(1) == targets).sum()  # correct predictions in this batch
            total_accuracy += accuracy

    print("整體測(cè)試機(jī)上誤差:{}".format(total_test_loss))
    print("整體測(cè)試機(jī)上的正確率:{}".format(total_accuracy/test_data_size))
    writer.add_scalar("test_loss", total_test_loss, total_test_step)
    writer.add_scalar("test_accuracy", total_accuracy/total_test_step)
    total_test_step += 1

    # torch.save(lyy, "lyy_{}.pth".format(i))
    # print("model saved")

writer.close()

30 Training on the GPU

Find the network model, the data (imgs and targets), and the loss function, and call .cuda() on each of them.
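
A minimal sketch of the pattern (my own summary; lyy, loss_fn, optimizer, and train_dataloader are assumed to be defined as in the training script above):

import torch

# move the model and the loss function to the GPU once, if one is available
if torch.cuda.is_available():
    lyy = lyy.cuda()
    loss_fn = loss_fn.cuda()

for data in train_dataloader:
    imgs, targets = data
    # each batch of data also has to be moved to the GPU
    if torch.cuda.is_available():
        imgs = imgs.cuda()
        targets = targets.cuda()

    output = lyy(imgs)
    loss = loss_fn(output, targets)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

An equivalent, more flexible pattern is device = torch.device("cuda" if torch.cuda.is_available() else "cpu") together with .to(device) on the model, the loss function, imgs, and targets.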
