
PyTorch Example: A ResNet34 Model and the Fruits Image Dataset

This article walks through a PyTorch example: building a ResNet34 model and training it on the Fruits image dataset. I hope it is helpful; if anything is wrong or incomplete, corrections are welcome.

Introduction

  • A ResNet34 model for image classification
  • The data is a fruit image dataset; for the download, see the Kaggle Fruits Dataset (Images)
  • A Kaggle Notebook version of this example is also available
  • The code follows below

Imports

from PIL import Image
import os
import random
import matplotlib.pyplot as plt
import torch
from torch import nn
from torch.utils.data import Dataset, DataLoader
from torch.nn import functional as F
from torchvision import transforms as T
from torchvision.datasets import ImageFolder
import tqdm

Data Exploration

  • View a single image
path = "/kaggle/input/fruits-dataset-images/images"
fruit_path = "apple fruit"
apple_files = os.listdir(path + "/" + fruit_path)

Image.open(path + "/"+fruit_path+"/" + apple_files[2])


  • Display multiple images
def show_images(n_rows, n_cols, x_data):
    assert n_rows * n_cols <= len(x_data)
    
    plt.figure(figsize=(n_cols * 1.5, n_rows * 1.5))
    for row in range(n_rows):
        for col in range(n_cols):
            index = row * n_cols + col
            plt.subplot(n_rows, n_cols, index + 1)
            plt.imshow(x_data[index][0], cmap="binary", interpolation="nearest")  # image
            plt.axis("off")
            plt.title(x_data[index][1])  # label
    plt.show()
   
def show_fruit_imgs(fruit, rows, cols):
    files = os.listdir(path + "/" + fruit)
    images = []
    for _ in range(rows * cols):
        file = files[random.randint(0, len(files) - 1)]
        image = Image.open(path + "/" + fruit + "/" + file)
        label = file.split(".")[0]
        images.append((image, label))
    show_images(rows, cols, images)  # pass rows/cols in the order show_images expects
  • Apples
show_fruit_imgs("apple fruit", 3, 3)


  • Cherries
show_fruit_imgs("cherry fruit", 3, 3)


Dataset Construction

  • Load the data directly with ImageFolder, which infers the fruit classes from the directory names
transforms = T.Compose([
    T.Resize(224),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[.5, .5, .5], std=[.5, .5, .5])  # scale each channel to roughly [-1, 1]
])

train_dataset = ImageFolder(path, transform=transforms)
classification = train_dataset.classes  # class names in the same (sorted) order ImageFolder assigns labels

train_dataset[2]
  • Indexing the dataset returns an (image_tensor, label) tuple: a normalized 3 x 224 x 224 float tensor together with the integer class index (0 here, i.e. the first class)
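ImageFolder also exposes the class-to-index mapping directly; a quick sanity check (the folder names in the comment are assumptions based on the dataset above):

print(train_dataset.class_to_idx)  # e.g. {'apple fruit': 0, 'banana fruit': 1, ...}
print(len(train_dataset), "images across", len(classification), "classes")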

Building the Model: ResNet34

  • ResidualBlock
class ResidualBlock(nn.Module):
    
    def __init__(self, in_channels, out_channels, stride=1):
        super().__init__()
        self.conv1 = nn.Conv2d(in_channels, out_channels, kernel_size=(3, 3), stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_channels)
        self.conv2 = nn.Conv2d(out_channels, out_channels, kernel_size=(3, 3), stride=1, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_channels)
        # A projection shortcut is needed whenever the block changes the spatial size or channel count
        self.is_shortcut = stride > 1 or in_channels != out_channels
        self.shortcut = self._shortcut(in_channels, out_channels, stride) if self.is_shortcut else None
    
    def forward(self, X):
        out = self.conv1(X)
        out = self.bn1(out)
        out = F.relu(out, inplace=True)
        out = self.conv2(out)
        out = self.bn2(out)
        # When X and out have different shapes, project X through the shortcut first
        out += X if self.shortcut is None else self.shortcut(X)
        out = F.relu(out)
        return out
    
    def _shortcut(self, in_channels, out_channels, stride):
        return nn.Sequential(
            nn.Conv2d(in_channels, out_channels, 1, stride, bias=False),
            nn.BatchNorm2d(out_channels)
        )
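Before wiring the blocks into the full network, a minimal shape check (a sketch, not part of the original notebook) confirms the two shortcut cases behave as intended:

x = torch.randn(1, 64, 56, 56)
print(ResidualBlock(64, 64)(x).shape)      # torch.Size([1, 64, 56, 56]), identity shortcut
print(ResidualBlock(64, 128, 2)(x).shape)  # torch.Size([1, 128, 28, 28]), projection shortcut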
  • ResNet34
class ResNet34(nn.Module):
    
    def __init__(self, num_classes=2):
        super().__init__()
        self.pre = nn.Sequential(
            nn.Conv2d(3, 64, 7, 2, 3, bias=False),  # 64 * 112 * 112
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(3, 2, 1)  # 64 * 56 * 56
        )
        # layer1 needs no projection shortcut: with kernel_size=3, stride=1, padding=1 the feature map shape is unchanged
        self.layer1 = self._make_layer(64, 64, 3, 1)
        self.layer2 = self._make_layer(64, 128, 4, 2)
        self.layer3 = self._make_layer(128, 256, 6, 2)
        self.layer4 = self._make_layer(256, 512, 3, 2)
        self.fc = nn.Linear(512, num_classes)
        
    def _make_layer(self, in_channels, out_channels, block_num, stride):
        layers = [ResidualBlock(in_channels, out_channels, stride)]
        for i in range(1, block_num):
            layers.append(ResidualBlock(out_channels, out_channels))
        return nn.Sequential(*layers)
    
    def forward(self, X):
        # X: 3 * 224 * 224
        out = self.pre(X)                # 64 * 56 * 56
        out = self.layer1(out)           # 64 * 56 * 56
        out = self.layer2(out)           # 128 * 28 * 28
        out = self.layer3(out)           # 256 * 14 * 14
        out = self.layer4(out)           # 512 * 7 * 7
        out = F.avg_pool2d(out, 7)       # 512 * 1 * 1
        out = out.view(out.size(0), -1)  # 512 
        out = self.fc(out)               # len(classification)
        return out
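A one-batch smoke test (again just a sketch) verifies that the forward pass produces one logit per class:

net = ResNet34(num_classes=9)
print(net(torch.randn(2, 3, 224, 224)).shape)  # torch.Size([2, 9])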

Model Training

  • Setup code
def pad(num, target) -> str:
    """
    Zero-pad num so its width matches that of target
    """
    return str(num).zfill(len(str(target)))
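For example, with 50 target epochs the counter is padded to two digits so the progress-bar text stays aligned:

print(pad(3, 50))   # '03'
print(pad(12, 50))  # '12'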

# Hyperparameters
epoch_num = 50
batch_size = 32
learning_rate = 0.0005

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Data
train_loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True, num_workers=4)

# Build the model
model = ResNet34(len(classification)).to(device)
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)

print(model)
ResNet34(
  (pre): Sequential(
    (0): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
    (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (2): ReLU(inplace=True)
    (3): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
  )
  (layer1): Sequential(
    (0): ResidualBlock(
      (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
    (1): ResidualBlock(
      (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
    (2): ResidualBlock(
      (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
  )
  (layer2): Sequential(
    (0): ResidualBlock(
      (conv1): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (shortcut): Sequential(
        (0): Conv2d(64, 128, kernel_size=(1, 1), stride=(2, 2), bias=False)
        (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (1): ResidualBlock(
      (conv1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
    (2): ResidualBlock(
      (conv1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
    (3): ResidualBlock(
      (conv1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
  )
  (layer3): Sequential(
    (0): ResidualBlock(
      (conv1): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (shortcut): Sequential(
        (0): Conv2d(128, 256, kernel_size=(1, 1), stride=(2, 2), bias=False)
        (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (1): ResidualBlock(
      (conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
    (2): ResidualBlock(
      (conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
    (3): ResidualBlock(
      (conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
    (4): ResidualBlock(
      (conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
    (5): ResidualBlock(
      (conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
  )
  (layer4): Sequential(
    (0): ResidualBlock(
      (conv1): Conv2d(256, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (shortcut): Sequential(
        (0): Conv2d(256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False)
        (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (1): ResidualBlock(
      (conv1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
    (2): ResidualBlock(
      (conv1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
  )
  (fc): Linear(in_features=512, out_features=9, bias=True)
)
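The printed tree matches ResNet34's 3-4-6-3 block layout. As a rough cross-check (a sketch, not from the original run), the parameter count should come out near 21M:

n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params / 1e6:.1f}M parameters")  # roughly 21.3M with a 9-class head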
  • Start training
# Start training
train_loss_list = []
total_step = len(train_loader)
for epoch in range(1, epoch_num + 1):
    model.train()
    train_total_loss, train_total, train_correct = 0, 0, 0
    train_progress = tqdm.tqdm(train_loader, desc="Train...")
    for i, (X, y) in enumerate(train_progress, 1):
        X, y = X.to(device), y.to(device)
        
        out = model(X)
        loss = criterion(out, y)
        
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        
        _, pred = torch.max(out, 1)
        train_total += y.size(0)
        train_correct += (pred == y).sum().item()
        train_total_loss += loss.item()

        train_progress.set_description(f"Train... [epoch {pad(epoch, epoch_num)}/{epoch_num}, loss {(train_total_loss / i):.4f}, accuracy {train_correct / train_total:.4f}]")
    train_loss_list.append(train_total_loss / total_step)
Train... [epoch 01/50, loss 2.3034, accuracy 0.2006]: 100%|██████████| 12/12 [00:15<00:00,  1.32s/it]
Train... [epoch 02/50, loss 1.9193, accuracy 0.3064]: 100%|██████████| 12/12 [00:16<00:00,  1.36s/it]
Train... [epoch 03/50, loss 1.6338, accuracy 0.3482]: 100%|██████████| 12/12 [00:15<00:00,  1.30s/it]
Train... [epoch 04/50, loss 1.6031, accuracy 0.3649]: 100%|██████████| 12/12 [00:16<00:00,  1.38s/it]
Train... [epoch 05/50, loss 1.5298, accuracy 0.4401]: 100%|██████████| 12/12 [00:15<00:00,  1.31s/it]
Train... [epoch 06/50, loss 1.4189, accuracy 0.4429]: 100%|██████████| 12/12 [00:16<00:00,  1.34s/it]
Train... [epoch 07/50, loss 1.5439, accuracy 0.4708]: 100%|██████████| 12/12 [00:15<00:00,  1.31s/it]
Train... [epoch 08/50, loss 1.4378, accuracy 0.4596]: 100%|██████████| 12/12 [00:16<00:00,  1.36s/it]
Train... [epoch 09/50, loss 1.4005, accuracy 0.5348]: 100%|██████████| 12/12 [00:15<00:00,  1.32s/it]
Train... [epoch 10/50, loss 1.2937, accuracy 0.5599]: 100%|██████████| 12/12 [00:16<00:00,  1.34s/it]
......
Train... [epoch 45/50, loss 0.7966, accuracy 0.7354]: 100%|██████████| 12/12 [00:15<00:00,  1.27s/it]
Train... [epoch 46/50, loss 0.8075, accuracy 0.7660]: 100%|██████████| 12/12 [00:15<00:00,  1.33s/it]
Train... [epoch 47/50, loss 0.8587, accuracy 0.7131]: 100%|██████████| 12/12 [00:15<00:00,  1.27s/it]
Train... [epoch 48/50, loss 0.7171, accuracy 0.7604]: 100%|██████████| 12/12 [00:16<00:00,  1.35s/it]
Train... [epoch 49/50, loss 0.9715, accuracy 0.7047]: 100%|██████████| 12/12 [00:15<00:00,  1.27s/it]
Train... [epoch 50/50, loss 0.7050, accuracy 0.7855]: 100%|██████████| 12/12 [00:15<00:00,  1.33s/it]
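Once training finishes, it is worth persisting the weights; a minimal sketch (the file name is an assumption):

torch.save(model.state_dict(), "resnet34_fruits.pth")
# later: model.load_state_dict(torch.load("resnet34_fruits.pth"))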

Plotting the Training Curve

plt.plot(range(len(train_loss_list)), train_loss_list)
plt.xlabel("epoch")
plt.ylabel("loss_val")
plt.show()
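To close the loop, a short inference sketch (not in the original notebook) that reuses the sample image, transforms, and class list from above:

model.eval()
with torch.no_grad():
    img = Image.open(path + "/apple fruit/" + apple_files[2]).convert("RGB")
    x = transforms(img).unsqueeze(0).to(device)  # 1 x 3 x 224 x 224
    pred = model(x).argmax(dim=1).item()
print("predicted class:", classification[pred])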

