
Processing Your Own Dataset with PyTorch


Contents

1 Loading a dataset from local files

2 Creating a label for each sample from an existing dataset

3 Using TensorBoard

4 Processing data with transforms

Using transforms.ToTensor

Using transforms.Normalize

Using transforms.Resize

Using transforms.Compose

5 Using dataset_transforms


1 Loading a dataset from local files

In this setup, the name of each sample's parent directory serves as the label shared by every sample in that directory.

import os
from torch.utils.data import Dataset
from PIL import Image

class MyDataset(Dataset):
    def __init__(self, root_dir, label_dir):
        """
        :param root_dir: root directory of the dataset
        :param label_dir: subdirectory whose name is the class label
        """
        self.root_dir = root_dir
        self.label_dir = label_dir
        self.path = os.path.join(root_dir, label_dir)
        self.image_path_list = os.listdir(self.path)

    def __getitem__(self, idx):
        """
        :param idx: index of an image inside the label directory
        :return: the image object and its label; the returned image can be
                 shown directly with image.show() or passed on for further processing
        """
        img_name = self.image_path_list[idx]
        ever_image_path = os.path.join(self.root_dir, self.label_dir, img_name)
        image = Image.open(ever_image_path)
        label = self.label_dir
        return image, label

    def __len__(self):
        return len(self.image_path_list)

root_dir = r'G:\python_files\深度学习代码库\cats_and_dogs_small\train'
label_dir = 'cats'
my_data = MyDataset(root_dir, label_dir)
first_pic, label = my_data[0]   # __getitem__(self, idx) is called automatically
first_pic.show()
print("Label of this image:", label)

F:\Anaconda\envs\py38\python.exe G:/python_files/深度学习代码库/dataset/MyDataSet.py
Label of this image: cats
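
A single MyDataset instance only covers one class. PyTorch's Dataset supports the + operator (it builds a torch.utils.data.ConcatDataset), so the per-class datasets can be merged into one training set. A minimal sketch, assuming a dogs directory exists next to cats:

cats_data = MyDataset(root_dir, 'cats')
dogs_data = MyDataset(root_dir, 'dogs')    # assumed to exist under the same root
train_data = cats_data + dogs_data         # a ConcatDataset under the hood
print(len(train_data))                     # len(cats_data) + len(dogs_data)
image, label = train_data[len(cats_data)]  # first sample of the dogs half
print(label)                               # 'dogs'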

2 Creating a label for each sample from an existing dataset


import os

class MyLabelData:
    def __init__(self, root_dir, target_dir, label_dir, label_name):
        """
        :param root_dir: root directory
        :param target_dir: directory containing the samples to label
        :param label_dir: directory in which the label files are created
        :param label_name: label string written into each file
        """
        self.root_dir = root_dir
        self.target_dir = target_dir
        self.label_dir = label_dir
        self.label_name = label_name
        self.image_name_list = os.listdir(os.path.join(root_dir, target_dir))

    def label(self):
        for name in self.image_name_list:
            file_name = name.split(".jpg", 1)[0]
            label_path = os.path.join(self.root_dir, self.label_dir)
            if not os.path.exists(label_path):
                os.makedirs(label_path)
            # the with statement closes the file automatically
            with open(os.path.join(label_path, '{}'.format(file_name)), 'w') as f:
                f.write(self.label_name)

root_dir = r'G:\python_files\深度学习代码库\cats_and_dogs_small\train'
target_dir = 'cats'
label_dir = 'cats_label'
label_name = 'cat'
label = MyLabelData(root_dir, target_dir, label_dir, label_name)
label.label()

Running this creates, for every sample in the target directory of the training set, a matching label file under the cats_label directory of train.


Each label file contains the string cat (or another class name), so it records which animal the sample actually is.
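
To consume these label files during training, a dataset's __getitem__ can read the text file whose name matches the image stem. The class below is an illustrative sketch, not part of the original code:

import os
from PIL import Image
from torch.utils.data import Dataset

class LabeledDataset(Dataset):
    """Hypothetical reader for the label files written by MyLabelData."""
    def __init__(self, root_dir, target_dir, label_dir):
        self.image_dir = os.path.join(root_dir, target_dir)
        self.label_dir = os.path.join(root_dir, label_dir)
        self.image_names = os.listdir(self.image_dir)

    def __getitem__(self, idx):
        name = self.image_names[idx]
        image = Image.open(os.path.join(self.image_dir, name))
        stem = name.split('.jpg', 1)[0]
        # read back the label string, e.g. 'cat'
        with open(os.path.join(self.label_dir, stem)) as f:
            label = f.read()
        return image, label

    def __len__(self):
        return len(self.image_names)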

3 Using TensorBoard

# tensorboard --logdir=深度学习代码库/logs --port=2001
from torch.utils.tensorboard import SummaryWriter
import numpy as np
from PIL import Image

writer = SummaryWriter('logs')
# log the scalar curve y = 3 * x
for i in range(100):
    writer.add_scalar('y = 3 * x', i * 3, i)
# -----------------------------------------------------------
# add_image accepts a tensor or a numpy array
image_PIL = Image.open(r'G:\python_files\深度学习代码库\cats_and_dogs_small\train\cats\cat.1.jpg')
image_numpy = np.array(image_PIL)
print(type(image_numpy))
print(image_numpy.shape)
writer.add_image('cat image', image_numpy, 2, dataformats='HWC')
writer.close()

TensorBoard is used here to visualize the data. For a function such as add_image, the best habit is to open its source, check the expected parameter types, and then hand it data of exactly the type it asks for. The source shows that add_image requires the image to be a tensor or a numpy array, so before displaying anything in TensorBoard the image must first be converted, either with np.array or with transforms.ToTensor.

Another thing to watch with add_image: the memory layout of the tensor or numpy image must match the dataformats argument (default CHW); if it does not, set dataformats to match the image's actual layout.
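
As a quick check of that rule, here is a small self-contained sketch (not from the original post) that logs the same random image in both layouts:

import numpy as np
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter('logs')
img_hwc = np.random.rand(64, 64, 3)   # height x width x channels
writer.add_image('hwc_demo', img_hwc, 0, dataformats='HWC')
img_chw = img_hwc.transpose(2, 0, 1)  # channels x height x width
writer.add_image('chw_demo', img_chw, 0, dataformats='CHW')
writer.close()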

4 Processing data with transforms

Using transforms.ToTensor
import numpy as np
from torchvision import transforms
from PIL import Image

tran = transforms.ToTensor()
PIL_image = Image.open(r'G:\python_files\深度学习代码库\cats\cat\cat.11.jpg')
tensor_pic = tran(PIL_image)
print(tensor_pic)
print(tensor_pic.shape)

from torch.utils.tensorboard import SummaryWriter
write = SummaryWriter('logs')
write.add_image('Tensor_picture', tensor_pic)

tensor([[[0.9216, 0.9059, 0.8353,  ..., 0.2392, 0.2275, 0.2078],
         [0.9765, 0.9216, 0.8118,  ..., 0.2431, 0.2392, 0.2235],
         [0.9490, 0.8745, 0.7608,  ..., 0.2471, 0.2471, 0.2314],
         ...,
         [0.3490, 0.4902, 0.6667,  ..., 0.7804, 0.7804, 0.7804],
         [0.3412, 0.4431, 0.5216,  ..., 0.7765, 0.7922, 0.7882],
         [0.3490, 0.4510, 0.5294,  ..., 0.7765, 0.7922, 0.7882]],

        [[0.9451, 0.9294, 0.8706,  ..., 0.2980, 0.2863, 0.2667],
         [1.0000, 0.9451, 0.8471,  ..., 0.3020, 0.2980, 0.2824],
         [0.9725, 0.8980, 0.7961,  ..., 0.2980, 0.2980, 0.2824],
         ...,
         [0.3725, 0.5137, 0.6902,  ..., 0.8431, 0.8431, 0.8431],
         [0.3647, 0.4667, 0.5451,  ..., 0.8392, 0.8549, 0.8510],
         [0.3608, 0.4627, 0.5412,  ..., 0.8392, 0.8549, 0.8510]],

        [[0.9294, 0.9137, 0.8588,  ..., 0.2235, 0.2118, 0.1922],
         [0.9922, 0.9373, 0.8353,  ..., 0.2275, 0.2235, 0.2078],
         [0.9725, 0.8980, 0.7922,  ..., 0.2275, 0.2275, 0.2118],
         ...,
         [0.4196, 0.5608, 0.7373,  ..., 0.9412, 0.9412, 0.9333],
         [0.4196, 0.5216, 0.6000,  ..., 0.9373, 0.9529, 0.9412],
         [0.4196, 0.5216, 0.6000,  ..., 0.9373, 0.9529, 0.9412]]])
torch.Size([3, 410, 431])

Using transforms.Normalize
# one mean and one standard deviation per channel
# output[channel] = (input[channel] - mean[channel]) / std[channel]
nor = transforms.Normalize([0.5, 0.5, 0.5], [10, 0.5, 0.5])
print(tensor_pic[0][0][0])
x_nor = nor(tensor_pic)
write.add_image('nor_picture', x_nor)
print(x_nor[0][0][0])  # the same pixel after normalization

Opening the source to check:

def forward(self, tensor: Tensor) -> Tensor:
    """
    Args:
        tensor (Tensor): Tensor image to be normalized.

    Returns:
        Tensor: Normalized Tensor image.
    """
    return F.normalize(tensor, self.mean, self.std, self.inplace)

The input must be a tensor.
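
The normalization formula can be verified by hand on a single pixel; with mean 0.5 and std 10 on channel 0, the two printed values below should match (a sketch reusing tensor_pic from above):

before = tensor_pic[0][0][0]
after = nor(tensor_pic)[0][0][0]
print(after, (before - 0.5) / 10)  # identical: output = (input - mean) / std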

Using transforms.Resize
size_tensor = transforms.Resize((512, 512))
# resize a tensor
tensor_pic_size = size_tensor(tensor_pic)
# resize a PIL Image
size_pic = transforms.Resize((512, 512))
image_size = size_pic(PIL_image)
print(image_size)
write.add_image('tensor_pic_size', tensor_pic_size)
print(tensor_pic_size.shape)
np_image = np.array(image_size)
print('np_image.shape:', np_image.shape)
write.add_image('image_size', np_image, dataformats='HWC')

When calling Resize, the required input type can again be checked in the source:

def forward(self, img):
    """
    Args:
        img (PIL Image or Tensor): Image to be scaled.

    Returns:
        PIL Image or Tensor: Rescaled image.
    """
    return F.resize(img, self.size, self.interpolation)

<PIL.Image.Image image mode=RGB size=512x512 at 0x1A72B1E7D00>
torch.Size([3, 512, 512])
np_image.shape: (512, 512, 3)
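
One detail worth knowing: passing a single int instead of an (h, w) pair resizes the shorter side to that value and keeps the aspect ratio, so the two calls below generally give different shapes (a small sketch):

square = transforms.Resize((512, 512))(PIL_image)  # exactly 512x512
scaled = transforms.Resize(512)(PIL_image)         # shorter side becomes 512
print(square.size, scaled.size)                    # e.g. (512, 512) vs roughly (538, 512)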

Using transforms.Compose
nor = transforms.Normalize([0.5, 0.5, 0.5], [10, 0.5, 0.5])
trans_resize_2 = transforms.Resize((64, 64))
trans_to_tensor = transforms.ToTensor()
# transforms run in list order: Resize on the PIL image first, then ToTensor
trans_compose = transforms.Compose([trans_resize_2, trans_to_tensor])
tensor_pic_compose = trans_compose(PIL_image)
write.add_image('tensor_pic_compose', tensor_pic_compose, dataformats='CHW')
write.close()

The Compose source shows it simply calls each transform in turn:
class Compose:
    """Composes several transforms together. This transform does not support torchscript.
    Please, see the note below.

    Args:
        transforms (list of ``Transform`` objects): list of transforms to compose.

    Example:
        >>> transforms.Compose([
        >>>     transforms.CenterCrop(10),
        >>>     transforms.ToTensor(),
        >>> ])

    .. note::
        In order to script the transformations, please use ``torch.nn.Sequential`` as below.

        >>> transforms = torch.nn.Sequential(
        >>>     transforms.CenterCrop(10),
        >>>     transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225)),
        >>> )
        >>> scripted_transforms = torch.jit.script(transforms)

        Make sure to use only scriptable transformations, i.e. that work with ``torch.Tensor``, does not require
        `lambda` functions or ``PIL.Image``.

    """

    def __init__(self, transforms):
        self.transforms = transforms

    def __call__(self, img):
        for t in self.transforms:
            img = t(img)
        return img

    def __repr__(self):
        format_string = self.__class__.__name__ + '('
        for t in self.transforms:
            format_string += '\n'
            format_string += '    {0}'.format(t)
        format_string += '\n)'
        return format_string

5 Using dataset_transforms

from torch.utils.data import DataLoader
from torchvision import transforms
import torchvision

data_transform = transforms.Compose([transforms.ToTensor()])
train_data = torchvision.datasets.CIFAR10('./data', train=True, download=True)
test_data = torchvision.datasets.CIFAR10('./data', train=False, download=True)
print("train_data", train_data)
# without a transform, each entry of the dataset is a PIL image plus its class index
print("train_data[0]", train_data[0])
print("train_data.classes", train_data.classes)
image, label = train_data[0]
print("label", label)
image.show()
print("train_data.classes[label]", train_data.classes[label])

train_data Dataset CIFAR10
    Number of datapoints: 50000
    Root location: ./data
    Split: Train
train_data[0] (<PIL.Image.Image image mode=RGB size=32x32 at 0x144ED58D970>, 6)
train_data.classes ['airplane', 'automobile', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
label 6
train_data.classes[label] frog

#%%
from torchvision import transforms
import torchvision
# convert every sample in the dataset to a tensor
data_transform1 = transforms.Compose([transforms.ToTensor()])
train_data = torchvision.datasets.CIFAR10('./data', train=True, transform=data_transform1, download=True)
test_data1 = torchvision.datasets.CIFAR10('./data', train=False, transform=data_transform1, download=True)
from torch.utils.tensorboard import SummaryWriter
write = SummaryWriter('batch_picture')
for i in range(10):
    tensor_pic, label = train_data[i]  # already a tensor thanks to the transform above
    print(tensor_pic.shape)
    write.add_image('batch_picture', tensor_pic, i)
write.close()

Files already downloaded and verified
Files already downloaded and verified
torch.Size([3, 32, 32])
torch.Size([3, 32, 32])
torch.Size([3, 32, 32])
torch.Size([3, 32, 32])
torch.Size([3, 32, 32])
torch.Size([3, 32, 32])
torch.Size([3, 32, 32])
torch.Size([3, 32, 32])
torch.Size([3, 32, 32])
torch.Size([3, 32, 32])

def add_image(self, tag, img_tensor, global_step=None, walltime=None, dataformats='CHW'):
    """Add image data to summary.

    Note that this requires the ``pillow`` package.

    Args:
        tag (string): Data identifier
        img_tensor (torch.Tensor, numpy.array, or string/blobname): Image data
        global_step (int): Global step value to record
        walltime (float): Optional override default walltime (time.time())
          seconds after epoch of event
    Shape:
        img_tensor: Default is :math:`(3, H, W)`. You can use ``torchvision.utils.make_grid()`` to
        convert a batch of tensor into 3xHxW format or call ``add_images`` and let us do the job.
        Tensor with :math:`(1, H, W)`, :math:`(H, W)`, :math:`(H, W, 3)` is also suitable as long as
        corresponding ``dataformats`` argument is passed, e.g. ``CHW``, ``HWC``, ``HW``.

    Examples::

        from torch.utils.tensorboard import SummaryWriter
        import numpy as np
        img = np.zeros((3, 100, 100))
        img[0] = np.arange(0, 10000).reshape(100, 100) / 10000
        img[1] = 1 - np.arange(0, 10000).reshape(100, 100) / 10000

        img_HWC = np.zeros((100, 100, 3))
        img_HWC[:, :, 0] = np.arange(0, 10000).reshape(100, 100) / 10000
        img_HWC[:, :, 1] = 1 - np.arange(0, 10000).reshape(100, 100) / 10000

        writer = SummaryWriter()
        writer.add_image('my_image', img, 0)

        # If you have non-default dimension setting, set the dataformats argument.
        writer.add_image('my_image_HWC', img_HWC, 0, dataformats='HWC')
        writer.close()

    Expected result:

    .. image:: _static/img/tensorboard/add_image.png
       :scale: 50 %

    """
    torch._C._log_api_usage_once("tensorboard.logging.add_image")
    if self._check_caffe2_blob(img_tensor):
        from caffe2.python import workspace
        img_tensor = workspace.FetchBlob(img_tensor)
    self._get_file_writer().add_summary(
        image(tag, img_tensor, dataformats=dataformats), global_step, walltime)
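
The docstring above also points at torchvision.utils.make_grid for turning a batch into a single 3xHxW image. A minimal sketch, assuming the tensor-transformed train_data from the previous snippet:

import torch
import torchvision
from torch.utils.tensorboard import SummaryWriter

batch = torch.stack([train_data[i][0] for i in range(16)])  # (16, 3, 32, 32)
grid = torchvision.utils.make_grid(batch, nrow=4)           # one (3, H, W) image
writer = SummaryWriter('grid_demo')
writer.add_image('cifar_grid', grid)  # default dataformats='CHW' fits
writer.close()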
Finally, DataLoader batches the dataset:

from torch.utils.data import DataLoader
from torchvision import transforms
import torchvision
# convert every sample in the dataset to a tensor
data_transform = transforms.Compose([transforms.ToTensor()])
train_data = torchvision.datasets.CIFAR10('./data', train=True, transform=data_transform, download=True)
test_data = torchvision.datasets.CIFAR10('./data', train=False, transform=data_transform, download=True)
# DataLoader collects each batch's images into one tensor and the labels into another
train_data_load = DataLoader(dataset=train_data, shuffle=True, batch_size=64)
from torch.utils.tensorboard import SummaryWriter
write = SummaryWriter('dataLoad')
# iterate over the loader, 64 images per step
for batch_id, data in enumerate(train_data_load):
    # each batch is a pair: a batch of images and their corresponding labels
    print('data', data)
    batch_image, batch_label = data
    print("batch_id", batch_id)
    print("image.shape", batch_image.shape)
    print("label.shape", batch_label.shape)
    write.add_images('batch_load_picture', batch_image, batch_id, dataformats='NCHW')
write.close()
Output from one of the batches:
batch_id 646
image.shape torch.Size([64, 3, 32, 32])
label.shape torch.Size([64])
data [tensor([[[[0.2510, 0.3804, 0.5176,  ..., 0.5529, 0.5451, 0.2980],
          [0.2706, 0.6000, 0.6667,  ..., 0.5686, 0.3961, 0.1176],
          [0.2745, 0.6627, 0.6980,  ..., 0.3961, 0.1608, 0.0824],
          ...,
          [0.6863, 0.6824, 0.5333,  ..., 0.2941, 0.4863, 0.5059],
          [0.5804, 0.6784, 0.4902,  ..., 0.1451, 0.2824, 0.3451],
          [0.4353, 0.4353, 0.5098,  ..., 0.1373, 0.1529, 0.2902]],

         [[0.3020, 0.4549, 0.6078,  ..., 0.6627, 0.6353, 0.3608],
          [0.3451, 0.6980, 0.7765,  ..., 0.6745, 0.4706, 0.1647],
          [0.3490, 0.7529, 0.8039,  ..., 0.4667, 0.2000, 0.1137],
          ...,
          [0.8196, 0.8157, 0.6157,  ..., 0.3608, 0.5529, 0.5804],
          [0.7137, 0.8039, 0.5686,  ..., 0.1922, 0.3373, 0.4078],
          [0.5412, 0.5333, 0.5765,  ..., 0.1765, 0.2000, 0.3490]],

         [[0.3098, 0.5490, 0.7412,  ..., 0.8314, 0.7373, 0.3765],
          [0.3765, 0.8392, 0.9569,  ..., 0.7686, 0.4941, 0.1216],
          [0.3843, 0.9176, 1.0000,  ..., 0.4627, 0.1490, 0.0588],
          ...,
          [0.9843, 0.9922, 0.7373,  ..., 0.3882, 0.6353, 0.7255],
          [0.8039, 0.9373, 0.6745,  ..., 0.1804, 0.3647, 0.4941],
          [0.6471, 0.6549, 0.6980,  ..., 0.1569, 0.2000, 0.3961]]],


        [[[0.9608, 0.9490, 0.9529,  ..., 0.8314, 0.8196, 0.8235],
          [0.9255, 0.9216, 0.9333,  ..., 0.8275, 0.8196, 0.8235],
          [0.9137, 0.9137, 0.9294,  ..., 0.8392, 0.8314, 0.8353],
          ...,
          [0.4118, 0.4353, 0.4431,  ..., 0.4157, 0.4431, 0.4275],
          [0.4667, 0.4667, 0.4627,  ..., 0.3961, 0.3804, 0.3882],
          [0.4392, 0.4235, 0.4235,  ..., 0.5490, 0.4471, 0.4706]],

         [[0.9647, 0.9529, 0.9529,  ..., 0.8745, 0.8667, 0.8667],
          [0.9294, 0.9255, 0.9333,  ..., 0.8627, 0.8549, 0.8549],
          [0.9137, 0.9176, 0.9294,  ..., 0.8627, 0.8588, 0.8549],
          ...,
          [0.4196, 0.4392, 0.4471,  ..., 0.4314, 0.4627, 0.4510],
          [0.4745, 0.4745, 0.4706,  ..., 0.4078, 0.4039, 0.4118],
          [0.4471, 0.4314, 0.4314,  ..., 0.5608, 0.4667, 0.4863]],

         [[0.9765, 0.9686, 0.9647,  ..., 0.9412, 0.9373, 0.9569],
          [0.9451, 0.9412, 0.9529,  ..., 0.9216, 0.9216, 0.9373],
          [0.9451, 0.9451, 0.9569,  ..., 0.9176, 0.9176, 0.9333],
          ...,
          [0.4078, 0.4314, 0.4353,  ..., 0.4353, 0.4706, 0.4588],
          [0.4627, 0.4627, 0.4588,  ..., 0.4118, 0.4118, 0.4157],
          [0.4353, 0.4196, 0.4196,  ..., 0.5569, 0.4627, 0.4863]]],


        [[[0.9569, 0.9569, 0.9647,  ..., 0.8510, 0.8353, 0.8235],
          [0.9569, 0.9569, 0.9608,  ..., 0.8627, 0.8431, 0.8392],
          [0.9804, 0.9725, 0.9725,  ..., 0.8745, 0.8627, 0.8549],
          ...,
          [0.3725, 0.3882, 0.3922,  ..., 0.3647, 0.3725, 0.3686],
          [0.3882, 0.4000, 0.4157,  ..., 0.3882, 0.3804, 0.3608],
          [0.3882, 0.4000, 0.4118,  ..., 0.3725, 0.3608, 0.3490]],

         [[0.9608, 0.9608, 0.9686,  ..., 0.8706, 0.8549, 0.8392],
          [0.9608, 0.9608, 0.9686,  ..., 0.8784, 0.8549, 0.8510],
          [0.9843, 0.9765, 0.9804,  ..., 0.8863, 0.8745, 0.8627],
          ...,
          [0.3804, 0.3922, 0.3961,  ..., 0.3255, 0.3529, 0.3686],
          [0.3961, 0.4078, 0.4235,  ..., 0.3647, 0.3686, 0.3647],
          [0.3961, 0.4078, 0.4196,  ..., 0.3843, 0.3686, 0.3569]],

         [[0.9843, 0.9765, 0.9804,  ..., 0.9294, 0.9176, 0.9137],
          [0.9804, 0.9686, 0.9725,  ..., 0.9216, 0.9059, 0.9098],
          [0.9961, 0.9804, 0.9765,  ..., 0.9137, 0.9098, 0.9098],
          ...,
          [0.3725, 0.3882, 0.3922,  ..., 0.2902, 0.3255, 0.3686],
          [0.3922, 0.4039, 0.4196,  ..., 0.3412, 0.3490, 0.3608],
          [0.3922, 0.4039, 0.4157,  ..., 0.3843, 0.3686, 0.3529]]],


        ...,


        [[[0.8902, 0.8863, 0.8824,  ..., 0.8314, 0.8392, 0.8353],
          [0.8902, 0.8863, 0.8863,  ..., 0.8353, 0.8431, 0.8392],
          [0.8902, 0.8863, 0.8902,  ..., 0.8392, 0.8431, 0.8431],
          ...,
          [0.9569, 0.9529, 0.9569,  ..., 0.5765, 0.5843, 0.5961],
          [0.9686, 0.9647, 0.9608,  ..., 0.9412, 0.9255, 0.9255],
          [0.9804, 0.9765, 0.9725,  ..., 0.9255, 0.9176, 0.9176]],

         [[0.9176, 0.9137, 0.9098,  ..., 0.8667, 0.8745, 0.8706],
          [0.9176, 0.9137, 0.9137,  ..., 0.8706, 0.8784, 0.8745],
          [0.9176, 0.9137, 0.9176,  ..., 0.8784, 0.8824, 0.8784],
          ...,
          [0.9608, 0.9569, 0.9608,  ..., 0.6392, 0.6667, 0.6706],
          [0.9765, 0.9725, 0.9647,  ..., 0.9608, 0.9765, 0.9725],
          [0.9882, 0.9843, 0.9804,  ..., 0.9255, 0.9451, 0.9490]],

         [[0.9412, 0.9373, 0.9333,  ..., 0.9255, 0.9333, 0.9294],
          [0.9412, 0.9373, 0.9373,  ..., 0.9294, 0.9373, 0.9333],
          [0.9412, 0.9373, 0.9412,  ..., 0.9294, 0.9333, 0.9333],
          ...,
          [0.9686, 0.9647, 0.9686,  ..., 0.6667, 0.6824, 0.6863],
          [0.9725, 0.9686, 0.9647,  ..., 0.9804, 0.9804, 0.9804],
          [0.9843, 0.9804, 0.9765,  ..., 0.9373, 0.9451, 0.9490]]],


        [[[0.1725, 0.1725, 0.1804,  ..., 0.1255, 0.1255, 0.1255],
          [0.1922, 0.1882, 0.1843,  ..., 0.1333, 0.1373, 0.1333],
          [0.1961, 0.1922, 0.1882,  ..., 0.1412, 0.1412, 0.1333],
          ...,
          [0.4471, 0.4902, 0.5137,  ..., 0.5647, 0.5725, 0.5961],
          [0.4431, 0.4706, 0.4824,  ..., 0.5608, 0.5529, 0.5569],
          [0.4275, 0.4431, 0.4392,  ..., 0.6078, 0.5608, 0.5176]],

         [[0.0980, 0.0980, 0.1059,  ..., 0.0353, 0.0353, 0.0392],
          [0.1137, 0.1137, 0.1098,  ..., 0.0431, 0.0471, 0.0471],
          [0.1216, 0.1176, 0.1137,  ..., 0.0549, 0.0549, 0.0549],
          ...,
          [0.2471, 0.2824, 0.3529,  ..., 0.5490, 0.5451, 0.5608],
          [0.2510, 0.2980, 0.3765,  ..., 0.5569, 0.5294, 0.5255],
          [0.2471, 0.3059, 0.3765,  ..., 0.6078, 0.5451, 0.4902]],

         [[0.0431, 0.0431, 0.0510,  ..., 0.0118, 0.0118, 0.0118],
          [0.0588, 0.0588, 0.0549,  ..., 0.0118, 0.0118, 0.0118],
          [0.0667, 0.0627, 0.0588,  ..., 0.0118, 0.0118, 0.0118],
          ...,
          [0.2431, 0.2745, 0.3176,  ..., 0.5373, 0.5608, 0.5804],
          [0.2510, 0.2824, 0.3294,  ..., 0.5490, 0.5412, 0.5412],
          [0.2510, 0.2863, 0.3216,  ..., 0.6000, 0.5529, 0.4980]]],


        [[[0.6353, 0.6314, 0.6314,  ..., 0.6157, 0.6157, 0.6157],
          [0.6353, 0.6314, 0.6314,  ..., 0.6157, 0.6157, 0.6157],
          [0.6353, 0.6314, 0.6314,  ..., 0.6157, 0.6157, 0.6157],
          ...,
          [0.6471, 0.6431, 0.6431,  ..., 0.6392, 0.6392, 0.6392],
          [0.6471, 0.6431, 0.6431,  ..., 0.6392, 0.6392, 0.6392],
          [0.6471, 0.6431, 0.6431,  ..., 0.6392, 0.6392, 0.6392]],

         [[0.7804, 0.7765, 0.7765,  ..., 0.7725, 0.7725, 0.7686],
          [0.7804, 0.7765, 0.7765,  ..., 0.7725, 0.7725, 0.7686],
          [0.7804, 0.7765, 0.7765,  ..., 0.7725, 0.7725, 0.7686],
          ...,
          [0.7922, 0.7882, 0.7882,  ..., 0.7843, 0.7843, 0.7843],
          [0.7922, 0.7882, 0.7882,  ..., 0.7843, 0.7843, 0.7843],
          [0.7922, 0.7882, 0.7882,  ..., 0.7843, 0.7843, 0.7843]],

         [[0.9882, 0.9804, 0.9843,  ..., 0.9765, 0.9765, 0.9765],
          [0.9882, 0.9804, 0.9843,  ..., 0.9765, 0.9765, 0.9765],
          [0.9882, 0.9804, 0.9843,  ..., 0.9765, 0.9765, 0.9765],
          ...,
          [0.9961, 0.9882, 0.9922,  ..., 0.9882, 0.9882, 0.9882],
          [0.9961, 0.9882, 0.9922,  ..., 0.9882, 0.9882, 0.9882],
          [0.9961, 0.9882, 0.9922,  ..., 0.9882, 0.9882, 0.9882]]]]), tensor([2, 8, 9, 6, 9, 3, 8, 3, 7, 7, 7, 3, 9, 2, 3, 1, 0, 1, 9, 6, 7, 6, 7, 9,
        1, 1, 8, 9, 2, 7, 5, 0, 1, 5, 9, 4, 2, 5, 7, 6, 3, 2, 2, 9, 4, 2, 1, 1,
        9, 5, 2, 5, 0, 8, 1, 7, 3, 5, 8, 0, 5, 0, 5, 0])]

Use add_images to display every batch of data:

def add_images(self, tag, img_tensor, global_step=None, walltime=None, dataformats='NCHW'):
    """Add batched image data to summary.

    Note that this requires the ``pillow`` package.

    Args:
        tag (string): Data identifier
        img_tensor (torch.Tensor, numpy.array, or string/blobname): Image data
        global_step (int): Global step value to record
        walltime (float): Optional override default walltime (time.time())
          seconds after epoch of event
        dataformats (string): Image data format specification of the form
          NCHW, NHWC, CHW, HWC, HW, WH, etc.
    Shape:
        img_tensor: Default is :math:`(N, 3, H, W)`. If ``dataformats`` is specified, other shape will be
        accepted. e.g. NCHW or NHWC.

    Examples::

        from torch.utils.tensorboard import SummaryWriter
        import numpy as np

        img_batch = np.zeros((16, 3, 100, 100))
        for i in range(16):
            img_batch[i, 0] = np.arange(0, 10000).reshape(100, 100) / 10000 / 16 * i
            img_batch[i, 1] = (1 - np.arange(0, 10000).reshape(100, 100) / 10000) / 16 * i

        writer = SummaryWriter()
        writer.add_images('my_image_batch', img_batch, 0)
        writer.close()

    Expected result:

    .. image:: _static/img/tensorboard/add_images.png
       :scale: 30 %

    """
    torch._C._log_api_usage_once("tensorboard.logging.add_images")
    if self._check_caffe2_blob(img_tensor):
        from caffe2.python import workspace
        img_tensor = workspace.FetchBlob(img_tensor)
    self._get_file_writer().add_summary(
        image(tag, img_tensor, dataformats=dataformats), global_step, walltime)

Note that add_images assumes 3 channels by default. If a feature map coming out of a convolution layer has more than 3 channels, the function cannot display it and raises an assertion error; use torch.reshape to bring the channel dimension down to 3 first, and then the call works normally.
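
A quick sketch of that workaround: a feature map with, say, 6 channels can be reshaped so the surplus channels become extra batch entries before logging (the shapes here are illustrative only):

import torch
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter('reshape_demo')
feature_map = torch.rand(64, 6, 30, 30)                 # 6 channels: add_images would fail
viewable = torch.reshape(feature_map, (-1, 3, 30, 30))  # becomes (128, 3, 30, 30)
writer.add_images('conv_output', viewable, 0)           # NCHW with 3 channels: OK
writer.close()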

The same approach applies to any method not covered here: check its expected parameter types (Ctrl+P, or Ctrl+click the function to open its source) and pass in data of the required type.
