
Hands-on tutorial | Training YOLOv8-seg on your own segmentation dataset


Hands-on tutorial: how to use your own dataset for a segmentation task

YOLOv8-seg innovation column:

The column walks you through YOLOv8 from getting started to making your own improvements, so research work goes smoothly:
1) a hands-on guide to training YOLOv8-seg;
2) model improvements that raise segmentation performance;
3) exclusive self-developed modules to help with segmentation.


1. Dataset introduction

The task is segmenting damaged areas on sweet potatoes. The dataset was annotated by hand and contains 304 images.

1.1 Dataset annotation

The dataset is annotated with labelme. First install labelme:

pip install labelme
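
The json2txt.py script in section 2.1 assumes the standard labelme JSON layout: imageHeight, imageWidth, and a shapes list whose entries carry a label string and a polygon points list. A minimal sketch for inspecting one annotation file (the file path is only an example):

import json

# hypothetical path to one labelme annotation file
with open('data/skinning/json/sample_0001.json', 'r') as f:
    ann = json.load(f)

print(ann['imageHeight'], ann['imageWidth'])      # image size in pixels
for shape in ann['shapes']:
    print(shape['label'], len(shape['points']))   # class name and number of polygon vertices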


2. Dataset format conversion

Convert the labelme JSON files to TXT labels and split the data into train, val, and test sets, in the layout YOLOv8-seg expects.

2.1 json2txt.py

# -*- coding: utf-8 -*-
import json
import os
import argparse
from tqdm import tqdm
 
 
def convert_label_json(json_dir, save_dir, classes):
    """Convert labelme JSON annotations to YOLO segmentation TXT labels."""
    os.makedirs(save_dir, exist_ok=True)
    json_paths = [p for p in os.listdir(json_dir) if p.endswith('.json')]
    classes = classes.split(',')

    for json_path in tqdm(json_paths):
        path = os.path.join(json_dir, json_path)
        with open(path, 'r') as load_f:
            json_dict = json.load(load_f)
        h, w = json_dict['imageHeight'], json_dict['imageWidth']

        # save txt path
        txt_path = os.path.join(save_dir, os.path.splitext(json_path)[0] + '.txt')
        with open(txt_path, 'w') as txt_file:
            for shape_dict in json_dict['shapes']:
                label = shape_dict['label']
                label_index = classes.index(label)
                points = shape_dict['points']

                # normalize polygon vertices to [0, 1] by image width / height
                points_nor_list = []
                for point in points:
                    points_nor_list.append(point[0] / w)
                    points_nor_list.append(point[1] / h)

                points_nor_str = ' '.join(str(x) for x in points_nor_list)
                txt_file.write(str(label_index) + ' ' + points_nor_str + '\n')
 
 
if __name__ == "__main__":
    """
    python json2txt_nomalize.py --json-dir my_datasets/color_rings/jsons --save-dir my_datasets/color_rings/txts --classes "cat,dogs"
    """
    parser = argparse.ArgumentParser(description='json convert to txt params')
    parser.add_argument('--json-dir', type=str, default='F:/DL/Pytorch/yolov8/ultralytics-seg/data/skinning/json', help='json path dir')
    parser.add_argument('--save-dir', type=str, default='F:/DL/Pytorch/yolov8/ultralytics-seg/data/skinning/txt', help='txt save dir')
    parser.add_argument('--classes', type=str, default='skinning', help='classes')
    args = parser.parse_args()
    json_dir = args.json_dir
    save_dir = args.save_dir
    classes = args.classes
    convert_label_json(json_dir, save_dir, classes)
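
Each line written by the script follows the YOLO segmentation label format: the class index followed by the polygon vertices normalized to [0, 1] by the image width and height. For the single class skinning (index 0), a line for an illustrative four-point polygon would look like:

0 0.412 0.287 0.456 0.302 0.499 0.355 0.471 0.401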

2.2 Splitting into train, val, and test

# Split the images and their label files into train / val / test sets by the given ratios
import shutil
import random
import os
import argparse
 
 
# create a directory if it does not already exist
def mkdir(path):
    if not os.path.exists(path):
        os.makedirs(path)
 
def main(image_dir, txt_dir, save_dir):
    # create the output directory tree
    mkdir(save_dir)
    images_dir = os.path.join(save_dir, 'images')
    labels_dir = os.path.join(save_dir, 'labels')
 
    img_train_path = os.path.join(images_dir, 'train')
    img_test_path = os.path.join(images_dir, 'test')
    img_val_path = os.path.join(images_dir, 'val')
 
    label_train_path = os.path.join(labels_dir, 'train')
    label_test_path = os.path.join(labels_dir, 'test')
    label_val_path = os.path.join(labels_dir, 'val')
 
    mkdir(images_dir)
    mkdir(labels_dir)
    mkdir(img_train_path)
    mkdir(img_test_path)
    mkdir(img_val_path)
    mkdir(label_train_path)
    mkdir(label_test_path)
    mkdir(label_val_path)
 
    # split ratios: 85% train, 15% val, 0% test here; adjust as needed
    train_percent = 0.85
    val_percent = 0.15
    test_percent = 0
 
    total_txt = os.listdir(txt_dir)
    num_txt = len(total_txt)
    list_all_txt = range(num_txt)  # indices 0 .. num_txt - 1
 
    num_train = int(num_txt * train_percent)
    num_val = int(num_txt * val_percent)
    num_test = num_txt - num_train - num_val
 
    train = random.sample(list_all_txt, num_train)
    # everything not drawn for train forms the val/test pool
    val_test = [i for i in list_all_txt if i not in train]
    # draw num_val indices for val; whatever is left in val_test becomes test
    val = random.sample(val_test, num_val)
 
    print("train: {}, val: {}, test: {}".format(len(train), len(val), len(val_test) - len(val)))
    for i in list_all_txt:
        name = total_txt[i][:-4]
 
        srcImage = os.path.join(image_dir, name + '.jpg')
        srcLabel = os.path.join(txt_dir, name + '.txt')
 
        if i in train:
            dst_train_Image = os.path.join(img_train_path, name + '.jpg')
            dst_train_Label = os.path.join(label_train_path, name + '.txt')
            shutil.copyfile(srcImage, dst_train_Image)
            shutil.copyfile(srcLabel, dst_train_Label)
        elif i in val:
            dst_val_Image = os.path.join(img_val_path, name + '.jpg')
            dst_val_Label = os.path.join(label_val_path, name + '.txt')
            shutil.copyfile(srcImage, dst_val_Image)
            shutil.copyfile(srcLabel, dst_val_Label)
        else:
            dst_test_Image = os.path.join(img_test_path, name + '.jpg')
            dst_test_Label = os.path.join(label_test_path, name + '.txt')
            shutil.copyfile(srcImage, dst_test_Image)
            shutil.copyfile(srcLabel, dst_test_Label)
 
 
if __name__ == '__main__':
    """
    python split_datasets.py --image-dir my_datasets/color_rings/imgs --txt-dir my_datasets/color_rings/txts --save-dir my_datasets/color_rings/train_data
    """
    parser = argparse.ArgumentParser(description='split datasets to train,val,test params')
    parser.add_argument('--image-dir', type=str, default='F:/DL/Pytorch/yolov8/ultralytics-seg/data/skinning/images', help='image path dir')
    parser.add_argument('--txt-dir', type=str, default='F:/DL/Pytorch/yolov8/ultralytics-seg/data/skinning/txt', help='txt path dir')
    parser.add_argument('--save-dir', type=str, default='F:/DL/Pytorch/yolov8/ultralytics-seg/data/skinning/split', help='save dir')
    args = parser.parse_args()
    image_dir = args.image_dir
    txt_dir = args.txt_dir
    save_dir = args.save_dir
 
    main(image_dir, txt_dir, save_dir)
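
After the split script finishes, the save directory holds the layout YOLOv8 expects (with the default 85/15/0 split above, the test folders simply stay empty):

split/
    images/
        train/
        val/
        test/
    labels/
        train/
        val/
        test/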

3. How to train YOLOv8-seg

3.1 skinning.yaml configuration

# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
path: F:/DL/Pytorch/yolov8/ultralytics-seg/data/skinning/split  # dataset root dir
train: F:/DL/Pytorch/yolov8/ultralytics-seg/data/skinning/split/images/train  # train images
val: F:/DL/Pytorch/yolov8/ultralytics-seg/data/skinning/split/images/val  # val images
test:  # test images (optional)


nc: 1  # number of classes


# Classes
names:
  0: skinning
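
Before launching training, it can be worth verifying that every image in the split has a matching label file. A minimal sketch, assuming the split directory produced in section 2.2 (adjust the root path to your own setup):

import os

root = 'F:/DL/Pytorch/yolov8/ultralytics-seg/data/skinning/split'
for subset in ('train', 'val'):
    # file-name stems of all images and labels in this subset
    imgs = {os.path.splitext(f)[0] for f in os.listdir(os.path.join(root, 'images', subset))}
    lbls = {os.path.splitext(f)[0] for f in os.listdir(os.path.join(root, 'labels', subset))}
    print(subset, 'images without labels:', sorted(imgs - lbls))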
 

3.2 How to train

from ultralytics.cfg import entrypoint
arg="yolo segment train model=yolov8-seg0.yaml data=ultralytics/cfg/datasets/skinning.yaml"

entrypoint(arg)
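
If you prefer the Python API over the CLI-style entrypoint above, training can equivalently be launched through the YOLO class. A minimal sketch (the epochs and imgsz values are only examples, not the settings used for the results below):

from ultralytics import YOLO

# build the model from the yaml shown in section 3.3, or start from pretrained weights such as 'yolov8n-seg.pt'
model = YOLO('yolov8-seg.yaml')
model.train(data='ultralytics/cfg/datasets/skinning.yaml', epochs=100, imgsz=640)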

3.3 yolov8-seg.yaml

# Ultralytics YOLO ??, AGPL-3.0 license
# YOLOv8-seg instance segmentation model. For Usage examples see https://docs.ultralytics.com/tasks/segment

# Parameters
nc: 1  # number of classes
scales: # model compound scaling constants, i.e. 'model=yolov8n-seg.yaml' will call yolov8-seg.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.33, 0.25, 1024]
  s: [0.33, 0.50, 1024]
  m: [0.67, 0.75, 768]
  l: [1.00, 1.00, 512]
  x: [1.00, 1.25, 512]

# YOLOv8.0n backbone
backbone:
  # [from, repeats, module, args]
  - [-1, 1, Conv, [64, 3, 2]]  # 0-P1/2
  - [-1, 1, Conv, [128, 3, 2]]  # 1-P2/4
  - [-1, 3, C2f, [128, True]]
  - [-1, 1, Conv, [256, 3, 2]]  # 3-P3/8
  - [-1, 6, C2f, [256, True]]
  - [-1, 1, Conv, [512, 3, 2]]  # 5-P4/16
  - [-1, 6, C2f, [512, True]]
  - [-1, 1, Conv, [1024, 3, 2]]  # 7-P5/32
  - [-1, 3, C2f, [1024, True]]
  - [-1, 1, SPPF, [1024, 5]]  # 9

# YOLOv8.0n head
head:
  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 6], 1, Concat, [1]]  # cat backbone P4
  - [-1, 3, C2f, [512]]  # 12

  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 4], 1, Concat, [1]]  # cat backbone P3
  - [-1, 3, C2f, [256]]  # 15 (P3/8-small)

  - [-1, 1, Conv, [256, 3, 2]]
  - [[-1, 12], 1, Concat, [1]]  # cat head P4
  - [-1, 3, C2f, [512]]  # 18 (P4/16-medium)

  - [-1, 1, Conv, [512, 3, 2]]
  - [[-1, 9], 1, Concat, [1]]  # cat head P5
  - [-1, 3, C2f, [1024]]  # 21 (P5/32-large)

  - [[15, 18, 21], 1, Segment, [nc, 32, 256]]  # Segment(P3, P4, P5)

4. Visualizing the training results

Mask mAP@0.5: 0.625 for the original (baseline) model.

MaskF1_curve


MaskP_curve


MaskPR_curve


MaskR_curve


Prediction results:

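To reproduce this kind of prediction visualization on new images, inference can be run from the trained weights. A minimal sketch, assuming the default Ultralytics output location (runs/segment/train/weights/best.pt; adjust to your run):

from ultralytics import YOLO

# load the best checkpoint saved during training
model = YOLO('runs/segment/train/weights/best.pt')
# run prediction on the validation images and save annotated outputs
results = model.predict(source='data/skinning/split/images/val', save=True)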

That concludes this tutorial on training YOLOv8-seg on your own segmentation dataset.
