This post documents instance segmentation with YOLOv5 tag 7.0. I have also used the Paddle family's instance segmentation: it trained fine, but I hit snags when exposing a RESTful API, so YOLOv5 wins!!! With this post you can build your own segmentation network step by step.
Preface
git倉庫:https://github.com/ultralytics/yolov5/tree/v7.0
Segmentation support arrived in tag 7.0 and is billed as SOTA; the English introduction under master contains that exact claim, a real-time SOTA.
YOLOv6 and YOLOv7 both claim SOTA as well; everyone is SOTA these days...
The Chinese introduction doesn't mention it yet, so this appears to be very fresh work.
I. A Quick Trial
Under yolov5-7.0/segment/ there is a tutorials.ipynb that spells out how to train and how to run inference; usage is almost identical to object detection.
Installation is left to the reader.
How to predict:
python segment/predict.py --source 0                               # webcam
                                   img.jpg                         # image
                                   vid.mp4                         # video
                                   screen                          # screenshot
                                   path/                           # directory
                                   'path/*.jpg'                    # glob
                                   'https://youtu.be/Zgi9g1ksQHc'  # YouTube
                                   'rtsp://example.com/media.mp4'  # RTSP, RTMP, HTTP stream
python segment/predict.py --weights yolov5s-seg.pt --img 640 --conf 0.25 --source data/images
# display.Image(filename='runs/predict-seg/exp/zidane.jpg', width=600)
How to train:
# Train YOLOv5s on COCO128 for 3 epochs
!python segment/train.py --img 640 --batch 16 --epochs 3 --data coco128-seg.yaml --weights yolov5s-seg.pt --cache
1. Pretrained weights: https://github.com/ultralytics/yolov5/releases/v7.0
2. The coco128 dataset is here:
download: https://ultralytics.com/assets/coco128-seg.zip
3. A first look at the coco128-seg data
Even the file layout is identical to object detection.
The leading 45 is the class id, matching the names entry in coco128-seg.yaml.
What follows are x,y x,y ... coordinates, corresponding to width and height. Note in particular that they are normalized.
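In other words, each label line is <class-id> x1 y1 x2 y2 ... xn yn, with every coordinate in [0, 1]. An illustrative line (values made up, not taken from the real file):
45 0.68 0.79 0.77 0.69 0.76 0.62 0.66 0.61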
Careful as I am, I naturally had to draw them back onto the image to see what's what. The plotting code:
import cv2
import numpy as np


def get_a_coco_pic():
    pic_path = r"C:\Users\jianming_ge\Downloads\coco128-seg\images\train2017\000000000009.jpg"
    txt_path = r"C:\Users\jianming_ge\Downloads\coco128-seg\labels\train2017\000000000009.txt"
    img = cv2.imread(pic_path)
    height, width, _ = img.shape
    print(height, width)
    # cv2.imshow("111", img)  # show the original image
    # cv2.waitKey()
    # outline the polygons
    file_handle = open(txt_path)
    cnt_info = file_handle.readlines()
    new_cnt_info = [line_str.replace("\n", "").split(" ") for line_str in cnt_info]
    print(len(new_cnt_info))
    print("---====---")
    # classes in this image: 45 bowl, 49 orange, 50 broccoli
    color_map = {"49": (0, 255, 255), "45": (255, 0, 255), "50": (255, 255, 0)}
    for new_info in new_cnt_info:
        print(new_info)
        s = []
        for i in range(1, len(new_info), 2):
            b = [float(tmp) for tmp in new_info[i:i + 2]]
            s.append([int(b[0] * width), int(b[1] * height)])  # de-normalize to pixel coordinates
        print(s)
        cv2.polylines(img, [np.array(s, np.int32)], True, color_map.get(new_info[0]))
    cv2.imshow('img2', img)
    cv2.waitKey()
The result is pasted below. I have not yet found official YOLOv5 code that draws labels back onto images, but knowing YOLOv5, it will show up within days.
The classes are 45 bowl, 49 orange, 50 broccoli. Fine, the yellow thing is an orange.
You can now play with training on coco128, but surely you won't stop there, otherwise why would I write this post when the YOLOv5 readme.md alone would do?
Read on: we'll use YOLOv5 to segment water on road surfaces.
II. A Custom Dataset: Road Water
1. The data
550 images in total, annotated with labelme; a sample looks like this:
labelme stores annotations as JSON, so a conversion is needed.
And that's with the dataset already prepared, so only a batch conversion (mainly the normalization) is required; annotating segmentation yourself is slow and painful. (Ad: this dataset is available for a fee, DM me.)
2. Converting the annotations
The labels must be converted into the required format; the conversion script:
import glob
import json
import os

import cv2
import numpy as np


def convert_json_label_to_yolov_seg_label():
    json_path = r"C:\Users\jianming_ge\Desktop\code\handle_dataset\water_street"
    json_files = glob.glob(json_path + "/*.json")
    for json_file in json_files:
        # if json_file != r"C:\Users\jianming_ge\Desktop\code\handle_dataset\water_street\223.json":
        #     continue
        print(json_file)
        with open(json_file) as f:
            json_info = json.load(f)
        # print(json_info.keys())
        img = cv2.imread(os.path.join(json_path, json_info["imagePath"]))
        height, width, _ = img.shape
        np_w_h = np.array([[width, height]], np.int32)
        txt_file = json_file.replace(".json", ".txt")
        with open(txt_file, "a") as txt_f:
            for point_json in json_info["shapes"]:
                txt_content = ""
                np_points = np.array(point_json["points"], np.int32)
                norm_points = np_points / np_w_h  # normalize by image width and height
                norm_points_list = norm_points.tolist()
                # single-class dataset, so the class id is hard-coded to 0
                txt_content += "0 " + " ".join([" ".join([str(cell[0]), str(cell[1])]) for cell in norm_points_list]) + "\n"
                txt_f.write(txt_content)
This produces one .txt per image, in the same format as the coco128-seg example above.
How files correspond across the dataset:
1.jpg is the original image, 1.json is the labelme annotation, and 1.txt is the format YOLOv5 segmentation needs.
3. Verifying the converted labels
Always verify again after converting, or you will die a miserable death later: an algorithm engineer spends 80% of the time on data, while training is a one-line command.
import glob

import cv2
import numpy as np


def check_convert_json_label_to_yolov_seg_label():
    """
    Draw the converted labels back onto the images to verify them.
    :return:
    """
    txt_path = r"C:\Users\jianming_ge\Desktop\code\handle_dataset\water_street"
    txt_files = glob.glob(txt_path + "/*.txt")
    for txt_file in txt_files:
        # if txt_file != r"C:\Users\jianming_ge\Desktop\code\handle_dataset\water_street\223.txt":
        #     continue
        print(txt_file)
        pic_path = txt_file.replace(".txt", ".jpg")
        img = cv2.imread(pic_path)
        height, width, _ = img.shape
        print(height, width)
        # cv2.imshow("111", img)  # show the original image
        # cv2.waitKey()
        # outline the polygons
        file_handle = open(txt_file)
        cnt_info = file_handle.readlines()
        new_cnt_info = [line_str.replace("\n", "").split(" ") for line_str in cnt_info]
        print(len(new_cnt_info))
        print("---====---")
        # color map left over from the coco128 example (45 bowl, 49 orange, 50 broccoli);
        # our labels are class "0", so fall back to a default color
        color_map = {"49": (0, 255, 255), "45": (255, 0, 255), "50": (255, 255, 0)}
        for new_info in new_cnt_info:
            print(new_info)
            s = []
            for i in range(1, len(new_info), 2):
                b = [float(tmp) for tmp in new_info[i:i + 2]]
                s.append([int(b[0] * width), int(b[1] * height)])
            print(s)
            cv2.polylines(img, [np.array(s, np.int32)], True, color_map.get(new_info[0], (0, 255, 0)))
        cv2.imshow('img2', img)
        cv2.waitKey()
You should see images like this:
which proves the converted labels are fine.
4. Splitting the dataset
Next, split the data 1:9 or 2:8. With only 550 images the set isn't especially large, so the script creates fresh directories and copies images and labels into them. The original data stays untouched, which is a good habit!
The split code follows; note it calls a small make_new_dir helper.
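make_new_dir isn't shown in the original post; a minimal sketch, assuming it should wipe and recreate each directory:

import os
import shutil


def make_new_dir(dir_path):
    # remove the directory if it already exists, then recreate it empty
    if os.path.exists(dir_path):
        shutil.rmtree(dir_path)
    os.makedirs(dir_path)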
def split_dataset():
    # To avoid mixing data, empty
    # 'C:\Users\jianming_ge\Desktop\code\handle_dataset\water_street\yolov_format' before running this script.
    """
    :return:
    """
    import glob
    import os
    import random
    import shutil
    txt_path = r"C:\Users\jianming_ge\Desktop\code\handle_dataset\water_street"
    txt_files = glob.glob(txt_path + "/*.txt")
    # base image directory
    images_base_dir = r"C:\Users\jianming_ge\Desktop\code\handle_dataset\water_street\yolov_format\images"
    # base label directory
    labels_base_dir = r"C:\Users\jianming_ge\Desktop\code\handle_dataset\water_street\yolov_format\labels"
    # training image directory
    images_train_dir = os.path.join(images_base_dir, "train")
    # training label directory
    labels_train_dir = os.path.join(labels_base_dir, "train")
    # validation image directory
    images_val_dir = os.path.join(images_base_dir, "val")
    # validation label directory
    labels_val_dir = os.path.join(labels_base_dir, "val")
    # create the four directories
    [make_new_dir(dir_path) for dir_path in [images_train_dir, labels_train_dir, images_val_dir, labels_val_dir]]
    # validation split ratio; set it to whatever you need
    val_rate = 0.1
    for txt_ori_path in txt_files:
        fpath, fname = os.path.split(txt_ori_path)  # split into path and file name
        # randint(1, 10) == 1 about 10% of the time, matching val_rate = 0.1
        if random.randint(1, 10) == 10 * val_rate:
            # validation sample
            txt_dst_path = os.path.join(labels_val_dir, fname)
            img_dst_path = os.path.join(images_val_dir, fname.replace(".txt", ".jpg"))
        else:
            # training sample
            txt_dst_path = os.path.join(labels_train_dir, fname)
            img_dst_path = os.path.join(images_train_dir, fname.replace(".txt", ".jpg"))
        # the images are all .jpg and sit next to the .txt files, so this works
        img_ori_path = txt_ori_path.replace(".txt", ".jpg")
        # copy the label file
        shutil.copy(txt_ori_path, txt_dst_path)
        # copy the image file
        shutil.copy(img_ori_path, img_dst_path)
Once it has run, you get this extra directory, laid out just like coco128-seg:
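Based on the split script, the layout is:

yolov_format/
├── images/
│   ├── train/
│   └── val/
└── labels/
    ├── train/
    └── val/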
OK, everything is in place. Let's train!
III. Training
1. Build the config file
water.yaml
Copy coco128-seg.yaml and point it at your own directory structure:
path: /data_share/data_share/city_manager_20221017/water_street_coco_version2022124_yolo/yolov_format/  # dataset root dir
train: images/train
val: images/val
test:  # test images (optional)

# Classes
names:
  0: water
One more note here: the config only needs to describe where the images are.
For train, the code ultimately loads images from:
/data_share/data_share/city_manager_20221017/water_street_coco_version2022124_yolo/yolov_format/images/train
It then cleverly (read: brute-force) replaces images with labels in the path, i.e. it looks for annotations under /data_share/data_share/city_manager_20221017/water_street_coco_version2022124_yolo/yolov_format/labels/train.
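If memory serves, that substitution lives in img2label_paths in utils/dataloaders.py; roughly (quoted from memory, double-check your checkout):

import os

def img2label_paths(img_paths):
    # swap the last /images/ path segment for /labels/ and the extension for .txt
    sa, sb = f'{os.sep}images{os.sep}', f'{os.sep}labels{os.sep}'  # /images/, /labels/ substrings
    return [sb.join(x.rsplit(sa, 1)).rsplit('.', 1)[0] + '.txt' for x in img_paths]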
There are several valid ways to write the config anyway; I just copied coco128-seg.
Let me slip in a knowledge point here: why does the template mark test as optional? Put differently, when do you not need a test set?
The watermelon book says train is for fitting the model, val for tuning and selecting it, and test for the final generalization estimate.
But that's what the book says; my question is when test can be skipped. Short answer:
When train, val, and test all come from the same distribution, test is unnecessary; think about it, and in that case you could even drop val. Of course, val exists to guard against overfitting: use the loss curves to pick a model that does well on both train and val.
For this segmentation network we have no production data yet and don't know what it will look like, so we can only train a first version and iterate during the trial run (pre-production). That said, I have already collected some videos of standing water from the production cameras on rainy days.
2. Training
Off we go, off we go...
As said above, the command is exactly the same as for detection training. The family is rich: 3 cards, go:
python -m torch.distributed.launch --nproc_per_node=3 segment/train.py --img 640 --batch 48 --epochs 300 --data water-seg.yaml --weights weights/yolov5m-seg.pt --workers 16 --save-period 20 --cache
Look up the arguments above yourself if any are unfamiliar.
The batch size must be divisible by the number of cards: with 3 cards the batch must be a multiple of 3, e.g. --batch 48 here gives 16 per card.
All normal:
3. Picking a model
300 epochs, with a checkpoint kept every 20 epochs.
As it turns out, last.pt is the same as best.pt.
Judging from results.png,
a model from somewhere between epochs 200 and 300 is a reasonable pick.
TensorBoard would have been more intuitive, but mine errors out and I have no idea why; I spent two hours on it with no luck, the same couple of lines over and over.
So let's just run inference with best.pt: copy best.pt from under runs/exp to weights/ and rename it best-seg.pt.
python segment/predict.py --weights weights/best-seg.pt --img 640 --conf 0.25 --source /data_share/data_share/city_manager_20221017/water_street_coco_version2022124_yolo/yolov_format/images/val
#display.Image(filename='runs/predict-seg/exp/zidane.jpg', width=600)
Damn it, I broke the environment, and that was my main environment!!!
Uninstalling torchvision-0.2.2:
Successfully uninstalled torchvision-0.2.2
Successfully installed dataclasses-0.6 torch-1.7.0 torchvision-0.8.1
The error said torchvision>=0.8.1 is required while mine was 0.2.2, so I ran pip install torchvision==0.8.1, which uninstalled my torch 1.8 and installed torch 1.7 instead.
Re-running the command then failed:
Traceback (most recent call last):
File "segment/predict.py", line 274, in <module>
main(opt)
File "segment/predict.py", line 269, in main
run(**vars(opt))
File "/home/jianming_ge/miniconda3/envs/py38_torch180/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 26, in decorate_context
return func(*args, **kwargs)
File "segment/predict.py", line 99, in run
model = DetectMultiBackend(weights, device=device, dnn=dnn, data=data, fp16=half)
File "/home/jianming_ge/workplace/yolov5-7.0/models/common.py", line 345, in __init__
model = attempt_load(weights if isinstance(weights, list) else w, device=device, inplace=True, fuse=fuse)
File "/home/jianming_ge/workplace/yolov5-7.0/models/experimental.py", line 80, in attempt_load
ckpt = (ckpt.get('ema') or ckpt['model']).to(device).float() # FP32 model
File "/home/jianming_ge/miniconda3/envs/py38_torch180/lib/python3.8/site-packages/torch/nn/modules/module.py", line 490, in float
return self._apply(lambda t: t.float() if t.is_floating_point() else t)
File "/home/jianming_ge/workplace/yolov5-7.0/models/yolo.py", line 155, in _apply
self = super()._apply(fn)
File "/home/jianming_ge/miniconda3/envs/py38_torch180/lib/python3.8/site-packages/torch/nn/modules/module.py", line 359, in _apply
module._apply(fn)
File "/home/jianming_ge/miniconda3/envs/py38_torch180/lib/python3.8/site-packages/torch/nn/modules/module.py", line 359, in _apply
module._apply(fn)
File "/home/jianming_ge/miniconda3/envs/py38_torch180/lib/python3.8/site-packages/torch/nn/modules/module.py", line 359, in _apply
module._apply(fn)
File "/home/jianming_ge/miniconda3/envs/py38_torch180/lib/python3.8/site-packages/torch/nn/modules/module.py", line 381, in _apply
param_applied = fn(param)
File "/home/jianming_ge/miniconda3/envs/py38_torch180/lib/python3.8/site-packages/torch/nn/modules/module.py", line 490, in <lambda>
return self._apply(lambda t: t.float() if t.is_floating_point() else t)
RuntimeError: CUDA error: no kernel image is available for execution on the device
Trouble out of thin air: now CUDA doesn't match either. Forget it, I spun up a brand-new environment. In deep learning, half the time goes into installing environments and the other half into building Docker images...
py39_torch1.10.1
# base environment
pip install torch==1.10.1+cu111 torchvision==0.11.2+cu111 torchaudio==0.10.1 -f https://download.pytorch.org/whl/cu111/torch_stable.html
# YOLOv5 requirements
pip install -r requirements.txt
# anything else (flask, fastapi, kafka, mysql, shapely and friends) can wait until it complains; tips on managing environments welcome
While that installed, I rebuilt the original environment too, moving torch from 1.8 to 1.8.2, the LTS release. It installed quickly, and inference worked again. Very strange; I still don't know what had broken the environment.
pip install torch==1.8.2 torchvision==0.9.2 torchaudio==0.8.2 --extra-index-url https://download.pytorch.org/whl/lts/1.8/cu111
Some objects still go undetected, though.
Is TensorBoard working again? Still not, even though tensorboard is pip-installed; I mainly want a direct view of the loss curves. Leaving it for now; once the py39_torch1.10.1 environment is ready, I'll try again.
IV. Model Conversion
1. Conversion details
From .pt to .onnx.
Per my earlier post, ONNX is very CPU-friendly, giving roughly a 10x inference speedup, from 1-2 s down to 0.1-0.2 s.
See that earlier post for installing ONNX.
Conversion command:
python export.py --weights weights/best-seg.pt --include onnx
Check the speed:
python segment/predict.py --weights weights/best-seg.onnx --img 640 --conf 0.25 --source /data_share/data_share/city_manager_20221017/water_street_coco_version2022124_yolo/yolov_format/images/val
ONNX on CPU runs at 0.15 s per image, versus 0.015 s for the .pt model on GPU, which shows running the segmentation network on CPU is viable.
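For reference (my sketch, not from the original post), a minimal ONNX Runtime session on CPU looks like this; the model path matches the export above:

import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("weights/best-seg.onnx", providers=["CPUExecutionProvider"])
# YOLOv5 exports expect a (1, 3, 640, 640) float32 input scaled to 0-1
dummy = np.zeros((1, 3, 640, 640), dtype=np.float32)
outputs = sess.run(None, {sess.get_inputs()[0].name: dummy})
print([o.shape for o in outputs])  # predictions plus mask prototypes for -seg models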
2. What inference returns
# Run inference
self.model.warmup(imgsz=(1 if pt else bs, 3, *imgsz))  # warmup
dt = (Profile(), Profile(), Profile())
with dt[0]:
    im = torch.from_numpy(im).to(self.device)
    im = im.half() if self.model.fp16 else im.float()  # uint8 to fp16/32
    im /= 255  # 0 - 255 to 0.0 - 1.0
    if len(im.shape) == 3:
        im = im[None]  # expand for batch dim

# Inference
with dt[1]:
    visualize = False
    pred, proto = self.model(im, augment=False, visualize=visualize)[:2]

# NMS
with dt[2]:
    # nm=32 keeps the 32 mask coefficients appended to each detection row
    pred = non_max_suppression(pred, self.conf_thres, self.iou_thres, self.retain_classes, nm=32)

for i, det in enumerate(pred):  # per image
    im0 = im0.copy()
    segments = None
    annotator = Annotator(im0, line_width=3, example=str(names))
    if len(det):
        masks = process_mask(proto[i], det[:, 6:], det[:, :4], im.shape[2:], upsample=True)  # HWC
        det[:, :4] = scale_boxes(im.shape[2:], det[:, :4], im0.shape).round()  # rescale boxes to im0 size

        # Segments
        save_txt = True
        if save_txt:
            segments = reversed(masks2segments(masks))
            segments = [scale_segments(im.shape[2:], x, im0.shape, normalize=True) for x in segments]

        # Print results
        s = ""
        for c in det[:, 5].unique():
            n = (det[:, 5] == c).sum()  # detections per class
            s += f"{n} {names[int(c)]}{'s' * (n > 1)}, "  # add to string

        # Mask plotting
        retina_masks = False
        annotator.masks(masks,
                        colors=[colors(x, True) for x in det[:, 5]],
                        im_gpu=None if retina_masks else im[i])
This is my lightly modified version; the original lives in segment/predict.py.
pred, proto = self.model(im, augment=False, visualize=visualize)[:2]
The proto here matters too: it holds the mask prototypes from which the fine, per-instance segmentation is reconstructed.
det tells you whether targets were detected, and after NMS its contents are rich.
The non_max_suppression docstring says it returns the boxes left after non-maximum suppression:
list of detections, on (n,6) tensor per image [xyxy, conf, cls]
Four box coordinates, a confidence, a class id. The det I printed is clearly wider than (n, 6) though: the first 6 columns keep that meaning, and with nm=32 each row carries 32 extra mask coefficients, giving (n, 38):
tensor([[ 3.14000e+02, 2.53000e+02, 9.28000e+02, 6.02000e+02, 9.83171e-01, 0.00000e+00, 5.36699e-01, -5.76064e-01, -3.26006e-01, 1.20082e+00, 3.36137e-01, -4.49288e-01, -1.76419e-01, 7.95439e-01, 4.11805e-01, -2.94021e-01, -1.07274e+00, 3.74787e-01, 7.30362e-01, -5.80536e-01, 1.28794e+00, 1.05980e+00,
7.34846e-01, -6.37928e-01, 5.95232e-01, 7.47005e-01, -5.02438e-01, 4.93569e-01, -3.65522e-01, 3.31907e-01, 2.75088e-01, -1.21060e+00, -7.28429e-01, 4.78636e-01, 1.70226e-01, -7.33963e-01, -5.29957e-01, 3.69660e-01],
[ 2.00000e+00, 2.05000e+02, 2.84000e+02, 6.03000e+02, 9.76957e-01, 0.00000e+00, 4.87412e-01, -4.98163e-01, -4.37511e-01, 1.22402e+00, 2.67139e-01, -4.17416e-01, -1.08538e-01, 7.58215e-01, 4.04070e-01, -3.91520e-01, -7.94110e-01, -2.26625e-02, 7.35040e-01, -3.86938e-01, 1.27367e+00, 6.53839e-01,
9.14556e-01, -4.18411e-01, 7.33185e-01, 4.69820e-01, -2.65769e-01, 3.17441e-01, -2.13026e-01, 2.10853e-01, 1.38901e-01, -1.21001e+00, -6.82711e-01, 6.36819e-01, 3.86214e-01, -6.94586e-01, -6.36750e-01, 3.26956e-01]])
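A slicing sketch of those 38 columns (variable names are mine, not from predict.py):

boxes = det[:, :4]   # xyxy box corners
conf  = det[:, 4]    # confidence
cls   = det[:, 5]    # class index
coefs = det[:, 6:]   # 32 mask coefficients, combined with proto by process_mask()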
V. ONNX Memory Leak
Good times never last: the conversion isn't that rosy either, at least the ONNX CPU build has a problem. The bug:
https://github.com/microsoft/onnxruntime/issues/9313
I discovered it because the server became unreachable and shut down; memory and CPU were both maxed out.
Memory climbs continuously: a memory leak. It is not PyTorch and not YOLOv5, it's ONNX.
Initial state:
35 minutes later:
I run the service with Docker. As long as images keep arriving, mem usage keeps growing... and even after killing the inference service inside the container, the mem usage reported by the Docker monitoring outside doesn't drop; it merely stops changing. Watching it for half a day, it reached 6 GB, so: the restart hammer, once an hour.
I'd like to dig into this properly when there's time; it's a big job and I'm genuinely curious, weak skills and big appetite notwithstanding.
If you start the service directly from the command line instead, a script can restart it on a schedule.
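For example (the container name is hypothetical, not from the original post), a crontab entry could bounce the container hourly:

0 * * * * docker restart water-seg-api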
Divider: 20230331 ------------------------------------------------------
VI. What the Evolve Parameter Is For
Iterating on the model again today, I wanted to grow the sample count with copy-paste augmentation, and mighty YOLOv5 already has it built in. Too considerate.
Every upside has a downside: with --evolve you can no longer train multi-GPU, which has to count as a pity.
The error, for the record:
AssertionError: --evolve is not compatible with YOLOv5 Multi-GPU DDP training
The actual command:
python segment/train.py --img 640 --batch 12 --epochs 300 --data water-seg.yaml --weights weights/yolov5m-seg.pt --workers 2 --save-period 20 --cache --evolve
It ends up in this part of train.py:
# Train
if not opt.evolve:
    train(opt.hyp, opt, device, callbacks)

# Evolve hyperparameters (optional)
else:
    # Hyperparameter evolution metadata (mutation scale 0-1, lower_limit, upper_limit)
    meta = {
        'lr0': (1, 1e-5, 1e-1),  # initial learning rate (SGD=1E-2, Adam=1E-3)
        'lrf': (1, 0.01, 1.0),  # final OneCycleLR learning rate (lr0 * lrf)
        'momentum': (0.3, 0.6, 0.98),  # SGD momentum/Adam beta1
        'weight_decay': (1, 0.0, 0.001),  # optimizer weight decay
        'warmup_epochs': (1, 0.0, 5.0),  # warmup epochs (fractions ok)
        'warmup_momentum': (1, 0.0, 0.95),  # warmup initial momentum
        'warmup_bias_lr': (1, 0.0, 0.2),  # warmup initial bias lr
        'box': (1, 0.02, 0.2),  # box loss gain
        'cls': (1, 0.2, 4.0),  # cls loss gain
        'cls_pw': (1, 0.5, 2.0),  # cls BCELoss positive_weight
        'obj': (1, 0.2, 4.0),  # obj loss gain (scale with pixels)
        'obj_pw': (1, 0.5, 2.0),  # obj BCELoss positive_weight
        'iou_t': (0, 0.1, 0.7),  # IoU training threshold
        'anchor_t': (1, 2.0, 8.0),  # anchor-multiple threshold
        'anchors': (2, 2.0, 10.0),  # anchors per output grid (0 to ignore)
        'fl_gamma': (0, 0.0, 2.0),  # focal loss gamma (efficientDet default gamma=1.5)
        'hsv_h': (1, 0.0, 0.1),  # image HSV-Hue augmentation (fraction)
        'hsv_s': (1, 0.0, 0.9),  # image HSV-Saturation augmentation (fraction)
        'hsv_v': (1, 0.0, 0.9),  # image HSV-Value augmentation (fraction)
        'degrees': (1, 0.0, 45.0),  # image rotation (+/- deg)
        'translate': (1, 0.0, 0.9),  # image translation (+/- fraction)
        'scale': (1, 0.0, 0.9),  # image scale (+/- gain)
        'shear': (1, 0.0, 10.0),  # image shear (+/- deg)
        'perspective': (0, 0.0, 0.001),  # image perspective (+/- fraction), range 0-0.001
        'flipud': (1, 0.0, 1.0),  # image flip up-down (probability)
        'fliplr': (0, 0.0, 1.0),  # image flip left-right (probability)
        'mosaic': (1, 0.0, 1.0),  # image mosaic (probability)
        'mixup': (1, 0.0, 1.0),  # image mixup (probability)
        'copy_paste': (1, 0.0, 1.0)}  # segment copy-paste (probability)
Without --evolve, none of the mutation ranges in that else branch are used. I was unsure at first, but note that the augmentations themselves (hsv, mosaic, mixup, copy_paste and so on) still run in normal training; their strengths come from the hyp yaml, and --evolve only decides whether those values get mutated and searched. So to turn on copy-paste without --evolve, you can simply raise copy_paste in your hyp file.
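For reference, the augmentation defaults as I remember them from data/hyps/hyp.scratch-low.yaml (double-check your copy):

hsv_h: 0.015
hsv_s: 0.7
hsv_v: 0.4
mosaic: 1.0
mixup: 0.0
copy_paste: 0.0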
TensorBoard: Start with 'tensorboard --logdir runs/train-seg', view at http://localhost:6006/
tensorboard --logdir runs/train-seg --host=0.0.0.0
The freshly installed environment reports an error:
2023-04-04 16:02:14.485000: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot op
VII. A Bug with Image Channels (found 2023-04-11)
Epoch GPU_mem box_loss seg_loss obj_loss cls_loss Instances Size
0/299 5.74G 0.1204 0.07856 0.02553 0 23 640: 35%|███▌ | 12/34 00:07libpng warning: sBIT: invalid
0/299 5.74G 0.1188 0.07694 0.02574 0 33 640: 41%|████ | 14/34 00:08libpng warning: sBIT: invalid
0/299 5.74G 0.1108 0.07117 0.02713 0 28 640: 71%|███████ | 24/34 00:11libpng warning: sBIT: invalid
0/299 5.75G 0.1029 0.06394 0.02723 0 10 640: 100%|██████████| 34/34 00:13
Class Images Instances Box(P R mAP50 mAP50-95) Mask(P R mAP50 mAP50-95): 100%|██████████| 8/8 00:02
all 188 215 0.28 0.251 0.142 0.06 0.212 0.195 0.082 0.0289
Epoch GPU_mem box_loss seg_loss obj_loss cls_loss Instances Size
1/299 7.32G 0.07902 0.04293 0.02481 0 40 640: 32%|███▏ | 11/34 00:02libpng warning: sBIT: invalid
1/299 7.32G 0.07798 0.04315 0.02462 0 27 640: 38%|███▊ | 13/34 00:03libpng warning: sBIT: invalid
1/299 7.32G 0.07878 0.04556 0.02474 0 38 640: 62%|██████▏ | 21/34 00:05libpng warning: sBIT: invalid
1/299 7.32G 0.07873 0.04521 0.02454 0 36 640: 68%|██████▊ | 23/34 00:05libpng warning: sBIT: invalid
1/299 7.32G 0.07862 0.04364 0.02392 0 34 640: 82%|████████▏ | 28/34 00:06libpng warning: sBIT: invalid
1/299 7.32G 0.07739 0.04325 0.0235 0 8 640: 100%|██████████| 34/34 00:07
Class Images Instances Box(P R mAP50 mAP50-95) Mask(P R mAP50 mAP50-95): 25%|██▌ | 2/8 00:00libpng warning: sBIT: invalid
Class Images Instances Box(P R mAP50 mAP50-95) Mask(P R mAP50 mAP50-95): 100%|██████████| 8/8 00:01
all 188 215 0.334 0.279 0.166 0.101 0.291 0.251 0.13 0.0608
Epoch GPU_mem box_loss seg_loss obj_loss cls_loss Instances Size
2/299 7.32G 0.06256 0.03825 0.0188 0 27 640: 3%|▎ | 1/34 00:00libpng warning: sBIT: invalid
2/299 7.32G 0.07064 0.04049 0.02257 0 40 640: 18%|█▊ | 6/34 00:01libpng warning: sBIT: invalid
2/299 7.32G 0.07023 0.03491 0.02087 0 22 640: 44%|████▍ | 15/34 00:03libpng warning: sBIT: invalid
2/299 7.32G 0.06993 0.03416 0.02027 0 25 640: 50%|█████ | 17/34 00:04libpng warning: sBIT: invalid
2/299 7.32G 0.06714 0.03363 0.01893 0 6 640: 100%|██████████| 34/34 00:07
Class Images Instances Box(P R mAP50 mAP50-95) Mask(P R mAP50 mAP50-95): 100%|██████████| 8/8 00:01
all 188 215 0.359 0.326 0.167 0.0631 0.235 0.228 0.0824 0.0265
Epoch GPU_mem box_loss seg_loss obj_loss cls_loss Instances Size
3/299 7.32G 0.06512 0.03397 0.02116 0 37 640: 3%|▎ | 1/34 00:00libpng warning: sBIT: invalid
3/299 7.32G 0.06598 0.04451 0.01951 0 32 640: 9%|▉ | 3/34 00:00libpng warning: sBIT: invalid
3/299 7.32G 0.06442 0.03769 0.01774 0 21 640: 18%|█▊ | 6/34 00:01libpng warning: sBIT: invalid
3/299 7.32G 0.06639 0.03573 0.02024 0 19 640: 38%|███▊ | 13/34 00:03libpng warning: sBIT: invalid
3/299 7.32G 0.06504 0.03499 0.02038 0 29 640: 76%|███████▋ | 26/34 00:06^Z
The cause: some images have four channels, with an alpha (transparency) channel. They must be converted to three channels first.
After the fix the progress bars reach 100%; before, training would break down whenever a four-channel image came up.
PyTorch 1.8 definitely didn't have this problem; the 1.10 in use now does.
The fix script is simple:
import glob

from PIL import Image

base_dir_list = ['/data_share/data_share/city_manager_20221017/water_street_coco_version2022124_yolo/yolov_format/images/train',
                 '/data_share/data_share/bad_case_water20230119/final/train/images',
                 '/data_share/data_share/city_manager_20221017/bad_case_water20240331/final/images']
for base_dir in base_dir_list:
    imglist = glob.glob(base_dir + "/*.jpg")
    for imgpath in imglist:
        print(imgpath)
        image = Image.open(imgpath)
        # image = image.resize((128, 128))  # batch-resize if needed
        image = image.convert("RGB")  # collapse 4-channel (RGBA) images to 3-channel RGB
        image.save(imgpath)
VIII. Adding Negative Samples
2023-04-27: added 500+ negative samples today. Multi-GPU training then hit the problem below; recording it first, not yet solved.
Model summary: 302 layers, 21671158 parameters, 21671158 gradients, 70.2 GFLOPs
Transferred 493/499 items from weights/yolov5m-seg.pt
AMP: checks passed ✅
optimizer: SGD(lr=0.01) with parameter groups 82 weight(decay=0.0), 85 weight(decay=0.00046875), 85 bias
train: Scanning /data_share/data_share/bad_case_water20230119/final/train/labels.cache... 1376 images, 872 backgrounds, 0 corrupt: 100%|██████████| 1376/1376 00:00
val: Scanning /data_share/data_share/bad_case_water20230119/final/val/labels.cache... 767 images, 652 backgrounds, 0 corrupt: 100%|██████████| 767/767 00:00
AutoAnchor: 2.78 anchors/target, 0.994 Best Possible Recall (BPR). Current anchors are a good fit to dataset ✅
Plotting labels to runs/train-seg/exp4/labels.jpg...
Image sizes 640 train, 640 val
Using 0 dataloader workers
Logging results to runs/train-seg/exp4
Starting training for 300 epochs...
Epoch GPU_mem box_loss seg_loss obj_loss cls_loss Instances Size
0/299 3.09G 0.08411 0.04014 0.02043 0 5 640: 100%|██████████| 115/115 01:17
Class Images Instances Box(P R mAP50 mAP50-95) Mask(P R mAP50 mAP50-95): 100%|██████████| 64/64 00:19
all 767 215 0.231 0.144 0.0901 0.0377 0.208 0.13 0.067 0.027
Epoch GPU_mem box_loss seg_loss obj_loss cls_loss Instances Size
1/299 3.74G 0.06376 0.02859 0.01463 0 8 640: 100%|██████████| 115/115 01:12
Class Images Instances Box(P R mAP50 mAP50-95) Mask(P R mAP50 mAP50-95): 100%|██████████| 64/64 00:18
all 767 215 0.302 0.205 0.145 0.0623 0.282 0.167 0.11 0.0449
Epoch GPU_mem box_loss seg_loss obj_loss cls_loss Instances Size
2/299 3.74G 0.05911 0.02605 0.01249 0 1 640: 100%|██████████| 115/115 01:12
Class Images Instances Box(P R mAP50 mAP50-95) Mask(P R mAP50 mAP50-95): 100%|██████████| 64/64 00:18
all 767 215 0.146 0.191 0.0664 0.0275 0.102 0.135 0.04 0.0152
Epoch GPU_mem box_loss seg_loss obj_loss cls_loss Instances Size
3/299 3.74G 0.05821 0.02656 0.013 0 3 640: 100%|██████████| 115/115 01:11
Class Images Instances Box(P R mAP50 mAP50-95) Mask(P R mAP50 mAP50-95): 100%|██████████| 64/64 00:18
all 767 215 0.127 0.256 0.0634 0.0207 0.114 0.223 0.0468 0.0146
Epoch GPU_mem box_loss seg_loss obj_loss cls_loss Instances Size
4/299 3.74G 0.05413 0.02566 0.01132 0 0 640: 78%|███████▊ | 90/115 00:57
Traceback (most recent call last):
File "/home/jianming_ge/workplace/road_water_yolov5-7.0/segment/train.py", line 659, in <module>
main(opt)
File "/home/jianming_ge/workplace/road_water_yolov5-7.0/segment/train.py", line 555, in main
train(opt.hyp, opt, device, callbacks)
File "/home/jianming_ge/workplace/road_water_yolov5-7.0/segment/train.py", line 309, in train
pred = model(imgs) # forward
File "/home/jianming_ge/miniconda3/envs/py39_torch1.10.1/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/home/jianming_ge/miniconda3/envs/py39_torch1.10.1/lib/python3.9/site-packages/torch/nn/parallel/distributed.py", line 873, in forward
if torch.is_grad_enabled() and self.reducer._rebuild_buckets():
RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by passing the keyword argument `find_unused_parameters=True` to `torch.nn.parallel.DistributedDataParallel`, and by
making sure all `forward` function outputs participate in calculating loss.
If you already have done the above, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's `forward` function. Please include the loss function and the structure of the return value of `forward` of your module when reporting this issue (e.g. list, dict, iterable).
Parameter indices which did not receive grad for rank 0: 243 244 245 246 247 248 249 250 251
In addition, you can set the environment variable TORCH_DISTRIBUTED_DEBUG to either INFO or DETAIL to print out information about which particular parameters did not receive gradient on this rank as part of this error
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 15743 closing signal SIGTERM
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 15742) of binary: /home/jianming_ge/miniconda3/envs/py39_torch1.10.1/bin/python
Traceback (most recent call last):
File "/home/jianming_ge/miniconda3/envs/py39_torch1.10.1/lib/python3.9/runpy.py", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/home/jianming_ge/miniconda3/envs/py39_torch1.10.1/lib/python3.9/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/home/jianming_ge/miniconda3/envs/py39_torch1.10.1/lib/python3.9/site-packages/torch/distributed/launch.py", line 193, in <module>
main()
File "/home/jianming_ge/miniconda3/envs/py39_torch1.10.1/lib/python3.9/site-packages/torch/distributed/launch.py", line 189, in main
launch(args)
File "/home/jianming_ge/miniconda3/envs/py39_torch1.10.1/lib/python3.9/site-packages/torch/distributed/launch.py", line 174, in launch
run(args)
File "/home/jianming_ge/miniconda3/envs/py39_torch1.10.1/lib/python3.9/site-packages/torch/distributed/run.py", line 710, in run
elastic_launch(
File "/home/jianming_ge/miniconda3/envs/py39_torch1.10.1/lib/python3.9/site-packages/torch/distributed/launcher/api.py", line 131, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/home/jianming_ge/miniconda3/envs/py39_torch1.10.1/lib/python3.9/site-packages/torch/distributed/launcher/api.py", line 259, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
segment/train.py FAILED
------------------------------------------------------------
Failures:
<NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2023-04-27_18:44:32
host : localhost.localdomain
rank : 0 (local_rank: 0)
exitcode : 1 (pid: 15742)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
IX. Another Bug (2023-05-09)
train: WARNING ⚠️ /data_share/data_share/city_manager_20221017/water_street_20230509/images/726.jpg: corrupt JPEG restored and saved
train: WARNING ⚠️ /data_share/data_share/city_manager_20221017/water_street_20230509/images/730.jpg: corrupt JPEG restored and saved
train: WARNING ⚠️ /data_share/data_share/city_manager_20221017/water_street_20230509/images/732.jpg: corrupt JPEG restored and saved
train: WARNING ⚠️ /data_share/data_share/city_manager_20221017/water_street_20230509/images/734.jpg: corrupt JPEG restored and saved
train: WARNING ⚠️ /data_share/data_share/city_manager_20221017/water_street_20230509/images/736.jpg: corrupt JPEG restored and saved
train: WARNING ⚠️ /data_share/data_share/city_manager_20221017/water_street_20230509/images/740.jpg: corrupt JPEG restored and saved
train: WARNING ⚠️ /data_share/data_share/city_manager_20221017/water_street_20230509/images/742.jpg: corrupt JPEG restored and saved
train: WARNING ⚠️ /data_share/data_share/city_manager_20221017/water_street_20230509/images/744.jpg: corrupt JPEG restored and saved
train: WARNING ⚠️ /data_share/data_share/city_manager_20221017/water_street_20230509/images/748.jpg: corrupt JPEG restored and saved
train: WARNING ⚠️ /data_share/data_share/city_manager_20221017/water_street_20230509/images/750.jpg: corrupt JPEG restored and saved
train: WARNING ⚠️ /data_share/data_share/city_manager_20221017/water_street_20230509/images/752.jpg: corrupt JPEG restored and saved
train: WARNING ⚠️ /data_share/data_share/city_manager_20221017/water_street_20230509/images/754.jpg: corrupt JPEG restored and saved
train: WARNING ⚠️ /data_share/data_share/city_manager_20221017/water_street_20230509/images/756.jpg: corrupt JPEG restored and saved
train: WARNING ⚠️ /data_share/data_share/city_manager_20221017/water_street_20230509/images/760.jpg: corrupt JPEG restored and saved
train: WARNING ⚠️ /data_share/data_share/city_manager_20221017/water_street_20230509/images/762.jpg: corrupt JPEG restored and saved
train: WARNING ⚠️ /data_share/data_share/city_manager_20221017/water_street_20230509/images/766.jpg: corrupt JPEG restored and saved
train: WARNING ⚠️ /data_share/data_share/city_manager_20221017/water_street_20230509/images/768.jpg: corrupt JPEG restored and saved
train: WARNING ⚠️ /data_share/data_share/city_manager_20221017/water_street_20230509/images/772.jpg: corrupt JPEG restored and saved
train: WARNING ⚠️ /data_share/data_share/city_manager_20221017/water_street_20230509/images/776.jpg: corrupt JPEG restored and saved
train: WARNING ⚠️ /data_share/data_share/city_manager_20221017/water_street_20230509/images/780.jpg: corrupt JPEG restored and saved
train: WARNING ⚠️ /data_share/data_share/city_manager_20221017/water_street_20230509/images/782.jpg: corrupt JPEG restored and saved
train: WARNING ⚠️ /data_share/data_share/city_manager_20221017/water_street_20230509/images/784.jpg: corrupt JPEG restored and saved
train: WARNING ⚠️ /data_share/data_share/city_manager_20221017/water_street_20230509/images/790.jpg: corrupt JPEG restored and saved
train: WARNING ⚠️ /data_share/data_share/city_manager_20221017/water_street_20230509/images/792.jpg: corrupt JPEG restored and saved
train: WARNING ⚠️ /data_share/data_share/city_manager_20221017/water_street_20230509/images/794.jpg: corrupt JPEG restored and saved
train: WARNING ⚠️ /data_share/data_share/city_manager_20221017/water_street_20230509/images/800.jpg: corrupt JPEG restored and saved
train: WARNING ⚠️ /data_share/data_share/city_manager_20221017/water_street_20230509/images/802.jpg: corrupt JPEG restored and saved
train: WARNING ⚠️ /data_share/data_share/city_manager_20221017/water_street_20230509/images/804.jpg: corrupt JPEG restored and saved
train: WARNING ⚠️ /data_share/data_share/city_manager_20221017/water_street_20230509/images/808.jpg: corrupt JPEG restored and saved
Most likely the latest batch of added samples wasn't processed properly; the cause is still unknown. TBD! From what I could look up, the warning doesn't block training, though I suspect it may hurt the results, since I can't tell whether those images are actually read successfully or not. If you know, please leave a comment.
----------------------------20230721---------------------------
Found the code behind this message; it comes from here:
ref: https://huggingface.co/spaces/nakamura196/yolov5-ndl-layout/blob/447b47ec77e6ea46fef0abba2594b11de7874676/ultralytics/yolov5/utils/datasets.py
if im.format.lower() in ('jpg', 'jpeg'):
    with open(im_file, 'rb') as f:
        f.seek(-2, 2)
        if f.read() != b'\xff\xd9':  # corrupt JPEG
            ImageOps.exif_transpose(Image.open(im_file)).save(im_file, 'JPEG', subsampling=0, quality=100)
            msg = f'{prefix}WARNING: {im_file}: corrupt JPEG restored and saved'
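So the check looks for the JPEG end-of-image marker (FF D9) and re-saves the file when it's missing. A standalone sketch (the directory is hypothetical) to repair a folder ahead of training, borrowing the same logic:

import glob

from PIL import Image, ImageOps

for im_file in glob.glob("/path/to/images/*.jpg"):  # hypothetical directory
    with open(im_file, "rb") as f:
        f.seek(-2, 2)  # jump to the last two bytes
        intact = f.read() == b"\xff\xd9"  # JPEG end-of-image marker
    if not intact:
        ImageOps.exif_transpose(Image.open(im_file)).save(im_file, "JPEG", subsampling=0, quality=100)
        print(f"repaired {im_file}")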
This error turned up while training another model; recording it first, fix to follow. Also, that multi-GPU run didn't raise the error above, but it raised a different one:
AutoAnchor: 6.24 anchors/target, 0.999 Best Possible Recall (BPR). Current anchors are a good fit to dataset ✅
Plotting labels to runs/train/exp3/labels.jpg...
Image sizes 640 train, 640 val
Using 32 dataloader workers
Logging results to runs/train/exp3
Starting training for 500 epochs...
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
0/499 7.48G 0.1133 0.02796 0.02854 64 640: 0%| | 1/447 [00:04<33:41, 4.53s/it]Reducer buckets have been rebuilt in this iteration.
0/499 7.54G 0.109 0.02788 0.02835 93 640: 4%|▎ | 16/447 [00:10<02:08, 3.34it/s]WARNING:torch.distributed.elastic.agent.server.api:Received 1 death signal, shutting down workers
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 9802 closing signal SIGHUP
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 9803 closing signal SIGHUP
Traceback (most recent call last):
File "/home/jianming_ge/miniconda3/envs/py39_torch1.10.1/lib/python3.9/runpy.py", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/home/jianming_ge/miniconda3/envs/py39_torch1.10.1/lib/python3.9/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/home/jianming_ge/miniconda3/envs/py39_torch1.10.1/lib/python3.9/site-packages/torch/distributed/launch.py", line 193, in <module>
main()
File "/home/jianming_ge/miniconda3/envs/py39_torch1.10.1/lib/python3.9/site-packages/torch/distributed/launch.py", line 189, in main
launch(args)
File "/home/jianming_ge/miniconda3/envs/py39_torch1.10.1/lib/python3.9/site-packages/torch/distributed/launch.py", line 174, in launch
run(args)
File "/home/jianming_ge/miniconda3/envs/py39_torch1.10.1/lib/python3.9/site-packages/torch/distributed/run.py", line 710, in run
elastic_launch(
File "/home/jianming_ge/miniconda3/envs/py39_torch1.10.1/lib/python3.9/site-packages/torch/distributed/launcher/api.py", line 131, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/home/jianming_ge/miniconda3/envs/py39_torch1.10.1/lib/python3.9/site-packages/torch/distributed/launcher/api.py", line 252, in launch_agent
result = agent.run()
File "/home/jianming_ge/miniconda3/envs/py39_torch1.10.1/lib/python3.9/site-packages/torch/distributed/elastic/metrics/api.py", line 125, in wrapper
result = f(*args, **kwargs)
File "/home/jianming_ge/miniconda3/envs/py39_torch1.10.1/lib/python3.9/site-packages/torch/distributed/elastic/agent/server/api.py", line 709, in run
result = self._invoke_run(role)
File "/home/jianming_ge/miniconda3/envs/py39_torch1.10.1/lib/python3.9/site-packages/torch/distributed/elastic/agent/server/api.py", line 843, in _invoke_run
time.sleep(monitor_interval)
File "/home/jianming_ge/miniconda3/envs/py39_torch1.10.1/lib/python3.9/site-packages/torch/distributed/elastic/multiprocessing/api.py", line 60, in _terminate_process_handler
raise SignalException(f"Process {os.getpid()} got signal: {sigval}", sigval=sigval)
torch.distributed.elastic.multiprocessing.api.SignalException: Process 9751 got signal: 1
Lowering batch_size and workers made the error go away:
python -m torch.distributed.launch --nproc_per_node=2 train.py --weights weights/yolov5m6_coco.pt --img 640 --epoch 500 --data fire_smoke.yaml --batch-size 24 --workers 8 --save-period 20
My suspicion is that the GPUs simply couldn't take it before; even after dropping the batch size from 36 to 24, utilization still sits at 80-90%.
Training finished, and early stopping kicked in. Nice, hahaha.
Open questions:
- I split the data 8:1:1 into train, val, and test here, so which set are the reported P 0.815 and R 0.75 computed on, val or test?
- I fine-tuned from yolov5m6.pt; how different would yolov5l6.pt or YOLOv8 be?
This post has grown too long; I'll continue in a new one.
Summary
That's it for now; once the model is ready I'll check the results and keep writing.
Next comes wrapping it in an API, and there are ready-made frameworks and recipes for that. Easy life.
One last ad: DM me if you need the dataset, but it's paid.