
YOLOv5 + Stereo Distance Measurement (Python)

This article introduces YOLOv5 + stereo distance measurement in Python. I hope it helps; if anything here is wrong or incomplete, corrections are welcome.

YOLOv5 + stereo camera: 3D distance measurement

1. ZED + YOLOv7 stereo distance measurement
2. ZED + YOLOv4 stereo distance measurement (uses the camera's own parameters directly, no manual calibration)
3. ZED + YOLOv5 stereo distance measurement (uses the camera's own parameters directly, no manual calibration)
4. A demo of this article's results is on Bilibili: click to watch (likes and coins welcome)
5. If you use a ZED camera, see my two articles above: reading the camera's internal parameters directly gives much better accuracy than this stereo pipeline

Download link 1: https://download.csdn.net/download/qq_45077760/87233955 (CSDN)
Download link 2: https://github.com/up-up-up-up/yolov5_ceju (GitHub)

Note: I only tested detection within 10 m and have not quantified the exact error. A smaller calibration error gives better accuracy; lighting and brightness also matter, and the usable detection range depends heavily on the quality of the cameras.

Building on the 3D ranging, I also implemented 3D tracking; see YOLOv5 + stereo 3D tracking (Python) and YOLOv7 + stereo 3D tracking (Python).

1. Project Pipeline

Overall flow: stereo calibration → stereo rectification → stereo matching → integration with YOLOv5 → depth-based distance measurement

  1. Find the code in the detector that outputs the object bounding boxes.
  2. Find the code in the stereo-ranging pipeline that computes object depth.
  3. Combine steps 1 and 2 to get the depth of the object inside each box (a minimal sketch follows this list).
  4. Find where the detector draws the class label, and append the depth value to the displayed text.
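As a minimal sketch of step 3, using the conventions that appear later in this article (points_3d is the H×W×3 point cloud in millimetres returned by the stereo thread; box_distance is a hypothetical helper name, not part of the repo):

import numpy as np

def box_distance(points_3d: np.ndarray, xyxy) -> float:
    """Distance (m) from the left camera to the centre of a detection box.
    points_3d: HxWx3 array from cv2.reprojectImageTo3D (units: mm).
    xyxy: (x1, y1, x2, y2) box corners in pixels."""
    x_0 = int((xyxy[0] + xyxy[2]) / 2)      # box centre, x
    y_0 = int((xyxy[1] + xyxy[3]) / 2)      # box centre, y
    x, y, z = points_3d[y_0, x_0] / 1000.0  # mm -> m
    return float((x ** 2 + y ** 2 + z ** 2) ** 0.5)  # Euclidean distance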

2. Ranging Principle

If you want to understand how stereo ranging works, see the article Stereo 3D Distance Measurement (Python).
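For quick reference, the core relation is Z = f · B / d, where Z is depth, f the focal length in pixels, B the baseline, and d the disparity in pixels. With the parameters used below (f ≈ 1100 px, B ≈ 120 mm), a 10 m target produces a disparity of only about 1100 × 0.12 / 10 ≈ 13 px, which is why accuracy falls off quickly with range.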

3. Code Walkthrough

Stereo camera parameters: stereoconfig.py
The smaller the stereo calibration error, the better. Mine is 0.1; try to keep it below 0.2.

import numpy as np

# Stereo camera parameters (from offline calibration)
class stereoCamera(object):
    def __init__(self):
        # Intrinsic matrix of the left camera (pixels)
        self.cam_matrix_left = np.array([[1101.89299, 0, 1119.89634],
                                         [0, 1100.75252, 636.75282],
                                         [0, 0, 1]])
        # Intrinsic matrix of the right camera (pixels)
        self.cam_matrix_right = np.array([[1091.11026, 0, 1117.16592],
                                          [0, 1090.53772, 633.28256],
                                          [0, 0, 1]])

        # Distortion coefficients [k1, k2, p1, p2, k3]
        self.distortion_l = np.array([[-0.08369, 0.05367, -0.00138, -0.0009, 0]])
        self.distortion_r = np.array([[-0.09585, 0.07391, -0.00065, -0.00083, 0]])

        # Rotation of the right camera relative to the left
        self.R = np.array([[1.0000, -0.000603116945856524, 0.00377055351856816],
                           [0.000608108737333211, 1.0000, -0.00132288199083992],
                           [-0.00376975166958581, 0.00132516525298933, 1.0000]])

        # Translation of the right camera relative to the left (mm)
        self.T = np.array([[-119.99423], [-0.22807], [0.18540]])
        self.baseline = 119.99423  # baseline length (mm), i.e. |T_x|
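These numbers come from an offline calibration; the calibration script itself is not part of this post. For reference, here is a minimal OpenCV sketch of how such parameters can be obtained from chessboard image pairs — the image paths and board geometry (9×6 inner corners, 25 mm squares) are placeholders you must adapt:

import glob
import cv2
import numpy as np

# Chessboard geometry (placeholders): 9x6 inner corners, 25 mm squares
pattern, square = (9, 6), 25.0
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

objpoints, imgpoints_l, imgpoints_r = [], [], []
for fl, fr in zip(sorted(glob.glob('calib/left/*.png')),
                  sorted(glob.glob('calib/right/*.png'))):
    gl = cv2.imread(fl, cv2.IMREAD_GRAYSCALE)
    gr = cv2.imread(fr, cv2.IMREAD_GRAYSCALE)
    okl, cl = cv2.findChessboardCorners(gl, pattern)
    okr, cr = cv2.findChessboardCorners(gr, pattern)
    if okl and okr:  # keep only pairs where the board is found in both views
        objpoints.append(objp)
        imgpoints_l.append(cl)
        imgpoints_r.append(cr)

# calibrate each camera alone, then the pair with intrinsics fixed
_, M1, d1, _, _ = cv2.calibrateCamera(objpoints, imgpoints_l, gl.shape[::-1], None, None)
_, M2, d2, _, _ = cv2.calibrateCamera(objpoints, imgpoints_r, gr.shape[::-1], None, None)
rms, M1, d1, M2, d2, R, T, E, F = cv2.stereoCalibrate(
    objpoints, imgpoints_l, imgpoints_r, M1, d1, M2, d2, gl.shape[::-1],
    flags=cv2.CALIB_FIX_INTRINSIC)
print('stereo RMS reprojection error:', rms)  # aim for < 0.2 as noted above

The rms value printed at the end is the reprojection error referred to above.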

Below is the image-processing code from my stereo.py.
These are standard routines you can find online and use directly.

import cv2
import numpy as np

class stereo_dd:
    def __init__(self, imgl, imgr):
        self.left  = imgl
        self.right = imgr

    # Preprocessing
    def preprocess(self, img1, img2):
        # colour -> greyscale
        if img1.ndim == 3:  # a 3-D array means a colour image
            img1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)  # OpenCV loads images as BGR
        if img2.ndim == 3:
            img2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)

        # histogram equalisation
        img1 = cv2.equalizeHist(img1)
        img2 = cv2.equalizeHist(img2)
        return img1, img2

    '''
    # Undistortion (single-image variant)
    def undistortion(self, image, camera_matrix, dist_coeff):
        undistortion_image = cv2.undistort(image, camera_matrix, dist_coeff)
        return undistortion_image
    '''
    # Undistortion (both images at once)
    def undistortion(self, imagleft, imagright, camera_matrix_left, camera_matrix_right, dist_coeff_left, dist_coeff_right):
        undistortion_imagleft  = cv2.undistort(imagleft,  camera_matrix_left,  dist_coeff_left)
        undistortion_imagright = cv2.undistort(imagright, camera_matrix_right, dist_coeff_right)
        return undistortion_imagleft, undistortion_imagright

    # Undistortion + stereo rectification (the maps come from getRectifyTransform)
    def rectifyImage(self, image1, image2, map1x, map1y, map2x, map2y):
        # note: cv2.remap does not support INTER_AREA (it falls back to INTER_LINEAR),
        # so use INTER_LINEAR explicitly
        rectifyed_img1 = cv2.remap(image1, map1x, map1y, cv2.INTER_LINEAR)
        rectifyed_img2 = cv2.remap(image2, map2x, map2y, cv2.INTER_LINEAR)
        return rectifyed_img1, rectifyed_img2
        
    # Rectification check: draw horizontal lines; matching features should lie on the same line
    def draw_line(self, image1, image2):
        # build the side-by-side output image
        height = max(image1.shape[0], image2.shape[0])
        width = image1.shape[1] + image2.shape[1]

        output = np.zeros((height, width, 3), dtype=np.uint8)
        output[0:image1.shape[0], 0:image1.shape[1]] = image1
        output[0:image2.shape[0], image1.shape[1]:] = image2

        # draw equally spaced horizontal lines
        line_interval = 50  # line spacing: 50 px
        for k in range(height // line_interval):
            cv2.line(output, (0, line_interval * (k + 1)), (2 * width, line_interval * (k + 1)), (0, 255, 0), thickness=2, lineType=cv2.LINE_AA)

        return output

    # Disparity computation
    def stereoMatchSGBM(self, left_image, right_image, down_scale=False):
        # SGBM matching parameters
        if left_image.ndim == 2:
            img_channels = 1
        else:
            img_channels = 3
        blockSize = 3
        paraml = {'minDisparity': 0,
                 'numDisparities': 128,
                 'blockSize': blockSize,
                 'P1': 8 * img_channels * blockSize ** 2,
                 'P2': 32 * img_channels * blockSize ** 2,
                 'disp12MaxDiff': -1,
                 'preFilterCap': 63,
                 'uniquenessRatio': 10,
                 'speckleWindowSize': 100,
                 'speckleRange': 1,
                 'mode': cv2.STEREO_SGBM_MODE_SGBM_3WAY
                 }

        # build the SGBM matchers
        left_matcher = cv2.StereoSGBM_create(**paraml)
        paramr = paraml.copy()  # copy, so the right matcher's settings do not mutate paraml
        paramr['minDisparity'] = -paraml['numDisparities']
        right_matcher = cv2.StereoSGBM_create(**paramr)
        # compute the disparity maps
        size = (left_image.shape[1], left_image.shape[0])
        if not down_scale:
            disparity_left = left_matcher.compute(left_image, right_image)
            disparity_right = right_matcher.compute(right_image, left_image)

        else:
            # match at half resolution, then upsample and rescale the disparities
            left_image_down = cv2.pyrDown(left_image)
            right_image_down = cv2.pyrDown(right_image)
            factor = left_image.shape[1] / left_image_down.shape[1]

            disparity_left_half = left_matcher.compute(left_image_down, right_image_down)
            disparity_right_half = right_matcher.compute(right_image_down, left_image_down)
            disparity_left = cv2.resize(disparity_left_half, size, interpolation=cv2.INTER_AREA)
            disparity_right = cv2.resize(disparity_right_half, size, interpolation=cv2.INTER_AREA)
            disparity_left = factor * disparity_left
            disparity_right = factor * disparity_right

        # SGBM returns fixed-point disparities scaled by 16; divide to get true values
        trueDisp_left = disparity_left.astype(np.float32) / 16.
        trueDisp_right = disparity_right.astype(np.float32) / 16.
        return trueDisp_left, trueDisp_right
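The main script below imports getRectifyTransform from stereo/dianyuntu_yolo.py, which is not reproduced in this post. Here is a minimal sketch of what such a function typically looks like, built from cv2.stereoRectify and the stereoCamera config above (a reconstruction, not the repo's exact code):

import cv2

def getRectifyTransform(height, width, config):
    """Compute the rectification maps and the reprojection matrix Q
    from the stereoCamera parameters above (sketch)."""
    R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(
        config.cam_matrix_left, config.distortion_l,
        config.cam_matrix_right, config.distortion_r,
        (width, height), config.R, config.T, alpha=0)
    map1x, map1y = cv2.initUndistortRectifyMap(
        config.cam_matrix_left, config.distortion_l, R1, P1,
        (width, height), cv2.CV_32FC1)
    map2x, map2y = cv2.initUndistortRectifyMap(
        config.cam_matrix_right, config.distortion_r, R2, P2,
        (width, height), cv2.CV_32FC1)
    return map1x, map1y, map2x, map2y, Q

The Q matrix it returns is what turns a disparity map into the points_3d array used below, via cv2.reprojectImageTo3D(disparity, Q).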

Distance-measurement code walkthrough

Here I simply take the depth at the centre point of the detection box as the distance.
You could also loop over the box and use the mean or median depth instead (a sketch of the median version follows the snippet below).

if (accel_frame % fps_set == 0):
    t3 = time.time()                  # stereo timing: start
    thread.join()                     # wait for the stereo-matching thread
    points_3d = thread.get_result()   # HxWx3 point cloud (mm)
    t4 = time.time()                  # stereo timing: end
    a = points_3d[int(y_0), int(x_0), 0] / 1000  # x at the box centre (m)
    b = points_3d[int(y_0), int(x_0), 1] / 1000  # y (m)
    c = points_3d[int(y_0), int(x_0), 2] / 1000  # z (m)
    dis = ((a**2 + b**2 + c**2)**0.5)            # Euclidean distance (m)
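If you prefer the median mentioned above (the main script already imports a get_median helper from stereo/stereo.py), here is a sketch of the idea, assuming the same H×W×3 points_3d array in millimetres; the 50 m validity cutoff is an arbitrary assumption:

import numpy as np

def box_median_distance(points_3d, xyxy):
    """Median distance (m) over all pixels inside a detection box (sketch)."""
    x1, y1, x2, y2 = (int(v) for v in xyxy)
    roi = points_3d[y1:y2, x1:x2].astype(np.float64) / 1000.0  # mm -> m
    dist = np.sqrt((roi ** 2).sum(axis=2))  # per-pixel distance to the left camera
    dist[~np.isfinite(dist) | (dist > 50)] = np.nan  # mask unmatched/absurd points
    return float(np.nanmedian(dist))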

Here I also added 3D distance checks between people and cars, classified into three risk levels: normal, intermediate, and high.
You can swap in person-to-person or car-to-car pairs, etc.

##########  plot_one_box variant ###########
if (distance != 0):  # add bbox to image
    label = f'{names[int(cls)]} {conf:.2f} '
    '''If you do not need the classification below, replace these lines with
    plot_one_box(xyxy, im0, label=label, color=colors[int(cls)], line_thickness=3)
    I split detections by class in order to compute person-to-car distances.'''
    if label is not None:
        if (label.split())[0] == 'person':
            people_coords.append(xyxy)
        if (label.split())[0] == 'car' or (label.split())[0] == 'truck':
            car_coords.append(xyxy)
            # plot_dots_on_car(xyxy, im0)
        plot_one_box(xyxy, im0, label=label, color=colors[int(cls)], line_thickness=3)
##########  annotator.box_label variant ###########
if names[int(cls)] == "person":
    people_coords.append(xyxy)
    c = int(cls)  # integer class
    label = None if hide_labels else (
        names[c] if hide_conf else f'{names[c]} {conf:.2f}')
    print("x:", point_cloud_value[0], "y:", point_cloud_value[1], "z:",
          point_cloud_value[2], "dis:", distance, '', label)
    txt = '{0}  dis:{1} '.format(label, distance)
    annotator.box_label(xyxy, txt, color=colors(c, True))
if names[int(cls)] == "chair":  # this branch appends "chair" boxes to car_coords
    car_coords.append(xyxy)
    c = int(cls)  # integer class
    label = None if hide_labels else (names[c] if hide_conf else f'{names[c]} {conf:.2f}')
    print("x:", point_cloud_value[0], "y:", point_cloud_value[1], "z:",
          point_cloud_value[2], "dis:", distance, '', label)
    txt = '{0}  dis:{1} '.format(label, distance)
    annotator.box_label(xyxy, txt, color=colors(c, True))
normal, intermediate, high = distancing(people_coords, car_coords, im0, intermediate, high, normal, dist_thres_lim=(2, 3))
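The distancing() function called here comes from the repo and is not shown in this post; it also draws on im0 and updates the three counters. As a simplified, hypothetical sketch of the underlying idea — measure the 3D distance between each person/car pair and bucket it with the (2, 3) metre thresholds:

import numpy as np

def pairwise_risk(people_coords, car_coords, points_3d, dist_thres_lim=(2, 3)):
    """Bucket each person-to-car 3D distance into a risk level (sketch only;
    the repo's distancing() additionally annotates the frame)."""
    def centre_xyz(xyxy):
        cx = int((xyxy[0] + xyxy[2]) / 2)
        cy = int((xyxy[1] + xyxy[3]) / 2)
        return points_3d[cy, cx] / 1000.0  # mm -> m

    levels = []
    for p in people_coords:
        for c in car_coords:
            d = float(np.linalg.norm(centre_xyz(p) - centre_xyz(c)))
            if d < dist_thres_lim[0]:
                levels.append('high')          # closer than 2 m
            elif d < dist_thres_lim[1]:
                levels.append('intermediate')  # between 2 m and 3 m
            else:
                levels.append('normal')
    return levels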

Main script
Multithreading is used to speed up processing: the stereo matching runs in a background thread while the detections are processed.
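MyThread and stereo_threading are imported from stereo/stereo.py (not reproduced here). MyThread is presumably the common "thread that keeps its return value" pattern; a minimal sketch, assuming it matches the repo's version:

import threading

class MyThread(threading.Thread):
    """Run func(*args) in a background thread and keep its return value."""
    def __init__(self, func, args=()):
        super().__init__()
        self.func = func
        self.args = args
        self._result = None

    def run(self):
        self._result = self.func(*self.args)

    def get_result(self):
        # call after join(); here it returns the points_3d array
        return self._result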


import argparse
import time
from pathlib import Path
import threading
from threading import Thread
import cv2
import torch
import torch.backends.cudnn as cudnn
from numpy import random
import numpy as np
from utils.datasets import *
from PIL import Image, ImageDraw, ImageFont
from models.experimental import attempt_load
from utils.datasets import LoadStreams, LoadImages
from utils.general import check_img_size, check_requirements, check_imshow, non_max_suppression, apply_classifier, \
    scale_coords, xyxy2xywh, strip_optimizer, set_logging, increment_path
from utils.plots import plot_one_box
from utils.torch_utils import select_device, load_classifier, time_synchronized

from stereo.dianyuntu_yolo import preprocess, undistortion, getRectifyTransform, draw_line, rectifyImage, \
    stereoMatchSGBM  # , hw3ToN3, view_cloud ,DepthColor2Cloud
from stereo import stereoconfig
from stereo.stereo import stereo_dd
from stereo.stereo import get_median, stereo_threading, MyThread
intermediate = 0
high = 0
normal = 0
people_label = " "
normal_label = " "
inter_label = " "
high_label = " "
def detect(save_img=False):
    global intermediate, high, normal  # risk counters are updated inside this function
    accel_frame = 0
    source, weights, view_img, save_txt, imgsz = opt.source, opt.weights, opt.view_img, opt.save_txt, opt.img_size
    webcam = source.isnumeric() or source.endswith('.txt') or source.lower().startswith(
        ('rtsp://', 'rtmp://', 'http://'))

    # Initialize
    set_logging()
    device = select_device(opt.device)
    half = device.type != 'cpu'

    # Load model
    model = attempt_load(weights, map_location=device)
    stride = int(model.stride.max())
    imgsz = check_img_size(imgsz,s=stride)
    if half:
        model.half()

    # Second-stage classifier
    classify = False
    if classify:
        modelc = load_classifier(name='resnet101', n=2)
        modelc.load_state_dict(torch.load('weights/resnet101.pt', map_location=device)['model']).to(device).eval()

    # Set Dataloader
    vid_path, vid_writer = None, None
    if webcam:
        save_stream_dir = Path(
            increment_path(Path("./runs/streams") / opt.name, exist_ok=opt.exist_ok))
        (save_stream_dir / 'labels' if save_txt else save_stream_dir).mkdir(parents=True,
                                                                            exist_ok=True)

        flag = 0
        view_img = check_imshow()
        cudnn.benchmark = True
        dataset = LoadStreams(source, img_size=imgsz, stride=stride)
        # LoadStreams grabs frames in a background thread; imgs[0] holds the frame
        # from camera 0, and dataset itself is an iterator over the streams
    else:
        # Directories
        save_dir = Path(increment_path(Path(opt.project) / opt.name, exist_ok=opt.exist_ok))
        (save_dir / 'labels' if save_txt else save_dir).mkdir(parents=True, exist_ok=True)

        save_img = True
        dataset = LoadImages(source, img_size=imgsz,
                             stride=stride)
        print("img_size:")
        print(imgsz)

    names = model.module.names if hasattr(model, 'module') else model.names
    colors = [[random.randint(0, 255) for _ in range(3)] for _ in names]

    ################################ stereo #############################
    if device.type != 'cpu':
        model(torch.zeros(1, 3, imgsz, imgsz).to(device).type_as(next(model.parameters())))  # run once
    t0 = time.time()  # start timing the whole inference run


    config = stereoconfig.stereoCamera()  # parameters from stereoconfig.py above
    # stereo rectification: maps plus the reprojection matrix Q (image size 720x1280)
    map1x, map1y, map2x, map2y, Q = getRectifyTransform(720, 1280, config)
    # ---- main loop over frames; dataset is an iterator, so webcam sources loop indefinitely ----
    for path, img, im0s, vid_cap in dataset:
        img = torch.from_numpy(img).to(device)
        img = img.half() if half else img.float()
        img /= 255.0
        if img.ndimension() == 3:
            img = img.unsqueeze(0)
        t1 = time_synchronized()
        pred = model(img, augment=opt.augment)[0]

        pred = non_max_suppression(pred, opt.conf_thres, opt.iou_thres, classes=opt.classes, agnostic=opt.agnostic_nms)
        t2 = time_synchronized()

        # Apply Classifier
        if classify:
            pred = apply_classifier(pred, modelc, img, im0s)

        # List to store bounding coordinates of people      1111111
        people_coords = []  # 1111111111
        car_coords = []
        # Process detections
        for i, det in enumerate(pred):
            if webcam:
                p, s, im0, frame = path[i], '%g: ' % i, im0s[
                    i].copy(), dataset.count
            else:
                p, s, im0, frame = path, '', im0s, getattr(dataset, 'frame', 0)

            fps_set = 10  # run the stereo matching once every fps_set frames
            if (accel_frame % fps_set == 0):
                thread = MyThread(stereo_threading,args=(config, im0, map1x, map1y, map2x, map2y, Q))
                thread.start()
                print(threading.active_count())
                print(threading.enumerate())
                print("############## Frame is %d !##################" % accel_frame)

            p = Path(p)
            if webcam:
                save_stream_path = str(save_stream_dir / "stream0.mp4")
            else:
                save_path = str(save_dir / p.name)
                txt_path = str(save_dir / 'labels' / p.stem) + (
                    '' if dataset.mode == 'image' else f'_{frame}')

            s += '%gx%g ' % img.shape[2:]
            gn = torch.tensor(im0.shape)[[1, 0, 1, 0]]

            if len(det):
                det[:, :4] = scale_coords(img.shape[2:], det[:, :4], im0.shape).round()
                for c in det[:, -1].unique():
                    n = (det[:, -1] == c).sum()
                    s += f"{n} {names[int(c)]} {'s' * (n > 1)} , "
                for *xyxy, conf, cls in reversed(det):

                    if (0 < xyxy[2] < 1280):
                        if save_txt:
                            xywh = (xyxy2xywh(torch.tensor(xyxy).view(1, 4)) / gn).view(-1).tolist()

                            print("xywh  x : %d, y : %d" % (xywh[0], xywh[1]))
                            line = (cls, *xywh, conf) if opt.save_conf else (cls, *xywh)
                            with open(txt_path + '.txt', 'a') as f:
                                f.write(('%g ' * len(line)).rstrip() % line + '\n')
                        if save_img or view_img:
                            x_center = (xyxy[0] + xyxy[2]) / 2
                            y_center = (xyxy[1] + xyxy[3]) / 2
                            x_0 = int(x_center)
                            y_0 = int(y_center)
                            if (0 < x_0 < 1280):
                                x1 = xyxy[0]
                                x2 = xyxy[2]
                                y1 = xyxy[1]
                                y2 = xyxy[3]

                                if (accel_frame % fps_set == 0):
                                    t3 = time.time()  
                                    thread.join()
                                    points_3d = thread.get_result()
                                    t4 = time.time() 
                                    print(f'{s}Stereo Done. ({t4 - t3:.3f}s)')
                                    a = points_3d[int(y_0), int(x_0), 0] / 1000
                                    b = points_3d[int(y_0), int(x_0), 1] / 1000
                                    c = points_3d[int(y_0), int(x_0), 2] / 1000
                                    dis = ((a**2+b**2+c**2)**0.5)
                                distance = dis  # keep distance a scalar so the %0.2f formatting below works
                                if (distance != 0):  # add bbox to image
                                    label = f'{names[int(cls)]} {conf:.2f} '
                                    '''If you do not need the classification below, replace these lines with
                                    plot_one_box(xyxy, im0, label=label, color=colors[int(cls)], line_thickness=3)
                                    I split detections by class in order to compute person-to-car distances.'''
                                    if label is not None:
                                        if (label.split())[0] == 'person':
                                            people_coords.append(xyxy)

                                        if (label.split())[0] == 'car' or (label.split())[0] =='truck':
                                            car_coords.append(xyxy)
                                            #plot_dots_on_car(xyxy, im0)
                                        plot_one_box(xyxy, im0, label=label, color=colors[int(cls)],line_thickness=3)

                                    text_xy_0 = "*"
                                    cv2.putText(im0, text_xy_0, (int(x_0), int(y_0)), cv2.FONT_ITALIC, 1.2,(0, 0, 255), 3)

                                    print()
                                    print('Point (%d, %d): %s is %0.2f m from the left camera' % (x_center, y_center, label, distance))
                                    text_dis_avg = "dis:%0.2fm" % distance

                                    # only put dis on frame
                                    cv2.rectangle(im0, (int(x1 + (x2 - x1)), int(y1)), (int(x1 + (x2 - x1) + 5 + 210), int(y1 + 40)), colors[int(cls)], -1)  # filled box behind the distance text
                                    cv2.putText(im0, text_dis_avg, (int(x1 + (x2 - x1) + 5), int(y1 + 30)),cv2.FONT_ITALIC, 1.2, (255, 255, 255), 3)
                                    '''Likewise, remove the next line if you do not need the person-to-car distancing'''
                                    normal, intermediate, high = distancing(people_coords,car_coords, im0, intermediate, high,normal,dist_thres_lim=(2, 3))

            t5 = time_synchronized()  # stereo time end
            print(f'{s}yolov5 Done. ({t2 - t1:.3f}s)')


            if (accel_frame % fps_set == 0):
                print(f'{s}yolov5+stereo Done. ({t5 - t1:.3f}s)')

            if cv2.waitKey(1) & 0xFF == ord('q'):
                if save_txt or save_img:
                    s = f"\n{len(list(save_dir.glob('labels/*.txt')))} labels saved to {save_dir / 'labels'}" if save_txt else ''

                if view_img:
                    s = save_stream_path
                print(f"Results saved to {s}")
                print(f'All Done. ({time.time() - t0:.3f}s)')
                vid_writer.release()
                exit()

            # Stream results
            if view_img:
                if dataset.mode == 'stream' and flag == 0:
                    if isinstance(vid_writer, cv2.VideoWriter):
                        vid_writer.release()  # release previous video writer

                    fourcc = 'mp4v'  # output video codec
                    fps = 24  # vid_cap.get(cv2.CAP_PROP_FPS)
                    w = 2560  # int(vid_cap.get(cv2.CAP_PROP_FRAME_WIDTH))
                    h = 720  # int(vid_cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
                    print("save_stream_dir is %s" % save_stream_dir)
                    print("save_stream_path is %s" % save_stream_path)
                    vid_writer = cv2.VideoWriter(save_stream_path, cv2.VideoWriter_fourcc(*fourcc), fps, (w, h))
                    flag = 1

                vid_writer.write(im0)
                cv2.namedWindow("Webcam", cv2.WINDOW_NORMAL)
                cv2.resizeWindow("Webcam", 1280, 480)
                cv2.moveWindow("Webcam", 0, 100)
                cv2.imshow("Webcam", im0)
                cv2.waitKey(1)

            # Save results
            if save_img:
                if dataset.mode == 'image':
                    cv2.imwrite(save_path, im0)
                else:
                    if vid_path != save_path:
                        vid_path = save_path
                        if isinstance(vid_writer, cv2.VideoWriter):
                            vid_writer.release()

                        fourcc = 'mp4v'
                        fps = 24
                        w = int(vid_cap.get(cv2.CAP_PROP_FRAME_WIDTH))
                        h = int(vid_cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
                        vid_writer = cv2.VideoWriter(save_path, cv2.VideoWriter_fourcc(*fourcc), fps, (w, h))
                    vid_writer.write(im0)
                    cv2.namedWindow("Video", cv2.WINDOW_NORMAL)
                    cv2.resizeWindow("Video", 1280, 480)
                    cv2.moveWindow("Video", 0, 0)
                    cv2.imshow("Video", im0)
                    cv2.waitKey(1)

        print("frame %d is done!" % accel_frame)
        accel_frame += 1
    if save_txt or save_img:
        s = f"\n{len(list(save_dir.glob('labels/*.txt')))} labels saved to {save_dir / 'labels'}" if save_txt else ''
        print(f"Results saved to {save_dir}{s}")
    print(f'All Done. ({time.time() - t0:.3f}s)')


if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('--weights', nargs='+', type=str, default='./yolov5s.pt', help='model.pt path(s)')
    parser.add_argument('--source', type=str, default='./data/video/test.mp4',help='source')
    parser.add_argument('--img-size', type=int, default=640, help='inference size (pixels)')
    parser.add_argument('--conf-thres', type=float, default=0.25, help='object confidence threshold')
    parser.add_argument('--iou-thres', type=float, default=0.45, help='IOU threshold for NMS')
    parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu')
    parser.add_argument('--view-img', action='store_true', help='display results')
    parser.add_argument('--save-txt', action='store_true', help='save results to *.txt')
    parser.add_argument('--save-conf', action='store_true', help='save confidences in --save-txt labels')
    parser.add_argument('--classes', nargs='+', type=int, help='filter by class: --class 0, or --class 0 2 3')
    parser.add_argument('--agnostic-nms', action='store_true', help='class-agnostic NMS')
    parser.add_argument('--augment', action='store_true', help='augmented inference')
    parser.add_argument('--update', action='store_true', help='update all models')
    parser.add_argument('--project', default='runs/detect', help='save results to project/name')
    parser.add_argument('--name', default='exp', help='save results to project/name')
    parser.add_argument('--exist-ok', action='store_true', help='existing project/name ok, do not increment')
    opt = parser.parse_args()
    print(opt)
    check_requirements()
    with torch.no_grad():
        if opt.update:
            for opt.weights in ['yolov5s.pt', 'yolov5m.pt', 'yolov5l.pt', 'yolov5x.pt']:
                detect()
                strip_optimizer(opt.weights)
        else:
            detect()


The code is a bit messy and will be cleaned up over time. Below are screenshots of the ranging results; the accuracy is not very high.

4. Results

Single-target ranging

(figure)

Multi-target ranging

(figure)

3D distance detection and counting between targets

(figure)

Detection results (video demo)

The content of this article will continue to be improved over time.
