Two Hundred Lines of C++ Code to Deploy YOLOv5 Vehicle Counting (An Accessible Guide)

This article, which makes a useful reference, explains how to deploy YOLOv5 vehicle counting in about two hundred lines of C++ code. I hope it helps. If anything is wrong or not fully considered, corrections and suggestions are welcome.

This post is a follow-up to the earlier article on vehicle counting with traditional image-processing methods. Here, OpenCV is used to implement one-way vehicle counting on top of a YOLOv5 detector, using a line-crossing counting approach. The code is only a video demo; some features are incomplete, and it is still some way from real engineering use.
Implementation flow:
(1) Train a YOLOv5 model. No custom training is done here; the official open-source yolov5s.pt weights are used directly.
(2) Run export.py from the yolov5 repository to convert the .pt model to ONNX (a command sketch follows this list).
(3) Write the C++ deployment project for YOLOv5, covering preprocessing, inference, and postprocessing.
(4) Read the first video frame, detect the vehicle targets in it with YOLOv5, and compute the center points of their detection boxes.
(5) Read each subsequent frame, detect the vehicle targets in it with YOLOv5, and compute the distance matrix between the new detection-box centers and those of the previous processed frame.
(6) Use the distance matrix to match new detection boxes to old ones.
(7) Form the segment connecting each matched pair of old and new box centers; if it crosses the preset virtual counting line, increment the count by 1.
(8) Repeat (5)-(7).
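
For step (2), a minimal export command could look like the sketch below, assuming the official ultralytics/yolov5 repository and its dependencies are installed (flag names can vary between repository versions):

# run from the root of the yolov5 repository
python export.py --weights yolov5s.pt --include onnx

The exported ONNX file can then be loaded with cv::dnn::readNet; in the code below it has been renamed to yolov5s-f32.onnx.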
Since the program runs on the CPU, the implementation processes every other frame instead of every adjacent pair of frames to speed things up. The v1 implementation is as follows:

#include <iostream>
#include <fstream>
#include <cmath>
#include <algorithm>
#include <opencv2/opencv.hpp>


// Constants
const float INPUT_WIDTH = 640.0;
const float INPUT_HEIGHT = 640.0;
const float SCORE_THRESHOLD = 0.5;
const float NMS_THRESHOLD = 0.45;
const float CONFIDENCE_THRESHOLD = 0.45;

const std::vector<std::string> class_name = {
"person", "bicycle", "car", "motorcycle", "airplane", "bus", "train", "truck", "boat", "traffic light",
"fire hydrant", "stop sign", "parking meter", "bench", "bird", "cat", "dog", "horse", "sheep", "cow",
"elephant", "bear", "zebra", "giraffe", "backpack", "umbrella", "handbag", "tie", "suitcase", "frisbee",
"skis", "snowboard", "sports ball", "kite", "baseball bat", "baseball glove", "skateboard", "surfboard",
"tennis racket", "bottle", "wine glass", "cup", "fork", "knife", "spoon", "bowl", "banana", "apple",
"sandwich", "orange", "broccoli", "carrot", "hot dog", "pizza", "donut", "cake", "chair", "couch",
"potted plant", "bed", "dining table", "toilet", "tv", "laptop", "mouse", "remote", "keyboard", "cell phone",
"microwave", "oven", "toaster", "sink", "refrigerator", "book", "clock", "vase", "scissors", "teddy bear",
"hair drier", "toothbrush" };


// Draw a text label at the top-left corner of a detection box
void draw_label(cv::Mat& input_image, std::string label, int left, int top)
{
	int baseLine;
	// getTextSize takes an integer font face; FONT_HERSHEY_SIMPLEX matches the putText call below
	cv::Size label_size = cv::getTextSize(label, cv::FONT_HERSHEY_SIMPLEX, 0.7, 1, &baseLine);
	top = std::max(top, label_size.height);
	cv::putText(input_image, label, cv::Point(left, top + label_size.height), cv::FONT_HERSHEY_SIMPLEX, 0.7, cv::Scalar(0, 255, 255), 1);
}


// Preprocessing: build a 640x640 blob from the frame and run a forward pass
std::vector<cv::Mat> preprocess(cv::Mat& input_image, cv::dnn::Net& net)
{
	cv::Mat blob;
	cv::dnn::blobFromImage(input_image, blob, 1. / 255., cv::Size(INPUT_WIDTH, INPUT_HEIGHT), cv::Scalar(), true, false);

	net.setInput(blob);

	std::vector<cv::Mat> preprcess_image;
	net.forward(preprcess_image, net.getUnconnectedOutLayersNames());

	return preprcess_image;
}


// 后處理
std::vector<cv::Rect> postprocess(std::vector<cv::Mat>& preprcess_image, cv::Mat& output_image)
{
	std::vector<int> class_ids;
	std::vector<float> confidences;
	std::vector<cv::Rect> boxes;
	std::vector<cv::Rect> boxes_nms;

	float x_factor = output_image.cols / INPUT_WIDTH;
	float y_factor = output_image.rows / INPUT_HEIGHT;

	float* data = (float*)preprcess_image[0].data;

	const int dimensions = 85;  // 4 box coordinates + 1 objectness score + 80 class scores
	const int rows = 25200;     // number of candidate predictions for a 640x640 input
	for (int i = 0; i < rows; ++i)
	{
		float confidence = data[4];
		if (confidence >= CONFIDENCE_THRESHOLD)
		{
			float* classes_scores = data + 5;
			cv::Mat scores(1, class_name.size(), CV_32FC1, classes_scores);
			cv::Point class_id;
			double max_class_score;
			cv::minMaxLoc(scores, 0, &max_class_score, 0, &class_id);
			if (max_class_score > SCORE_THRESHOLD)
			{
				confidences.push_back(confidence);
				class_ids.push_back(class_id.x);

				float cx = data[0];
				float cy = data[1];
				float w = data[2];
				float h = data[3];
				int left = int((cx - 0.5 * w) * x_factor);
				int top = int((cy - 0.5 * h) * y_factor);
				int width = int(w * x_factor);
				int height = int(h * y_factor);
				boxes.push_back(cv::Rect(left, top, width, height));
			}
		}
		data += dimensions;  // advance to the next prediction row
	}

	std::vector<int> indices;
	cv::dnn::NMSBoxes(boxes, confidences, SCORE_THRESHOLD, NMS_THRESHOLD, indices);
	for (size_t i = 0; i < indices.size(); i++)
	{
		int idx = indices[i];
		cv::Rect box = boxes[idx];
		boxes_nms.push_back(box);

		int left = box.x;
		int top = box.y;
		int width = box.width;
		int height = box.height;
		cv::rectangle(output_image, cv::Point(left, top), cv::Point(left + width, top + height), cv::Scalar(255, 0, 0), 1);

		std::string label = cv::format("%.2f", confidences[idx]);
		label = class_name[class_ids[idx]] + ":" + label;
		draw_label(output_image, label, left, top);
	}
	return boxes_nms;
}


// Compute the center point of each detection box
std::vector<cv::Point> get_centers(std::vector<cv::Rect> detections)
{
	std::vector<cv::Point> detections_centers(detections.size());
	for (size_t i = 0; i < detections.size(); i++)
	{
		detections_centers[i] = cv::Point(detections[i].x + detections[i].width / 2, detections[i].y + detections[i].height / 2);
	}

	return detections_centers;
}


// Euclidean distance between two points
float get_distance(cv::Point p1, cv::Point p2)
{
	return sqrt(pow(p1.x - p2.x, 2) + pow(p1.y - p2.y, 2));
}


// Check whether the segment p1-p2 crosses the horizontal counting line y = 500
bool is_cross(cv::Point p1, cv::Point p2)
{
	if (p1.x == p2.x) return false;

	int y = 500;  // counting line: y = 500
	float k = float(p1.y - p2.y) / (p1.x - p2.x);  // slope of the segment's line y = kx + b (cast avoids integer division)
	float b = p1.y - k * p1.x;
	float x = (y - b) / k;  // x-coordinate where the segment's line meets y = 500
	return (x > std::min(p1.x, p2.x) && x < std::max(p1.x, p2.x));
}


int main(int argc, char** argv)
{
	cv::VideoCapture capture("test.mp4");
	cv::Mat frame;
	cv::dnn::Net net = cv::dnn::readNet("yolov5s-f32.onnx");

	int frame_num = 0;
	int count = 0;
	std::vector<cv::Point> detections_centers_old;
	std::vector<cv::Point> detections_centers_new;

	while(cv::waitKey(1) < 0)
	{
	    capture >> frame;
		if (frame.empty())
			break;

		std::cout << "******************************************************************* frame_num: " << frame_num << std::endl;

		cv::Mat image = frame.clone();
		std::vector<cv::Mat> preprcess_image = preprocess(image, net);

		std::vector<cv::Rect> detections = postprocess(preprcess_image, image);

		if (frame_num == 0)
		{
			detections_centers_old = get_centers(detections);

			std::cout << "detections_center:" << std::endl;
			for (size_t i = 0; i < detections_centers_old.size(); i++)
			{
				std::cout << detections_centers_old[i] << std::endl;
			}
		}
		else if (frame_num % 2 == 0)
		{
			detections_centers_new = get_centers(detections);

			std::cout << "detections_center:" << std::endl;
			for (size_t i = 0; i < detections_centers_new.size(); i++)
			{
				std::cout << detections_centers_new[i] << std::endl;
			}

			std::vector<std::vector<float>> distance_matrix(detections_centers_new.size(), std::vector<float>(detections_centers_old.size()));
			std::cout << "distance_matrix:" << std::endl;
			for (size_t i = 0; i < detections_centers_new.size(); i++)
			{
				for (size_t j = 0; j < detections_centers_old.size(); j++)
				{
					distance_matrix[i][j] = get_distance(detections_centers_new[i], detections_centers_old[j]);
					std::cout << distance_matrix[i][j] << " ";
				}
				std::cout << std::endl;
			}

			std::cout << "min_index:" << std::endl;
			std::vector<int> min_indices(detections_centers_new.size());
			for (size_t i = 0; i < detections_centers_new.size(); i++)
			{
				std::vector<float> distance_vector = distance_matrix[i];
				int min_index = std::min_element(distance_vector.begin(), distance_vector.end()) - distance_vector.begin();
				min_indices[i] = min_index;
				std::cout << min_index << " ";
			}
			std::cout << std::endl;

			for (size_t i = 0; i < detections_centers_new.size(); i++)
			{
				cv::Point p1 = detections_centers_new[i];
				cv::Point p2 = detections_centers_old[min_indices[i]];
				std::cout << p1 << " " << p2 << std::endl;

				if (is_cross(p1, p2))
				{
					std::cout << "is_cross" << p1 << " " << p2 << std::endl;
					count++;
				}
			}
			detections_centers_old = detections_centers_new;
		}

		frame_num++;

		cv::putText(image, "car num: " + std::to_string(count), cv::Point(20, 50), cv::FONT_HERSHEY_SIMPLEX, 0.7, cv::Scalar(0, 255, 255), 1);
		cv::line(image, cv::Point(0, 500), cv::Point(1280, 500) , cv::Scalar(0, 0, 255));
		cv::imshow("output", image);
		cv::imwrite(std::to_string(frame_num) + ".jpg", image);
	}

	capture.release();
	return 0;
}

Debugging revealed the following problem in v1: when a new target appears, the matching between old and new detection boxes can go wrong, which inflates the count. v2 therefore adds a distance threshold to the matching step and simplifies the check for whether the segment between matched box centers crosses the counting line.
The v2 implementation is as follows:

#include <iostream>
#include <cmath>
#include <algorithm>
#include <opencv2/opencv.hpp>


//#define DEBUG


// Constants
const float INPUT_WIDTH = 640.0;
const float INPUT_HEIGHT = 640.0;
const float SCORE_THRESHOLD = 0.5;
const float NMS_THRESHOLD = 0.25;
const float CONFIDENCE_THRESHOLD = 0.5;

const std::vector<std::string> class_name = {
	"person", "bicycle", "car", "motorcycle", "airplane", "bus", "train", "truck", "boat", "traffic light",
	"fire hydrant", "stop sign", "parking meter", "bench", "bird", "cat", "dog", "horse", "sheep", "cow",
	"elephant", "bear", "zebra", "giraffe", "backpack", "umbrella", "handbag", "tie", "suitcase", "frisbee",
	"skis", "snowboard", "sports ball", "kite", "baseball bat", "baseball glove", "skateboard", "surfboard",
	"tennis racket", "bottle", "wine glass", "cup", "fork", "knife", "spoon", "bowl", "banana", "apple",
	"sandwich", "orange", "broccoli", "carrot", "hot dog", "pizza", "donut", "cake", "chair", "couch",
	"potted plant", "bed", "dining table", "toilet", "tv", "laptop", "mouse", "remote", "keyboard", "cell phone",
	"microwave", "oven", "toaster", "sink", "refrigerator", "book", "clock", "vase", "scissors", "teddy bear",
	"hair drier", "toothbrush" };

const int IMAGE_WIDTH = 1280;  // frame width of the demo video
const int IMAGE_HEIGHT = 720;  // frame height of the demo video
const int LINE_HEIGHT = IMAGE_HEIGHT / 2;  // y-coordinate of the virtual counting line


// Draw a text label at the top-left corner of a detection box
void draw_label(cv::Mat& input_image, std::string label, int left, int top)
{
	int baseLine;
	// getTextSize takes an integer font face; FONT_HERSHEY_SIMPLEX matches the putText call below
	cv::Size label_size = cv::getTextSize(label, cv::FONT_HERSHEY_SIMPLEX, 0.7, 1, &baseLine);
	top = std::max(top, label_size.height);
	cv::putText(input_image, label, cv::Point(left, top + label_size.height), cv::FONT_HERSHEY_SIMPLEX, 0.7, cv::Scalar(0, 255, 255), 1);
}


// Preprocessing: build a 640x640 blob from the frame and run a forward pass
std::vector<cv::Mat> preprocess(cv::Mat& input_image, cv::dnn::Net& net)
{
	cv::Mat blob;
	cv::dnn::blobFromImage(input_image, blob, 1. / 255., cv::Size(INPUT_WIDTH, INPUT_HEIGHT), cv::Scalar(), true, false);

	net.setInput(blob);

	std::vector<cv::Mat> preprcess_image;
	net.forward(preprcess_image, net.getUnconnectedOutLayersNames());

	return preprcess_image;
}


//后處理
std::vector<cv::Rect> postprocess(std::vector<cv::Mat>& preprcess_image, cv::Mat& output_image)
{
	std::vector<int> class_ids;
	std::vector<float> confidences;
	std::vector<cv::Rect> boxes;
	std::vector<cv::Rect> boxes_nms;

	float x_factor = output_image.cols / INPUT_WIDTH;
	float y_factor = output_image.rows / INPUT_HEIGHT;

	float* data = (float*)preprcess_image[0].data;

	const int dimensions = 85;  // 4 box coordinates + 1 objectness score + 80 class scores
	const int rows = 25200;     // number of candidate predictions for a 640x640 input
	for (int i = 0; i < rows; ++i)
	{
		float confidence = data[4];
		if (confidence >= CONFIDENCE_THRESHOLD)
		{
			float* classes_scores = data + 5;
			cv::Mat scores(1, class_name.size(), CV_32FC1, classes_scores);
			cv::Point class_id;
			double max_class_score;
			cv::minMaxLoc(scores, 0, &max_class_score, 0, &class_id);
			if (max_class_score > SCORE_THRESHOLD)
			{
				confidences.push_back(confidence);
				class_ids.push_back(class_id.x);

				float cx = data[0];
				float cy = data[1];
				float w = data[2];
				float h = data[3];
				int left = int((cx - 0.5 * w) * x_factor);
				int top = int((cy - 0.5 * h) * y_factor);
				int width = int(w * x_factor);
				int height = int(h * y_factor);
				boxes.push_back(cv::Rect(left, top, width, height));
			}
		}
		data += dimensions;  // advance to the next prediction row
	}

	std::vector<int> indices;
	cv::dnn::NMSBoxes(boxes, confidences, SCORE_THRESHOLD, NMS_THRESHOLD, indices);
	for (size_t i = 0; i < indices.size(); i++)
	{
		int idx = indices[i];
		cv::Rect box = boxes[idx];
		boxes_nms.push_back(box);

		int left = box.x;
		int top = box.y;
		int width = box.width;
		int height = box.height;
		cv::rectangle(output_image, cv::Point(left, top), cv::Point(left + width, top + height), cv::Scalar(255, 0, 0), 1);

		std::string label = cv::format("%.2f", confidences[idx]);
		//label = class_name[class_ids[idx]] + ":" + label;
		label = "car";  // class label fixed to "car"; see the note at the end of the article
		draw_label(output_image, label, left, top);
	}

	return boxes_nms;
}


// Compute the center point of each detection box
std::vector<cv::Point> get_centers(std::vector<cv::Rect> detections)
{
	std::vector<cv::Point> detections_centers(detections.size());
	for (size_t i = 0; i < detections.size(); i++)
	{
		detections_centers[i] = cv::Point(detections[i].x + detections[i].width / 2, detections[i].y + detections[i].height / 2);
	}

	return detections_centers;
}


// Euclidean distance between two points
float get_distance(cv::Point p1, cv::Point p2)
{
	return sqrt(pow(p1.x - p2.x, 2) + pow(p1.y - p2.y, 2));
}


// Check whether the segment joining matched centers from two processed frames crosses the counting line
bool is_cross(cv::Point p1, cv::Point p2)
{
	return (p1.y <= LINE_HEIGHT && p2.y > LINE_HEIGHT) || (p1.y > LINE_HEIGHT && p2.y <= LINE_HEIGHT);
}


int main(int argc, char** argv)
{
	cv::VideoCapture capture("test.mp4");
	cv::Mat frame;
	cv::dnn::Net net = cv::dnn::readNet("yolov5s-f32.onnx");

	int frame_num = 0;
	int count = 0;
	std::vector<cv::Point> detections_centers_old;
	std::vector<cv::Point> detections_centers_new;

	while(cv::waitKey(1) < 0)
	{
	    capture >> frame;
		if (frame.empty())
			break;

		std::cout << "******************************************************************* frame_num: " << frame_num << std::endl;

		cv::Mat image = frame.clone();
		std::vector<cv::Mat> preprcess_image = preprocess(image, net);

		std::vector<cv::Rect> detections = postprocess(preprcess_image, image);

		if (frame_num == 0)
		{
			detections_centers_old = get_centers(detections);

#ifdef DEBUG
			std::cout << "detections_center:" << std::endl;
			for (size_t i = 0; i < detections_centers_old.size(); i++)
			{
				std::cout << detections_centers_old[i] << std::endl;
			}
#endif // DEBUG
		}
		else if (frame_num % 2 == 0)
		{
			detections_centers_new = get_centers(detections);

#ifdef DEBUG
			std::cout << "detections_center:" << std::endl;
			for (size_t i = 0; i < detections_centers_new.size(); i++)
			{
				std::cout << detections_centers_new[i] << std::endl;
			}
#endif // DEBUG

			std::vector<std::vector<float>> distance_matrix(detections_centers_new.size(), std::vector<float>(detections_centers_old.size())); // distance matrix
			for (size_t i = 0; i < detections_centers_new.size(); i++)
			{
				for (size_t j = 0; j < detections_centers_old.size(); j++)
				{
					distance_matrix[i][j] = get_distance(detections_centers_new[i], detections_centers_old[j]); 
				}
			}

#ifdef DEBUG
			std::cout << "min_index:" << std::endl;
#endif // DEBUG

			std::vector<int> min_indices(detections_centers_new.size());
			for (size_t i = 0; i < detections_centers_new.size(); i++)
			{
				std::vector<float> distance_vector = distance_matrix[i];
				int min_index = -1;  // -1 marks an unmatched new target
				if (!distance_vector.empty())
				{
					float min_val = *std::min_element(distance_vector.begin(), distance_vector.end());
					if (min_val < LINE_HEIGHT / 5)  // accept the nearest old center only if it is close enough
						min_index = std::min_element(distance_vector.begin(), distance_vector.end()) - distance_vector.begin();
				}
				
				min_indices[i] = min_index;
#ifdef DEBUG
				std::cout << min_index << " ";
#endif // DEBUG
			}
#ifdef DEBUG
			std::cout << std::endl;
#endif // DEBUG

			for (size_t i = 0; i < detections_centers_new.size(); i++)
			{
				if (min_indices[i] < 0)
					continue;

				cv::Point p1 = detections_centers_new[i];
				cv::Point p2 = detections_centers_old[min_indices[i]];

#ifdef DEBUG
				std::cout << p1 << " " << p2 << std::endl;
#endif // DEBUG

				if (is_cross(p1, p2))
				{
#ifdef DEBUG
					std::cout << "is_cross" << p1 << " " << p2 << std::endl;
#endif // DEBUG
					count++;
				}
			}

			detections_centers_old = detections_centers_new;
		}

		cv::putText(image, "car num: " + std::to_string(count), cv::Point(20, 50), cv::FONT_HERSHEY_SIMPLEX, 0.7, cv::Scalar(0, 0, 255), 1);
		cv::line(image, cv::Point(0, LINE_HEIGHT), cv::Point(IMAGE_WIDTH, LINE_HEIGHT), cv::Scalar(0, 0, 255));
		cv::imshow("output", image);

#ifdef DEBUG
		if (frame_num % 2 == 0)
			cv::imwrite(std::to_string(frame_num) + ".jpg", image);
#endif // DEBUG

		frame_num++;
	}

	capture.release();
	return 0;
}
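
For reference, on Linux the demo can be built against OpenCV with a single command along these lines; the source file name here is hypothetical, and the pkg-config module name opencv4 assumes an OpenCV 4.x installation:

g++ -O2 -std=c++11 yolov5_count.cpp -o yolov5_count $(pkg-config --cflags --libs opencv4)

Run the binary from a directory containing test.mp4 and yolov5s-f32.onnx, since both paths are hardcoded in main().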

The detection results are shown in the figure below. They are quite good, a large improvement over the traditional method. Over the full video there is one counting error (52 vehicles actually drive toward the top of the image; 53 are counted), caused by detector inaccuracy making a detection box drift; this could be optimized later. Note: because the official COCO 80-class weights are not very accurate at vehicle detection, the detected class is hard-coded to "car" here; in practice you should train a dedicated vehicle-detection model.
[Figure: demo frame showing detection boxes, the red counting line, and the running vehicle count]

The code with more detailed comments, the test video, and the converted weight file are available at the download link: click here.

Follow-up articles:
Vehicle counting with an object-tracking algorithm
Fifty lines of Python code to implement YOLOv8 vehicle counting
