
Deploying the yolov5-6.0 project + converting your own PyTorch model to an RKNN model and deploying it with Qt on an RK3568 Linux (Debian) platform for NPU-accelerated camera object detection: a detailed beginner tutorial

This article walks through deploying the yolov5-6.0 project, converting your own PyTorch model to an RKNN model, and deploying it with Qt on an RK3568 Linux (Debian) platform, using the NPU to accelerate inference for camera object detection. I hope it helps; if anything is wrong or incomplete, please don't hesitate to point it out.

I. Prepare the PyTorch model and the yolov5-6.0 project, and set up the environment

First you need to download the yolov5-6.0 project from the official repository.

  1. Open the YOLOv5 GitHub page and, under Tags, select the v6.0 release.
  2. Download the archive and extract it into your working directory.
  3. I use PyCharm here, an IDE built specifically for Python that is very convenient to work with; download the Community Edition straight from the official site.
  4. Set up the environment. I recommend installing Anaconda, a very convenient package manager that lets you pick different Python and pip versions along with the basic tools. I won't repeat the details here; this tutorial covers it:

https://blog.csdn.net/whc18858/article/details/127132558?ops_request_misc=&request_id=&biz_id=102&utm_term=pc%E4%B8%8A%E5%AE%89%E8%A3%85Anconda%E5%B9%B6%E9%85%8D%E7%BD%AEpycharm&utm_medium=distribute.pc_search_result.none-task-blog-2allsobaiduweb~default-0-127132558.142v86control,239v2insert_chatgpt&spm=1018.2226.3001.4187

  5. Configure the project environment. The tutorial above already covers configuring the interpreter; for this project, set up Python 3.7.
  6. After the environment is installed, check in the terminal whether you are inside the Anaconda virtual environment. If it shows base, you have not entered the project's environment; you will need the name you gave the environment when creating it, which is also shown in the bottom-right corner.
  7. Enter that environment by typing:
conda activate yolov5-master

Now we are inside the virtual environment and can get to work. (If you have not created an environment at all yet, the commands look like the sketch below.)
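A minimal sketch, assuming you also name the environment yolov5-master like I did:

conda create -n yolov5-master python=3.7
conda activate yolov5-master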

  8. Then comes everyone's favorite part: installing all the packages. Use a domestic mirror for the downloads:
pip install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple
  9. Then take your own PyTorch model for a quick spin (see the command after this list). If it runs cleanly, move on; if not, search the error or ask an AI. Generally speaking, nothing should go wrong.
  10. The output results look decent.
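A typical smoke test looks like the following, assuming your trained weights are named best.pt (flags as in yolov5 v6.0's detect.py):

python detect.py --weights best.pt --source data/images --img 640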

II. Modify part of the project code and convert it to an ONNX model

  1. As many online tutorials point out, to export an ONNX model you have to modify the code in yolo.py, which lives under models/.
  2. The code in question is the forward-pass function of a PyTorch object-detection model, a YOLOv5 variant. Its idea is to run multi-scale convolutions over the input feature maps, process them, and concatenate the results into the final detections.
  3. It needs to be changed to the following, so that the exported graph ends at the three raw convolution outputs:
def forward(self, x):
    z = []  # inference output
    for i in range(self.nl):
        x[i] = self.m[i](x[i])  # conv
    return x
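Dropping the decode does not lose it: the grid/anchor math moves to the CPU and is exactly what the C++ post-processing in Part IV re-implements. As a rough NumPy sketch of that math (shapes and names here are mine, not the project's):

import numpy as np

# feat: one raw head output of shape (3, 5 + num_classes, h, w);
# anchors: three (w, h) pairs; stride: 8, 16, or 32.
def decode_head(feat, anchors, stride):
    y = 1.0 / (1.0 + np.exp(-feat))                    # sigmoid everything
    h, w = feat.shape[2], feat.shape[3]
    gx, gy = np.meshgrid(np.arange(w), np.arange(h))   # grid cell indices
    for a, (aw, ah) in enumerate(anchors):
        y[a, 0] = (y[a, 0] * 2.0 - 0.5 + gx) * stride  # box center x, pixels
        y[a, 1] = (y[a, 1] * 2.0 - 0.5 + gy) * stride  # box center y, pixels
        y[a, 2] = (y[a, 2] * 2.0) ** 2 * aw            # box width
        y[a, 3] = (y[a, 3] * 2.0) ** 2 * ah            # box height
    return y  # same math the C++ process_fp/process_u8 below re-implement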
  4. You also need to adjust the defaults in export.py. Strictly speaking you can leave it alone and simply pass the options on the command line instead.
  5. Here we run, in the terminal:
python export.py --weights best.pt --img 640 --batch 1 --opset 12

  6. This output shows the conversion has completed. You can take a look at the ONNX model's network structure using Netron.
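Netron is also pip-installable, so you can launch the viewer from Python if you prefer; a small convenience, not a required step:

import netron
netron.start("best.onnx")  # serves the graph in a local browser tab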

Take a look at your model's inputs and outputs here to confirm nothing is off. Mine look fine, so on to the next step.

III. Convert the ONNX model to an RKNN model

  1. Following Firefly's official NPU documentation, first download the required package. Here I use RK_NPU_SDK_1.2.0, which contains nearly everything we need.
  2. With the package downloaded, prepare the environment. According to Firefly's NPU documentation, RKNN-Toolkit2 only runs on x86_64 Ubuntu, preferably 18.04. That means installing a VM on your PC, or hunting down a dedicated x86_64 Ubuntu machine; torture, right? I keep an Ubuntu box around just for model conversion, connect to it over VS Code Remote, and use FileZilla Client to transfer the ONNX model file onto it. All you need there is your ONNX model, RK_NPU_SDK_1.2.0, and a test image; to stay out of trouble, I suggest making the test image 640x640.
  3. I recommend installing the RKNN environment inside a virtual environment.
  4. Install the RKNN-Toolkit2 tool and its dependencies. For how, see the official wiki:

https://wiki.t-firefly.com/zh_CN/ROC-RK3568-PC-SE/usage_npu.html
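In outline it amounts to roughly this; treat the file names as placeholders, since they vary with your Python and toolkit version (the wiki is authoritative):

git clone https://github.com/rockchip-linux/rknn-toolkit2.git
cd rknn-toolkit2
pip install -r doc/requirements_cp36-1.2.0.txt
pip install packages/rknn_toolkit2-*-cp36-*.whl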

  5. Once installed, you can start the conversion, working from the SDK's example directory.
  6. Edit the paths in the script: the ONNX model path, the name and path of the RKNN model to be generated, and the test image location.
  7. Fill in your own detection classes.
  8. Per the documentation in NPU SDK 1.2.0's doc folder, rknn.config needs modifying, and the outputs argument below it must be deleted. Adjust the parameters to your own situation; to start with, filling in just the target platform name is enough:
rknn.config(mean_values=[[0, 0, 0]], std_values=[[255, 255, 255]], target_platform='rk3568', optimization_level=3, quantized_dtype='asymmetric_quantized-8')
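For orientation, the whole conversion script boils down to roughly the following. This is a sketch modeled on the SDK's yolov5 demo, with placeholder file names; dataset.txt is a plain-text file listing quantization-calibration images:

from rknn.api import RKNN

rknn = RKNN()
rknn.config(mean_values=[[0, 0, 0]], std_values=[[255, 255, 255]],
            target_platform='rk3568', optimization_level=3,
            quantized_dtype='asymmetric_quantized-8')
rknn.load_onnx(model='best.onnx')                          # exported in Part II
rknn.build(do_quantization=True, dataset='./dataset.txt')  # calibration image list
rknn.export_rknn('best.rknn')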
  9. At the end of the script, save the rendered output (in the snippet below, start was recorded before inference and img_1 is the test image with detection boxes drawn on it):
end = time.time()
print(end - start, "s")
cv2.imwrite("finalresult.jpg", img_1)
cv2.waitKey(0)
cv2.destroyAllWindows()
rknn.release()
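If you want to sanity-check the converted model in the toolkit's x86 simulator before touching the board (in the demo script this happens before the rknn.release() above), the calls look roughly like this; the full letterboxing and post-processing are omitted:

import cv2

rknn.init_runtime()                     # no target argument => PC simulator
img = cv2.imread('test.jpg')            # your 640x640 test image
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
outputs = rknn.inference(inputs=[img])  # three raw heads; decode + NMS follow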
  10. With the edits done, run the conversion. The run doubles as a check that the ONNX model is sound: here the box positions are fine, and although the anchors come out remarkably large, the ONNX model itself has no problems.

IV. Test the RKNN model's accuracy and deploy it with Qt

  1. For cross-platform OpenCV builds, see my other post, a self-recorded setup of an RK3568 + QT5 + OpenCV development environment on Debian 10.

PS: After switching apt sources, install just one extra package, libjasper-dev, with sudo apt-get install as shown below (later testing showed that without it, builds from Qt Creator fail). Then switch back to the Alibaba mirror, install OpenCV's build dependencies, and double-check they are all present.
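sudo apt-get install libjasper-dev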

Note that I also put the compiled OpenCV under /opt, and give the opt directory 777 permissions.
  2. After following that post you should already have Qt Creator installed. Next you need a test program; I test with the code adapted by 江流兒 (sxj731533730):

https://blog.csdn.net/sxj731533730/article/details/127029969

We simply, even crudely, create a console application project, drop all the code into the .cpp, and create the header file rknn_api.h, which is included in RK_NPU_SDK_1.2.0.

  3. Besides the required rknn_api.h, you also need .so library support. The library is also in RK_NPU_SDK_1.2.0; replace the .so on the board with it (back up the original), since the library version must match the API version.
  4. Add our compiled OpenCV libraries in the project's .pro file, along the lines of the sketch below.
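The original screenshot is gone, so here is the shape of those .pro additions; the paths are hypothetical (they assume OpenCV was installed under /opt/opencv and that librknnrt.so from the SDK is on the linker path), so adjust them to your build:

INCLUDEPATH += /opt/opencv/include/opencv4
LIBS += -L/opt/opencv/lib -lopencv_core -lopencv_imgproc -lopencv_imgcodecs -lopencv_highgui
LIBS += -lrknnrt    # the RKNN runtime shipped in RK_NPU_SDK_1.2.0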
  5. Here is the code. It can only run detection on still images; for video or a camera you need to modify it (see the comment near the thresholds in main for the shape of that change).
#include <QCoreApplication>
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>   // memset / strcmp / strncpy / sprintf
#include <math.h>     // expf / logf / fmin / fmax / round
#include <vector>     // std::vector
#include <algorithm>  // std::min
#include <queue>
#include "rknn_api.h"
#include "opencv2/core/core.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"
#include <chrono>
#define OBJ_NAME_MAX_SIZE 16
#define OBJ_NUMB_MAX_SIZE 200
#define OBJ_CLASS_NUM     10
#define PROP_BOX_SIZE     (5+OBJ_CLASS_NUM)
using namespace std;

typedef struct _BOX_RECT {
    int left;
    int right;
    int top;
    int bottom;
} BOX_RECT;

typedef struct __detect_result_t {
    char name[OBJ_NAME_MAX_SIZE];
    int class_index;
    BOX_RECT box;
    float prop;
} detect_result_t;

typedef struct _detect_result_group_t {
    int id;
    int count;
    detect_result_t results[OBJ_NUMB_MAX_SIZE];
} detect_result_group_t;

//const int anchor0[6] = {10, 13, 16, 30, 33, 23};
//const int anchor1[6] = {30, 61, 62, 45, 59, 119};
//const int anchor2[6] = {116, 90, 156, 198, 373, 326};
const int anchor0[6] = {3, 4, 4, 8, 9, 6};
const int anchor1[6] = {6, 14, 14, 10, 15, 30};
const int anchor2[6] = {29, 23, 39, 50, 94, 82};
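// NOTE: the commented-out rows above are the stock COCO anchors; the values
// below came from this author's custom training run. Substitute the anchors
// from your own model's .yaml (or training log), or boxes will be mis-sized.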

void printRKNNTensor(rknn_tensor_attr *attr) {
    printf("index=%d name=%s n_dims=%d dims=[%d %d %d %d] n_elems=%d size=%d "
           "fmt=%d type=%d qnt_type=%d fl=%d zp=%d scale=%f\n",
           attr->index, attr->name, attr->n_dims, attr->dims[3], attr->dims[2],
           attr->dims[1], attr->dims[0], attr->n_elems, attr->size, attr->fmt, attr->type,
           attr->qnt_type, attr->fl, attr->zp, attr->scale);
}

float sigmoid(float x) {
    return 1.0 / (1.0 + expf(-x));
}

float unsigmoid(float y) {
    return -1.0 * logf((1.0 / y) - 1.0);
}

int process_fp(float *input, int *anchor, int grid_h, int grid_w, int height, int width, int stride,
               std::vector<float> &boxes, std::vector<float> &boxScores, std::vector<int> &classId,
               float threshold) {

    int validCount = 0;
    int grid_len = grid_h * grid_w;
    float thres_sigmoid = unsigmoid(threshold);
    for (int a = 0; a < 3; a++) {
        for (int i = 0; i < grid_h; i++) {
            for (int j = 0; j < grid_w; j++) {
                float box_confidence = input[(PROP_BOX_SIZE * a + 4) * grid_len + i * grid_w + j];
                if (box_confidence >= thres_sigmoid) {
                    int offset = (PROP_BOX_SIZE * a) * grid_len + i * grid_w + j;
                    float *in_ptr = input + offset;
                    float box_x = sigmoid(*in_ptr) * 2.0 - 0.5;
                    float box_y = sigmoid(in_ptr[grid_len]) * 2.0 - 0.5;
                    float box_w = sigmoid(in_ptr[2 * grid_len]) * 2.0;
                    float box_h = sigmoid(in_ptr[3 * grid_len]) * 2.0;
                    box_x = (box_x + j) * (float) stride;
                    box_y = (box_y + i) * (float) stride;
                    box_w = box_w * box_w * (float) anchor[a * 2];
                    box_h = box_h * box_h * (float) anchor[a * 2 + 1];
                    box_x -= (box_w / 2.0);
                    box_y -= (box_h / 2.0);
                    boxes.push_back(box_x);
                    boxes.push_back(box_y);
                    boxes.push_back(box_w);
                    boxes.push_back(box_h);

                    float maxClassProbs = in_ptr[5 * grid_len];
                    int maxClassId = 0;
                    for (int k = 1; k < OBJ_CLASS_NUM; ++k) {
                        float prob = in_ptr[(5 + k) * grid_len];
                        if (prob > maxClassProbs) {
                            maxClassId = k;
                            maxClassProbs = prob;
                        }
                    }
                    float box_conf_f32 = sigmoid(box_confidence);
                    float class_prob_f32 = sigmoid(maxClassProbs);
                    boxScores.push_back(box_conf_f32 * class_prob_f32);
                    classId.push_back(maxClassId);
                    validCount++;
                }
            }
        }
    }
    return validCount;
}

float CalculateOverlap(float xmin0, float ymin0, float xmax0, float ymax0, float xmin1, float ymin1, float xmax1,
                       float ymax1) {
    float w = fmax(0.f, fmin(xmax0, xmax1) - fmax(xmin0, xmin1) + 1.0);
    float h = fmax(0.f, fmin(ymax0, ymax1) - fmax(ymin0, ymin1) + 1.0);
    float i = w * h;
    float u = (xmax0 - xmin0 + 1.0) * (ymax0 - ymin0 + 1.0) + (xmax1 - xmin1 + 1.0) * (ymax1 - ymin1 + 1.0) - i;
    return u <= 0.f ? 0.f : (i / u);
}

int nms(int validCount, std::vector<float> &outputLocations, std::vector<int> &order, float threshold) {
    for (int i = 0; i < validCount; ++i) {
        if (order[i] == -1) {
            continue;
        }
        int n = order[i];
        for (int j = i + 1; j < validCount; ++j) {
            int m = order[j];
            if (m == -1) {
                continue;
            }
            float xmin0 = outputLocations[n * 4 + 0];
            float ymin0 = outputLocations[n * 4 + 1];
            float xmax0 = outputLocations[n * 4 + 0] + outputLocations[n * 4 + 2];
            float ymax0 = outputLocations[n * 4 + 1] + outputLocations[n * 4 + 3];

            float xmin1 = outputLocations[m * 4 + 0];
            float ymin1 = outputLocations[m * 4 + 1];
            float xmax1 = outputLocations[m * 4 + 0] + outputLocations[m * 4 + 2];
            float ymax1 = outputLocations[m * 4 + 1] + outputLocations[m * 4 + 3];

            float iou = CalculateOverlap(xmin0, ymin0, xmax0, ymax0, xmin1, ymin1, xmax1, ymax1);

            if (iou > threshold) {
                order[j] = -1;
            }
        }
    }
    return 0;
}

int quick_sort_indice_inverse(
        std::vector<float> &input,
        int left,
        int right,
        std::vector<int> &indices) {
    float key;
    int key_index;
    int low = left;
    int high = right;
    if (left < right) {
        key_index = indices[left];
        key = input[left];
        while (low < high) {
            while (low < high && input[high] <= key) {
                high--;
            }
            input[low] = input[high];
            indices[low] = indices[high];
            while (low < high && input[low] >= key) {
                low++;
            }
            input[high] = input[low];
            indices[high] = indices[low];
        }
        input[low] = key;
        indices[low] = key_index;
        quick_sort_indice_inverse(input, left, low - 1, indices);
        quick_sort_indice_inverse(input, low + 1, right, indices);
    }
    return low;
}

int clamp(float val, int min, int max) {
    return val > min ? (val < max ? val : max) : min;
}

int post_process_fp(float *input0, float *input1, float *input2, int model_in_h, int model_in_w,
                    int h_offset, int w_offset, float resize_scale, float conf_threshold, float nms_threshold,
                    detect_result_group_t *group, const char *labels[]) {
    memset(group, 0, sizeof(detect_result_group_t));
    std::vector<float> filterBoxes;
    std::vector<float> boxesScore;
    std::vector<int> classId;
    int stride0 = 8;
    int grid_h0 = model_in_h / stride0;
    int grid_w0 = model_in_w / stride0;
    int validCount0 = 0;
    validCount0 = process_fp(input0, (int *) anchor0, grid_h0, grid_w0, model_in_h, model_in_w,
                             stride0, filterBoxes, boxesScore, classId, conf_threshold);

    int stride1 = 16;
    int grid_h1 = model_in_h / stride1;
    int grid_w1 = model_in_w / stride1;
    int validCount1 = 0;
    validCount1 = process_fp(input1, (int *) anchor1, grid_h1, grid_w1, model_in_h, model_in_w,
                             stride1, filterBoxes, boxesScore, classId, conf_threshold);

    int stride2 = 32;
    int grid_h2 = model_in_h / stride2;
    int grid_w2 = model_in_w / stride2;
    int validCount2 = 0;
    validCount2 = process_fp(input2, (int *) anchor2, grid_h2, grid_w2, model_in_h, model_in_w,
                             stride2, filterBoxes, boxesScore, classId, conf_threshold);

    int validCount = validCount0 + validCount1 + validCount2;
    // no object detect
    if (validCount <= 0) {
        return 0;
    }

    std::vector<int> indexArray;
    for (int i = 0; i < validCount; ++i) {
        indexArray.push_back(i);
    }

    quick_sort_indice_inverse(boxesScore, 0, validCount - 1, indexArray);

    nms(validCount, filterBoxes, indexArray, nms_threshold);

    int last_count = 0;
    /* box valid detect target */
    for (int i = 0; i < validCount; ++i) {

        if (indexArray[i] == -1 || boxesScore[i] < conf_threshold || last_count >= OBJ_NUMB_MAX_SIZE) {
            continue;
        }
        int n = indexArray[i];

        float x1 = filterBoxes[n * 4 + 0];
        float y1 = filterBoxes[n * 4 + 1];
        float x2 = x1 + filterBoxes[n * 4 + 2];
        float y2 = y1 + filterBoxes[n * 4 + 3];
        int id = classId[n];

        group->results[last_count].box.left = (int) ((clamp(x1, 0, model_in_w) - w_offset) / resize_scale);
        group->results[last_count].box.top = (int) ((clamp(y1, 0, model_in_h) - h_offset) / resize_scale);
        group->results[last_count].box.right = (int) ((clamp(x2, 0, model_in_w) - w_offset) / resize_scale);
        group->results[last_count].box.bottom = (int) ((clamp(y2, 0, model_in_h) - h_offset) / resize_scale);
        group->results[last_count].prop = boxesScore[i];
        group->results[last_count].class_index = id;
        const char *label = labels[id];
        strncpy(group->results[last_count].name, label, OBJ_NAME_MAX_SIZE);

        // printf("result %2d: (%4d, %4d, %4d, %4d), %s\n", i, group->results[last_count].box.left, group->results[last_count].box.top,
        //        group->results[last_count].box.right, group->results[last_count].box.bottom, label);
        last_count++;
    }
    group->count = last_count;

    return 0;
}

float deqnt_affine_to_f32(uint8_t qnt, uint8_t zp, float scale) {
    return ((float) qnt - (float) zp) * scale;
}

int32_t __clip(float val, float min, float max) {
    float f = val <= min ? min : (val >= max ? max : val);
    return f;
}

uint8_t qnt_f32_to_affine(float f32, uint8_t zp, float scale) {
    float dst_val = (f32 / scale) + zp;
    uint8_t res = (uint8_t) __clip(dst_val, 0, 255);
    return res;
}

int process_u8(uint8_t *input, int *anchor, int grid_h, int grid_w, int height, int width, int stride,
               std::vector<float> &boxes, std::vector<float> &boxScores, std::vector<int> &classId,
               float threshold, uint8_t zp, float scale) {

    int validCount = 0;
    int grid_len = grid_h * grid_w;
    float thres = unsigmoid(threshold);
    uint8_t thres_u8 = qnt_f32_to_affine(thres, zp, scale);
    for (int a = 0; a < 3; a++) {
        for (int i = 0; i < grid_h; i++) {
            for (int j = 0; j < grid_w; j++) {
                uint8_t box_confidence = input[(PROP_BOX_SIZE * a + 4) * grid_len + i * grid_w + j];
                if (box_confidence >= thres_u8) {
                    int offset = (PROP_BOX_SIZE * a) * grid_len + i * grid_w + j;
                    uint8_t *in_ptr = input + offset;
                    float box_x = sigmoid(deqnt_affine_to_f32(*in_ptr, zp, scale)) * 2.0 - 0.5;
                    float box_y = sigmoid(deqnt_affine_to_f32(in_ptr[grid_len], zp, scale)) * 2.0 - 0.5;
                    float box_w = sigmoid(deqnt_affine_to_f32(in_ptr[2 * grid_len], zp, scale)) * 2.0;
                    float box_h = sigmoid(deqnt_affine_to_f32(in_ptr[3 * grid_len], zp, scale)) * 2.0;
                    box_x = (box_x + j) * (float) stride;
                    box_y = (box_y + i) * (float) stride;
                    box_w = box_w * box_w * (float) anchor[a * 2];
                    box_h = box_h * box_h * (float) anchor[a * 2 + 1];
                    box_x -= (box_w / 2.0);
                    box_y -= (box_h / 2.0);
                    boxes.push_back(box_x);
                    boxes.push_back(box_y);
                    boxes.push_back(box_w);
                    boxes.push_back(box_h);

                    uint8_t maxClassProbs = in_ptr[5 * grid_len];
                    int maxClassId = 0;
                    for (int k = 1; k < OBJ_CLASS_NUM; ++k) {
                        uint8_t prob = in_ptr[(5 + k) * grid_len];
                        if (prob > maxClassProbs) {
                            maxClassId = k;
                            maxClassProbs = prob;
                        }
                    }
                    float box_conf_f32 = sigmoid(deqnt_affine_to_f32(box_confidence, zp, scale));
                    float class_prob_f32 = sigmoid(deqnt_affine_to_f32(maxClassProbs, zp, scale));
                    boxScores.push_back(box_conf_f32 * class_prob_f32);
                    classId.push_back(maxClassId);
                    validCount++;
                }
            }
        }
    }
    return validCount;
}

int post_process_u8(uint8_t *input0, uint8_t *input1, uint8_t *input2, int model_in_h, int model_in_w,
                    int h_offset, int w_offset, float resize_scale, float conf_threshold, float nms_threshold,
                    std::vector<uint8_t> &qnt_zps, std::vector<float> &qnt_scales,
                    detect_result_group_t *group, const char *labels[]) {

    memset(group, 0, sizeof(detect_result_group_t));

    std::vector<float> filterBoxes;
    std::vector<float> boxesScore;
    std::vector<int> classId;
    int stride0 = 8;
    int grid_h0 = model_in_h / stride0;
    int grid_w0 = model_in_w / stride0;
    int validCount0 = 0;
    validCount0 = process_u8(input0, (int *) anchor0, grid_h0, grid_w0, model_in_h, model_in_w,
                             stride0, filterBoxes, boxesScore, classId, conf_threshold, qnt_zps[0], qnt_scales[0]);

    int stride1 = 16;
    int grid_h1 = model_in_h / stride1;
    int grid_w1 = model_in_w / stride1;
    int validCount1 = 0;
    validCount1 = process_u8(input1, (int *) anchor1, grid_h1, grid_w1, model_in_h, model_in_w,
                             stride1, filterBoxes, boxesScore, classId, conf_threshold, qnt_zps[1], qnt_scales[1]);

    int stride2 = 32;
    int grid_h2 = model_in_h / stride2;
    int grid_w2 = model_in_w / stride2;
    int validCount2 = 0;
    validCount2 = process_u8(input2, (int *) anchor2, grid_h2, grid_w2, model_in_h, model_in_w,
                             stride2, filterBoxes, boxesScore, classId, conf_threshold, qnt_zps[2], qnt_scales[2]);

    int validCount = validCount0 + validCount1 + validCount2;
    // no object detect
    if (validCount <= 0) {
        return 0;
    }

    std::vector<int> indexArray;
    for (int i = 0; i < validCount; ++i) {
        indexArray.push_back(i);
    }

    quick_sort_indice_inverse(boxesScore, 0, validCount - 1, indexArray);

    nms(validCount, filterBoxes, indexArray, nms_threshold);

    int last_count = 0;
    group->count = 0;
    /* box valid detect target */
    for (int i = 0; i < validCount; ++i) {

        if (indexArray[i] == -1 || boxesScore[i] < conf_threshold || last_count >= OBJ_NUMB_MAX_SIZE) {
            continue;
        }
        int n = indexArray[i];

        float x1 = filterBoxes[n * 4 + 0];
        float y1 = filterBoxes[n * 4 + 1];
        float x2 = x1 + filterBoxes[n * 4 + 2];
        float y2 = y1 + filterBoxes[n * 4 + 3];
        int id = classId[n];

        group->results[last_count].box.left = (int) ((clamp(x1, 0, model_in_w) - w_offset) / resize_scale);
        group->results[last_count].box.top = (int) ((clamp(y1, 0, model_in_h) - h_offset) / resize_scale);
        group->results[last_count].box.right = (int) ((clamp(x2, 0, model_in_w) - w_offset) / resize_scale);
        group->results[last_count].box.bottom = (int) ((clamp(y2, 0, model_in_h) - h_offset) / resize_scale);
        group->results[last_count].prop = boxesScore[i];
        group->results[last_count].class_index = id;
        const char *label = labels[id];
        strncpy(group->results[last_count].name, label, OBJ_NAME_MAX_SIZE);

        // printf("result %2d: (%4d, %4d, %4d, %4d), %s\n", i, group->results[last_count].box.left, group->results[last_count].box.top,
        //        group->results[last_count].box.right, group->results[last_count].box.bottom, label);
        last_count++;
    }
    group->count = last_count;

    return 0;
}
void letterbox(cv::Mat rgb,cv::Mat &img_resize,int target_width,int target_height){

    float shape_0=rgb.rows;
    float shape_1=rgb.cols;
    float new_shape_0=target_height;
    float new_shape_1=target_width;
    float r=std::min(new_shape_0/shape_0,new_shape_1/shape_1);
    float new_unpad_0=int(round(shape_1*r));
    float new_unpad_1=int(round(shape_0*r));
    float dw=new_shape_1-new_unpad_0;
    float dh=new_shape_0-new_unpad_1;
    dw=dw/2;
    dh=dh/2;
    cv::Mat copy_rgb=rgb.clone();
    if(int(shape_1)!=int(new_unpad_0)||int(shape_0)!=int(new_unpad_1)){  // resize when either dimension changes (new_unpad_0 is the new width)
        cv::resize(copy_rgb,img_resize,cv::Size(new_unpad_0,new_unpad_1));
        copy_rgb=img_resize;
    }
    int top=int(round(dh-0.1));
    int bottom=int(round(dh+0.1));
    int left=int(round(dw-0.1));
    int right=int(round(dw+0.1));
    cv::copyMakeBorder(copy_rgb, img_resize,top, bottom, left, right, cv::BORDER_CONSTANT, cv::Scalar(0,0,0));

}
int main(int argc, char **argv) {
    const char *img_path = "/opt/testPictures/test4.jpg";
    //const char *img_path = "/opt/personCar/002.jpg";
    const char *model_path = "/opt/model/RK356X/best.rknn";
    const char *post_process_type = "fp";//fp
    const int target_width = 640;
    const int target_height = 640;
    const char *image_process_mode = "letter_box";
    float resize_scale = 0;
    int h_pad=0;
    int w_pad=0;

    const float nms_threshold = 0.2;
    const float conf_threshold = 0.3;
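    // To adapt this single-image demo to a camera, open cv::VideoCapture cap(0)
    // once, keep the rknn_init() below outside the loop, and repeat the
    // resize -> rknn_inputs_set -> rknn_run -> rknn_outputs_get ->
    // post-process -> rknn_outputs_release sequence for every frame.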

//    const char *labels[] = {"person", "bicycle", "car", "motorcycle", "airplane", "bus", "train", "truck", "boat",
//                            "traffic light",
//                            "fire hydrant", "stop sign", "parking meter", "bench", "bird", "cat", "dog", "horse",
//                            "sheep", "cow",
//                            "elephant", "bear", "zebra", "giraffe", "backpack", "umbrella", "handbag", "tie",
//                            "suitcase", "frisbee",
//                            "skis", "snowboard", "sports ball", "kite", "baseball bat", "baseball glove",
//                            "skateboard", "surfboard",
//                            "tennis racket", "bottle", "wine glass", "cup", "fork", "knife", "spoon", "bowl",
//                            "banana", "apple",
//                            "sandwich", "orange", "broccoli", "carrot", "hot dog", "pizza", "donut", "cake",
//                            "chair", "couch",
//                            "potted plant", "bed", "dining table", "toilet", "tv", "laptop", "mouse", "remote",
//                            "keyboard", "cell phone",
//                            "microwave", "oven", "toaster", "sink", "refrigerator", "book", "clock", "vase",
//                            "scissors", "teddy bear",
//                            "hair drier", "toothbrush"};

    const char *labels[] = {"pedestrian", "people", "bicycle", "car", "van", "truck", "tricycle", "awning-tricycle", "bus", "motor"};


    // Load image
    cv::Mat bgr = cv::imread(img_path);
    if (!bgr.data) {
        printf("cv::imread %s fail!\n", img_path);
        return -1;
    }
    cv::Mat rgb;
    //BGR->RGB
    cv::cvtColor(bgr, rgb, cv::COLOR_BGR2RGB);

    cv::Mat img_resize;
    float correction[2] = {0, 0};
    float scale_factor[] = {0, 0};
    int width=rgb.cols;
    int height=rgb.rows;
    // Letter box resize
    float img_wh_ratio = (float) width / (float) height;
    float input_wh_ratio = (float) target_width / (float) target_height;
    int resize_width;
    int resize_height;
    if (img_wh_ratio >= input_wh_ratio) {
        //pad height dim
        resize_scale = (float) target_width / (float) width;
        resize_width = target_width;
        resize_height = (int) ((float) height * resize_scale);
        w_pad = 0;
        h_pad = (target_height - resize_height) / 2;
    } else {
        //pad width dim
        resize_scale = (float) target_height / (float) height;
        resize_width = (int) ((float) width * resize_scale);
        resize_height = target_height;
        w_pad = (target_width - resize_width) / 2;;
        h_pad = 0;
    }
    if(strcmp(image_process_mode,"letter_box")==0){
        letterbox(rgb,img_resize,target_width,target_height);
    }else {
        cv::resize(rgb, img_resize, cv::Size(target_width, target_height));
    }
    // Load model
    FILE *fp = fopen(model_path, "rb");
    if (fp == NULL) {
        printf("fopen %s fail!\n", model_path);
        return -1;
    }
    fseek(fp, 0, SEEK_END);
    int model_len = ftell(fp);
    void *model = malloc(model_len);
    fseek(fp, 0, SEEK_SET);
    if (model_len != fread(model, 1, model_len, fp)) {
        printf("fread %s fail!\n", model_path);
        free(model);
        return -1;
    }


    rknn_context ctx = 0;

    int ret = rknn_init(&ctx, model, model_len, 0,0);
    if (ret < 0) {
        printf("rknn_init fail! ret=%d\n", ret);
        return -1;
    }

    /* Query sdk version */
    rknn_sdk_version version;
    ret = rknn_query(ctx, RKNN_QUERY_SDK_VERSION, &version,
                     sizeof(rknn_sdk_version));
    if (ret < 0) {
        printf("rknn_init error ret=%d\n", ret);
        return -1;
    }
    printf("sdk version: %s driver version: %s\n", version.api_version,
           version.drv_version);


    /* Get input,output attr */
    rknn_input_output_num io_num;
    ret = rknn_query(ctx, RKNN_QUERY_IN_OUT_NUM, &io_num, sizeof(io_num));
    if (ret < 0) {
        printf("rknn_init error ret=%d\n", ret);
        return -1;
    }
    printf("model input num: %d, output num: %d\n", io_num.n_input,
           io_num.n_output);

    rknn_tensor_attr input_attrs[io_num.n_input];
    memset(input_attrs, 0, sizeof(input_attrs));
    for (int i = 0; i < io_num.n_input; i++) {
        input_attrs[i].index = i;
        ret = rknn_query(ctx, RKNN_QUERY_INPUT_ATTR, &(input_attrs[i]),
                         sizeof(rknn_tensor_attr));
        if (ret < 0) {
            printf("rknn_init error ret=%d\n", ret);
            return -1;
        }
        printRKNNTensor(&(input_attrs[i]));
    }

    rknn_tensor_attr output_attrs[io_num.n_output];
    memset(output_attrs, 0, sizeof(output_attrs));
    for (int i = 0; i < io_num.n_output; i++) {
        output_attrs[i].index = i;
        ret = rknn_query(ctx, RKNN_QUERY_OUTPUT_ATTR, &(output_attrs[i]),
                         sizeof(rknn_tensor_attr));
        printRKNNTensor(&(output_attrs[i]));
    }

    int input_channel = 3;
    int input_width = 0;
    int input_height = 0;
    if (input_attrs[0].fmt == RKNN_TENSOR_NCHW) {
        printf("model is NCHW input fmt\n");
        input_width = input_attrs[0].dims[0];
        input_height = input_attrs[0].dims[1];
        printf("input_width=%d input_height=%d\n", input_width, input_height);
    } else {
        printf("model is NHWC input fmt\n");
        input_width = input_attrs[0].dims[1];
        input_height = input_attrs[0].dims[2];
        printf("input_width=%d input_height=%d\n", input_width, input_height);
    }

    printf("model input height=%d, width=%d, channel=%d\n", input_height, input_width,
           input_channel);


/* Init input tensor */
    rknn_input inputs[1];
    memset(inputs, 0, sizeof(inputs));
    inputs[0].index = 0;
    inputs[0].buf = img_resize.data;
    inputs[0].type = RKNN_TENSOR_UINT8;
    inputs[0].size = input_width * input_height * input_channel;
    inputs[0].fmt = RKNN_TENSOR_NHWC;
    inputs[0].pass_through = 0;

    /* Init output tensor */
    rknn_output outputs[io_num.n_output];
    memset(outputs, 0, sizeof(outputs));
    for (int i = 0; i < io_num.n_output; i++) {
        if (strcmp(post_process_type, "fp") == 0) {
            outputs[i].want_float = 1;
        } else if (strcmp(post_process_type, "u8") == 0) {
            outputs[i].want_float = 0;
        }
    }
    printf("img.cols: %d, img.rows: %d\n", img_resize.cols, img_resize.rows);
    auto t1=std::chrono::steady_clock::now();
    rknn_inputs_set(ctx, io_num.n_input, inputs);
    ret = rknn_run(ctx, NULL);
    if (ret < 0) {
        printf("ctx error ret=%d\n", ret);
        return -1;
    }
    ret = rknn_outputs_get(ctx, io_num.n_output, outputs, NULL);
    if (ret < 0) {
        printf("outputs error ret=%d\n", ret);
        return -1;
    }
    /* Post process */
    std::vector<float> out_scales;
    std::vector<uint8_t> out_zps;
    for (int i = 0; i < io_num.n_output; ++i) {
        out_scales.push_back(output_attrs[i].scale);
        out_zps.push_back(output_attrs[i].zp);
    }

    detect_result_group_t detect_result_group;
    if (strcmp(post_process_type, "u8") == 0) {
        post_process_u8((uint8_t *) outputs[0].buf, (uint8_t *) outputs[1].buf, (uint8_t *) outputs[2].buf,
                        input_height, input_width,
                        h_pad, w_pad, resize_scale, conf_threshold, nms_threshold, out_zps, out_scales,
                        &detect_result_group, labels);
    } else if (strcmp(post_process_type, "fp") == 0) {
        post_process_fp((float *) outputs[0].buf, (float *) outputs[1].buf, (float *) outputs[2].buf, input_height,
                        input_width,
                        h_pad, w_pad, resize_scale, conf_threshold, nms_threshold, &detect_result_group, labels);
    }
// elapsed time, in milliseconds
    auto t2=std::chrono::steady_clock::now();
    double dr_ms=std::chrono::duration<double,std::milli>(t2-t1).count();
    printf("%lf ms\n",dr_ms);


    for (int i = 0; i < detect_result_group.count; i++) {
        detect_result_t *det_result = &(detect_result_group.results[i]);
        printf("%s @ (%d %d %d %d) %f\n",
               det_result->name,
               det_result->box.left, det_result->box.top, det_result->box.right, det_result->box.bottom,
               det_result->prop);
        int bx1 = det_result->box.left;
        int by1 = det_result->box.top;
        int bx2 = det_result->box.right;
        int by2 = det_result->box.bottom;
        cv::rectangle(bgr, cv::Point(bx1, by1), cv::Point(bx2, by2), cv::Scalar(231, 232, 143));  // box from its two corner points
        char text[256];
        sprintf(text, "%s %.1f%% ", det_result->name, det_result->prop * 100);

        int baseLine = 0;
        cv::Size label_size = cv::getTextSize(text, cv::FONT_HERSHEY_SIMPLEX, 0.5, 1, &baseLine);

        int x = bx1;
        int y = by1 - label_size.height - baseLine;
        if (y < 0)
            y = 0;
        if (x + label_size.width > bgr.cols)
            x = bgr.cols - label_size.width;


        cv::rectangle(bgr, cv::Rect(cv::Point(x, y), cv::Size(label_size.width, label_size.height + baseLine)),
                      cv::Scalar(0, 0, 255), -1);

        cv::putText(bgr, text, cv::Point(x, y + label_size.height),
                    cv::FONT_HERSHEY_DUPLEX, 0.4, cv::Scalar(255, 255, 255), 1, cv::LINE_AA);

        cv::imwrite("bgr9.jpg", bgr);
    }


    ret = rknn_outputs_release(ctx, io_num.n_output, outputs);

    if (ret < 0) {
        printf("rknn_query fail! ret=%d\n", ret);
        goto Error;
    }


    Error:
    if (ctx > 0)
        rknn_destroy(ctx);
    if (model)
        free(model);
    if (fp)
        fclose(fp);
    return 0;
}
  6. Finally, check the detection result on a test image.
