Setting Up the Virtual Environment
First, create a virtual environment on the PC's Ubuntu system.
My machine runs Ubuntu 18.04, so I install Python 3.6:
conda create -n ok3588 python=3.6
Type y at the prompt, and the virtual environment will be created.
The matching Python version for other Ubuntu releases:
Ubuntu 18.04: Python 3.6 / Ubuntu 20.04: Python 3.8 / Ubuntu 22.04: Python 3.10
Installing the Packages
Activate the virtual environment:
conda activate ok3588
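Since the rknn_toolkit2 wheel installed later is a cp36 build, it is worth confirming that the activated env really is running Python 3.6. A quick sanity check:

import sys

# The cp36 wheel only installs on CPython 3.6, so fail early if the env
# picked up a different interpreter.
assert sys.version_info[:2] == (3, 6), sys.version
print(sys.executable)  # should point inside the ok3588 conda env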
First, install a pip release that matches Python 3.6:
curl https://bootstrap.pypa.io/pip/3.6/get-pip.py | python -
Clone the GitHub project:
git clone https://github.com/rockchip-linux/rknn-toolkit2
cd rknn-toolkit2/doc
pip install -r requirements_cp36-1.5.0.txt -i https://mirror.baidu.com/pypi/simple
cd ../packages
pip install rknn_toolkit2-1.5.0+1fa95b5c-cp36-cp36m-linux_x86_64.whl
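If the wheel installed cleanly, the toolkit should now import inside the env. A minimal check, using the same import that test.py relies on:

# If this runs without errors, rknn-toolkit2 is available in the ok3588 env.
from rknn.api import RKNN

rknn = RKNN(verbose=False)
print('rknn-toolkit2 is ready')
rknn.release()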
Running YOLOv5 Object Detection on the PC
cd ../examples/onnx/yolov5
python test.py
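Besides writing result.jpg, the script also dumps the three raw network outputs to .npy files in the working directory (see the code further down). A minimal sketch for inspecting them, assuming the default filenames used by the example; with a 640x640 input and the 80-class COCO model the shapes should be (1, 255, 80, 80), (1, 255, 40, 40) and (1, 255, 20, 20):

import numpy as np

# Inspect the three YOLOv5 head outputs saved by test.py, one per detection
# stride (8, 16, 32). The filenames are the defaults from the example script.
for i in range(3):
    out = np.load(f'./onnx_yolov5_{i}.npy')
    print(f'output {i}: shape={out.shape}, dtype={out.dtype}')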
Screenshots are shown below.
Image before inference:
Image after inference, with detection boxes drawn:
Inference code with comments (this is the __main__ block of test.py; the constants ONNX_MODEL, RKNN_MODEL, IMG_PATH, DATASET, QUANTIZE_ON and IMG_SIZE, as well as the helpers letterbox, yolov5_post_process and draw, are defined earlier in the script):
# Imports used by this excerpt (they sit at the top of test.py):
import cv2
import numpy as np
from rknn.api import RKNN

if __name__ == '__main__':

    # Create RKNN object
    rknn = RKNN(verbose=True)

    # Pre-process config: per-channel mean/std, so the uint8 input is scaled to [0, 1]
    print('--> Config model')
    rknn.config(mean_values=[[0, 0, 0]], std_values=[[255, 255, 255]])
    print('done')

    # Load the ONNX model
    print('--> Loading model')
    ret = rknn.load_onnx(model=ONNX_MODEL)
    if ret != 0:
        print('Load model failed!')
        exit(ret)
    print('done')

    # Build the model; when QUANTIZE_ON is True it is quantized using the
    # calibration images listed in DATASET
    print('--> Building model')
    ret = rknn.build(do_quantization=QUANTIZE_ON, dataset=DATASET)
    if ret != 0:
        print('Build model failed!')
        exit(ret)
    print('done')

    # Export the RKNN model
    print('--> Export rknn model')
    ret = rknn.export_rknn(RKNN_MODEL)
    if ret != 0:
        print('Export rknn model failed!')
        exit(ret)
    print('done')

    # Init runtime environment (no target given, so it runs in the PC simulator)
    print('--> Init runtime environment')
    ret = rknn.init_runtime()
    # ret = rknn.init_runtime('rk3588')  # use this to run on a connected RK3588 board
    if ret != 0:
        print('Init runtime environment failed!')
        exit(ret)
    print('done')

    # Set inputs: read the image, convert BGR -> RGB, resize to the model input size
    img = cv2.imread(IMG_PATH)
    # img, ratio, (dw, dh) = letterbox(img, new_shape=(IMG_SIZE, IMG_SIZE))
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    img = cv2.resize(img, (IMG_SIZE, IMG_SIZE))

    # Inference
    print('--> Running model')
    outputs = rknn.inference(inputs=[img])
    np.save('./onnx_yolov5_0.npy', outputs[0])
    np.save('./onnx_yolov5_1.npy', outputs[1])
    np.save('./onnx_yolov5_2.npy', outputs[2])
    print('done')

    # Post process: reshape each head to [3, C, H, W], then transpose to
    # [H, W, 3, C] for the decoder
    input0_data = outputs[0]
    input1_data = outputs[1]
    input2_data = outputs[2]

    input0_data = input0_data.reshape([3, -1] + list(input0_data.shape[-2:]))
    input1_data = input1_data.reshape([3, -1] + list(input1_data.shape[-2:]))
    input2_data = input2_data.reshape([3, -1] + list(input2_data.shape[-2:]))

    input_data = list()
    input_data.append(np.transpose(input0_data, (2, 3, 0, 1)))
    input_data.append(np.transpose(input1_data, (2, 3, 0, 1)))
    input_data.append(np.transpose(input2_data, (2, 3, 0, 1)))

    # Decode boxes / classes / scores and apply NMS
    boxes, classes, scores = yolov5_post_process(input_data)

    # Convert back to BGR for OpenCV drawing and saving
    img_1 = cv2.cvtColor(img, cv2.COLOR_RGB2BGR)
    if boxes is not None:
        draw(img_1, boxes, scores, classes)
    cv2.imwrite('result.jpg', img_1)

    rknn.release()
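The commented-out init_runtime('rk3588') line is the hook for running the same script on the NPU of a connected OK3588 instead of the PC simulator. A minimal sketch of that variant, assuming the board is reachable from the PC (for example over adb, with the RKNN runtime service running on the board); everything else in test.py stays the same:

# Run inference on a connected RK3588 board instead of the simulator.
# Assumes the board is visible to the toolkit (e.g. over adb) and its
# RKNN runtime service is up.
ret = rknn.init_runtime(target='rk3588')
if ret != 0:
    print('Init runtime environment on rk3588 failed!')
    exit(ret)
print('done')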
That concludes this article on deploying YOLOv5 to the OK3588 with rknn-toolkit2.