1. Host-Side Model Conversion
As before, we use FastDeploy to deploy the deep learning models to the OK3588 board.
Enter the virtual environment on the Ubuntu host:
conda activate ok3588
Install rknn-toolkit2 (model conversion with this tool must be done on the host; it cannot be run on the OK3588 board itself):
git clone https://github.com/rockchip-linux/rknn-toolkit2
cd rknn-toolkit2
Note that version 1.4.0 is required here:
git checkout v1.4.0 -f
cd packages
pip install rknn_toolkit2-1.4.0_22dcfef4-cp36-cp36m-linux_x86_64.whl
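The wheel name ends in cp36, so this conda environment is expected to be running Python 3.6. To confirm the toolkit installed correctly, a minimal import check like the following can be run (this check is an extra step, not part of the original instructions):

from rknn.api import RKNN   # rknn-toolkit2 Python API

rknn = RKNN(verbose=True)   # constructing the object is enough to prove the install works
print("rknn-toolkit2 is available")
rknn.release()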
Download FastDeploy:
git clone https://github.com/PaddlePaddle/FastDeploy
cd FastDeploy/examples/vision/ocr/PP-OCR
Download the PP-OCRv3 text detection model:
wget https://paddleocr.bj.bcebos.com/PP-OCRv3/chinese/ch_PP-OCRv3_det_infer.tar
tar -xvf ch_PP-OCRv3_det_infer.tar
Download the text direction classifier model:
wget https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_infer.tar
tar -xvf ch_ppocr_mobile_v2.0_cls_infer.tar
Download the PP-OCRv3 text recognition model:
wget https://paddleocr.bj.bcebos.com/PP-OCRv3/chinese/ch_PP-OCRv3_rec_infer.tar
tar -xvf ch_PP-OCRv3_rec_infer.tar
Install the model conversion tools:
pip install paddle2onnx
pip install pyyaml
Convert the three Paddle inference models to ONNX:
paddle2onnx --model_dir ch_PP-OCRv3_det_infer \
--model_filename inference.pdmodel \
--params_filename inference.pdiparams \
--save_file ch_PP-OCRv3_det_infer/ch_PP-OCRv3_det_infer.onnx \
--enable_dev_version True
paddle2onnx --model_dir ch_ppocr_mobile_v2.0_cls_infer \
--model_filename inference.pdmodel \
--params_filename inference.pdiparams \
--save_file ch_ppocr_mobile_v2.0_cls_infer/ch_ppocr_mobile_v2.0_cls_infer.onnx \
--enable_dev_version True
paddle2onnx --model_dir ch_PP-OCRv3_rec_infer \
--model_filename inference.pdmodel \
--params_filename inference.pdiparams \
--save_file ch_PP-OCRv3_rec_infer/ch_PP-OCRv3_rec_infer.onnx \
--enable_dev_version True
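Before fixing the input shapes, the exported ONNX files can optionally be sanity-checked. The script below is an extra verification step, not part of the original flow (it assumes the onnx Python package is installed, e.g. via pip install onnx):

import onnx

for path in ["ch_PP-OCRv3_det_infer/ch_PP-OCRv3_det_infer.onnx",
             "ch_ppocr_mobile_v2.0_cls_infer/ch_ppocr_mobile_v2.0_cls_infer.onnx",
             "ch_PP-OCRv3_rec_infer/ch_PP-OCRv3_rec_infer.onnx"]:
    model = onnx.load(path)
    onnx.checker.check_model(model)  # raises an exception if the graph is malformed
    print(path, "is a valid ONNX model")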
Fix the models' input shapes:
python -m paddle2onnx.optimize --input_model ch_PP-OCRv3_det_infer/ch_PP-OCRv3_det_infer.onnx \
--output_model ch_PP-OCRv3_det_infer/ch_PP-OCRv3_det_infer.onnx \
--input_shape_dict "{'x':[1,3,960,960]}"
python -m paddle2onnx.optimize --input_model ch_ppocr_mobile_v2.0_cls_infer/ch_ppocr_mobile_v2.0_cls_infer.onnx \
--output_model ch_ppocr_mobile_v2.0_cls_infer/ch_ppocr_mobile_v2.0_cls_infer.onnx \
--input_shape_dict "{'x':[1,3,48,192]}"
python -m paddle2onnx.optimize --input_model ch_PP-OCRv3_rec_infer/ch_PP-OCRv3_rec_infer.onnx \
--output_model ch_PP-OCRv3_rec_infer/ch_PP-OCRv3_rec_infer.onnx \
--input_shape_dict "{'x':[1,3,48,320]}"
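Freezing the shapes is needed because the RKNN toolchain works with static input dimensions. To confirm the shapes were actually fixed, they can be read back from the ONNX graph (another optional check, again assuming the onnx package is installed):

import onnx

model = onnx.load("ch_PP-OCRv3_det_infer/ch_PP-OCRv3_det_infer.onnx")
for inp in model.graph.input:
    dims = [d.dim_value for d in inp.type.tensor_type.shape.dim]
    print(inp.name, dims)  # expected output: x [1, 3, 960, 960]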
Convert the ONNX models to RKNN models:
python rockchip/rknpu2_tools/export.py --config_path tools/rknpu2/config/ppocrv3_det.yaml \
--target_platform rk3588
python rockchip/rknpu2_tools/export.py --config_path tools/rknpu2/config/ppocrv3_rec.yaml \
--target_platform rk3588
python rockchip/rknpu2_tools/export.py --config_path tools/rknpu2/config/ppocrv3_cls.yaml \
--target_platform rk3588
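The export.py script reads its settings from the YAML config, but under the hood it drives the rknn-toolkit2 Python API. A rough, hand-written equivalent for the detection model might look like the sketch below; the mean/std values and output file name here are placeholders, the real values come from ppocrv3_det.yaml:

from rknn.api import RKNN

rknn = RKNN(verbose=True)
# Placeholder normalization values; the actual ones are defined in the YAML config.
rknn.config(mean_values=[[0, 0, 0]], std_values=[[255, 255, 255]],
            target_platform="rk3588")
rknn.load_onnx(model="ch_PP-OCRv3_det_infer/ch_PP-OCRv3_det_infer.onnx")
rknn.build(do_quantization=False)  # unquantized, matching the *_unquantized.rknn file names
rknn.export_rknn("ch_PP-OCRv3_det_infer/ch_PP-OCRv3_det_infer_rk3588_unquantized.rknn")
rknn.release()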
This generates three model files that can be deployed on the OK3588:
ch_ppocr_mobile_v20_cls_infer_rk3588_unquantized.rknn
ch_PP-OCRv3_rec_infer_rk3588_unquantized.rknn
ch_PP-OCRv3_det_infer_rk3588_unquantized.rknn
Transfer these three files to the OK3588 board (for example with scp).
2. Model Deployment on the Board
Enter the virtual environment on the board:
conda activate ok3588
cd FastDeploy/examples/vision/ocr/PP-OCR/rockchip/cpp
mkdir build
cd build
cmake .. -DFASTDEPLOY_INSTALL_DIR=/home/forlinx/FastDeploy/build/fastdeploy-0.0.0/
make -j
This produces the compiled executable infer_demo.
3. Running Inference
Download the test image and the dictionary file:
wget https://gitee.com/paddlepaddle/PaddleOCR/raw/release/2.6/doc/imgs/12.jpg
wget https://gitee.com/paddlepaddle/PaddleOCR/raw/release/2.6/ppocr/utils/ppocr_keys_v1.txt
Copy the three RKNN model files into the build directory:
ch_ppocr_mobile_v20_cls_infer_rk3588_unquantized.rknn
ch_PP-OCRv3_rec_infer_rk3588_unquantized.rknn
ch_PP-OCRv3_det_infer_rk3588_unquantized.rknn
Run inference on the RKNPU:
./infer_demo ./ch_PP-OCRv3_det_infer/ch_PP-OCRv3_det_infer_rk3588_unquantized.rknn \
./ch_ppocr_mobile_v2.0_cls_infer/ch_ppocr_mobile_v20_cls_infer_rk3588_unquantized.rknn \
./ch_PP-OCRv3_rec_infer/ch_PP-OCRv3_rec_infer_rk3588_unquantized.rknn \
./ppocr_keys_v1.txt \
./12.jpg \
1
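The last argument selects the inference backend in the demo (here, presumably the RKNPU). If the FastDeploy Python wheel with RKNPU2 support is also installed on the board, a roughly equivalent Python pipeline would look like the sketch below. It is an illustration only: the exact constructor arguments and any extra preprocessing settings that RKNN models need may differ from what is shown here.

import cv2
import fastdeploy as fd

option = fd.RuntimeOption()
option.use_rknpu2()  # run on the RK3588 NPU

fmt = fd.ModelFormat.RKNN
det = fd.vision.ocr.DBDetector(
    "ch_PP-OCRv3_det_infer_rk3588_unquantized.rknn", "",
    runtime_option=option, model_format=fmt)
cls = fd.vision.ocr.Classifier(
    "ch_ppocr_mobile_v20_cls_infer_rk3588_unquantized.rknn", "",
    runtime_option=option, model_format=fmt)
rec = fd.vision.ocr.Recognizer(
    "ch_PP-OCRv3_rec_infer_rk3588_unquantized.rknn", "",
    "ppocr_keys_v1.txt", runtime_option=option, model_format=fmt)

# Chain detector, classifier and recognizer into the PP-OCRv3 pipeline and run it on the test image.
ppocr = fd.vision.ocr.PPOCRv3(det_model=det, cls_model=cls, rec_model=rec)
result = ppocr.predict(cv2.imread("12.jpg"))
print(result)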
The inference results are then displayed.