
Converting Models with the MMDeploy Framework

This article walks through converting a model with the MMDeploy framework and should serve as a useful reference. If anything here is wrong or incomplete, corrections are welcome.

1 Building from Source

Reference link: reference2

git clone -b main https://github.com/open-mmlab/mmdeploy.git --recursive
cd mmdeploy

1.1 Installing the onnxruntime Backend

Install the Python onnxruntime backend:

pip install onnxruntime==1.8.1

Note: on another machine with Python 3.11.5, onnxruntime==1.8.1 was unavailable; onnxruntime 1.15.1 was installed instead and converted models without problems.

For the C++ onnxruntime library, download and extract it into a directory of your choice, then set the environment variables:


wget https://github.com/microsoft/onnxruntime/releases/download/v1.8.1/onnxruntime-linux-x64-1.8.1.tgz
tar -zxvf onnxruntime-linux-x64-1.8.1.tgz
export ONNXRUNTIME_DIR=$(pwd)/onnxruntime-linux-x64-1.8.1
export LD_LIBRARY_PATH=$ONNXRUNTIME_DIR/lib:$LD_LIBRARY_PATH

The environment variables can be made permanent like this:

echo '# set env for onnxruntime' >> ~/.bashrc
echo "export ONNXRUNTIME_DIR=${ONNXRUNTIME_DIR}" >> ~/.bashrc
echo 'export LD_LIBRARY_PATH=$ONNXRUNTIME_DIR/lib:$LD_LIBRARY_PATH' >> ~/.bashrc
source ~/.bashrc
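Before building, it is worth confirming the variables are actually visible to the current shell; a quick stdlib-only check:

```python
import os

def env_report(names):
    """Return {name: value or '<not set>'} for the given environment variables."""
    return {n: os.environ.get(n, "<not set>") for n in names}

# The build below relies on these two variables being exported:
print(env_report(["ONNXRUNTIME_DIR", "LD_LIBRARY_PATH"]))
```

If either prints `<not set>`, re-run `source ~/.bashrc` in the shell you will build from.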

1.2 Building MMDeploy

cd /the/root/path/of/MMDeploy
export MMDEPLOY_DIR=$(pwd)
mkdir -p build && cd build
cmake -DCMAKE_CXX_COMPILER=g++ -DMMDEPLOY_TARGET_BACKENDS=ort -DONNXRUNTIME_DIR=${ONNXRUNTIME_DIR} ..
make -j$(nproc) && make install

Output:

(py3) xp@hello:/media/xp/data/pydoc/mmlab/mmdeploy/build$ cmake -DCMAKE_CXX_COMPILER=g++ -DMMDEPLOY_TARGET_BACKENDS=ort -DONNXRUNTIME_DIR=${ONNXRUNTIME_DIR} ..
-- CMAKE_INSTALL_PREFIX: /media/xp/data/pydoc/mmlab/mmdeploy/build/install
-- Build ONNXRUNTIME custom ops.
-- Configuring done
-- Generating done
-- Build files have been written to: /media/xp/data/pydoc/mmlab/mmdeploy/build
(py3) xp@hello:/media/xp/data/pydoc/mmlab/mmdeploy/build$ make -j$(nproc) && make install
[ 12%] Building CXX object csrc/mmdeploy/backend_ops/onnxruntime/CMakeFiles/mmdeploy_onnxruntime_ops_obj.dir/common/ort_utils.cpp.o
[ 25%] Building CXX object csrc/mmdeploy/backend_ops/onnxruntime/CMakeFiles/mmdeploy_onnxruntime_ops_obj.dir/grid_sample/grid_sample.cpp.o
[ 37%] Building CXX object csrc/mmdeploy/backend_ops/onnxruntime/CMakeFiles/mmdeploy_onnxruntime_ops_obj.dir/modulated_deform_conv/modulated_deform_conv.cpp.o
[ 50%] Building CXX object csrc/mmdeploy/backend_ops/onnxruntime/CMakeFiles/mmdeploy_onnxruntime_ops_obj.dir/nms_match/nms_match.cpp.o
[ 62%] Building CXX object csrc/mmdeploy/backend_ops/onnxruntime/CMakeFiles/mmdeploy_onnxruntime_ops_obj.dir/nms_rotated/nms_rotated.cpp.o
[ 75%] Building CXX object csrc/mmdeploy/backend_ops/onnxruntime/CMakeFiles/mmdeploy_onnxruntime_ops_obj.dir/onnxruntime_register.cpp.o
[ 87%] Building CXX object csrc/mmdeploy/backend_ops/onnxruntime/CMakeFiles/mmdeploy_onnxruntime_ops_obj.dir/roi_align_rotated/roi_align_rotated.cpp.o
[ 87%] Built target mmdeploy_onnxruntime_ops_obj
[100%] Linking CXX shared library ../../../../lib/libmmdeploy_onnxruntime_ops.so
[100%] Built target mmdeploy_onnxruntime_ops
Consolidate compiler generated dependencies of target mmdeploy_onnxruntime_ops_obj
[ 87%] Built target mmdeploy_onnxruntime_ops_obj
[100%] Built target mmdeploy_onnxruntime_ops
Install the project...
-- Install configuration: "Release"
-- Installing: /media/xp/data/pydoc/mmlab/mmdeploy/mmdeploy/lib/libmmdeploy_onnxruntime_ops.so
-- Set runtime path of "/media/xp/data/pydoc/mmlab/mmdeploy/mmdeploy/lib/libmmdeploy_onnxruntime_ops.so" to "$ORIGIN"
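When you later run the exported model outside of mmdeploy's own tooling, the custom-ops library built above must be registered with onnxruntime by hand. A minimal sketch; the .so path is the one printed by `make install` above, so adjust it to your tree:

```python
import os

def make_session_options(ops_library):
    """Build onnxruntime SessionOptions with mmdeploy's custom ops loaded.

    Returns None when the library file is absent, so callers can fall back
    to default options for models that use no custom ops.
    """
    if not os.path.exists(ops_library):
        return None
    import onnxruntime as ort  # imported lazily; requires onnxruntime installed
    opts = ort.SessionOptions()
    opts.register_custom_ops_library(ops_library)
    return opts

opts = make_session_options("mmdeploy/lib/libmmdeploy_onnxruntime_ops.so")
```

Pass the returned options to `onnxruntime.InferenceSession(model_path, sess_options=opts)` when the model uses ops such as grid_sample or modulated_deform_conv.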

1.3 Installing the Model Converter

Run this from the mmdeploy root directory:

mim install -e .

Output:

Downloading grpcio-1.62.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (5.6 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 5.6/5.6 MB 599.6 kB/s eta 0:00:00
Downloading multiprocess-0.70.16-py38-none-any.whl (132 kB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 132.6/132.6 kB 673.0 kB/s eta 0:00:00
Downloading prettytable-3.10.0-py3-none-any.whl (28 kB)
Downloading terminaltables-3.1.10-py2.py3-none-any.whl (15 kB)
Downloading dill-0.3.8-py3-none-any.whl (116 kB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 116.3/116.3 kB 799.7 kB/s eta 0:00:00
Downloading wcwidth-0.2.13-py2.py3-none-any.whl (34 kB)
Installing collected packages: wcwidth, aenum, terminaltables, protobuf, prettytable, grpcio, dill, multiprocess, mmdeploy
  Attempting uninstall: protobuf
    Found existing installation: protobuf 3.20.3
    Uninstalling protobuf-3.20.3:
      Successfully uninstalled protobuf-3.20.3
  Running setup.py develop for mmdeploy
Successfully installed aenum-3.1.15 dill-0.3.8 grpcio-1.62.2 mmdeploy-1.3.1 multiprocess-0.70.16 prettytable-3.10.0 protobuf-3.20.2 terminaltables-3.1.10 wcwidth-0.2.13
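A small sanity check that both the converter and its backend are importable after the install above, without actually importing them:

```python
import importlib.util

def installed(package):
    """True when `import package` would succeed, checked without importing it."""
    return importlib.util.find_spec(package) is not None

for pkg in ("mmdeploy", "onnxruntime"):
    print(pkg, "OK" if installed(pkg) else "MISSING")
```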

2 Model Conversion

Reference link: reference3

2.1 Using deploy.py

python ./tools/deploy.py \
    ${DEPLOY_CFG_PATH} \
    ${MODEL_CFG_PATH} \
    ${MODEL_CHECKPOINT_PATH} \
    ${INPUT_IMG} \
    --test-img ${TEST_IMG} \
    --work-dir ${WORK_DIR} \
    --calib-dataset-cfg ${CALIB_DATA_CFG} \
    --device ${DEVICE} \
    --log-level INFO \
    --show \
    --dump-info

Argument descriptions

deploy_cfg : mmdeploy's deployment config for this model, covering the inference backend, whether to quantize, whether the input shape is dynamic, and so on. Config files may reference one another; configs/mmpretrain/classification_ncnn_static.py is one example.

model_cfg : the model config from the mm algorithm library, e.g. mmpretrain/configs/vision_transformer/vit-base-p32_ft-64xb64_in1k-384.py; its location is independent of the mmdeploy path.

checkpoint : path to the torch checkpoint. It may start with http/https; see the mmcv.FileClient implementation for details.

img : path to the image or point-cloud file used for testing during conversion.

--test-img : path to an image file for testing the model. Defaults to None.

--work-dir : working directory for saving logs and model files.

--calib-dataset-cfg : only takes effect in int8 mode; the calibration dataset config file. If omitted in int8 mode, the 'val' dataset from the model config is used for calibration automatically.

--device : device used for the conversion. Defaults to cpu; for trt, a form such as cuda:0 can be used.

--log-level : log level, one of 'CRITICAL', 'FATAL', 'ERROR', 'WARN', 'WARNING', 'INFO', 'DEBUG', 'NOTSET'. Defaults to INFO.

--show : whether to display the detection result.

--dump-info : whether to dump SDK information.
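When scripting many conversions, the options above can be assembled into the full command programmatically. A sketch using placeholder paths, not files from this article:

```python
import shlex

def build_deploy_cmd(deploy_cfg, model_cfg, checkpoint, img,
                     work_dir="work_dir", device="cpu",
                     log_level="INFO", dump_info=True):
    """Assemble the tools/deploy.py invocation described above as an argv list."""
    cmd = ["python", "tools/deploy.py", deploy_cfg, model_cfg, checkpoint, img,
           "--work-dir", work_dir, "--device", device,
           "--log-level", log_level]
    if dump_info:
        cmd.append("--dump-info")
    return cmd

print(shlex.join(build_deploy_cmd(
    "configs/mmpretrain/classification_onnxruntime_static.py",
    "path/to/model_cfg.py", "path/to/checkpoint.pth", "demo.jpg")))
```

The list form can be handed directly to `subprocess.run(cmd)` without shell quoting concerns.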

2.2 Example

python tools/deploy.py \
configs/mmpretrain/classification_onnxruntime_static.py \
../mmpretrain/z_my_config/my_mobilenetv3.py \
../mmpretrain/work_dirs/my_mobilenetv3/epoch_150.pth \
/media/xp/data/image/deep_image/mini_cat_and_dog/val/cat/9835.jpg 
(py3) xp@hello:/media/xp/data/pydoc/mmlab/mmdeploy$ python tools/deploy.py configs/mmpretrain/classification_onnxruntime_static.py ../mmpretrain/z_my_config/my_mobilenetv3.py ../mmpretrain/work_dirs/my_mobilenetv3/epoch_150.pth /media/xp/data/image/deep_image/mini_cat_and_dog/val/cat/9835.jpg 
04/23 15:08:13 - mmengine - INFO - Start pipeline mmdeploy.apis.pytorch2onnx.torch2onnx in subprocess
04/23 15:08:13 - mmengine - WARNING - Failed to search registry with scope "mmpretrain" in the "Codebases" registry tree. As a workaround, the current "Codebases" registry in "mmdeploy" is used to build instance. This may cause unexpected failure when running the built modules. Please check whether "mmpretrain" is a correct scope, or whether the registry is initialized.
04/23 15:08:13 - mmengine - WARNING - Failed to search registry with scope "mmpretrain" in the "mmpretrain_tasks" registry tree. As a workaround, the current "mmpretrain_tasks" registry in "mmdeploy" is used to build instance. This may cause unexpected failure when running the built modules. Please check whether "mmpretrain" is a correct scope, or whether the registry is initialized.
/home/xp/anaconda3/envs/py3/lib/python3.8/site-packages/mmcv/cnn/bricks/hsigmoid.py:35: UserWarning: In MMCV v1.4.4, we modified the default value of args to align with PyTorch official. Previous Implementation: Hsigmoid(x) = min(max((x + 1) / 2, 0), 1). Current Implementation: Hsigmoid(x) = min(max((x + 3) / 6, 0), 1).
  warnings.warn(
Loads checkpoint by local backend from path: ../mmpretrain/work_dirs/my_mobilenetv3/epoch_150.pth
04/23 15:08:13 - mmengine - WARNING - DeprecationWarning: get_onnx_config will be deprecated in the future. 
04/23 15:08:13 - mmengine - INFO - Export PyTorch model to ONNX: /media/xp/data/pydoc/mmlab/mmdeploy/end2end.onnx.
04/23 15:08:13 - mmengine - INFO - Execute onnx optimize passes.
04/23 15:08:13 - mmengine - INFO - Finish pipeline mmdeploy.apis.pytorch2onnx.torch2onnx
04/23 15:08:13 - mmengine - INFO - Start pipeline mmdeploy.apis.utils.utils.to_backend in main process
04/23 15:08:13 - mmengine - INFO - Finish pipeline mmdeploy.apis.utils.utils.to_backend
04/23 15:08:13 - mmengine - INFO - visualize onnxruntime model start.
04/23 15:08:15 - mmengine - WARNING - Failed to search registry with scope "mmpretrain" in the "Codebases" registry tree. As a workaround, the current "Codebases" registry in "mmdeploy" is used to build instance. This may cause unexpected failure when running the built modules. Please check whether "mmpretrain" is a correct scope, or whether the registry is initialized.
04/23 15:08:15 - mmengine - WARNING - Failed to search registry with scope "mmpretrain" in the "mmpretrain_tasks" registry tree. As a workaround, the current "mmpretrain_tasks" registry in "mmdeploy" is used to build instance. This may cause unexpected failure when running the built modules. Please check whether "mmpretrain" is a correct scope, or whether the registry is initialized.
04/23 15:08:15 - mmengine - WARNING - Failed to search registry with scope "mmpretrain" in the "backend_classifiers" registry tree. As a workaround, the current "backend_classifiers" registry in "mmdeploy" is used to build instance. This may cause unexpected failure when running the built modules. Please check whether "mmpretrain" is a correct scope, or whether the registry is initialized.
04/23 15:08:15 - mmengine - INFO - Successfully loaded onnxruntime custom ops from /media/xp/data/pydoc/mmlab/mmdeploy/mmdeploy/lib/libmmdeploy_onnxruntime_ops.so
04/23 15:08:15 - mmengine - INFO - visualize onnxruntime model success.
04/23 15:08:15 - mmengine - INFO - visualize pytorch model start.
04/23 15:08:16 - mmengine - WARNING - Failed to search registry with scope "mmpretrain" in the "Codebases" registry tree. As a workaround, the current "Codebases" registry in "mmdeploy" is used to build instance. This may cause unexpected failure when running the built modules. Please check whether "mmpretrain" is a correct scope, or whether the registry is initialized.
04/23 15:08:16 - mmengine - WARNING - Failed to search registry with scope "mmpretrain" in the "mmpretrain_tasks" registry tree. As a workaround, the current "mmpretrain_tasks" registry in "mmdeploy" is used to build instance. This may cause unexpected failure when running the built modules. Please check whether "mmpretrain" is a correct scope, or whether the registry is initialized.
/home/xp/anaconda3/envs/py3/lib/python3.8/site-packages/mmcv/cnn/bricks/hsigmoid.py:35: UserWarning: In MMCV v1.4.4, we modified the default value of args to align with PyTorch official. Previous Implementation: Hsigmoid(x) = min(max((x + 1) / 2, 0), 1). Current Implementation: Hsigmoid(x) = min(max((x + 3) / 6, 0), 1).
  warnings.warn(
Loads checkpoint by local backend from path: ../mmpretrain/work_dirs/my_mobilenetv3/epoch_150.pth
04/23 15:08:17 - mmengine - INFO - visualize pytorch model success.
04/23 15:08:17 - mmengine - INFO - All process success.

The conversion writes an end2end.onnx file.
Opened in Netron, everything looks correct.
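Beyond inspecting the graph in Netron, you can smoke-test the exported model by pushing a random tensor through it with onnxruntime. The 1x3x224x224 input shape is an assumption for this MobileNetV3 config; read the real shape from the session's inputs if unsure:

```python
import os

def smoke_test_onnx(path, input_shape=(1, 3, 224, 224)):
    """Run one random input through the model; True when inference succeeds."""
    if not os.path.exists(path):
        return False
    import numpy as np
    import onnxruntime as ort  # requires onnxruntime installed
    sess = ort.InferenceSession(path, providers=["CPUExecutionProvider"])
    feed = {sess.get_inputs()[0].name:
            np.random.rand(*input_shape).astype(np.float32)}
    outputs = sess.run(None, feed)
    return len(outputs) > 0

print(smoke_test_onnx("end2end.onnx"))
```

If the model uses mmdeploy custom ops, register libmmdeploy_onnxruntime_ops.so via SessionOptions first, as the conversion log above does automatically.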
