Table of Contents
1. FastDeploy Introduction
2. Deploying PaddleSeg Models with FastDeploy C++
1. FastDeploy Introduction
FastDeploy is an all-scenario, easy-to-use, flexible, and highly efficient AI inference deployment tool that supports cloud, edge, and device deployment. It provides out-of-the-box deployment for more than 160 Text, Vision, Speech, and cross-modal models, with end-to-end inference performance optimization, meeting developers' industrial deployment needs across multiple scenarios, hardware targets, and platforms.
Recent updates
- FastDeploy series live-course replays are available.
- 2023.01.17: Released YOLOv8 deployment support across FastDeploy-supported hardware, covering both Paddle YOLOv8 and the community ultralytics YOLOv8.
  - Paddle YOLOv8 can be deployed on Intel CPU, NVIDIA GPU, Jetson, Phytium, Kunlunxin, Ascend, ARM CPU, RK3588, and Sophgo TPU; some of these targets include both Python and C++ deployment.
  - Community ultralytics YOLOv8 can be deployed on Intel CPU, NVIDIA GPU, and Jetson, all with both Python and C++ deployment.
  - FastDeploy lets you switch models with a one-line API change, making it easy to compare the performance of YOLOv8, PP-YOLOE+, YOLOv5, and other models (a sketch follows this list).
- Serving deployment now supports visual management through VisualDL. After starting the VDL service in a FastDeploy container, you can modify model configurations, start and manage model services, view performance data, and send requests from the VDL interface; see the related documentation:
  - Serving visual deployment
  - Serving visual requests
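For reference, the one-line model switch mentioned above looks roughly like the following. This is a hedged sketch: the class names come from FastDeploy's vision::detection module, but the exact constructor signatures and the file names (yolov8s.onnx, test.jpg) are assumptions to verify against your SDK version.

```cpp
#include "fastdeploy/vision.h"

int main() {
  fastdeploy::RuntimeOption option;
  option.UseGpu();

  // Swapping the model under test is a one-line change of constructor:
  auto model =
      fastdeploy::vision::detection::YOLOv8("yolov8s.onnx", "", option);
  // auto model = fastdeploy::vision::detection::PPYOLOE(
  //     "model.pdmodel", "model.pdiparams", "infer_cfg.yml", option);

  cv::Mat im = cv::imread("test.jpg");
  fastdeploy::vision::DetectionResult res;
  model.Predict(im, &res);  // the rest of the pipeline stays identical
  std::cout << res.Str() << std::endl;
  return 0;
}
```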
With FastDeploy, PaddleSeg semantic segmentation models can be deployed quickly and efficiently on 10+ hardware targets, including x86 CPU, NVIDIA GPU, Phytium CPU, ARM CPU, Intel GPU, Kunlunxin, Ascend, Rockchip, Amlogic, and Sophgo, using inference backends such as Paddle Inference, Paddle Lite, TensorRT, OpenVINO, ONNXRuntime, RKNPU2, and SOPHGO.
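Switching hardware or backend comes down to configuring fastdeploy::RuntimeOption before constructing the model. Here is a minimal sketch; the selection calls follow FastDeploy's C++ RuntimeOption API, but confirm that each backend is actually compiled into the SDK build you downloaded.

```cpp
#include "fastdeploy/vision.h"

int main() {
  fastdeploy::RuntimeOption option;
  option.UseCpu();
  option.UseOpenVINOBackend();  // or option.UseOrtBackend() for ONNX Runtime
  // option.UseGpu();           // switch to GPU...
  // option.UseTrtBackend();    // ...and TensorRT in two lines
  auto model = fastdeploy::vision::segmentation::PaddleSegModel(
      "model.pdmodel", "model.pdiparams", "deploy.yaml", option);
  return model.Initialized() ? 0 : 1;
}
```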
2. Deploying PaddleSeg Models with FastDeploy C++
Segmentation models from PaddleSeg versions above 2.6 are supported. To deploy PP-Matting, PP-HumanMatting, or ModNet, refer to the Matting model deployment guide instead. Models that FastDeploy has already tested and deployed successfully:
- U-Net series models
- PP-LiteSeg series models
- PP-HumanSeg series models
- FCN series models
- DeepLabV3 series models
- SegFormer series models
The full example below supports three inference modes: CpuInfer, GpuInfer, and TrtInfer.
// Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#include <iostream>
#include <string>

#include "fastdeploy/vision.h"

// Path separator used to join model_dir with the exported file names.
#ifdef _WIN32
const char sep = '\\';
#else
const char sep = '/';
#endif
void CpuInfer(const std::string& model_dir, const std::string& image_file) {
  auto model_file = model_dir + sep + "model.pdmodel";
  auto params_file = model_dir + sep + "model.pdiparams";
  auto config_file = model_dir + sep + "deploy.yaml";

  // Run with the default CPU backend.
  auto option = fastdeploy::RuntimeOption();
  option.UseCpu();

  auto model = fastdeploy::vision::segmentation::PaddleSegModel(
      model_file, params_file, config_file, option);
  if (!model.Initialized()) {
    std::cerr << "Failed to initialize." << std::endl;
    return;
  }

  auto im = cv::imread(image_file);

  fastdeploy::vision::SegmentationResult res;
  if (!model.Predict(im, &res)) {
    std::cerr << "Failed to predict." << std::endl;
    return;
  }

  std::cout << res.Str() << std::endl;
  // Blend the predicted mask with the input image (weight 0.5) and save it.
  auto vis_im = fastdeploy::vision::VisSegmentation(im, res, 0.5);
  cv::imwrite("vis_result.jpg", vis_im);
  std::cout << "Visualized result saved in ./vis_result.jpg" << std::endl;
}
void GpuInfer(const std::string& model_dir, const std::string& image_file) {
  auto model_file = model_dir + sep + "model.pdmodel";
  auto params_file = model_dir + sep + "model.pdiparams";
  auto config_file = model_dir + sep + "deploy.yaml";

  // Same pipeline as CpuInfer, but running on GPU (device 0 by default).
  auto option = fastdeploy::RuntimeOption();
  option.UseGpu();

  auto model = fastdeploy::vision::segmentation::PaddleSegModel(
      model_file, params_file, config_file, option);
  if (!model.Initialized()) {
    std::cerr << "Failed to initialize." << std::endl;
    return;
  }

  auto im = cv::imread(image_file);

  fastdeploy::vision::SegmentationResult res;
  if (!model.Predict(im, &res)) {
    std::cerr << "Failed to predict." << std::endl;
    return;
  }

  std::cout << res.Str() << std::endl;
  auto vis_im = fastdeploy::vision::VisSegmentation(im, res, 0.5);
  cv::imwrite("vis_result.jpg", vis_im);
  std::cout << "Visualized result saved in ./vis_result.jpg" << std::endl;
}
void TrtInfer(const std::string& model_dir, const std::string& image_file) {
  auto model_file = model_dir + sep + "model.pdmodel";
  auto params_file = model_dir + sep + "model.pdiparams";
  auto config_file = model_dir + sep + "deploy.yaml";

  auto option = fastdeploy::RuntimeOption();
  option.UseGpu();
  option.UseTrtBackend();
  // If using the original TensorRT backend rather than Paddle-TensorRT,
  // comment out the following two lines.
  option.EnablePaddleToTrt();
  option.EnablePaddleTrtCollectShape();
  // Declare the min/opt/max shapes of the dynamic input "x" so TensorRT
  // can build an engine covering this range of image sizes.
  option.SetTrtInputShape("x", {1, 3, 256, 256}, {1, 3, 1024, 1024},
                          {1, 3, 2048, 2048});

  auto model = fastdeploy::vision::segmentation::PaddleSegModel(
      model_file, params_file, config_file, option);
  if (!model.Initialized()) {
    std::cerr << "Failed to initialize." << std::endl;
    return;
  }

  auto im = cv::imread(image_file);

  fastdeploy::vision::SegmentationResult res;
  if (!model.Predict(im, &res)) {
    std::cerr << "Failed to predict." << std::endl;
    return;
  }

  std::cout << res.Str() << std::endl;
  auto vis_im = fastdeploy::vision::VisSegmentation(im, res, 0.5);
  cv::imwrite("vis_result.jpg", vis_im);
  std::cout << "Visualized result saved in ./vis_result.jpg" << std::endl;
}
int main(int argc, char* argv[]) {
  // Paths are hardcoded here with Windows-style separators; pass the model
  // directory and image on the command line to use the commented-out calls.
  std::string model_dir =
      "model\\PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer";
  std::string image_file = "model\\cityscapes_demo.png";
  // CpuInfer(argv[1], argv[2]);
  GpuInfer(model_dir, image_file);
  // TrtInfer(argv[1], argv[2]);
  return 0;
}
推理結(jié)果可視化: