I. Face Restoration Algorithm
1. Algorithm Overview
CodeFormer is a deep-learning-based blind face restoration model developed by the joint research center of Nanyang Technological University and SenseTime. It takes blurred or mosaicked images as input and generates a clearer reconstruction of the original face. Source code: https://github.com/sczhou/CodeFormer
The method combines VQGAN with a Transformer, and has potential applications in areas such as image restoration, image enhancement, and privacy protection.
VQGAN is a generative model commonly used for image synthesis. It applies vector quantization to encode an image into a sequence of discrete codebook vectors, which a decoder then maps back into an image. This approach tends to produce high-quality images, especially when combined with a neural network such as a Transformer.
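To make the vector-quantization idea concrete, here is a minimal, self-contained sketch of the nearest-codebook lookup at its core. This is an illustration only, not CodeFormer's actual implementation; for reference, the Python script later in this article builds the network with codebook_size=1024 and dim_embd=512.

#include <cfloat>
#include <cstddef>
#include <vector>

// Nearest-codebook lookup: each encoder feature vector is replaced by the
// index of the closest codebook entry (squared Euclidean distance); the
// decoder later maps indices back to codebook vectors.
std::size_t quantize(const std::vector<float>& feature,
                     const std::vector<std::vector<float>>& codebook)
{
    std::size_t best = 0;
    float best_dist = FLT_MAX;
    for (std::size_t i = 0; i < codebook.size(); i++)
    {
        float dist = 0.0f;
        for (std::size_t d = 0; d < feature.size(); d++)
        {
            float diff = feature[d] - codebook[i][d];
            dist += diff * diff;
        }
        if (dist < best_dist)
        {
            best_dist = dist;
            best = i;
        }
    }
    return best;
}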
Transformer is a neural-network architecture widely used in natural language processing and computer vision. It excels at sequence modeling and can also be applied to image generation and processing tasks.
In surveillance, security, and privacy-protection scenarios, face images are typically degraded by many factors, including lighting, pixel-count limits, focus problems, and subject motion. These factors can leave images blurred, distorted, or heavily contaminated with noise, and trying to recover a clean original face under such conditions is extremely challenging.
Blind face restoration is an ill-posed problem: multiple plausible solutions exist, and the true original image cannot be uniquely determined from the limited observed data. Work in this area therefore typically relies on advanced computer vision and image processing techniques, together with deep learning models, to improve the quality of blurred or damaged images.
Several kinds of methods can be applied to blind face restoration, including but not limited to:
Deep learning models: convolutional neural networks (CNNs), generative adversarial networks (GANs), and similar models can be trained to recover original detail from blurred or distorted face images.
Super-resolution: techniques that reconstruct a high-resolution image from a low-resolution input can also be used for face restoration.
Priors: exploiting prior knowledge such as facial structure and illumination models can improve restoration accuracy.
Multi-modal fusion: combining data from different sensors and information sources can make restoration more robust.
Even with these techniques, however, the ill-posed nature of the problem means that fully recovering a clean original face can remain extremely difficult, especially under extreme conditions. In practice, image quality has to be traded off against the information actually available to get the best achievable result.
2. Results
The officially published examples show how the algorithm performs on various kinds of input:
Old-photo restoration
Face restoration
Black-and-white face image enhancement and restoration
Face recovery
II. Model Deployment
To run inference with C++, the model must first be converted to ONNX. Once in ONNX form, it can be deployed with the onnxruntime C++ library or with OpenCV's DNN module, and it can be converted further to an ncnn model for deployment with ncnn. The original weights can be downloaded from the official repository.
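For the OpenCV DNN route mentioned above, loading the exported model might look like the following minimal sketch. This is an assumption rather than a tested path: OpenCV's ONNX importer may not support every operator in the CodeFormer graph, and the handling of the scalar second input is speculative. The input/output names "x", "w", and "y" match the torch.onnx.export call later in this article.

#include <opencv2/dnn.hpp>
#include <opencv2/imgcodecs.hpp>

int main()
{
    // Illustrative only: whether OpenCV's importer can load this graph is untested here.
    cv::dnn::Net net = cv::dnn::readNetFromONNX("codeformer.onnx");
    cv::Mat img = cv::imread("face.jpg");
    // Same normalization as the onnxruntime code below: (pix / 255 - 0.5) / 0.5,
    // i.e. (pix - 127.5) / 127.5, with BGR -> RGB swapping.
    cv::Mat blob = cv::dnn::blobFromImage(img, 1.0 / 127.5, cv::Size(512, 512),
                                          cv::Scalar(127.5, 127.5, 127.5), true);
    net.setInput(blob, "x");
    net.setInput(cv::Mat(1, 1, CV_64FC1, cv::Scalar(0.5)), "w"); // fidelity weight (speculative)
    cv::Mat out = net.forward("y"); // planar float output, post-processed as in the class below
    return 0;
}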
There are two ways to handle inference. The first is to skip face detection entirely and super-resolve the whole image; however, this approach appears to misbehave on images that are already sharp, producing incomprehensible artifacts. The alternative is to detect and crop faces first and restore only those regions, as the official pipeline does (a sketch of this approach follows the class implementation below).
1. Deploying the model in C++ with onnxruntime
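The class below wraps the exported ONNX model with the onnxruntime C++ API: the constructor loads the session and reads the input/output shapes from the model, preprocess converts a BGR image to the normalized planar RGB layout the network expects, detect runs the session and converts the planar float output back to an 8-bit image, and detect_video simply applies detect frame by frame.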
#include "CodeFormer.h"
CodeFormer::CodeFormer(std::string model_path)
{
//OrtStatus* status = OrtSessionOptionsAppendExecutionProvider_CUDA(sessionOptions, 0); ///nvidia-cuda加速
sessionOptions.SetGraphOptimizationLevel(ORT_ENABLE_BASIC);
std::wstring widestr = std::wstring(model_path.begin(), model_path.end()); ///如果在windows系統(tǒng)就這么寫(xiě)
ort_session = new Ort::Session(env, widestr.c_str(), sessionOptions); ///如果在windows系統(tǒng)就這么寫(xiě)
///ort_session = new Session(env, model_path.c_str(), sessionOptions); ///如果在linux系統(tǒng),就這么寫(xiě)
size_t numInputNodes = ort_session->GetInputCount();
size_t numOutputNodes = ort_session->GetOutputCount();
Ort::AllocatorWithDefaultOptions allocator;
for (int i = 0; i < numInputNodes; i++)
{
input_names.push_back(ort_session->GetInputName(i, allocator));
Ort::TypeInfo input_type_info = ort_session->GetInputTypeInfo(i);
auto input_tensor_info = input_type_info.GetTensorTypeAndShapeInfo();
auto input_dims = input_tensor_info.GetShape();
input_node_dims.push_back(input_dims);
}
for (int i = 0; i < numOutputNodes; i++)
{
output_names.push_back(ort_session->GetOutputName(i, allocator));
Ort::TypeInfo output_type_info = ort_session->GetOutputTypeInfo(i);
auto output_tensor_info = output_type_info.GetTensorTypeAndShapeInfo();
auto output_dims = output_tensor_info.GetShape();
output_node_dims.push_back(output_dims);
}
this->inpHeight = input_node_dims[0][2];
this->inpWidth = input_node_dims[0][3];
this->outHeight = output_node_dims[0][2];
this->outWidth = output_node_dims[0][3];
input2_tensor.push_back(0.5);
}
void CodeFormer::preprocess(cv::Mat &srcimg)
{
    cv::Mat dstimg;
    cv::cvtColor(srcimg, dstimg, cv::COLOR_BGR2RGB);
    cv::resize(dstimg, dstimg, cv::Size(this->inpWidth, this->inpHeight), 0, 0, cv::INTER_LINEAR);
    this->input_image_.resize(this->inpWidth * this->inpHeight * dstimg.channels());
    // Repack HWC uint8 into planar CHW float, normalized to [-1, 1]: (pix / 255 - 0.5) / 0.5
    int k = 0;
    for (int c = 0; c < 3; c++)
    {
        for (int i = 0; i < this->inpHeight; i++)
        {
            for (int j = 0; j < this->inpWidth; j++)
            {
                float pix = dstimg.ptr<uchar>(i)[j * 3 + c];
                this->input_image_[k] = (pix / 255.0 - 0.5) / 0.5;
                k++;
            }
        }
    }
}
cv::Mat CodeFormer::detect(cv::Mat &srcimg)
{
    int im_h = srcimg.rows;
    int im_w = srcimg.cols;
    this->preprocess(srcimg);
    std::array<int64_t, 4> input_shape_{ 1, 3, this->inpHeight, this->inpWidth };
    std::vector<int64_t> input2_shape_ = { 1 };
    auto allocator_info = Ort::MemoryInfo::CreateCpu(OrtDeviceAllocator, OrtMemTypeCPU);
    std::vector<Ort::Value> ort_inputs;
    ort_inputs.push_back(Ort::Value::CreateTensor<float>(allocator_info, input_image_.data(), input_image_.size(), input_shape_.data(), input_shape_.size()));
    // The second input is the fidelity weight w; the exported graph expects a double scalar.
    ort_inputs.push_back(Ort::Value::CreateTensor<double>(allocator_info, input2_tensor.data(), input2_tensor.size(), input2_shape_.data(), input2_shape_.size()));
    std::vector<Ort::Value> ort_outputs = ort_session->Run(Ort::RunOptions{ nullptr }, input_names.data(), ort_inputs.data(), ort_inputs.size(), output_names.data(), output_names.size());

    // ---------- post-processing ----------
    float* pred = ort_outputs[0].GetTensorMutableData<float>();
    // cv::Mat mask(outHeight, outWidth, CV_32FC3, pred); // tested: wrapping the planar output directly like this does not work
    const unsigned int channel_step = outHeight * outWidth;
    std::vector<cv::Mat> channel_mats;
    cv::Mat rmat(outHeight, outWidth, CV_32FC1, pred);                    // R
    cv::Mat gmat(outHeight, outWidth, CV_32FC1, pred + channel_step);     // G
    cv::Mat bmat(outHeight, outWidth, CV_32FC1, pred + 2 * channel_step); // B
    channel_mats.push_back(rmat);
    channel_mats.push_back(gmat);
    channel_mats.push_back(bmat);
    cv::Mat mask;
    cv::merge(channel_mats, mask); // CV_32FC3 allocated
    // Clip to [min, max] without looping over every pixel (the equivalent of
    // numpy.clip); cv::threshold with THRESH_TOZERO_INV could also be used.
    mask.setTo(this->min_max[0], mask < this->min_max[0]);
    mask.setTo(this->min_max[1], mask > this->min_max[1]);
    // Map [min, max] back to [0, 255]
    mask = (mask - this->min_max[0]) / (this->min_max[1] - this->min_max[0]);
    mask *= 255.0;
    mask.convertTo(mask, CV_8UC3);
    // cv::cvtColor(mask, mask, cv::COLOR_BGR2RGB);
    return mask;
}
void CodeFormer::detect_video(const std::string& video_path, const std::string& output_path, unsigned int writer_fps)
{
    cv::VideoCapture video_capture(video_path);
    if (!video_capture.isOpened())
    {
        std::cout << "Can not open video: " << video_path << "\n";
        return;
    }
    cv::Size S = cv::Size((int)video_capture.get(cv::CAP_PROP_FRAME_WIDTH),
                          (int)video_capture.get(cv::CAP_PROP_FRAME_HEIGHT));
    // Use the caller-supplied frame rate, falling back to the source video's rate.
    double fps = writer_fps > 0 ? (double)writer_fps : video_capture.get(cv::CAP_PROP_FPS);
    cv::VideoWriter output_video(output_path, cv::VideoWriter::fourcc('m', 'p', '4', 'v'),
                                 fps, S);
    if (!output_video.isOpened())
    {
        std::cout << "Can not open writer: " << output_path << "\n";
        return;
    }
    cv::Mat cv_mat;
    while (video_capture.read(cv_mat))
    {
        cv::Mat cv_dst = detect(cv_mat);
        // detect() returns a fixed outWidth x outHeight image; resize it back to the
        // source frame size, otherwise VideoWriter silently drops the frames.
        cv::resize(cv_dst, cv_dst, S);
        output_video << cv_dst;
    }
    video_capture.release();
    output_video.release();
}
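For the detect-then-restore approach mentioned earlier, a minimal usage sketch of the class above might look like the following. The Haar cascade is only a stand-in face detector (the official pipeline uses RetinaFace or YOLOv5 detectors), and the file names are placeholders:

#include <opencv2/opencv.hpp>
#include "CodeFormer.h"

int main()
{
    CodeFormer model("codeformer.onnx");
    cv::CascadeClassifier detector("haarcascade_frontalface_default.xml");
    cv::Mat img = cv::imread("input.jpg");

    std::vector<cv::Rect> faces;
    detector.detectMultiScale(img, faces, 1.1, 3);
    for (const cv::Rect& box : faces)
    {
        cv::Mat face = img(box).clone();
        cv::Mat restored = model.detect(face);
        // detect() returns a fixed-size face (e.g. 512x512); scale it back to
        // the detected box and paste it over the original image.
        cv::resize(restored, restored, box.size());
        cv::Mat roi = img(box);
        restored.copyTo(roi);
    }
    cv::imwrite("restored.jpg", img);
    return 0;
}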
First, a look at the results on the official sample images.
Super-resolution on a light mosaic works well.
On a heavy mosaic the result is less convincing; the restored face looks somewhat pasted on.
On an image that is already sharp, the result is poor, essentially unusable; this ONNX model only improves faces.
2. Exporting to ONNX and Python inference
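The script below appears to be adapted from the official inference script: it loads the pretrained weights, restores each aligned face with PyTorch, exports the network to codeformer.onnx via torch.onnx.export, reruns the same face through onnxruntime, and prints difference statistics between the two outputs. The commented-out blocks (video input, background upsampling, face detection) are left from the original script for reference.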
import os
import cv2
import argparse
import glob
import torch
import torch.onnx
from torchvision.transforms.functional import normalize
from basicsr.utils import imwrite, img2tensor, tensor2img
from basicsr.utils.download_util import load_file_from_url
from basicsr.utils.misc import gpu_is_available, get_device
from facelib.utils.face_restoration_helper import FaceRestoreHelper
from facelib.utils.misc import is_gray
import onnxruntime as ort
from basicsr.utils.registry import ARCH_REGISTRY
pretrain_model_url = {
    'restoration': 'https://github.com/sczhou/CodeFormer/releases/download/v0.1.0/codeformer.pth',
}
if __name__ == '__main__':
    # device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
    device = get_device()
    parser = argparse.ArgumentParser()

    parser.add_argument('-i', '--input_path', type=str, default='./inputs/whole_imgs',
                        help='Input image, video or folder. Default: inputs/whole_imgs')
    parser.add_argument('-o', '--output_path', type=str, default=None,
                        help='Output folder. Default: results/<input_name>_<w>')
    parser.add_argument('-w', '--fidelity_weight', type=float, default=0.5,
                        help='Balance the quality and fidelity. Default: 0.5')
    parser.add_argument('-s', '--upscale', type=int, default=2,
                        help='The final upsampling scale of the image. Default: 2')
    parser.add_argument('--has_aligned', action='store_true', help='Input are cropped and aligned faces. Default: False')
    parser.add_argument('--only_center_face', action='store_true', help='Only restore the center face. Default: False')
    parser.add_argument('--draw_box', action='store_true', help='Draw the bounding box for the detected faces. Default: False')
    # large det_model: 'YOLOv5l', 'retinaface_resnet50'
    # small det_model: 'YOLOv5n', 'retinaface_mobile0.25'
    parser.add_argument('--detection_model', type=str, default='retinaface_resnet50',
                        help='Face detector. Optional: retinaface_resnet50, retinaface_mobile0.25, YOLOv5l, YOLOv5n, dlib. \
                        Default: retinaface_resnet50')
    parser.add_argument('--bg_upsampler', type=str, default='None', help='Background upsampler. Optional: realesrgan')
    parser.add_argument('--face_upsample', action='store_true', help='Face upsampler after enhancement. Default: False')
    parser.add_argument('--bg_tile', type=int, default=400, help='Tile size for background sampler. Default: 400')
    parser.add_argument('--suffix', type=str, default=None, help='Suffix of the restored faces. Default: None')
    parser.add_argument('--save_video_fps', type=float, default=None, help='Frame rate for saving video. Default: None')
    args = parser.parse_args()

    # ------------------------ input & output ------------------------
    w = args.fidelity_weight
    input_video = False
    if args.input_path.endswith(('jpg', 'jpeg', 'png', 'JPG', 'JPEG', 'PNG')):  # input single img path
        input_img_list = [args.input_path]
        result_root = f'results/test_img_{w}'
    # elif args.input_path.endswith(('mp4', 'mov', 'avi', 'MP4', 'MOV', 'AVI')): # input video path
    #     from basicsr.utils.video_util import VideoReader, VideoWriter
    #     input_img_list = []
    #     vidreader = VideoReader(args.input_path)
    #     image = vidreader.get_frame()
    #     while image is not None:
    #         input_img_list.append(image)
    #         image = vidreader.get_frame()
    #     audio = vidreader.get_audio()
    #     fps = vidreader.get_fps() if args.save_video_fps is None else args.save_video_fps
    #     video_name = os.path.basename(args.input_path)[:-4]
    #     result_root = f'results/{video_name}_{w}'
    #     input_video = True
    #     vidreader.close()
    # else: # input img folder
    #     if args.input_path.endswith('/'): # solve when path ends with /
    #         args.input_path = args.input_path[:-1]
    #     # scan all the jpg and png images
    #     input_img_list = sorted(glob.glob(os.path.join(args.input_path, '*.[jpJP][pnPN]*[gG]')))
    #     result_root = f'results/{os.path.basename(args.input_path)}_{w}'
    else:
        raise ValueError('This trimmed script only accepts a single image path '
                         '(.jpg/.jpeg/.png); folder and video inputs are commented out above.')

    if args.output_path is not None:  # set output path
        result_root = args.output_path

    test_img_num = len(input_img_list)
    if test_img_num == 0:
        raise FileNotFoundError('No input image/video is found...\n'
                                '\tNote that --input_path for video should end with .mp4|.mov|.avi')
    # # ------------------ set up background upsampler ------------------
    # if args.bg_upsampler == 'realesrgan':
    #     bg_upsampler = set_realesrgan()
    # else:
    #     bg_upsampler = None

    # # ------------------ set up face upsampler ------------------
    # if args.face_upsample:
    #     if bg_upsampler is not None:
    #         face_upsampler = bg_upsampler
    #     else:
    #         face_upsampler = set_realesrgan()
    # else:
    #     face_upsampler = None

    # ------------------ set up CodeFormer restorer -------------------
    net = ARCH_REGISTRY.get('CodeFormer')(dim_embd=512, codebook_size=1024, n_head=8, n_layers=9,
                                          connect_list=['32', '64', '128', '256']).to(device)

    # ckpt_path = 'weights/CodeFormer/codeformer.pth'
    ckpt_path = load_file_from_url(url=pretrain_model_url['restoration'],
                                   model_dir='weights/CodeFormer', progress=True, file_name=None)
    checkpoint = torch.load(ckpt_path)['params_ema']
    net.load_state_dict(checkpoint)
    net.eval()

    # # ------------------ set up FaceRestoreHelper -------------------
    # # large det_model: 'YOLOv5l', 'retinaface_resnet50'
    # # small det_model: 'YOLOv5n', 'retinaface_mobile0.25'
    # if not args.has_aligned:
    #     print(f'Face detection model: {args.detection_model}')
    #     # if bg_upsampler is not None:
    #     #     print(f'Background upsampling: True, Face upsampling: {args.face_upsample}')
    #     # else:
    #     #     print(f'Background upsampling: False, Face upsampling: {args.face_upsample}')

    face_helper = FaceRestoreHelper(
        args.upscale,
        face_size=512,
        crop_ratio=(1, 1),
        # det_model = args.detection_model,
        # save_ext='png',
        # use_parse=True,
        # device=device
    )
    # -------------------- start to processing ---------------------
    for i, img_path in enumerate(input_img_list):
        # # clean all the intermediate results to process the next image
        # face_helper.clean_all()

        if isinstance(img_path, str):
            img_name = os.path.basename(img_path)
            basename, ext = os.path.splitext(img_name)
            print(f'[{i+1}/{test_img_num}] Processing: {img_name}')
            img = cv2.imread(img_path, cv2.IMREAD_COLOR)
        # else: # for video processing
        #     basename = str(i).zfill(6)
        #     img_name = f'{video_name}_{basename}' if input_video else basename
        #     print(f'[{i+1}/{test_img_num}] Processing: {img_name}')
        #     img = img_path

        if args.has_aligned:
            # the input faces are already cropped and aligned
            img = cv2.resize(img, (512, 512), interpolation=cv2.INTER_LINEAR)
            # face_helper.is_gray = is_gray(img, threshold=10)
            # if face_helper.is_gray:
            #     print('Grayscale input: True')
            face_helper.cropped_faces = [img]
        # else:
        #     face_helper.read_image(img)
        #     # get face landmarks for each face
        #     num_det_faces = face_helper.get_face_landmarks_5(
        #         only_center_face=args.only_center_face, resize=640, eye_dist_threshold=5)
        #     print(f'\tdetect {num_det_faces} faces')
        #     # align and warp each face
        #     face_helper.align_warp_face()
        else:
            raise ValueError('This trimmed script only supports --has_aligned inputs; '
                             'the face-detection branch is commented out above.')

        # face restoration for each cropped face
        for idx, cropped_face in enumerate(face_helper.cropped_faces):
            # prepare data
            cropped_face_t = img2tensor(cropped_face / 255., bgr2rgb=True, float32=True)
            normalize(cropped_face_t, (0.5, 0.5, 0.5), (0.5, 0.5, 0.5), inplace=True)
            cropped_face_t = cropped_face_t.unsqueeze(0).to(device)

            try:
                with torch.no_grad():
                    # output = net(cropped_face_t, w=w, adain=True)[0]
                    # output = net(cropped_face_t)[0]
                    output = net(cropped_face_t, w)[0]
                    restored_face = tensor2img(output, rgb2bgr=True, min_max=(-1, 1))
                del output
                # torch.cuda.empty_cache()
            except Exception as error:
                print(f'\tFailed inference for CodeFormer: {error}')
                restored_face = tensor2img(cropped_face_t, rgb2bgr=True, min_max=(-1, 1))

            # now, export the "net" codeformer to onnx
            print("Exporting CodeFormer to ONNX...")
            torch.onnx.export(net,
                              # (cropped_face_t,),
                              (cropped_face_t, w),
                              "codeformer.onnx",
                              # verbose=True,
                              export_params=True,
                              opset_version=11,
                              do_constant_folding=True,
                              input_names=['x', 'w'],
                              output_names=['y'],
                              )
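            # Note: this export fixes the input shape to the 1x3x512x512 aligned
            # face and feeds the fidelity weight w as a second graph input named
            # 'w'. If variable input sizes were ever needed, torch.onnx.export's
            # optional dynamic_axes argument could be added here (my suggestion,
            # not something the original script does).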
            # now, try to load the onnx model and run it
            print("Loading CodeFormer ONNX...")
            ort_session = ort.InferenceSession("codeformer.onnx", providers=['CPUExecutionProvider'])
            print("Running CodeFormer ONNX...")
            ort_inputs = {
                ort_session.get_inputs()[0].name: cropped_face_t.cpu().numpy(),
                ort_session.get_inputs()[1].name: torch.tensor(w).double().cpu().numpy(),
            }
            ort_outs = ort_session.run(None, ort_inputs)
            restored_face_onnx = tensor2img(torch.from_numpy(ort_outs[0]), rgb2bgr=True, min_max=(-1, 1))

            restored_face_onnx = restored_face_onnx.astype('uint8')
            restored_face = restored_face.astype('uint8')

            print("Comparing CodeFormer outputs...")
            # see how similar the outputs are: flatten and then compute all the differences
            diff = (restored_face_onnx.astype('float32') - restored_face.astype('float32')).flatten()
            # calculate min, max, mean, and std
            min_diff = diff.min()
            max_diff = diff.max()
            mean_diff = diff.mean()
            std_diff = diff.std()
            print(f"Min diff: {min_diff}, Max diff: {max_diff}, Mean diff: {mean_diff}, Std diff: {std_diff}")

            # face_helper.add_restored_face(restored_face, cropped_face)
            face_helper.add_restored_face(restored_face_onnx, cropped_face)

        # # paste_back
        # if not args.has_aligned:
        #     # upsample the background
        #     if bg_upsampler is not None:
        #         # Now only support RealESRGAN for upsampling background
        #         bg_img = bg_upsampler.enhance(img, outscale=args.upscale)[0]
        #     else:
        #         bg_img = None
        #     face_helper.get_inverse_affine(None)
        #     # paste each restored face to the input image
        #     if args.face_upsample and face_upsampler is not None:
        #         restored_img = face_helper.paste_faces_to_input_image(upsample_img=bg_img, draw_box=args.draw_box, face_upsampler=face_upsampler)
        #     else:
        #         restored_img = face_helper.paste_faces_to_input_image(upsample_img=bg_img, draw_box=args.draw_box)

        # save faces
        for idx, (cropped_face, restored_face) in enumerate(zip(face_helper.cropped_faces, face_helper.restored_faces)):
            # save cropped face
            if not args.has_aligned:
                save_crop_path = os.path.join(result_root, 'cropped_faces', f'{basename}_{idx:02d}.png')
                imwrite(cropped_face, save_crop_path)
            # save restored face
            if args.has_aligned:
                save_face_name = f'{basename}.png'
            else:
                save_face_name = f'{basename}_{idx:02d}.png'
            if args.suffix is not None:
                save_face_name = f'{save_face_name[:-4]}_{args.suffix}.png'
            save_restore_path = os.path.join(result_root, 'restored_faces', save_face_name)
            imwrite(restored_face, save_restore_path)

        # # save restored img
        # if not args.has_aligned and restored_img is not None:
        #     if args.suffix is not None:
        #         basename = f'{basename}_{args.suffix}'
        #     save_restore_path = os.path.join(result_root, 'final_results', f'{basename}.png')
        #     imwrite(restored_img, save_restore_path)

    # # save enhanced video
    # if input_video:
    #     print('Video Saving...')
    #     # load images
    #     video_frames = []
    #     img_list = sorted(glob.glob(os.path.join(result_root, 'final_results', '*.[jp][pn]g')))
    #     for img_path in img_list:
    #         img = cv2.imread(img_path)
    #         video_frames.append(img)
    #     # write images to video
    #     height, width = video_frames[0].shape[:2]
    #     if args.suffix is not None:
    #         video_name = f'{video_name}_{args.suffix}.png'
    #     save_restore_path = os.path.join(result_root, f'{video_name}.mp4')
    #     vidwriter = VideoWriter(save_restore_path, height, width, fps, audio)
    #     for f in video_frames:
    #         vidwriter.write_frame(f)
    #     vidwriter.close()

    print(f'\nAll results are saved in {result_root}')
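Since the script imports basicsr and facelib, it needs to run from inside the CodeFormer repository. A hypothetical invocation (the script name export_codeformer_onnx.py is a placeholder, and the sample path assumes the repository's bundled aligned-face inputs) might look like:

python export_codeformer_onnx.py --input_path inputs/cropped_faces/0342_00.png --has_aligned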
That concludes this walkthrough of CodeFormer, the face restoration and de-mosaicking algorithm, covering both C++ and Python model deployment.