Paper: https://arxiv.org/pdf/2202.03046
I haven't really gone through the paper yet; I'll come back and fill in the underlying theory when I have time…
This work uses FaceShifter as its baseline and proposes the following (rough sketches of a few of these ideas follow the list):
- a new eye-based loss;
- a new face-mask smoothing method;
- a new video face-swapping pipeline;
- a new stabilization technique that reduces facial jitter between adjacent frames and in the super-resolution (SR) stage.
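Since I haven't studied the paper in detail, the snippet below is only a rough, hypothetical illustration of what three of these ideas could look like in PyTorch (an eye-region reconstruction loss, Gaussian mask smoothing via kornia, and EMA landmark stabilization). All function names and parameters are my own placeholders, not the paper's or the repo's actual code:

# Hypothetical sketches only -- illustrations of the general ideas, not GHOST's real implementation.
import torch
import kornia  # kornia==0.5.4 is installed by the Dockerfile below

def eye_region_loss(swapped, target, eye_mask):
    # L2 reconstruction loss restricted to the eye region.
    # swapped, target: (B, 3, H, W) face crops; eye_mask: (B, 1, H, W) soft mask
    # that is 1 inside the eyes (e.g. rendered from facial landmarks).
    diff = (swapped - target) * eye_mask
    return diff.pow(2).sum() / eye_mask.sum().clamp(min=1.0)

def smooth_face_mask(mask, kernel=(15, 15), sigma=(5.0, 5.0)):
    # Soften a hard segmentation mask with a Gaussian blur so the paste-back
    # blend has no visible seams. mask: (B, 1, H, W) in {0, 1}.
    return kornia.filters.gaussian_blur2d(mask, kernel, sigma)

def stabilize_landmarks(prev_smoothed, current, alpha=0.8):
    # Exponential moving average over per-frame landmark tensors to damp
    # jitter between adjacent frames; larger alpha = heavier smoothing.
    if prev_smoothed is None:
        return current
    return alpha * prev_smoothed + (1.0 - alpha) * current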
Git: https://github.com/ai-forever/ghost
It supports single-face swapping in a video, multi-face swapping in a video, image-to-image face swapping, and training your own face-swap model.
My Dockerfile is as follows:
FROM pytorch/pytorch:1.6.0-cuda10.1-cudnn7-devel
# disable the NVIDIA CUDA apt source (it tends to break apt-get update in this old base image) and point the Ubuntu mirrors at Aliyun
RUN echo "" > /etc/apt/sources.list.d/cuda.list
RUN sed -i "s@/archive.ubuntu.com/@/mirrors.aliyun.com/@g" /etc/apt/sources.list
RUN sed -i "s@/security.ubuntu.com/@/mirrors.aliyun.com/@g" /etc/apt/sources.list
RUN apt-get update --fix-missing && apt-get install -y fontconfig --fix-missing
RUN apt-get install -y vim
RUN apt-get install -y python3.7 python3-pip
RUN ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime && echo "Asia/Shanghai" > /etc/timezone
RUN pip install -i https://pypi.tuna.tsinghua.edu.cn/simple scipy matplotlib seaborn h5py sklearn numpy==1.20.3 pandas==1.3.5
RUN pip install -i https://pypi.tuna.tsinghua.edu.cn/simple opencv-python
RUN apt-get install libgl1-mesa-glx -y
RUN pip install -i https://pypi.tuna.tsinghua.edu.cn/simple onnx==1.9.0 onnxruntime-gpu==1.4.0
RUN pip install -i https://pypi.tuna.tsinghua.edu.cn/simple mxnet-cu101mkl
RUN pip install -i https://pypi.tuna.tsinghua.edu.cn/simple scikit-image insightface==0.2.1 requests==2.25.1 kornia==0.5.4 dill wandb
RUN pip install -i https://pypi.tuna.tsinghua.edu.cn/simple protobuf==3.19.0
# note: the PyPI "ffmpeg" package does not provide the ffmpeg binary; the apt package on the next line supplies the actual ffmpeg used for video processing
RUN pip install -i https://pypi.tuna.tsinghua.edu.cn/simple ffmpeg
RUN apt-get install ffmpeg -y
WORKDIR /home
# docker build -t wgs-torch/faceswap:ghost -f ./dk/Dockerfile .
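After building the image, I like to run a quick sanity check inside the container to confirm that PyTorch sees the GPU and the key packages import cleanly; nothing below is specific to GHOST:

# Run inside the wgs-torch/faceswap:ghost container, e.g. from a small script or python3 -c.
import torch
import onnxruntime
import insightface
import cv2

print("torch", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("onnxruntime device:", onnxruntime.get_device())  # expect "GPU" with onnxruntime-gpu
print("insightface", insightface.__version__)
print("opencv", cv2.__version__)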
The launch script is as follows:
#!/bin/bash
today=$(date -d "now" +%Y-%m-%d)
yesterday=$(date -d "yesterday" +%Y-%m-%d)
cd /data/wgs/face_swap/ghost
## single face in the video
#SOURCE_PATHA="\
# --source_paths ./examples/images/p1.jpg \
# "
#
#VIDEO_PATH="\
# --target_video ./examples/videos/inVideo1.mp4 \
# --out_video_name ./examples/results/o1_1_1.mp4 \
# "
# multiple faces in the video: the i-th image in --source_paths replaces the face shown in the i-th crop of --target_faces_paths
SOURCE_PATHA="\
--source_paths ./examples/images/p1.jpg ./examples/images/p2.jpg \
--target_faces_paths ./examples/images/tgt1.png ./examples/images/tgt2.png \
"
VIDEO_PATH="\
--target_video ./examples/videos/dirtydancing.mp4 \
--out_video_name ./examples/results/o_multi.mp4 \
"
options=" \
$SOURCE_PATHA \
$VIDEO_PATH \
"
docker run -d --gpus '"device=1"' \
--rm -it --name face_swap \
--shm-size 15G \
-v /data/wgs/face_swap:/home \
wgs-torch/faceswap:ghost \
sh -c "python3 /home/ghost/inference.py $options 1>>/home/log/ghost.log 2>>/home/log/ghost.err"
# nohup sh /data/wgs/face_swap/dk/ghost.sh &
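Once the container finishes, a quick way to verify that a full-length result was written is to open the output with OpenCV. The path below is an assumption based on --out_video_name in the script; adjust it to wherever the file actually landed under the mounted /data/wgs/face_swap directory:

# Sanity-check the swapped video produced by inference.py.
import cv2

out_path = "/data/wgs/face_swap/examples/results/o_multi.mp4"  # adjust to your actual output location
cap = cv2.VideoCapture(out_path)
frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
fps = cap.get(cv2.CAP_PROP_FPS)
print(f"frames: {frames}, fps: {fps:.2f}, duration: {frames / max(fps, 1e-6):.1f}s")
cap.release()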
That wraps up this article on AI video face swapping with GHOST.