Contents
I. Introduction
II. Service Setup
1. Service Configuration
2. Service Code
3. Pitfalls
III. Using the Service
1. Starting the Service
2. Calling the Service
3. Results
IV. Summary
I. Introduction
In the previous article we covered how to set up the environment for the latest Bert-VITS2 with conda and train a custom voice; after 1000 epochs of training we obtained a custom voice model. Building on that generator model, this article shows how to deploy a speech-inference service and obtain audio for a custom character.
Tips:
Training pipeline: Bert-VITS2 custom voice training
II. Service Setup
1. Service Configuration
Open the configuration file in the project root and edit the relevant settings:
vim config.yml
The main changes are:
- port: the port the service listens on; take care that it does not clash with other services
- models: inside your generated model directory, G_xxxx.pth is the generator; feel free to try checkpoints from different epochs
- config: settings are read from ./configs/config.json
- language: the author uses Chinese (ZH); change it if you trained on another language
server:
  # Port to listen on
  port: 9876
  # Default device for models (note: this setting is not implemented yet)
  device: "cuda"
  # Configuration for every model to load. You may list several models, or none
  # at all and load them manually once the web page is up.
  # To load none, delete the sample entries below and set models to an empty
  # list, i.e. models: [ ]
  # Note: every model must have valid model and config paths; an empty path
  # causes a load error. Alternatively, leave models empty and fill them in
  # from the web page after it loads.
  models:
    - # Path to the model
      model: "data/models/G_15000.pth"
      # Path to the model's config.json
      config: "configs/config.json"
      # Device for this model; overrides the default if set
      device: "cuda"
      # Default language for this model
      language: "ZH"
      # Per-speaker default parameters.
      # Not every speaker needs an entry; missing ones use the defaults.
      # Can be left empty for now: per-speaker configuration is not implemented yet.
      speakers:
        - speaker: "科比"
          sdp_ratio: 0.2
          noise_scale: 0.6
          noise_scale_w: 0.8
          length_scale: 1
        - speaker: "五條悟"
          sdp_ratio: 0.3
          noise_scale: 0.7
          noise_scale_w: 0.8
          length_scale: 0.5
        - speaker: "安倍晉三"
          sdp_ratio: 0.2
          noise_scale: 0.6
          noise_scale_w: 0.8
          length_scale: 1.2
    - # Path to the model
      model: "data/models/G_15000.pth"
      # Path to the model's config.json
      config: "configs/config.json"
      # Device for this model; overrides the default if set
      device: "cuda"
      # Default language for this model
      language: "ZH"
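Before starting the service, it can help to sanity-check that config.yml parses and that every model/config path actually exists. A minimal sketch, assuming PyYAML is installed:
import os
import yaml

with open("config.yml", "r", encoding="utf-8") as f:
    cfg = yaml.safe_load(f)

server = cfg["server"]
print("port:", server["port"])
for m in server.get("models", []):
    # Every entry must point at an existing generator and config file,
    # otherwise the server fails at load time.
    for key in ("model", "config"):
        path = m[key]
        print(f"{key}: {path} -> {'OK' if os.path.isfile(path) else 'MISSING'}")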
2. Service Code
Create the service code:
vim server_fastapi.py
"""
api服務(wù) 多版本多模型 fastapi實(shí)現(xiàn)
"""
import logging
import gc
import random
from pydantic import BaseModel
import gradio
import numpy as np
import utils
from fastapi import FastAPI, Query, Request
from fastapi.responses import Response, FileResponse
from fastapi.staticfiles import StaticFiles
from io import BytesIO
from scipy.io import wavfile
import uvicorn
import torch
import webbrowser
import psutil
import GPUtil
from typing import Dict, Optional, List, Set
import os
from tools.log import logger
from urllib.parse import unquote
from infer import infer, get_net_g, latest_version
import tools.translate as trans
from re_matching import cut_sent
from config import config
os.environ["TOKENIZERS_PARALLELISM"] = "false"
class Model:
    """Wrapper around a single loaded model."""

    def __init__(self, config_path: str, model_path: str, device: str, language: str):
        self.config_path: str = os.path.normpath(config_path)
        self.model_path: str = os.path.normpath(model_path)
        self.device: str = device
        self.language: str = language
        self.hps = utils.get_hparams_from_file(config_path)
        self.spk2id: Dict[str, int] = self.hps.data.spk2id  # speaker -> id mapping
        self.id2spk: Dict[int, str] = dict()  # id -> speaker mapping
        for speaker, speaker_id in self.hps.data.spk2id.items():
            self.id2spk[speaker_id] = speaker
        self.version: str = (
            self.hps.version if hasattr(self.hps, "version") else latest_version
        )
        self.net_g = get_net_g(
            model_path=model_path,
            version=self.version,
            device=device,
            hps=self.hps,
        )

    def to_dict(self) -> Dict[str, any]:
        return {
            "config_path": self.config_path,
            "model_path": self.model_path,
            "device": self.device,
            "language": self.language,
            "spk2id": self.spk2id,
            "id2spk": self.id2spk,
            "version": self.version,
        }
class Models:
    def __init__(self):
        self.models: Dict[int, Model] = dict()
        self.num = 0
        # spk_info[speaker name][model id] = speaker id
        self.spk_info: Dict[str, Dict[int, int]] = dict()
        self.path2ids: Dict[str, Set[int]] = dict()  # model ids registered for each path

    def init_model(
        self, config_path: str, model_path: str, device: str, language: str
    ) -> int:
        """
        Initialize and register a model.
        :param config_path: path to the model's config.json
        :param model_path: path to the model
        :param device: device used for inference
        :param language: default inference language
        """
        # If a model at this path is already loaded, reuse it; otherwise initialize it.
        model_path = os.path.realpath(model_path)
        if model_path not in self.path2ids.keys():
            self.path2ids[model_path] = {self.num}
            self.models[self.num] = Model(
                config_path=config_path,
                model_path=model_path,
                device=device,
                language=language,
            )
            logger.success(
                f"Added model {model_path} with config {os.path.realpath(config_path)}"
            )
        else:
            # Reuse an existing id registered for this path
            m_id = next(iter(self.path2ids[model_path]))
            self.models[self.num] = self.models[m_id]
            self.path2ids[model_path].add(self.num)
            logger.success("Model already loaded; added a reference to it.")
        # Register speaker info
        for speaker, speaker_id in self.models[self.num].spk2id.items():
            if speaker not in self.spk_info.keys():
                self.spk_info[speaker] = {self.num: speaker_id}
            else:
                self.spk_info[speaker][self.num] = speaker_id
        # Bump the counter
        self.num += 1
        return self.num - 1

    def del_model(self, index: int) -> Optional[int]:
        """Delete the model with the given id; return None if it does not exist."""
        if index not in self.models.keys():
            return None
        # Remove speaker info
        for speaker, speaker_id in self.models[index].spk2id.items():
            self.spk_info[speaker].pop(index)
            if len(self.spk_info[speaker]) == 0:
                # If every model for this speaker is gone, drop the speaker entry
                self.spk_info.pop(speaker)
        # Remove path info
        model_path = os.path.realpath(self.models[index].model_path)
        self.path2ids[model_path].remove(index)
        if len(self.path2ids[model_path]) == 0:
            self.path2ids.pop(model_path)
            logger.success(f"Deleted model {model_path}, id = {index}")
        else:
            logger.success(f"Deleted model reference {model_path}, id = {index}")
        # Drop the model and free memory
        self.models.pop(index)
        gc.collect()
        if torch.cuda.is_available():
            torch.cuda.empty_cache()
        return index

    def get_models(self):
        """Return all loaded models."""
        return self.models
if __name__ == "__main__":
    app = FastAPI()
    app.logger = logger
    # Mount static files (the original listed files with is_dir(); fixed to is_file())
    StaticDir: str = "./Web"
    dirs = [fir.name for fir in os.scandir(StaticDir) if fir.is_dir()]
    files = [fir.name for fir in os.scandir(StaticDir) if fir.is_file()]
    for dirName in dirs:
        app.mount(
            f"/{dirName}",
            StaticFiles(directory=f"./{StaticDir}/{dirName}"),
            name=dirName,
        )
    loaded_models = Models()
    # Load the models listed in config.yml
    models_info = config.server_config.models
    for model_info in models_info:
        loaded_models.init_model(
            config_path=model_info["config"],
            model_path=model_info["model"],
            device=model_info["device"],
            language=model_info["language"],
        )
    @app.get("/")
    async def index():
        return FileResponse("./Web/index.html")

    class Text(BaseModel):
        text: str
    @app.post("/voice")
    def voice(
        request: Request,  # injected automatically by FastAPI
        text: Text,
        model_id: int = Query(..., description="model id"),
        speaker_name: str = Query(
            None, description="speaker name"
        ),  # provide either speaker_name or speaker_id
        speaker_id: int = Query(None, description="speaker id, alternative to speaker_name"),
        sdp_ratio: float = Query(0.2, description="SDP/DP mixing ratio"),
        noise: float = Query(0.2, description="emotion noise scale"),
        noisew: float = Query(0.9, description="phoneme length noise"),
        length: float = Query(1, description="speech speed"),
        language: str = Query(None, description="language; defaults to the model's language"),
        auto_translate: bool = Query(False, description="auto translate"),
        auto_split: bool = Query(False, description="auto split sentences"),
    ):
        """Speech synthesis endpoint."""
        text = text.text
        logger.info(
            f"{request.client.host}:{request.client.port}/voice {unquote(str(request.query_params))} text={text}"
        )
        # Check that the model is loaded
        if model_id not in loaded_models.models.keys():
            return {"status": 10, "detail": f"model_id={model_id} is not loaded"}
        # Check that a speaker was provided
        if speaker_name is None and speaker_id is None:
            return {"status": 11, "detail": "please provide speaker_name or speaker_id"}
        elif speaker_name is None:
            # Check that speaker_id exists
            if speaker_id not in loaded_models.models[model_id].id2spk.keys():
                return {"status": 12, "detail": f"speaker_id={speaker_id} does not exist"}
            speaker_name = loaded_models.models[model_id].id2spk[speaker_id]
        # Check that speaker_name exists
        if speaker_name not in loaded_models.models[model_id].spk2id.keys():
            return {"status": 13, "detail": f"speaker_name={speaker_name} does not exist"}
        if language is None:
            language = loaded_models.models[model_id].language
        if auto_translate:
            text = trans.translate(Sentence=text, to_Language=language.lower())
        if not auto_split:
            with torch.no_grad():
                audio = infer(
                    text=text,
                    emotion=None,
                    sdp_ratio=sdp_ratio,
                    noise_scale=noise,
                    noise_scale_w=noisew,
                    length_scale=length,
                    sid=speaker_name,
                    language=language,
                    hps=loaded_models.models[model_id].hps,
                    net_g=loaded_models.models[model_id].net_g,
                    device=loaded_models.models[model_id].device,
                )
        else:
            texts = cut_sent(text)
            audios = []
            with torch.no_grad():
                for t in texts:
                    audios.append(
                        infer(
                            text=t,
                            emotion=None,
                            sdp_ratio=sdp_ratio,
                            noise_scale=noise,
                            noise_scale_w=noisew,
                            length_scale=length,
                            sid=speaker_name,
                            language=language,
                            hps=loaded_models.models[model_id].hps,
                            net_g=loaded_models.models[model_id].net_g,
                            device=loaded_models.models[model_id].device,
                        )
                    )
                    # 0.3 s of silence between segments
                    audios.append(np.zeros(int(44100 * 0.3)))
                audio = np.concatenate(audios)
        audio = gradio.processing_utils.convert_to_16_bit_wav(audio)
        wavContent = BytesIO()
        wavfile.write(
            wavContent, loaded_models.models[model_id].hps.data.sampling_rate, audio
        )
        response = Response(content=wavContent.getvalue(), media_type="audio/wav")
        return response
    @app.get("/voice")
    def voice(
        request: Request,  # injected automatically by FastAPI
        text: str = Query(..., description="input text"),
        model_id: int = Query(..., description="model id"),
        speaker_name: str = Query(
            None, description="speaker name"
        ),  # provide either speaker_name or speaker_id
        speaker_id: int = Query(None, description="speaker id, alternative to speaker_name"),
        sdp_ratio: float = Query(0.2, description="SDP/DP mixing ratio"),
        noise: float = Query(0.2, description="emotion noise scale"),
        noisew: float = Query(0.9, description="phoneme length noise"),
        length: float = Query(1, description="speech speed"),
        language: str = Query(None, description="language; defaults to the model's language"),
        auto_translate: bool = Query(False, description="auto translate"),
        auto_split: bool = Query(False, description="auto split sentences"),
    ):
        """Speech synthesis endpoint."""
        logger.info(
            f"{request.client.host}:{request.client.port}/voice {unquote(str(request.query_params))}"
        )
        # Check that the model is loaded
        if model_id not in loaded_models.models.keys():
            return {"status": 10, "detail": f"model_id={model_id} is not loaded"}
        # Check that a speaker was provided
        if speaker_name is None and speaker_id is None:
            return {"status": 11, "detail": "please provide speaker_name or speaker_id"}
        elif speaker_name is None:
            # Check that speaker_id exists
            if speaker_id not in loaded_models.models[model_id].id2spk.keys():
                return {"status": 12, "detail": f"speaker_id={speaker_id} does not exist"}
            speaker_name = loaded_models.models[model_id].id2spk[speaker_id]
        # Check that speaker_name exists
        if speaker_name not in loaded_models.models[model_id].spk2id.keys():
            return {"status": 13, "detail": f"speaker_name={speaker_name} does not exist"}
        if language is None:
            language = loaded_models.models[model_id].language
        if auto_translate:
            text = trans.translate(Sentence=text, to_Language=language.lower())
        if not auto_split:
            with torch.no_grad():
                audio = infer(
                    text=text,
                    emotion=None,
                    sdp_ratio=sdp_ratio,
                    noise_scale=noise,
                    noise_scale_w=noisew,
                    length_scale=length,
                    sid=speaker_name,
                    language=language,
                    hps=loaded_models.models[model_id].hps,
                    net_g=loaded_models.models[model_id].net_g,
                    device=loaded_models.models[model_id].device,
                )
        else:
            texts = cut_sent(text)
            audios = []
            with torch.no_grad():
                for t in texts:
                    audios.append(
                        infer(
                            text=t,
                            emotion=None,
                            sdp_ratio=sdp_ratio,
                            noise_scale=noise,
                            noise_scale_w=noisew,
                            length_scale=length,
                            sid=speaker_name,
                            language=language,
                            hps=loaded_models.models[model_id].hps,
                            net_g=loaded_models.models[model_id].net_g,
                            device=loaded_models.models[model_id].device,
                        )
                    )
                    # 0.3 s of silence between segments
                    audios.append(np.zeros(int(44100 * 0.3)))
                audio = np.concatenate(audios)
        audio = gradio.processing_utils.convert_to_16_bit_wav(audio)
        wavContent = BytesIO()
        wavfile.write(
            wavContent, loaded_models.models[model_id].hps.data.sampling_rate, audio
        )
        response = Response(content=wavContent.getvalue(), media_type="audio/wav")
        return response
    @app.get("/models/info")
    def get_loaded_models_info(request: Request):
        """Return information about the loaded models."""
        result: Dict[str, Dict] = dict()
        for key, model in loaded_models.models.items():
            result[str(key)] = model.to_dict()
        return result

    @app.get("/models/delete")
    def delete_model(
        request: Request, model_id: int = Query(..., description="id of the model to delete")
    ):
        """Delete the given model."""
        logger.info(
            f"{request.client.host}:{request.client.port}/models/delete {unquote(str(request.query_params))}"
        )
        result = loaded_models.del_model(model_id)
        if result is None:
            return {"status": 14, "detail": f"model {model_id} does not exist; delete failed"}
        return {"status": 0, "detail": "deleted"}

    @app.get("/models/add")
    def add_model(
        request: Request,
        model_path: str = Query(..., description="path of the model to add"),
        config_path: str = Query(
            None,
            description="path to the model's config file; if omitted, ./config.json or ../config.json is used",
        ),
        device: str = Query("cuda", description="device used for inference"),
        language: str = Query("ZH", description="default language of the model"),
    ):
        """Add a model. The same path may be added repeatedly without using extra memory."""
        logger.info(
            f"{request.client.host}:{request.client.port}/models/add {unquote(str(request.query_params))}"
        )
        if config_path is None:
            model_dir = os.path.dirname(model_path)
            if os.path.isfile(os.path.join(model_dir, "config.json")):
                config_path = os.path.join(model_dir, "config.json")
            elif os.path.isfile(os.path.join(model_dir, "../config.json")):
                config_path = os.path.join(model_dir, "../config.json")
            else:
                return {
                    "status": 15,
                    "detail": "no config path was given, and no config.json exists in ./ or ../ of the model.",
                }
        try:
            model_id = loaded_models.init_model(
                config_path=config_path,
                model_path=model_path,
                device=device,
                language=language,
            )
        except Exception:
            logging.exception("failed to load model")
            return {
                "status": 16,
                "detail": "failed to load model; see the log for details",
            }
        return {
            "status": 0,
            "detail": "model added",
            "Data": {
                "model_id": model_id,
                "model_info": loaded_models.models[model_id].to_dict(),
            },
        }
    def _get_all_models(root_dir: str = "Data", only_unloaded: bool = False):
        """Search root_dir for all available models."""
        result: Dict[str, List[str]] = dict()
        files = os.listdir(root_dir) + ["."]
        for file in files:
            if os.path.isdir(os.path.join(root_dir, file)):
                sub_dir = os.path.join(root_dir, file)
                # Search both "sub_dir" and "sub_dir/models"
                result[file] = list()
                sub_files = os.listdir(sub_dir)
                model_files = []
                for sub_file in sub_files:
                    relpath = os.path.realpath(os.path.join(sub_dir, sub_file))
                    if only_unloaded and relpath in loaded_models.path2ids.keys():
                        continue
                    if sub_file.endswith(".pth") and sub_file.startswith("G_"):
                        if os.path.isfile(relpath):
                            model_files.append(sub_file)
                # Sort model files by step count
                model_files = sorted(
                    model_files,
                    key=lambda pth: int(pth.lstrip("G_").rstrip(".pth"))
                    if pth.lstrip("G_").rstrip(".pth").isdigit()
                    else 10**10,
                )
                result[file] = model_files
                models_dir = os.path.join(sub_dir, "models")
                model_files = []
                if os.path.isdir(models_dir):
                    sub_files = os.listdir(models_dir)
                    for sub_file in sub_files:
                        relpath = os.path.realpath(os.path.join(models_dir, sub_file))
                        if only_unloaded and relpath in loaded_models.path2ids.keys():
                            continue
                        if sub_file.endswith(".pth") and sub_file.startswith("G_"):
                            if os.path.isfile(os.path.join(models_dir, sub_file)):
                                model_files.append(f"models/{sub_file}")
                    # Sort model files by step count
                    model_files = sorted(
                        model_files,
                        key=lambda pth: int(pth.lstrip("models/G_").rstrip(".pth"))
                        if pth.lstrip("models/G_").rstrip(".pth").isdigit()
                        else 10**10,
                    )
                    result[file] += model_files
                if len(result[file]) == 0:
                    result.pop(file)
        return result
    @app.get("/models/get_unloaded")
    def get_unloaded_models_info(
        request: Request, root_dir: str = Query("Data", description="search root directory")
    ):
        """List models that are not loaded yet."""
        logger.info(
            f"{request.client.host}:{request.client.port}/models/get_unloaded {unquote(str(request.query_params))}"
        )
        return _get_all_models(root_dir, only_unloaded=True)

    @app.get("/models/get_local")
    def get_local_models_info(
        request: Request, root_dir: str = Query("Data", description="search root directory")
    ):
        """List all local models."""
        logger.info(
            f"{request.client.host}:{request.client.port}/models/get_local {unquote(str(request.query_params))}"
        )
        return _get_all_models(root_dir, only_unloaded=False)
    @app.get("/status")
    def get_status():
        """Return host machine status."""
        cpu_percent = psutil.cpu_percent(interval=1)
        memory_info = psutil.virtual_memory()
        memory_total = memory_info.total
        memory_available = memory_info.available
        memory_used = memory_info.used
        memory_percent = memory_info.percent
        gpuInfo = []
        devices = ["cpu"]
        for i in range(torch.cuda.device_count()):
            devices.append(f"cuda:{i}")
        gpus = GPUtil.getGPUs()
        for gpu in gpus:
            gpuInfo.append(
                {
                    "gpu_id": gpu.id,
                    "gpu_load": gpu.load,
                    "gpu_memory": {
                        "total": gpu.memoryTotal,
                        "used": gpu.memoryUsed,
                        "free": gpu.memoryFree,
                    },
                }
            )
        return {
            "devices": devices,
            "cpu_percent": cpu_percent,
            "memory_total": memory_total,
            "memory_available": memory_available,
            "memory_used": memory_used,
            "memory_percent": memory_percent,
            "gpu": gpuInfo,
        }
    @app.get("/tools/translate")
    def translate(
        request: Request,
        texts: str = Query(..., description="text to translate"),
        to_language: str = Query(..., description="target language"),
    ):
        """Translation helper."""
        logger.info(
            f"{request.client.host}:{request.client.port}/tools/translate {unquote(str(request.query_params))}"
        )
        return {"texts": trans.translate(Sentence=texts, to_Language=to_language)}
    all_examples: Dict[str, Dict[str, List]] = dict()  # example cache

    @app.get("/tools/random_example")
    def random_example(
        request: Request,
        language: str = Query(None, description="language; random if omitted"),
        root_dir: str = Query("Data", description="search root directory"),
    ):
        """
        Return a random audio clip plus its text, useful for comparison;
        the clip is picked at random from the local data directory.
        """
        logger.info(
            f"{request.client.host}:{request.client.port}/tools/random_example {unquote(str(request.query_params))}"
        )
        global all_examples
        # Initialize the cache
        if root_dir not in all_examples.keys():
            all_examples[root_dir] = {"ZH": [], "JP": [], "EN": []}
            examples = all_examples[root_dir]
            # Search the project's Data directory for train.list / val.list
            for root, directories, _files in os.walk(root_dir):
                for file in _files:
                    if file in ["train.list", "val.list"]:
                        with open(
                            os.path.join(root, file), mode="r", encoding="utf-8"
                        ) as f:
                            lines = f.readlines()
                        for line in lines:
                            data = line.split("|")
                            if len(data) != 7:
                                continue
                            # Keep entries whose audio exists and whose language is ZH/EN/JP
                            if os.path.isfile(data[0]) and data[2] in [
                                "ZH",
                                "JP",
                                "EN",
                            ]:
                                examples[data[2]].append(
                                    {
                                        "text": data[3],
                                        "audio": data[0],
                                        "speaker": data[1],
                                    }
                                )
        examples = all_examples[root_dir]
        if language is None:
            if len(examples["ZH"]) + len(examples["JP"]) + len(examples["EN"]) == 0:
                return {"status": 17, "detail": "no example data loaded"}
            else:
                # Pick one at random
                rand_num = random.randint(
                    0,
                    len(examples["ZH"]) + len(examples["JP"]) + len(examples["EN"]) - 1,
                )
                # ZH
                if rand_num < len(examples["ZH"]):
                    return {"status": 0, "Data": examples["ZH"][rand_num]}
                # JP
                if rand_num < len(examples["ZH"]) + len(examples["JP"]):
                    return {
                        "status": 0,
                        "Data": examples["JP"][rand_num - len(examples["ZH"])],
                    }
                # EN
                return {
                    "status": 0,
                    "Data": examples["EN"][
                        rand_num - len(examples["ZH"]) - len(examples["JP"])
                    ],
                }
        else:
            if len(examples[language]) == 0:
                return {"status": 17, "detail": f"no {language} data loaded"}
            return {
                "status": 0,
                "Data": examples[language][
                    random.randint(0, len(examples[language]) - 1)
                ],
            }
    @app.get("/tools/get_audio")
    def get_audio(request: Request, path: str = Query(..., description="local audio path")):
        logger.info(
            f"{request.client.host}:{request.client.port}/tools/get_audio {unquote(str(request.query_params))}"
        )
        if not os.path.isfile(path):
            return {"status": 18, "detail": "the requested audio does not exist"}
        if not path.endswith(".wav"):
            return {"status": 19, "detail": "not a wav file"}
        return FileResponse(path=path)

    server_ip = "1.1.1.1"  # replace with your server's IP
    logger.warning("This is a local service; do not expose the port to the public internet")
    logger.info(f"API docs at http://{server_ip}:{config.server_config.port}/docs")
    webbrowser.open(f"http://{server_ip}:{config.server_config.port}")
    uvicorn.run(
        app, port=config.server_config.port, host=server_ip, log_level="warning"
    )
這里代碼很長(zhǎng),但我們只需要修改結(jié)尾處的 server_ip 即可。而真正對(duì)應(yīng)推理的在代碼的 import 處,我們可以查看目錄下的 infer.py 內(nèi)的 infer 函數(shù)關(guān)注具體的推理流程:
from infer import infer, get_net_g, latest_version
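For reference, here is a minimal sketch of a direct call, mirroring what the /voice handler above does; the paths and the speaker name "swk" are assumptions taken from this article's setup:
import torch
import utils
from infer import infer, get_net_g, latest_version

# Load hyperparameters and the generator exactly as the Model class above does.
hps = utils.get_hparams_from_file("configs/config.json")
version = hps.version if hasattr(hps, "version") else latest_version
net_g = get_net_g(
    model_path="data/models/G_15000.pth", version=version, device="cuda", hps=hps
)
with torch.no_grad():
    audio = infer(
        text="妖孽,吃俺老孫一棒!",
        emotion=None,  # recently added argument; None skips emotion control
        sdp_ratio=0.2,
        noise_scale=0.6,
        noise_scale_w=0.8,
        length_scale=1.0,
        sid="swk",  # speaker name from the training data
        language="ZH",
        hps=hps,
        net_g=net_g,
        device="cuda",
    )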
3. Pitfalls
◆ NLTK Not Found
Download the NLTK data from the official GitHub repository: https://github.com/nltk/nltk_data
After downloading, rename the packages folder to nltk_data and place it in any of the directories listed under "Searched in" in the error message.
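To see exactly which directories "Searched in" refers to on your machine, print NLTK's search path:
import nltk

# An nltk_data folder placed in any of these directories will be found.
for p in nltk.data.path:
    print(p)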
◆ No Such File or Dir
The server code expects a default Web folder; without it, startup fails:
mkdir Web
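Optionally, drop a placeholder index.html into the new folder so that GET / does not 404 (the server returns ./Web/index.html for the root path); a small sketch:
import os

os.makedirs("Web", exist_ok=True)
index = os.path.join("Web", "index.html")
if not os.path.isfile(index):
    with open(index, "w", encoding="utf-8") as f:
        f.write("<html><body>Bert-VITS2 API server is running.</body></html>")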
◆ Missing Argument
audio = infer(
TypeError: infer() missing 1 required positional argument: 'emotion'
The Bert-VITS2 community moves quickly, and an emotion parameter was recently added to infer. Here we take the lazy route and pass None; if you need emotion control, dig into the infer-related code.
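A quick way to check whether your checkout expects the new argument is to inspect the signature:
import inspect
from infer import infer

# If 'emotion' appears here without a default, it must be passed explicitly.
print(inspect.signature(infer))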
III. Using the Service
1. Starting the Service
nohup python server_fastapi.py > log 2>&1 &
Start it in the background; when startup succeeds, the log prints the API docs address, http://{ip}:{port}/docs.
Our training configuration keeps the 8 most recent checkpoints, so you can point config.yml at G_xxxx.pth files with different step counts and compare them.
2. Calling the Service
The service URL is determined by the ip in server_fastapi.py and the port in config.yml:
url=${ip}:${port} => 1.1.1.1:9876
◆ Get Voice
Change the URL below to your own ip and port, then issue an HTTP GET; params carries the character and the audio generation parameters.
#! /usr/bin/env python
# -*- coding: utf-8 -*-
import requests
import datetime


def get(typ, output, params=None):
    url = "http://$ip:$port"  # replace with your own ip and port
    url_type = url + typ
    if not params:
        response = requests.get(url_type)
    else:
        response = requests.get(url_type, params=params)
    if response.status_code == 200:
        print('request succeeded!')
        if typ == "/voice":
            # The server returns WAV data, so save it with a .wav extension
            with open(f'{output}.wav', 'wb') as f:
                f.write(response.content)
        elif typ == "/models/info":
            data = response.text
            print("data:", data)
    else:
        print('request failed, status code:', response.status_code)
◆ Main
names should match the person names used when preparing the training data. For each name we build a params dict and call the /voice endpoint; text carries the input text and output the path for the generated audio.
def getWav(text, output):
    # names should match the person names used when preparing the training data
    names = ["swk"]
    for name in names:
        params = {
            'model_id': 0,
            'text': text,
            'speaker_name': name,
            'language': 'ZH',
            'length': 1.0,
            'sdp_ratio': 0.5,
            'noise': 0.1
        }
        get("/voice", output=output, params=params)


if __name__ == '__main__':
    time_now = datetime.datetime.now().strftime("%Y%m%d%H%M")
    print(time_now)
    getWav("妖孽,吃俺老孫一棒!", "swk")
3. Results
The call produces the audio file at the given output path. Audio cannot be embedded here, so try it and listen for yourself. Since the speech is generated, some noise is unavoidable; if you are interested, you can add denoising logic behind the service.
IV. Summary
Combined with the training pipeline from the previous article, we now have the complete chain from custom-voice training through inference to serving. Overall the timbre is quite close to the original. Because of the training audio, the generator's output may contain some noise; you can run a denoising pass after generating the audio to improve the overall quality.
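As one option, here is a hedged sketch of such a denoising pass built on the third-party noisereduce package (not part of this project; pip install noisereduce), applied to the WAV file saved by the client above:
import noisereduce as nr
from scipy.io import wavfile

rate, data = wavfile.read("swk.wav")
# noisereduce estimates the noise profile from the signal itself.
reduced = nr.reduce_noise(y=data.astype("float32"), sr=rate)
wavfile.write("swk_denoised.wav", rate, reduced)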