
A Nanyang Talent of Art and Virtue: Stefanie Sun Herself Responds to AI Stefanie Sun (Based on SadTalker/Python3.10)


Stefanie Sun is, as ever, Stefanie Sun, living up to her standing as a star graduate of Nanyang Technological University. She recently published a long English essay on her official blog, formally responding to the "AI Stefanie Sun" phenomenon that has set the whole internet talking. The pop queen showed an intellect a cut above: the writing is graceful and enduring, remarkably restrained yet generous toward AIGC art, suffused with a classical beauty of language, and embodying the magnanimity of "though it bear down like Mount Tai, I shall take it as a breeze upon my face."

This time, we use edge-tts and the SadTalker library to have AI Stefanie Sun recite her own blog post, so the pop queen can read it to you.

SadTalker Setup

之前我們?cè)?jīng)使用百度開源的PaddleGAN視覺效果模型中一個(gè)子模塊Wav2lip實(shí)現(xiàn)了人物口型與輸入的歌詞語音同步,但Wav2lip的問題是虛擬人物的動(dòng)態(tài)效果只能局限在嘴唇附近,事實(shí)上,音頻和不同面部動(dòng)作之間的連接是不同的,也就是說,雖然嘴唇運(yùn)動(dòng)與音頻的聯(lián)系最強(qiáng),但可以通過不同的頭部姿勢和眨眼來反作用于音頻。

Compared with Wav2Lip, SadTalker is a library for stylized audio-driven talking-head video generation modulated by implicit 3D coefficients. On one hand, it generates realistic motion coefficients from audio (e.g., head pose, lip movement, and eye blinks) and learns each motion separately to reduce uncertainty. For expression, it designs a novel audio-to-expression-coefficients network by distilling coefficients from lip-only motion and applying perceptual losses (a lip-reading loss and a facial-landmark loss) on the rendered, reconstructed 3D face.

對(duì)于程序化的頭部姿勢,通過學(xué)習(xí)給定姿勢的殘差,使用條件VAE來對(duì)多樣性和逼真的頭部運(yùn)動(dòng)進(jìn)行建模。在生成逼真的3DMM系數(shù)后,通過一種新穎的3D感知人臉渲染來驅(qū)動(dòng)源圖像。并且通過源和驅(qū)動(dòng)的無監(jiān)督3D關(guān)鍵點(diǎn)生成扭曲場,并扭曲參考圖像以生成最終視頻。

SadTalker can be set up standalone or installed as a Stable-Diffusion-Webui extension. The extension route is recommended here, because Stable-Diffusion and SadTalker then share one WebUI, which makes it easier to animate images generated by Stable-Diffusion.

Enter the Stable-Diffusion project directory:

cd stable-diffusion-webui

Start the service:

python3.10 webui.py

The program returns:

Python 3.10.11 (tags/v3.10.11:7d4cc5a, Apr  5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]  
Version: v1.3.0  
Commit hash: 20ae71faa8ef035c31aa3a410b707d792c8203a3  
Installing requirements  
Launching Web UI with arguments: --xformers --opt-sdp-attention --api --lowvram  
Loading weights [b4d453442a] from D:\work\stable-diffusion-webui\models\Stable-diffusion\protogenV22Anime_protogenV22.safetensors  
load Sadtalker Checkpoints from D:\work\stable-diffusion-webui\extensions\SadTalker\checkpoints  
Creating model from config: D:\work\stable-diffusion-webui\configs\v1-inference.yaml  
LatentDiffusion: Running in eps-prediction mode  
DiffusionWrapper has 859.52 M params.  
Running on local URL:  http://127.0.0.1:7860

This indicates a successful launch. Next, open http://localhost:7860

Select the Extensions tab.

Click Install from URL and enter the extension address: github.com/Winfredy/SadTalker

安裝成功后,重啟WebUI界面。

Next, manually download the model files:

https://pan.baidu.com/s/1nXuVNd0exUl37ISwWqbFGA?pwd=sadt

隨后將模型文件放入項(xiàng)目的stable-diffusion-webui/extensions/SadTalker/checkpoints/目錄即可。

Then set an environment variable pointing to the checkpoint directory:

set SADTALKER_CHECKPOINTS=D:/stable-diffusion-webui/extensions/SadTalker/checkpoints/

At this point, SadTalker is fully configured.
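Before launching, a quick sanity check that the environment variable and the downloaded files are actually in place can save a failed startup. A minimal sketch (the exact checkpoint file names depend on the SadTalker release you downloaded):

import os
from pathlib import Path

# The directory SadTalker will read its checkpoints from
ckpt_dir = os.environ.get("SADTALKER_CHECKPOINTS", "")
assert ckpt_dir, "SADTALKER_CHECKPOINTS is not set"

# List whatever model files were unpacked there
files = sorted(Path(ckpt_dir).glob("*"))
assert files, f"no checkpoint files found in {ckpt_dir}"
for f in files:
    print(f"{f.name}: {f.stat().st_size / 1024 / 1024:.0f} MB")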

edge-tts Text-to-Speech

Our earlier song covers used the So-vits library to replace and predict the timbre of an original recording, meaning an original song was required as base data. The current scenario is clearly different from song conversion: we first need to turn text into speech before the timbre can be swapped.

這里使用edge-tts庫進(jìn)行文本轉(zhuǎn)語音操作:

import asyncio  
  
import edge_tts  
  
TEXT = '''  
  
As my AI voice takes on a life of its own while I despair over my overhanging stomach and my children's every damn thing, I can't help but want to write something about it.  
  
My fans have officially switched sides and accepted that I am indeed 冷門歌手 while my AI persona is the current hot property. I mean really, how do you fight with someone who is putting out new albums in the time span of minutes.  
  
Whether it is ChatGPT or AI or whatever name you want to call it, this "thing" is now capable of mimicking and/or conjuring,  unique and complicated content by processing a gazillion chunks of information while piecing and putting together in a most coherent manner the task being asked at hand. Wait a minute, isn't that what humans do? The very task that we have always convinced ourselves; that the formation of thought or opinion is not replicable by robots, the very idea that this is beyond their league, is now the looming thing that will threaten thousands of human conjured jobs. Legal, medical, accountancy, and currently, singing a song.   
  
You will protest, well I can tell the difference, there is no emotion or variance in tone/breath or whatever technical jargon you can come up with. Sorry to say, I suspect that this would be a very short term response.  
  
Ironically, in no time at all, no human will be able to rise above that. No human will be able to have access to this amount of information AND make the right calls OR make the right mistakes (ok mayyyybe I'm jumping ahead). This new technology will be able to churn out what exactly EVERYTHING EVERYONE  needs. As indie or as warped or as psychotic as you can get, there's probably a unique content that could be created just for you. You are not special you are already predictable and also unfortunately malleable.  
  
At this point, I feel like a popcorn eater with the best seat in the theatre. (Sidenote: Quite possibly in this case no tech is able to predict what it's like to be me, except when this is published then ok it's free for all). It's like watching that movie that changed alot of our lives Everything Everywhere All At Once, except in this case, I don't think it will be the idea of love that will save the day.   
  
In this boundless sea of existence, where anything is possible, where nothing matters, I think it will be purity of thought, that being exactly who you are will be enough.   
  
With this I fare thee well.  
  
'''  
  
VOICE = "en-HK-YanNeural"       # English (Hong Kong) female neural voice
OUTPUT_FILE = "./test_en1.mp3"  # the synthesized speech is written here


async def _main() -> None:
    # Stream TEXT through the chosen neural voice and save the audio
    communicate = edge_tts.Communicate(TEXT, VOICE)
    await communicate.save(OUTPUT_FILE)


if __name__ == "__main__":
    asyncio.run(_main())
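Save the script under any name, say sun_tts.py (the name is arbitrary), and run it with the same interpreter used above; it writes test_en1.mp3 to the current directory:

python3.10 sun_tts.py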

The audio uses the English-language female voice en-HK-YanNeural. For more on edge-tts itself, see the earlier article "A voice-over powerhouse: text-to-speech in practice with edge-tts, Microsoft's free, open-source TTS library built on Edge (Python3.10)", which will not be repeated here.
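If you want a different voice, edge-tts ships with a command-line tool that lists every available neural voice; on Windows the list can be filtered like this:

edge-tts --list-voices | findstr "en-HK"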

隨后再將音頻文件的音色替換為AI孫燕姿的音色即可:AI天后,在線飆歌,人工智能AI孫燕姿模型應(yīng)用實(shí)踐,復(fù)刻《遙遠(yuǎn)的歌》,原唱晴子(Python3.10)。

Local Inference and the Out-of-Memory Problem

With the generated image and the audio file ready, inference can be run locally. Visit localhost:7860

這里輸入?yún)?shù)選擇full,如此會(huì)保留整個(gè)圖片區(qū)域,否則只保留頭部部分。

The generated result:

SadTalker generates matching lip shapes and facial expressions from the audio file.

這里需要注意的是,音頻文件只支持MP3或者wav。

Beyond that, the PyTorch library may raise this error during inference:

torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 6.00 GiB total capacity; 5.38 GiB already allocated; 0 bytes free; 5.38 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

This is the notorious "out of VRAM" problem.

It generally means the current GPU has run out of video memory. One remedy is to cap the size of the blocks PyTorch's caching allocator may split, which reduces fragmentation:

set PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:60
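To judge whether fragmentation really is the issue, it helps to compare how much memory PyTorch has handed out to live tensors against how much it holds reserved from the driver; reserved far above allocated points to fragmentation, which is exactly what max_split_size_mb targets:

import torch

# Memory given to live tensors vs. memory cached from the GPU driver
allocated = torch.cuda.memory_allocated() / 1024**2
reserved = torch.cuda.memory_reserved() / 1024**2
print(f"allocated: {allocated:.0f} MiB, reserved: {reserved:.0f} MiB")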

If the audio file really is too long, you can also slice it with ffmpeg and run inference over the pieces in several passes:

ffmpeg -ss 00:00:00 -i test_en.wav -to 00:30:00 -c copy test_en_01.wav
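If you would rather cut the whole file into fixed-length pieces in one pass instead of choosing -ss/-to ranges by hand, ffmpeg's segment muxer does it automatically; here into five-minute chunks:

ffmpeg -i test_en.wav -f segment -segment_time 300 -c copy test_en_%03d.wav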

With that, the out-of-memory problem during inference is resolved.

Conclusion

Compared with Wav2Lip, SadTalker (Stylized Audio-Driven Talking-head) delivers much subtler facial-motion detail, such as eye blinks; it is meticulous down to the finest movement. The price, of course, is more model files, higher inference cost, and longer inference time, but all of that is clearly worth it.

到了這里,關(guān)于南洋才女,德藝雙馨,孫燕姿本尊回應(yīng)AI孫燕姿(基于Sadtalker/Python3.10)的文章就介紹完了。如果您還想了解更多內(nèi)容,請(qǐng)?jiān)谟疑辖撬阉鱐OY模板網(wǎng)以前的文章或繼續(xù)瀏覽下面的相關(guān)文章,希望大家以后多多支持TOY模板網(wǎng)!

本文來自互聯(lián)網(wǎng)用戶投稿,該文觀點(diǎn)僅代表作者本人,不代表本站立場。本站僅提供信息存儲(chǔ)空間服務(wù),不擁有所有權(quán),不承擔(dān)相關(guān)法律責(zé)任。如若轉(zhuǎn)載,請(qǐng)注明出處: 如若內(nèi)容造成侵權(quán)/違法違規(guī)/事實(shí)不符,請(qǐng)點(diǎn)擊違法舉報(bào)進(jìn)行投訴反饋,一經(jīng)查實(shí),立即刪除!

領(lǐng)支付寶紅包贊助服務(wù)器費(fèi)用

相關(guān)文章

覺得文章有用就打賞一下文章作者

支付寶掃一掃打賞

博客贊助

微信掃一掃打賞

請(qǐng)作者喝杯咖啡吧~博客贊助

支付寶掃一掃領(lǐng)取紅包,優(yōu)惠每天領(lǐng)

二維碼1

領(lǐng)取紅包

二維碼2

領(lǐng)紅包