Stefanie Sun (孫燕姿) has once again proven herself worthy of her Nanyang Technological University pedigree. She recently published a long English post on her official blog, formally responding to the "AI Stefanie Sun" phenomenon now sweeping the internet. The pop diva shows a rare level of intellect: the writing is elegant and lasting, remarkably restrained yet generous toward AIGC art, with a classical beauty of language, embodying the magnanimity of "let it bear down like Mount Tai; I will take it as a breeze across my face."
In this piece we use the edge-tts and SadTalker libraries to have AI Stefanie Sun recite the diva's own post, so the pop queen can read it to you herself.
Configuring SadTalker

Previously we used Wav2Lip, a submodule of Baidu's open-source PaddleGAN visual-effects suite, to synchronize a character's lip movements with input vocal audio. Wav2Lip's limitation is that the animation stays confined to the region around the lips. In reality, audio couples to different facial motions with different strengths: lip movement is tied most tightly to the audio, while head pose and eye blinks are only loosely driven by it.
Compared with Wav2Lip, SadTalker is a library for stylized audio-driven talking-head video generation modulated by implicit 3D coefficients. On one hand, it generates realistic motion coefficients from audio (head pose, lip motion, and eye blinks) and learns each motion separately to reduce uncertainty. For expression, it designs a novel audio-to-expression-coefficient network, distilling coefficients from lip-only motion coefficients and from perceptual losses (a lip-reading loss and a facial landmark loss) computed on the reconstructed, rendered 3D face.
對于程序化的頭部姿勢,通過學(xué)習(xí)給定姿勢的殘差,使用條件VAE來對多樣性和逼真的頭部運動進行建模。在生成逼真的3DMM系數(shù)后,通過一種新穎的3D感知人臉渲染來驅(qū)動源圖像。并且通過源和驅(qū)動的無監(jiān)督3D關(guān)鍵點生成扭曲場,并扭曲參考圖像以生成最終視頻。
SadTalker can be set up standalone or as an extension of Stable-Diffusion-WebUI. The extension route is recommended here: Stable-Diffusion and SadTalker then share a single WebUI, which makes it much easier to animate images generated by Stable-Diffusion.
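For reference, if you do go the standalone route, the SadTalker repo ships an inference.py entry point that runs the whole pipeline from the command line. A typical invocation might look like the following (flag names are taken from the project's README and may differ between releases, so check them against your checkout; audio.wav and portrait.png are placeholder inputs):

python inference.py --driven_audio audio.wav --source_image portrait.png --result_dir ./results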
Enter the Stable-Diffusion project directory:
cd stable-diffusion-webui
Start the service:
python3.10 webui.py
The program returns:
Python 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
Version: v1.3.0
Commit hash: 20ae71faa8ef035c31aa3a410b707d792c8203a3
Installing requirements
Launching Web UI with arguments: --xformers --opt-sdp-attention --api --lowvram
Loading weights [b4d453442a] from D:\work\stable-diffusion-webui\models\Stable-diffusion\protogenV22Anime_protogenV22.safetensors
load Sadtalker Checkpoints from D:\work\stable-diffusion-webui\extensions\SadTalker\checkpoints
Creating model from config: D:\work\stable-diffusion-webui\configs\v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Running on local URL: http://127.0.0.1:7860
This indicates a successful launch. Now open http://localhost:7860,
select the Extensions tab,
click Install from URL, and enter the extension address: github.com/Winfredy/SadTalker
Once the installation succeeds, restart the WebUI.
Next, the model files have to be downloaded manually:
https://pan.baidu.com/s/1nXuVNd0exUl37ISwWqbFGA?pwd=sadt
隨后將模型文件放入項目的stable-diffusion-webui/extensions/SadTalker/checkpoints/目錄即可。
Next, configure an environment variable for the model directory:
set SADTALKER_CHECKPOINTS=D:/stable-diffusion-webui/extensions/SadTalker/checkpoints/
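Before launching, it is worth checking that the variable points at a directory that actually contains the downloaded weights. A minimal sketch using only the standard library (the exact checkpoint file names vary across SadTalker releases, so the listing is purely informational):

import os
from pathlib import Path

# Read the variable set above; fail fast if it is missing or wrong.
ckpt_dir = Path(os.environ.get("SADTALKER_CHECKPOINTS", ""))
if not ckpt_dir.is_dir():
    raise SystemExit(f"SADTALKER_CHECKPOINTS is unset or invalid: {ckpt_dir}")

# List each checkpoint file with its size in MiB.
for f in sorted(ckpt_dir.iterdir()):
    print(f"{f.name}: {f.stat().st_size / 2**20:.0f} MiB")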
With that, SadTalker is configured.
Text-to-speech with edge-tts

Our earlier song covers used the So-VITS library to replace and predict the timbre of an original recording; in other words, they required the original song as base data. The present scenario clearly differs from song conversion: we must first turn text into speech before any timbre swap can happen.
Here the edge-tts library handles the text-to-speech step:
import asyncio

import edge_tts

# Stefanie Sun's English blog post, reproduced as the narration script.
TEXT = """
As my AI voice takes on a life of its own while I despair over my overhanging stomach and my children's every damn thing, I can't help but want to write something about it. My fans have officially switched sides and accepted that I am indeed 冷門歌手 while my AI persona is the current hot property. I mean really, how do you fight with someone who is putting out new albums in the time span of minutes.

Whether it is ChatGPT or AI or whatever name you want to call it, this "thing" is now capable of mimicking and/or conjuring, unique and complicated content by processing a gazillion chunks of information while piecing and putting together in a most coherent manner the task being asked at hand. Wait a minute, isn't that what humans do? The very task that we have always convinced ourselves; that the formation of thought or opinion is not replicable by robots, the very idea that this is beyond their league, is now the looming thing that will threaten thousands of human conjured jobs. Legal, medical, accountancy, and currently, singing a song.

You will protest, well I can tell the difference, there is no emotion or variance in tone/breath or whatever technical jargon you can come up with. Sorry to say, I suspect that this would be a very short term response. Ironically, in no time at all, no human will be able to rise above that. No human will be able to have access to this amount of information AND make the right calls OR make the right mistakes (ok mayyyybe I'm jumping ahead). This new technology will be able to churn out what exactly EVERYTHING EVERYONE needs. As indie or as warped or as psychotic as you can get, there's probably a unique content that could be created just for you. You are not special you are already predictable and also unfortunately malleable.

At this point, I feel like a popcorn eater with the best seat in the theatre. (Sidenote: Quite possibly in this case no tech is able to predict what it's like to be me, except when this is published then ok it's free for all). It's like watching that movie that changed alot of our lives Everything Everywhere All At Once, except in this case, I don't think it will be the idea of love that will save the day. In this boundless sea of existence, where anything is possible, where nothing matters, I think it will be purity of thought, that being exactly who you are will be enough.

With this I fare thee well.
"""

# An English-language neural female voice.
VOICE = "en-HK-YanNeural"
OUTPUT_FILE = "./test_en1.mp3"


async def _main() -> None:
    # Synthesize TEXT with the chosen voice and save it as an MP3 file.
    communicate = edge_tts.Communicate(TEXT, VOICE)
    await communicate.save(OUTPUT_FILE)


if __name__ == "__main__":
    asyncio.run(_main())
The audio uses an English female voice, en-HK-YanNeural. For more on edge-tts, Microsoft's free, open-source Edge-based TTS (text-to-speech) library, see the earlier article on edge-tts speech-synthesis practice (Python 3.10); it will not be repeated here.
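If en-HK-YanNeural is not to your taste, edge-tts can enumerate every voice the service exposes. A small sketch using edge_tts.list_voices(), which the library provides alongside Communicate (the field names follow the voice metadata it returns):

import asyncio

import edge_tts


async def show_english_voices() -> None:
    # list_voices() returns one metadata dict per available voice.
    for voice in await edge_tts.list_voices():
        if voice["Locale"].startswith("en-"):
            print(voice["ShortName"], voice["Gender"])


if __name__ == "__main__":
    asyncio.run(show_english_voices())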
隨后再將音頻文件的音色替換為AI孫燕姿的音色即可:AI天后,在線飆歌,人工智能AI孫燕姿模型應(yīng)用實踐,復(fù)刻《遙遠的歌》,原唱晴子(Python3.10)。
Local inference and running out of VRAM

With the generated image and the audio file ready, we can run inference locally. Visit localhost:7860
For the input parameters, choose full, which preserves the whole image region; otherwise only the head region is kept.
In the generated result, SadTalker produces lip shapes and expressions that match the audio file.
Note that the audio file must be MP3 or WAV.
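If your narration ends up in some other container, a quick conversion keeps SadTalker happy. A sketch using pydub, which delegates decoding to ffmpeg, so ffmpeg must be on the PATH (test_en1.m4a is a hypothetical input file):

from pydub import AudioSegment

# Decode whatever ffmpeg can read, then re-export as WAV.
audio = AudioSegment.from_file("test_en1.m4a")
audio.export("test_en1.wav", format="wav")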
Beyond that, the PyTorch library may raise this error during inference:
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 6.00 GiB total capacity; 5.38 GiB already allocated; 0 bytes free; 5.38 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
This is the so-called "out of VRAM" problem.
It generally means the current GPU does not have enough video memory. One mitigation is to cap the size of the blocks torch's caching allocator may split:
set PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:60
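The variable only takes effect if it is set before PyTorch makes its first CUDA allocation, so set it in the shell (as above) or at the very top of the script. A minimal sketch, assuming a CUDA-capable PyTorch build:

import os

# Must be set before the first CUDA allocation to have any effect.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:60"

import torch

print(torch.cuda.get_device_name(0))
print(f"allocated: {torch.cuda.memory_allocated() / 2**20:.1f} MiB")
torch.cuda.empty_cache()  # return cached, unused blocks to the driver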
If the audio file really is too large, you can also slice it with ffmpeg and run inference in several passes:
ffmpeg -ss 00:00:00 -i test_en.wav -to 00:30:00 -c copy test_en_01.wav
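The slicing can also be scripted when there are many segments. A sketch that wraps the same ffmpeg call with subprocess (ffmpeg must be on the PATH; the 30-minute segment length mirrors the command above):

import subprocess

SEGMENT_SECONDS = 30 * 60  # 30 minutes per slice, as in the command above


def split_audio(src: str, n_slices: int) -> None:
    for i in range(n_slices):
        start = i * SEGMENT_SECONDS
        out = f"test_en_{i + 1:02d}.wav"
        # -ss/-t with -c copy slices the stream without re-encoding.
        subprocess.run(
            ["ffmpeg", "-y", "-ss", str(start), "-t", str(SEGMENT_SECONDS),
             "-i", src, "-c", "copy", out],
            check=True,
        )


if __name__ == "__main__":
    split_audio("test_en.wav", 2)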
With that, the out-of-memory problem during inference is solved.
Conclusion

Compared with Wav2Lip, SadTalker (Stylized Audio-Driven Talking-head) delivers much finer-grained facial motion, such as eye blinks, down to the smallest detail. The price is more model weights, higher inference cost, and longer inference time, but these trade-offs are clearly worth it.
關(guān)于我們| 聯(lián)系方式| 版權(quán)聲明| 供稿服務(wù)| 友情鏈接
咕嚕網(wǎng) 93dn.com 版權(quán)所有,未經(jīng)書面授權(quán)禁止使用
Copyright©2008-2023 By All Rights Reserved 皖I(lǐng)CP備2022009963號-10
聯(lián)系我們: 39 60 29 14 2@qq.com