Bert-vits2-2.3-Final: a one-click bundle for the final release of Bert-vits2 (recreating Resident Evil's Ada Wong)
Source: cnblogs  Author: 刘悦的技术博客  Date: 2023/12/22 16:20:09

Bert-vits2 recently published version 2.3-final, intended as the final release. It fixes a number of known bugs and adds a WavLM-based Discriminator (borrowed from StyleTTS2). Somewhat surprisingly, because its emotion control proved unsatisfactory, the CLAP emotion model was removed and replaced with a comparatively simple BERT semantic-fusion approach.

In fact, testing against version 2.2 showed that the CLAP emotion model worked reasonably well. For details on 2.2, see:

Bert-vits2-v2.2 local training and inference bundle (Genshin Impact Yae Miko English model, miko)

For more details, follow the official Bert-vits2 releases page:

  https://github.com/fishaudio/Bert-VITS2/releases/tag/v2.3

In this article we use the latest Bert-vits2-2.3 to recreate the voice of the classic Resident Evil character Ada Wong.

Bert-vits2-2.3 Project Configuration

First, clone the project:

  git clone https://github.com/v3ucn/Bert-vits2-V2.3.git

Note that this project is a fork of the 2.3 branch of Bert-vits2. It adds audio slicing, transcription, and annotation utilities on top of it, making it easier to use.

Then enter the project directory:

  cd Bert-vits2-V2.3

Install the dependencies:

  pip3 install -r requirements.txt

Next, download the required models, starting with the BERT models:

  https://openi.pcl.ac.cn/Stardust_minus/Bert-VITS2/modelmanage/show_model

and place them in the bert directory:

  E:\work\Bert-VITS2-2.3\bert>tree /f
  Folder PATH listing for volume myssd
  Volume serial number is 7CE3-15AE
  E:.
  │   bert_models.json
  │
  ├───bert-base-japanese-v3
  │       .gitattributes
  │       config.json
  │       README.md
  │       tokenizer_config.json
  │       vocab.txt
  │
  ├───bert-large-japanese-v2
  │       .gitattributes
  │       config.json
  │       README.md
  │       tokenizer_config.json
  │       vocab.txt
  │
  ├───chinese-roberta-wwm-ext-large
  │       .gitattributes
  │       added_tokens.json
  │       config.json
  │       pytorch_model.bin
  │       README.md
  │       special_tokens_map.json
  │       tokenizer.json
  │       tokenizer_config.json
  │       vocab.txt
  │
  ├───deberta-v2-large-japanese
  │       .gitattributes
  │       config.json
  │       pytorch_model.bin
  │       README.md
  │       special_tokens_map.json
  │       tokenizer.json
  │       tokenizer_config.json
  │
  ├───deberta-v2-large-japanese-char-wwm
  │       .gitattributes
  │       config.json
  │       pytorch_model.bin
  │       README.md
  │       special_tokens_map.json
  │       tokenizer_config.json
  │       vocab.txt
  │
  └───deberta-v3-large
          .gitattributes
          config.json
          generator_config.json
          pytorch_model.bin
          README.md
          spm.model
          tokenizer_config.json
Note that the pytorch_model.bin in each subdirectory is the BERT model weights file itself.
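Since several large weight files have to land in the right subdirectories, a quick sanity check before training can save a failed run. The helper below is a hypothetical convenience, not part of the project: it simply reports which model subdirectories under bert/ are still missing their pytorch_model.bin.

```python
from pathlib import Path

# Hypothetical helper (not part of Bert-vits2): report which model
# subdirectories under `bert_dir` are missing their weights file.
REQUIRED_FILE = "pytorch_model.bin"

def missing_bert_models(bert_dir):
    """Return names of subdirectories that lack pytorch_model.bin."""
    missing = []
    for sub in sorted(p for p in Path(bert_dir).iterdir() if p.is_dir()):
        if not (sub / REQUIRED_FILE).exists():
            missing.append(sub.name)
    return missing
```

Running `missing_bert_models("bert")` before training lists any model whose weights still need downloading.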

Next, you still need to download the CLAP model (even though CLAP has been removed from inference), along with the wav2vec2-large-robust-12-ft-emotion-msp-dim model, and place both in the project's emotional directory:

  E:\work\Bert-VITS2-2.3\emotional>tree /f
  Folder PATH listing for volume myssd
  Volume serial number is 7CE3-15AE
  E:.
  ├───clap-htsat-fused
  │       .gitattributes
  │       config.json
  │       merges.txt
  │       preprocessor_config.json
  │       pytorch_model.bin
  │       README.md
  │       special_tokens_map.json
  │       tokenizer.json
  │       tokenizer_config.json
  │       vocab.json
  │
  └───wav2vec2-large-robust-12-ft-emotion-msp-dim
          .gitattributes
          config.json
          LICENSE
          preprocessor_config.json
          pytorch_model.bin
          README.md
          vocab.json
Finally, download the pretrained base model:

  https://huggingface.co/OedoSoldier/Bert-VITS2-2.3

and place it in the character's models directory.

Note that the 2.3 base model consists of four files.

Bert-vits2-2.3 Data Preprocessing

Place Ada Wong's voice material in the Data/ada/raw directory and run the slicing script:

  python3 audio_slicer.py

This splits the material into short clips:

  E:\work\Bert-VITS2-2.3\Data\ada\raw>tree /f
  Folder PATH listing for volume myssd
  Volume serial number is 7CE3-15AE
  E:.
      ada_0.wav
      ada_1.wav
      ada_10.wav
      ada_11.wav
      ada_12.wav
      ada_13.wav
      ada_14.wav
      ada_15.wav
      ada_16.wav
      ada_17.wav
      ada_18.wav
      ada_19.wav
      ada_2.wav
      ada_20.wav
      ada_21.wav
      ada_22.wav
      ada_23.wav
      ada_24.wav
      ada_25.wav
      ada_26.wav
      ada_3.wav
      ada_4.wav
      ada_5.wav
      ada_6.wav
      ada_7.wav
      ada_8.wav
      ada_9.wav
Next, run transcription and annotation:

  python3 short_audio_transcribe.py

The program outputs:

  E:\work\Bert-VITS2-2.3\venv\lib\site-packages\whisper\timing.py:58: NumbaDeprecationWarning: The 'nopython' keyword argument was not supplied to the 'numba.jit' decorator. The implicit default value for this argument is currently False, but it will be changed to True in Numba 0.59.0. See https://numba.readthedocs.io/en/stable/reference/deprecation.html#deprecation-of-object-mode-fall-back-behaviour-when-using-jit for details.
  def backtrace(trace: np.ndarray):
  Data/ada/raw
  Detected language: en
  I do. The kind you like.
  Processed: 1/27
  Detected language: en
  Now where's the amber?
  Processed: 2/27
  Detected language: en
  Leave the girl. She's lost no matter what.
  Processed: 3/27
  Detected language: en
  You walk away now, and who knows?
  Processed: 4/27
  Detected language: en
  Maybe you'll live to meet me again.
  Processed: 5/27
  Detected language: en
  And I might get you that greeting you were looking for.
  Processed: 6/27
  Detected language: en
  How about we continue this discussion another time?
  Processed: 7/27
  Detected language: en
  Sorry, nothing yet.
  Processed: 8/27
  Detected language: en
  But my little helper is creating
  Processed: 9/27
  Detected language: en
  Quite the commotion.
  Processed: 10/27
  Detected language: en
  Everything will work out just fine.
  Processed: 11/27
  Detected language: en
  He's a good boy. Predictable.
  Processed: 12/27
  Detected language: en
  The deal was, we get you out of here when you deliver the amber. No amber, no protection, Louise.
  Processed: 13/27
  Detected language: en
  Nothing personal, Leon.
  Processed: 14/27
  Detected language: en
  Louise and I had an arrangement.
  Processed: 15/27
  Detected language: en
  Don't worry, I'll take good care of it.
  Processed: 16/27
  Detected language: en
  Just one question.
  Processed: 17/27
  Detected language: en
  What are you planning to do with this?
  Processed: 18/27
  Detected language: en
  So, we're talking millions of casualties?
  Processed: 19/27
  Detected language: en
  We're changing course. Now.
  Processed: 20/27
  Detected language: en
  You can stop right there, Leon.
  Processed: 21/27
  Detected language: en
  wouldn't make me use this.
  Processed: 22/27
  Detected language: en
  Would you? You don't seem surprised.
  Processed: 23/27
  Detected language: en
  Interesting.
  Processed: 24/27
  Detected language: en
  Not a bad move
  Processed: 25/27
  Detected language: en
  Very smooth. Ah, Leon.
  Processed: 26/27
  Detected language: en
  You know I don't work and tell.

Note that Whisper emits a deprecation warning here. If it bothers you, edit line 58 of timing.py:

  # Before
  @numba.jit
  def backtrace(trace: np.ndarray):

  # After
  @numba.jit(nopython=True)
  def backtrace(trace: np.ndarray):

Then launch the web-based preprocessing UI:

  python3 webui_preprocess.py

and follow the instructions on the page.

That completes data preprocessing.

Bert-vits2-2.3 Training and Inference

Run the following command in the project root:

  python3 train_ms.py

Checkpoints are written to the models directory:

  E:\work\Bert-VITS2-2.3\Data\ada\models>tree /f
  Folder PATH listing for volume myssd
  Volume serial number is 7CE3-15AE
  E:.
      G_150.pth
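Generator checkpoints are named G_&lt;step&gt;.pth, as with G_150.pth above, so as training runs the directory accumulates several of them. A small helper like the one below (a hypothetical convenience, not part of the project) can pick the newest checkpoint by step number when you resume training or choose a model for inference.

```python
import re
from pathlib import Path

# Hypothetical helper (not part of Bert-vits2): return the path of the
# generator checkpoint with the highest training step, or None if the
# directory holds no G_<step>.pth files.
def latest_checkpoint(models_dir):
    best = None
    for p in Path(models_dir).glob("G_*.pth"):
        m = re.fullmatch(r"G_(\d+)\.pth", p.name)
        if m:
            step = int(m.group(1))
            if best is None or step > best[0]:
                best = (step, p)
    return best[1] if best else None
```

For the listing above, `latest_checkpoint("Data/ada/models")` would return the path to G_150.pth.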

Then launch the inference page to run inference:

  python3 webui.py

The new inference page adds support for using the semantics of an auxiliary text to guide generation (its language must match the main text); in other words, you can customize the emotional style of the generated speech in the form of a prompt.

However, you cannot use instruction-style text (e.g. "happy"); you must use text that carries strong emotion (e.g. "I'm so happy!!!").

This makes the emotional style of the generated speech rather unpredictable:

you have to keep adjusting the prompt and testing the result, which is less intuitive than the CLAP audio prompt in the previous version. Objectively speaking, though, emotional speech styled via BERT text semantics does work to some degree.

Conclusion

While updating this series of Bert-vits2 tutorials, I have also learned a great deal. Without question, Bert-vits2 has let more people experience the appeal of deep learning. It is an excellent entry-level AI project, and interest is always the best teacher. To close, here is the Bert-vits2-2.3-Final one-click bundle:

  Bundle link: https://pan.baidu.com/s/182LZCu5cyR3nH8EoTBLR-g?pwd=v3uc

Enjoy, everyone.

Original article: https://www.cnblogs.com/v3ucn/p/17921718.html
