
MockingBird

MIT License

English | 中文

Features

🌍 Chinese: supports Mandarin and is tested with multiple datasets: aidatatang_200zh, magicdata, aishell3, etc.

🤩 PyTorch: tested with version 1.9.0 (latest as of August 2021), on Tesla T4 and GTX 2060 GPUs

🌍 Windows + Linux: runs on both Windows and Linux (even on M1 macOS)

🤩 Easy & Awesome: achieves good results by training only a new synthesizer and reusing the pretrained encoder/vocoder

🌍 Web server: ready to serve your results via remote calls

DEMO VIDEO

Quick Start

1. Install Requirements

Follow the original repo to check that your environment is ready. **Python 3.7 or higher** is required to run the toolbox.

If you get ERROR: Could not find a version that satisfies the requirement torch==1.9.0+cu102 (from versions: 0.1.2, 0.1.2.post1, 0.1.2.post2), it is probably due to an outdated Python version; try Python 3.9 and it should install successfully.

  • Install ffmpeg.
  • Run pip install -r requirements.txt to install the remaining necessary packages.
  • If needed, install webrtcvad with pip install webrtcvad-wheels. (A quick environment check follows this list.)
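
Before moving on, you can sanity-check the setup. The following is a minimal sketch (not a script from this repo); it only assumes the packages named above:

# check_env.py - hypothetical helper, not part of this repo
import shutil
import sys

import torch

print("Python:", sys.version.split()[0])             # 3.7+ required, 3.9 recommended
print("PyTorch:", torch.__version__)                 # tested with 1.9.0
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
print("ffmpeg on PATH:", shutil.which("ffmpeg") is not None)

try:
    import webrtcvad  # noqa: F401 - provided by webrtcvad-wheels
    print("webrtcvad: OK")
except ImportError:
    print("webrtcvad: missing (pip install webrtcvad-wheels)")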

Note that we reuse the pretrained encoder/vocoder but not the synthesizer, since the original model is incompatible with Chinese symbols. This means demo_cli does not work at the moment.

2. Prepare your models

You can either train your own models or use existing ones:

2.1. Train synthesizer with your dataset

  • Download a dataset and unzip it; make sure you can access all the .wav files in the folder.

  • Preprocess the audio and mel spectrograms: python pre.py <datasets_root>. The --dataset {dataset} parameter selects the dataset: aidatatang_200zh, magicdata, aishell3, etc. (A quick check of the preprocessing output follows this list.)

  • Train the synthesizer: python synthesizer_train.py mandarin <datasets_root>/SV2TTS/synthesizer

  • Go to the next step once the attention line appears and the loss meets your needs; check the training folder synthesizer/saved_models/.
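
To confirm preprocessing worked, you can count the generated files. This is a rough sketch that assumes the SV2TTS layout inherited from the upstream repo (audio/, mels/, embeds/ and train.txt under <datasets_root>/SV2TTS/synthesizer); adjust it if your layout differs:

# check_preprocess.py - hypothetical helper; layout assumed from upstream SV2TTS
import sys
from pathlib import Path

root = Path(sys.argv[1]) / "SV2TTS" / "synthesizer"   # pass <datasets_root> as argv[1]
for sub in ("audio", "mels", "embeds"):
    files = list((root / sub).glob("*.npy"))
    print(f"{sub}: {len(files)} files")
print("train.txt exists:", (root / "train.txt").exists())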

2.2 Use a pretrained synthesizer model

Thanks to the community, some pretrained models are shared:

| author | Download link | Preview Video | Info |
| --- | --- | --- | --- |
| @myself | https://pan.baidu.com/s/1VHSKIbxXQejtxi2at9IrpA (Baidu code: i183) | | 200k steps, trained only on aidatatang_200zh |
| @FawenYo | https://drive.google.com/file/d/1H-YGOUHpmqKxJ9FRc6vAjPuqQki24UbC/view?usp=sharing (Baidu Pan code: 1024) | input / output | 200k steps with a local accent of Taiwan |
| @miven | https://pan.baidu.com/s/1PI-hM3sn5wbeChRryX-RCQ (code: 2021) | https://www.bilibili.com/video/BV1uh411B7AD/ | |
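
After downloading, place the checkpoint where the launch commands expect it (e.g. under synthesizer/saved_models/). If you want to peek inside a checkpoint first, here is a hedged sketch; the file path and the model_state key are assumptions based on the upstream checkpoint format:

# inspect_ckpt.py - hypothetical helper; path and key names are assumptions
import torch

ckpt = torch.load("synthesizer/saved_models/mandarin.pt", map_location="cpu")
print("top-level keys:", list(ckpt.keys()))
state = ckpt.get("model_state")          # key name assumed from upstream format
if state is not None:
    print("tensors in model_state:", len(state))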

2.3 Train vocoder (Optional)

Note: the vocoder makes little difference to the result, so you may not need to train a new one.

  • Preprocess the data: python vocoder_preprocess.py <datasets_root> -m <synthesizer_model_path>

Replace <datasets_root> with your dataset root and <synthesizer_model_path> with the directory of your best trained synthesizer model, e.g. synthesizer\saved_models\xxx

  • Train the wavernn vocoder: python vocoder_train.py mandarin <datasets_root>

  • Train the hifigan vocoder: python vocoder_train.py mandarin <datasets_root> hifigan

3. Launch

3.1 Using the web server

You can then run python web.py and open it in a browser; it defaults to http://localhost:8080.
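
If you want to call the server from code rather than the browser, something like the sketch below should work. Note that the route and form fields here are hypothetical placeholders; check the handlers under web/ for the actual REST interface:

# client.py - hypothetical sketch; route and field names are NOT confirmed,
# see the handlers under web/ for the real REST interface
import requests

with open("reference.wav", "rb") as f:           # reference voice to clone (example file)
    resp = requests.post(
        "http://localhost:8080/api/synthesize",  # hypothetical route
        files={"file": f},                       # hypothetical field name
        data={"text": "欢迎使用语音克隆工具"},      # hypothetical field name
    )
resp.raise_for_status()
with open("result.wav", "wb") as out:
    out.write(resp.content)                      # assumes the server returns raw audio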

3.2 Using the Toolbox

You can then try the toolbox: python demo_toolbox.py -d <datasets_root>
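
For scripted use without the GUI, the flow the toolbox automates looks roughly like this. This is a sketch that assumes the module API inherited from the upstream Real-Time-Voice-Cloning repo; the model paths are examples only, and the vocoder module path in this fork may differ:

# programmatic inference sketch - API assumed from the upstream repo
from pathlib import Path

from encoder import inference as encoder
from synthesizer.inference import Synthesizer
from vocoder.wavernn import inference as vocoder   # module path assumed for this fork

encoder.load_model(Path("encoder/saved_models/pretrained.pt"))           # example path
synthesizer = Synthesizer(Path("synthesizer/saved_models/mandarin.pt"))  # example path
vocoder.load_model(Path("vocoder/saved_models/pretrained.pt"))           # example path

wav = encoder.preprocess_wav("reference.wav")       # reference voice to clone
embed = encoder.embed_utterance(wav)
specs = synthesizer.synthesize_spectrograms(["欢迎使用语音克隆工具"], [embed])
generated_wav = vocoder.infer_waveform(specs[0])    # waveform as a numpy array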

Reference

This repository is forked from Real-Time-Voice-Cloning, which only supports English.

| URL | Designation | Title | Implementation source |
| --- | --- | --- | --- |
| 2010.05646 | HiFi-GAN (vocoder) | Generative Adversarial Networks for Efficient and High Fidelity Speech Synthesis | This repo |
| 1806.04558 | SV2TTS | Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis | This repo |
| 1802.08435 | WaveRNN (vocoder) | Efficient Neural Audio Synthesis | fatchord/WaveRNN |
| 1703.10135 | Tacotron (synthesizer) | Tacotron: Towards End-to-End Speech Synthesis | fatchord/WaveRNN |
| 1710.10467 | GE2E (encoder) | Generalized End-To-End Loss for Speaker Verification | This repo |

FAQ

1. Where can I download the dataset?

aidatatang_200zh | magicdata | aishell3

After unzipping aidatatang_200zh, you also need to unzip all the files under aidatatang_200zh\corpus\train.
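
If you would rather script the nested extraction, here is a small sketch, assuming the per-speaker archives are .tar.gz files (adjust the path and pattern to your copy):

# extract_nested.py - hypothetical helper; archive format assumed to be .tar.gz
import tarfile
from pathlib import Path

train_dir = Path(r"D:\data\aidatatang_200zh\corpus\train")   # example path
for archive in sorted(train_dir.glob("*.tar.gz")):
    with tarfile.open(archive) as tar:
        tar.extractall(train_dir)
    print("extracted", archive.name)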

2. What is <datasets_root>?

If the dataset path is D:\data\aidatatang_200zh, then <datasets_root> is D:\data.

3. Not enough VRAM

Train the synthesizer: adjust the batch_size in synthesizer/hparams.py

# Before
tts_schedule = [(2,  1e-3,  20_000,  12),   # Progressive training schedule
                (2,  5e-4,  40_000,  12),   # (r, lr, step, batch_size)
                (2,  2e-4,  80_000,  12),   #
                (2,  1e-4, 160_000,  12),   # r = reduction factor (# of mel frames
                (2,  3e-5, 320_000,  12),   #     synthesized for each decoder iteration)
                (2,  1e-5, 640_000,  12)],  # lr = learning rate
# After
tts_schedule = [(2,  1e-3,  20_000,  8),   # Progressive training schedule
                (2,  5e-4,  40_000,  8),   # (r, lr, step, batch_size)
                (2,  2e-4,  80_000,  8),   #
                (2,  1e-4, 160_000,  8),   # r = reduction factor (# of mel frames
                (2,  3e-5, 320_000,  8),   #     synthesized for each decoder iteration)
                (2,  1e-5, 640_000,  8)],  # lr = learning rate

Train vocoder - Preprocess the data: adjust the batch_size in synthesizer/hparams.py

# Before
### Data Preprocessing
        max_mel_frames = 900,
        rescale = True,
        rescaling_max = 0.9,
        synthesis_batch_size = 16,                  # For vocoder preprocessing and inference.
# After
### Data Preprocessing
        max_mel_frames = 900,
        rescale = True,
        rescaling_max = 0.9,
        synthesis_batch_size = 8,                  # For vocoder preprocessing and inference.

Train vocoder - Train the vocoder: adjust the batch_size in vocoder/wavernn/hparams.py

# Before
# Training
voc_batch_size = 100
voc_lr = 1e-4
voc_gen_at_checkpoint = 5
voc_pad = 2

# After
# Training
voc_batch_size = 6
voc_lr = 1e-4
voc_gen_at_checkpoint = 5
voc_pad = 2

4. What if I get RuntimeError: Error(s) in loading state_dict for Tacotron: size mismatch for encoder.embedding.weight: copying a param with shape torch.Size([70, 512]) from checkpoint, the shape in current model is torch.Size([75, 512])?

Please refer to issue #37

5. How to improve CPU and GPU occupancy rate?

Adjust the batch_size as appropriate; larger batches generally improve utilization as long as they fit in VRAM (see the check below).
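
To see how much VRAM headroom you have while raising batch_size, a minimal PyTorch sketch (run it inside the training process, e.g. temporarily added near the training loop):

# VRAM headroom check - sketch, PyTorch only
import torch

if torch.cuda.is_available():
    used = torch.cuda.max_memory_allocated() / 1024**3
    total = torch.cuda.get_device_properties(0).total_memory / 1024**3
    print(f"peak VRAM: {used:.2f} GiB of {total:.2f} GiB")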

6. What if "the page file is too small to complete the operation" occurs?

Please refer to this video and increase the virtual memory to 100 GB (102400 MB). For example, if the dataset is on drive D, change drive D's virtual memory.

7. When should I stop during training?

FYI, my attention line appeared after 18k steps and the loss dropped below 0.4 after 50k steps. (Sample images: attention_step_20500_sample_1, step-135500-mel-spectrogram_sample_1.)