kan-bayashi/ParallelWaveGAN

Unofficial Parallel WaveGAN (+ MelGAN, Multi-band MelGAN, HiFi-GAN & StyleMelGAN) with PyTorch

Score: 51 / 100 — Established

Implements five non-autoregressive vocoder architectures (Parallel WaveGAN, MelGAN, Multi-band MelGAN, HiFi-GAN, StyleMelGAN) optimized for real-time mel-spectrogram-to-waveform synthesis, with modular generator-discriminator combinations and STFT loss functions. Designed for seamless integration with ESPnet-TTS and Tacotron2-based systems, it supports both text-to-speech and singing voice synthesis through pre-built recipes for diverse datasets (LJSpeech, VCTK, LibriTTS, Kiritan, Opencpop, etc.). It provides distributed multi-GPU training via PyTorch with optional NVIDIA Apex support, and includes inference utilities for deployment.

1,637 stars. No commits in the last 6 months.

Badges: Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 25 / 25


Stars: 1,637
Forks: 352
Language: Jupyter Notebook
License: MIT
Last pushed: Apr 22, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/voice-ai/kan-bayashi/ParallelWaveGAN"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
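The endpoint above looks like it follows a `/api/v1/quality/{category}/{owner}/{repo}` path shape. A minimal Python sketch for building such URLs, assuming that path shape generalizes (only the `voice-ai` category shown in the curl command is confirmed; any other category name would be an assumption):

```python
from urllib.parse import quote

# Base path taken from the documented curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-score endpoint URL for a repository.

    The path shape is inferred from the documented curl example;
    segments are percent-encoded defensively.
    """
    return f"{BASE}/{quote(category)}/{quote(owner)}/{quote(repo)}"

# Reproduces the documented endpoint:
url = quality_url("voice-ai", "kan-bayashi", "ParallelWaveGAN")
print(url)
# → https://pt-edge.onrender.com/api/v1/quality/voice-ai/kan-bayashi/ParallelWaveGAN
```

The resulting URL can then be fetched with any HTTP client (curl, `urllib.request`, requests); keep the 100 requests/day unauthenticated limit noted above in mind when polling.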