jaywalnut310/vits
VITS: Conditional Variational Autoencoder with Adversarial Learning for End-to-End Text-to-Speech
Combines normalizing flows with adversarial training to enable parallel, single-stage synthesis that matches two-stage TTS quality while modeling natural speech variation through a stochastic duration predictor. Implements monotonic alignment search (Cython-optimized) for unsupervised duration learning and supports both single-speaker (LJ Speech) and multi-speaker (VCTK) training pipelines with PyTorch, requiring phoneme preprocessing via g2p.
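The monotonic alignment search mentioned above is a Viterbi-style dynamic program that finds the best non-decreasing mapping from mel frames to text tokens. Below is a minimal pure-Python sketch of that idea; the repo's actual implementation is a Cython kernel over batched tensors, so the function name and list-based interface here are illustrative only.

```python
def monotonic_alignment_search(value):
    """Sketch of monotonic alignment search (not the repo's Cython code).

    value: 2D list [n_text][n_mel] of per-(token, frame) log-likelihoods.
    Returns one text-token index per mel frame, monotonically non-decreasing,
    covering every token at least once (requires n_mel >= n_text).
    """
    n_text, n_mel = len(value), len(value[0])
    NEG = float("-inf")
    # Q[i][j] = best cumulative log-likelihood ending at token i, frame j.
    Q = [[NEG] * n_mel for _ in range(n_text)]
    for j in range(n_mel):
        # Token index can never exceed the frame index (each token needs a frame).
        for i in range(min(j + 1, n_text)):
            if j == 0:
                Q[i][j] = value[i][0] if i == 0 else NEG
            else:
                stay = Q[i][j - 1]                      # same token, next frame
                move = Q[i - 1][j - 1] if i > 0 else NEG  # advance to next token
                Q[i][j] = value[i][j] + max(stay, move)
    # Backtrack from the last token at the last frame.
    path = [0] * n_mel
    i = n_text - 1
    for j in range(n_mel - 1, -1, -1):
        path[j] = i
        if j > 0 and i > 0 and (Q[i - 1][j - 1] >= Q[i][j - 1] or i == j):
            i -= 1  # forced move when remaining frames equal remaining tokens
    return path
```

The resulting hard alignment gives per-token durations for free (count frames per index), which is what makes unsupervised duration learning possible without an external aligner.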
7,837 stars. No commits in the last 6 months.
Stars: 7,837
Forks: 1,386
Language: Python
License: MIT
Category:
Last pushed: Dec 06, 2023
Commits (last 30 days): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/voice-ai/jaywalnut310/vits"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
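The same endpoint can be called from Python's standard library. The helper names below (`quality_url`, `fetch_quality`) are hypothetical, and the response schema is not documented on this page, so inspect the returned JSON before relying on specific fields.

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/voice-ai"

def quality_url(owner, repo):
    # Builds the endpoint URL shown on this page for a given repo.
    return f"{BASE}/{owner}/{repo}"

def fetch_quality(owner, repo, timeout=10):
    # Anonymous access is stated to allow 100 requests/day; how an API key
    # is passed (header vs. query parameter) is not documented here.
    with urllib.request.urlopen(quality_url(owner, repo), timeout=timeout) as resp:
        return json.load(resp)
```

Usage would look like `fetch_quality("jaywalnut310", "vits")`, mirroring the curl command above.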
Related tools
yeyupiaoling/MASR
A PyTorch implementation of a streaming and non-streaming automatic speech recognition framework, compatible with both online and offline recognition. Currently supports Conformer, Squeezeformer, and DeepSpeech2 models, along with multiple data augmentation methods.
coqui-ai/TTS
🐸💬 - a deep learning toolkit for Text-to-Speech, battle-tested in research and production
shivammehta25/Matcha-TTS
[ICASSP 2024] 🍵 Matcha-TTS: A fast TTS architecture with conditional flow matching
netease-youdao/EmotiVoice
EmotiVoice 😊: a Multi-Voice and Prompt-Controlled TTS Engine
gabrielmittag/NISQA
NISQA - Non-Intrusive Speech Quality and TTS Naturalness Assessment