coqui-ai/TTS

🐸💬 - a deep learning toolkit for Text-to-Speech, battle-tested in research and production

Score: 69 / 100 (Established)

Supports multiple model architectures spanning spectrogram-based (Tacotron2, Glow-TTS, FastSpeech2) and end-to-end approaches (VITS, XTTS), with built-in speaker encoder for multi-speaker synthesis and voice cloning. Enables sub-200ms streaming inference, fine-tuning on custom datasets, and integrates ~1100 Fairseq models alongside modular vocoder support (MelGAN, ParallelWaveGAN, WaveGrad). Training infrastructure includes dataset curation tools, Tensorboard logging, and a lightweight Trainer API optimized for efficient multi-GPU training.

44,801 stars and 214,937 monthly downloads. Used by 2 other packages. No commits in the last 6 months. Available on PyPI.

Stale (no commits in 6 months)
Maintenance: 0 / 25
Adoption: 22 / 25
Maturity: 25 / 25
Community: 22 / 25


Stars: 44,801
Forks: 5,999
Language: Python
License: MPL-2.0
Last pushed: Aug 16, 2024
Monthly downloads: 214,937
Commits (30d): 0
Dependencies: 39
Reverse dependents: 2

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/voice-ai/coqui-ai/TTS"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
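As a minimal sketch, the same endpoint can be queried from Python with the standard library. The URL pattern (`/quality/<category>/<owner>/<repo>`) is inferred from the curl example above, and the response schema is an assumption, not documented here.

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build an endpoint URL matching the curl example's pattern."""
    return f"{BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch and parse a JSON quality report (response schema is assumed)."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)

# Example (makes a live request, which counts against the daily quota):
# report = fetch_quality("voice-ai", "coqui-ai", "TTS")
```

Anonymous callers get 100 requests/day, so cache responses locally if you poll many packages.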