ORI-Muchim/Efficient-Speech
Lightweight Korean TTS Model based on FastSpeech2
Built on a shallow two-block pyramid transformer with depth-wise separable convolutions, the model achieves real-time synthesis (104x speedup on a Raspberry Pi 4) with only 266k parameters, roughly 1% of the size of comparable systems. It supports PyTorch 2.0 and Lightning training with optional mixed precision, plus ONNX export (with fixed input phoneme length) for edge deployment. Korean phoneme processing is handled via G2P, and FastSpeech2-style mel-spectrogram generation requires just 90 MFLOPS for a 6-second utterance.
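The small parameter footprint comes in large part from replacing standard convolutions with depth-wise separable ones. A minimal sketch of the parameter savings, using hypothetical channel and kernel sizes (these are illustrative, not the model's actual dimensions):

```python
def conv1d_params(c_in, c_out, k):
    # Standard 1-D convolution: one k-wide kernel per (input, output)
    # channel pair, plus one bias per output channel.
    return c_in * c_out * k + c_out

def separable_conv1d_params(c_in, c_out, k):
    # Depth-wise separable convolution: a per-channel k-wide depth-wise
    # kernel, followed by a 1x1 point-wise projection across channels.
    depthwise = c_in * k + c_in       # one kernel + bias per input channel
    pointwise = c_in * c_out + c_out  # 1x1 channel mixing + bias
    return depthwise + pointwise

# Hypothetical sizes: 128 channels in and out, kernel width 9.
standard = conv1d_params(128, 128, 9)             # 147,584 parameters
separable = separable_conv1d_params(128, 128, 9)  # 17,792 parameters
print(standard, separable, round(standard / separable, 1))  # ~8.3x fewer
```

At these sizes the separable variant uses roughly 8x fewer parameters per layer, which is how a two-block architecture can stay in the hundreds of thousands of parameters rather than tens of millions.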
Stars
14
Forks
4
Language
Python
License
Apache-2.0
Category
Last pushed
Mar 04, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/voice-ai/ORI-Muchim/Efficient-Speech"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
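The curl command above can also be issued from Python's standard library. A minimal sketch, assuming only the endpoint path shown in the curl example (the response format is not documented here, so this fetches raw text):

```python
from urllib.parse import quote

# Base endpoint taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality/voice-ai"

def quality_url(owner: str, repo: str) -> str:
    # Build the per-repository endpoint URL, escaping path segments.
    return f"{BASE}/{quote(owner)}/{quote(repo)}"

url = quality_url("ORI-Muchim", "Efficient-Speech")
print(url)

# Uncomment to actually hit the endpoint (free tier: 100 requests/day):
# import urllib.request
# body = urllib.request.urlopen(url).read().decode()
```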
Higher-rated alternatives
TensorSpeech/TensorFlowTTS
:stuck_out_tongue_closed_eyes: TensorFlowTTS: Real-Time State-of-the-art Speech Synthesis for...
lucasnewman/nanospeech
A simple, hackable text-to-speech system in PyTorch and MLX
Tomiinek/Multilingual_Text_to_Speech
An implementation of Tacotron 2 that supports multilingual experiments with parameter-sharing,...
jxzhanggg/nonparaSeq2seqVC_code
Implementation code of non-parallel sequence-to-sequence VC
keonlee9420/STYLER
Official repository of STYLER: Style Factor Modeling with Rapidity and Robustness via Speech...