kokoro-onnx and Kokoros
These are complementary implementations of the same Kokoro TTS model targeting different runtime environments: ONNX Runtime for Python-based inference versus Rust for standalone or embedded deployment. Developers can choose whichever backend best fits their use case.
About kokoro-onnx
thewh1teagle/kokoro-onnx
TTS with kokoro and onnx runtime
Leverages ONNX Runtime for CPU and GPU-accelerated inference with quantized models as small as 80MB, enabling near real-time synthesis on resource-constrained devices like M1 Macs. Supports 82+ voices across multiple languages with optional grapheme-to-phoneme conversion via the misaki package for improved pronunciation accuracy. Provides a lightweight, self-contained alternative to larger TTS systems while maintaining compatibility with standard audio output formats.
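A minimal sketch of synthesis with kokoro-onnx, following the usage shown in the project's README. The model and voices file names (`kokoro-v1.0.onnx`, `voices-v1.0.bin`) and the voice ID are assumptions based on the project's published assets and may differ between releases; the `synthesize` helper is a hypothetical wrapper, not part of the library.

```python
def synthesize(text, out_path="output.wav", voice="af_sarah", speed=1.0):
    """Hypothetical helper wrapping kokoro-onnx's documented create() API.

    Requires the kokoro-onnx and soundfile packages, plus the model and
    voices files downloaded from the project's releases (file names here
    are assumptions and may vary by version).
    """
    # Imports are deferred so this sketch can be read/loaded without the
    # packages installed; synthesis itself needs them at call time.
    from kokoro_onnx import Kokoro
    import soundfile as sf

    kokoro = Kokoro("kokoro-v1.0.onnx", "voices-v1.0.bin")
    # create() returns raw samples and the sample rate
    samples, sample_rate = kokoro.create(text, voice=voice, speed=speed, lang="en-us")
    sf.write(out_path, samples, sample_rate)
    return sample_rate
```

Because inference runs through ONNX Runtime, the same script works on CPU by default, with GPU acceleration available via the corresponding onnxruntime provider packages.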
About Kokoros
lucasjinreal/Kokoros
🔥🔥 Kokoro in Rust. https://huggingface.co/hexgrad/Kokoro-82M Insanely fast, real-time TTS with the highest quality you've ever heard.
Provides built-in phonemization and ONNX model inference without external dependencies, enabling end-to-end TTS in pure Rust. Supports style mixing, word-level timestamps, streaming output, and an OpenAI-compatible HTTP API with configurable parallel processing for both low-latency and high-throughput scenarios.
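Since Kokoros exposes an OpenAI-compatible HTTP API, any standard OpenAI-style speech client can talk to it. The sketch below builds such a request with only the Python standard library; the port (`3000`), voice ID, and `model` value are assumptions following the OpenAI `/v1/audio/speech` request shape, and Kokoros' actual defaults may differ.

```python
import json
import urllib.request

def build_speech_request(base_url, text, voice="af_sky", model="tts-1"):
    """Build a POST request for an OpenAI-compatible /v1/audio/speech endpoint.

    Field names ("model", "input", "voice") follow the OpenAI speech API
    convention that Kokoros mirrors; defaults here are illustrative.
    """
    payload = {"model": model, "input": text, "voice": voice}
    return urllib.request.Request(
        f"{base_url}/v1/audio/speech",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Construct a request against a locally running Kokoros server
# (sending it with urllib.request.urlopen(req) would return audio bytes).
req = build_speech_request("http://localhost:3000", "Hello from Kokoros")
```

Streaming output and word-level timestamps are server-side features; a client like this simply receives the synthesized audio in the response body.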