kokoro-onnx and expo-kokoro-onnx
These are ecosystem siblings: kokoro-onnx provides a general-purpose ONNX Runtime wrapper for Kokoro TTS, while expo-kokoro-onnx specializes it for mobile/React Native deployment via Expo.
About kokoro-onnx
thewh1teagle/kokoro-onnx
TTS with kokoro and onnx runtime
Leverages ONNX Runtime for CPU- and GPU-accelerated inference with quantized models as small as 80MB, enabling near real-time synthesis on commodity hardware such as M1 Macs without a dedicated GPU. Supports 82+ voices across multiple languages, with optional grapheme-to-phoneme conversion via the misaki package for improved pronunciation accuracy. Provides a lightweight, self-contained alternative to larger TTS systems while maintaining compatibility with standard audio output formats.
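A minimal usage sketch, assuming the wrapper's documented `Kokoro(model_path, voices_path)` constructor and `create()` method returning `(samples, sample_rate)`; the model/voices file names and the `synthesize_to_wav` helper are illustrative, not part of the library:

```python
import wave

def synthesize_to_wav(tts, text, out_path, voice="af_sarah", speed=1.0, lang="en-us"):
    """Generate speech with a kokoro-onnx Kokoro instance and write 16-bit mono WAV.

    `tts` is expected to expose create(text, voice=, speed=, lang=) returning
    (float samples in [-1, 1], sample_rate), as in the kokoro-onnx README.
    """
    samples, sample_rate = tts.create(text, voice=voice, speed=speed, lang=lang)
    with wave.open(out_path, "wb") as wf:
        wf.setnchannels(1)
        wf.setsampwidth(2)  # 16-bit PCM
        wf.setframerate(sample_rate)
        # Clamp floats to [-1, 1] and pack as little-endian signed 16-bit.
        wf.writeframes(b"".join(
            int(max(-1.0, min(1.0, s)) * 32767).to_bytes(2, "little", signed=True)
            for s in samples
        ))
    return sample_rate

if __name__ == "__main__":
    # Requires `pip install kokoro-onnx` plus downloaded model/voices files;
    # the file names below are illustrative, not pinned to a specific release.
    from kokoro_onnx import Kokoro
    tts = Kokoro("kokoro-v1.0.onnx", "voices-v1.0.bin")
    synthesize_to_wav(tts, "Hello from Kokoro on ONNX Runtime.", "hello.wav")
```

Keeping the WAV-writing helper separate from model loading means the same code path works whichever quantized model variant is on disk.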
About expo-kokoro-onnx
isaiahbjork/expo-kokoro-onnx
Run Kokoro TTS locally on device using Expo & ONNX Runtime
Leverages ONNX Runtime for efficient neural inference on mobile, converting Kokoro's TTS model to run without cloud dependency. Implements a complete pipeline from text normalization through phonemization and tokenization to audio waveform generation, with support for multiple quantized model variants (ranging from 326MB full precision to 92.4MB quantized) and accent-specific voice packs. Integrates with Expo's ecosystem (AV for playback, FileSystem for model management) and React Native for cross-platform iOS/Android deployment.
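The pipeline described above (normalize → phonemize → tokenize → synthesize) can be sketched with toy stand-ins; the G2P table and vocabulary below are invented for illustration and are not the repo's actual data:

```python
import re

# Toy grapheme-to-phoneme table (illustrative; the real pipeline uses a full G2P step).
G2P = {"hello": "h@lo", "world": "w3ld"}

# Toy phoneme-to-id vocabulary (illustrative; Kokoro uses its own symbol set).
VOCAB = {ch: i for i, ch in enumerate(sorted(set("".join(G2P.values())) | {" "}))}

def normalize(text):
    """Lowercase and strip everything but letters, apostrophes, and spaces."""
    return re.sub(r"[^a-z' ]", "", text.lower()).strip()

def phonemize(text):
    """Map each word through the toy G2P table (unknown words pass through)."""
    return " ".join(G2P.get(word, word) for word in text.split())

def tokenize(phonemes):
    """Convert phoneme characters to the integer ids the ONNX model consumes."""
    return [VOCAB[ch] for ch in phonemes if ch in VOCAB]

tokens = tokenize(phonemize(normalize("Hello, World!")))
# In the real app, these ids (plus a style vector from the selected
# accent-specific voice pack) feed the ONNX Runtime session, which
# returns the audio waveform for Expo AV to play.
```

Each stage is a pure function over the previous stage's output, which is why the repo can swap quantized model variants without touching the text front end.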