kokoro-onnx and StreamingKokoroJS

These are ecosystem siblings: the ONNX Runtime implementation provides the core inference engine, while the browser-based streaming variant adapts the same Kokoro model for client-side web deployment under different optimization constraints.

                   kokoro-onnx          StreamingKokoroJS
Overall score      77 (Verified)        43 (Emerging)
Maintenance        10/25                2/25
Adoption           22/25                10/25
Maturity           25/25                15/25
Community          20/25                16/25
Stars              2,419                330
Forks              252                  33
Downloads          169,357              —
Commits (30d)      0                    0
Language           Python               JavaScript
License            MIT                  Apache-2.0
Risk flags         None                 Stale 6m, No Package, No Dependents

About kokoro-onnx

thewh1teagle/kokoro-onnx

TTS with kokoro and onnx runtime

Leverages ONNX Runtime for CPU- and GPU-accelerated inference with quantized models as small as 80MB, enabling near real-time synthesis on resource-constrained devices such as M1 Macs. Supports 82+ voices across multiple languages, with optional grapheme-to-phoneme conversion via the misaki package for improved pronunciation accuracy. Provides a lightweight, self-contained alternative to larger TTS systems while maintaining compatibility with standard audio output formats.
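The "standard audio output formats" point is easy to see in practice: synthesis libraries like this typically hand back raw float samples plus a sample rate, which can be written out as ordinary PCM WAV with only the standard library. The sketch below is illustrative, not kokoro-onnx's API; the sine-wave input stands in for real synthesized speech.

```python
import math
import struct
import wave

def write_wav(path, samples, sample_rate=24000):
    """Write float samples in [-1.0, 1.0] as 16-bit mono PCM WAV."""
    with wave.open(path, "wb") as wav:
        wav.setnchannels(1)          # mono
        wav.setsampwidth(2)          # 16-bit samples
        wav.setframerate(sample_rate)
        frames = b"".join(
            struct.pack("<h", int(max(-1.0, min(1.0, s)) * 32767))
            for s in samples
        )
        wav.writeframes(frames)

# Stand-in for synthesized speech: one second of a 440 Hz tone at 24 kHz.
samples = [math.sin(2 * math.pi * 440 * t / 24000) for t in range(24000)]
write_wav("out.wav", samples)
```

Any player or downstream tool that understands WAV can consume the result, which is what format compatibility buys you.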

About StreamingKokoroJS

rhulha/StreamingKokoroJS

Unlimited text-to-speech in the Browser using Kokoro-JS, 100% local, 100% open source

Leverages the Kokoro-82M-v1.0-ONNX model (~300MB) with WebGPU acceleration and WASM fallback for hardware-adaptive processing, using Web Workers to prevent UI blocking during generation. Implements intelligent text chunking to stream audio chunks as they're generated, maintaining natural speech patterns across multiple voice styles at 24kHz sample rate. Supports local model loading for offline deployment while maintaining full privacy through 100% client-side inference.
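The chunk-then-stream idea can be sketched language-agnostically. The sentence-boundary heuristic below is an assumption for illustration, not StreamingKokoroJS's actual chunking algorithm: split at sentence-ending punctuation, then pack sentences into pieces small enough to synthesize quickly, so playback of one chunk can start while the next is still being generated.

```python
import re

def chunk_text(text, max_chars=200):
    """Split text at sentence boundaries into chunks of at most
    max_chars, so each chunk can be synthesized and played while
    the next one is queued for generation."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    chunks, current = [], ""
    for sentence in sentences:
        if current and len(current) + len(sentence) + 1 > max_chars:
            chunks.append(current)   # flush the full chunk
            current = sentence
        else:
            current = f"{current} {sentence}".strip()
    if current:
        chunks.append(current)
    return chunks

text = "First sentence. Second sentence! Third one? " * 10
for chunk in chunk_text(text):
    # In StreamingKokoroJS, each chunk would be handed to a Web
    # Worker for synthesis; here we only produce the pieces.
    pass
```

Keeping the split on sentence boundaries is what preserves natural prosody across chunk seams: the synthesizer never sees a clause cut off mid-thought.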

Scores are updated daily from GitHub, PyPI, and npm data.