GSV-TTS-Lite and Genie-TTS

These are complementary tools serving different inference needs for GPT-SoVITS: GSV-TTS-Lite provides a lightweight Python inference engine optimized for real-time performance, while Genie-TTS focuses on ONNX model conversion and cross-platform inference compatibility, allowing users to choose between native and standardized deployment formats.

Metric           GSV-TTS-Lite        Genie-TTS
Score            59 (Established)    55 (Established)
Maintenance      13/25               13/25
Adoption         15/25               10/25
Maturity         20/25               15/25
Community        11/25               17/25
Stars            57                  1,433
Forks            6                   95
Downloads        1,459               —
Commits (30d)    0                   1
Language         Python              Python
License          MIT                 MIT
Risk flags       None                No package, no dependents

About GSV-TTS-Lite

chinokikiss/GSV-TTS-Lite

GSV-TTS-Lite: a high-performance inference engine designed specifically for the GPT-SoVITS text-to-speech model (few-shot voice cloning).

Achieves millisecond-level latency through deep optimization, including Flash Attention support and decoupled timbre-emotion control, with a claimed 3-4x speedup on consumer GPUs at roughly half the VRAM of the reference implementation. Provides multiple inference modes (streaming token-level output, batch processing, voice conversion) with subtitle timestamp alignment, and ships as a PyPI package supporting CUDA, MPS (Apple Silicon), and CPU backends via a Python SDK, REST API, and WebUI.
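As a rough illustration of driving a local REST-style TTS server like the one GSV-TTS-Lite ships, the sketch below builds a synthesis request with only the standard library. The server address, the /tts endpoint, and every field name are assumptions for illustration; the project's actual API may differ.

```python
import json
import urllib.request

# Assumed default address of a locally running TTS server (hypothetical).
BASE_URL = "http://127.0.0.1:9880"

def build_tts_request(text, ref_audio, mode="stream"):
    """Build a synthesis request for a local GPT-SoVITS-style server.

    `mode` picks between the streaming and batch inference modes described
    above; all endpoint and field names here are illustrative assumptions.
    """
    payload = {
        "text": text,            # text to synthesize
        "ref_audio": ref_audio,  # reference clip for few-shot voice cloning
        "mode": mode,            # "stream" or "batch"
    }
    return urllib.request.Request(
        f"{BASE_URL}/tts",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_tts_request("Hello, world.", "speaker.wav")
# urllib.request.urlopen(req) would then stream audio from a running server.
```

The request is only constructed, not sent, so the sketch stays runnable without a server; swapping in the real endpoint and schema is a one-line change once confirmed against the project README.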

About Genie-TTS

High-Logic/Genie-TTS

GPT-SoVITS ONNX Inference Engine & Model Converter

Converts PyTorch GPT-SoVITS models to optimized ONNX format for CPU-first inference with ~1.1s first-token latency and minimal runtime footprint (~200MB). Provides Python API, FastAPI server integration, and pre-trained character models across Japanese, English, Chinese, and Korean with emotion/intonation cloning via reference audio.
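The ~1.1 s first-token figure refers to time-to-first-audio-chunk in a streaming session. A minimal, engine-agnostic sketch of how such a number is measured, using a stand-in generator rather than Genie-TTS itself (whose API is not shown here):

```python
import time

def fake_engine(text):
    """Stand-in for a streaming TTS session yielding audio chunks."""
    time.sleep(0.05)          # simulated prefill / model warm-up
    for word in text.split():
        yield word.encode()   # pretend each chunk is a block of PCM audio

def first_token_latency(stream):
    """Return (latency_seconds, first_chunk) for any chunk iterator."""
    start = time.perf_counter()
    first = next(stream)
    return time.perf_counter() - start, first

latency, chunk = first_token_latency(fake_engine("hello onnx world"))
print(f"first chunk after {latency * 1000:.0f} ms: {chunk!r}")
```

The same `first_token_latency` helper works against any real engine that exposes its output as an iterator of chunks, which is how streaming CPU-first inference is typically benchmarked.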


Scores updated daily from GitHub, PyPI, and npm data.