MOSS-TTS and MOSS-Speech

These are complementary components of an end-to-end speech processing pipeline: MOSS-Speech handles speech-to-speech understanding and generation, while MOSS-TTS provides the specialized text-to-speech synthesis layer needed to produce high-fidelity audio output.

                 MOSS-TTS            MOSS-Speech
Score            61 (Established)    44 (Emerging)
Maintenance      23/25               10/25
Adoption         10/25               10/25
Maturity         11/25               15/25
Community        17/25               9/25
Stars            922                 127
Forks            82                  7
Downloads        —                   —
Commits (30d)    30                  0
Language         Python              Python
License          Apache-2.0          Apache-2.0
Package          none (no dependents)  none (no dependents)

About MOSS-TTS

OpenMOSS/MOSS-TTS

MOSS‑TTS Family is an open‑source speech and sound generation model family from MOSI.AI and the OpenMOSS team. It is designed for high‑fidelity, high‑expressiveness, and complex real‑world scenarios, covering stable long‑form speech, multi‑speaker dialogue, voice/character design, environmental sound effects, and real‑time streaming TTS.

Technical Summary

Built on a modular architecture, MOSS-TTS decomposes speech synthesis into five specialized models: a flagship TTS model for zero-shot voice cloning with phoneme-level control, a dialogue model outperforming closed-source baselines on objective metrics, a prompt-based voice generator requiring no reference audio, a low-latency realtime agent model (180 ms TTFB), and a sound effect generator. The framework supports multiple inference backends, including PyTorch-free deployment via llama.cpp with GGUF quantization and ONNX audio codec decoding, plus SGLang acceleration achieving 3× faster generation throughput. Models are available on Hugging Face and ModelScope, with fine-tuning tutorials and REST API documentation via the MOSI.AI studio platform.
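To make the 180 ms TTFB (time to first byte) figure concrete, here is a minimal sketch of how that metric is measured for a streaming TTS backend. The `fake_streaming_tts` generator and its chunk format are stand-in stubs, not the MOSS-TTS API; only the measurement pattern is the point.

```python
import time

def fake_streaming_tts(text):
    """Stub standing in for a streaming TTS backend (hypothetical, not
    the MOSS-TTS API). Yields raw audio chunks as bytes; the first chunk
    arrives after a simulated model warm-up delay."""
    time.sleep(0.18)  # simulate ~180 ms time-to-first-byte
    for _ in range(5):
        yield b"\x00" * 3200  # 100 ms of 16 kHz, 16-bit mono silence
        time.sleep(0.01)      # simulate steady chunk cadence

def measure_ttfb(stream):
    """Return (ttfb_seconds, total_bytes) for a chunked audio stream.
    TTFB is the wall-clock time until the first chunk is received."""
    start = time.monotonic()
    first = next(stream)
    ttfb = time.monotonic() - start
    total = len(first) + sum(len(chunk) for chunk in stream)
    return ttfb, total

ttfb, nbytes = measure_ttfb(fake_streaming_tts("hello"))
print(f"TTFB: {ttfb * 1000:.0f} ms, audio bytes: {nbytes}")
```

For a realtime agent, TTFB matters more than total synthesis time, since playback can begin as soon as the first chunk lands; that is the property the realtime agent model optimizes.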

About MOSS-Speech

OpenMOSS/MOSS-Speech

MOSS-Speech is a true speech-to-speech large language model: it understands and generates speech directly, without intermediate text guidance.

Scores updated daily from GitHub, PyPI, and npm data.