ComfyUI-VoxCPM and ComfyUI-KaniTTS

These two tools are **competitors**: both provide ComfyUI nodes for text-to-speech generation, with differing emphases on expressiveness, zero-shot voice cloning, and modularity.

| | ComfyUI-VoxCPM | ComfyUI-KaniTTS |
| --- | --- | --- |
| Overall score | 47 (Emerging) | 36 (Emerging) |
| Maintenance | 6/25 | 6/25 |
| Adoption | 10/25 | 7/25 |
| Maturity | 15/25 | 15/25 |
| Community | 16/25 | 8/25 |
| Stars | 390 | 38 |
| Forks | 42 | 3 |
| Downloads | — | — |
| Commits (30d) | 0 | 0 |
| Language | Python | Python |
| License | Apache-2.0 | Apache-2.0 |
| Package | none (no dependents) | none (no dependents) |

About ComfyUI-VoxCPM

wildminder/ComfyUI-VoxCPM

ComfyUI node for highly expressive speech and realistic zero-shot voice cloning

Implements a tokenizer-free, diffusion-based TTS architecture built on MiniCPM-4 that models speech in continuous space rather than as discrete tokens, enabling context-aware prosody generation. Includes native LoRA fine-tuning support within ComfyUI for custom voice-style training, automatic model management with efficient VRAM offloading, and a 6.25 Hz token rate for faster synthesis on consumer hardware. Integrates with ComfyUI's node workflow system, supports optional reference audio for voice cloning, and is compatible with multiple inference backends (CUDA, CPU, MPS, DirectML).
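To make the 6.25 Hz token rate concrete: each token covers 1 / 6.25 = 0.16 s of audio, so a low rate means fewer tokens per second of speech and therefore fewer generation steps. The sketch below is illustrative arithmetic only; the function name is hypothetical, not part of the VoxCPM API.

```python
import math

def tokens_for_duration(seconds: float, rate_hz: float = 6.25) -> int:
    """Number of tokens needed to cover `seconds` of audio (rounded up).

    Hypothetical helper: shows why a 6.25 Hz token rate is cheap —
    10 s of speech needs only ceil(10 * 6.25) = 63 tokens.
    """
    return math.ceil(seconds * rate_hz)

print(tokens_for_duration(10.0))  # -> 63
print(tokens_for_duration(1.0))   # -> 7 (6.25 rounded up)
```

Compare this with conventional discrete-token TTS codecs, which commonly run at 50 Hz or more, requiring roughly an order of magnitude more tokens for the same audio length.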

About ComfyUI-KaniTTS

wildminder/ComfyUI-KaniTTS

ComfyUI node for modular, human-like Kani TTS: generate natural, high-quality speech from text.
