TTS-Audio-Suite and ComfyUI-VibeVoice
These are **competitors**: both provide TTS capabilities for ComfyUI. TTS-Audio-Suite offers broader multi-engine support (RVC, Echo-TTS, Qwen3-TTS, and others), while ComfyUI-VibeVoice specializes in expressive, long-form conversational audio. Choose between them based on your specific TTS requirements.
About TTS-Audio-Suite
diodiogod/TTS-Audio-Suite
A ComfyUI custom node integration for multi-engine multi-language Text-to-Speech and Voice Conversion. Supports: RVC, Echo-TTS, Qwen3-TTS, Cozy Voice 3, Step Audio EditX, IndexTTS-2, Chatterbox (classic and multilingual 23-lang), F5-TTS, Higgs Audio 2 and VibeVoice with unlimited text length, SRT timing, Character support, and many audio tools
Implements a modular node-based architecture within ComfyUI that abstracts 12 TTS/voice-conversion engines behind unified interfaces, so workflows can swap engines or chain operations (transcription → subtitle timing → synthesis → voice conversion) without restructuring the graph. Also provides advanced subtitle authoring through SRT generation from plain text using readability algorithms, plus per-segment parameter switching via inline tags such as `[seed:24]`.
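The per-segment tag mechanism can be illustrated with a minimal sketch: a parser that splits text on inline `[key:value]` tags, carrying the accumulated parameters forward into each following segment. The regex, function name, and semantics here are assumptions for illustration, not TTS-Audio-Suite's actual implementation.

```python
import re

# Hypothetical tag syntax modeled on the documented [seed:24] example;
# not TTS-Audio-Suite's real parser.
TAG_RE = re.compile(r"\[(\w+):([^\]]+)\]")

def split_segments(text, defaults=None):
    """Split text into (params, segment) pairs; each inline tag updates
    the parameters applied to the text that follows it."""
    params = dict(defaults or {})
    segments = []
    pos = 0
    for m in TAG_RE.finditer(text):
        chunk = text[pos:m.start()].strip()
        if chunk:
            segments.append((dict(params), chunk))
        params[m.group(1)] = m.group(2)  # later tags override earlier ones
        pos = m.end()
    tail = text[pos:].strip()
    if tail:
        segments.append((dict(params), tail))
    return segments

segs = split_segments("Hello there. [seed:24] A segment with a fixed seed.")
# segs[0] carries no parameters; segs[1] carries {'seed': '24'}
```

A parser like this lets a single text input drive different synthesis settings per segment without splitting the workflow graph.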
About ComfyUI-VibeVoice
wildminder/ComfyUI-VibeVoice
ComfyUI custom node for the VibeVoice TTS. Expressive, long-form, multi-speaker conversational audio
Integrates Microsoft's VibeVoice model directly into ComfyUI workflows for multi-speaker dialogue generation, supporting voice cloning via reference audio and hybrid zero-shot voice generation. Features 4-bit LLM quantization, multiple attention backends (eager/SDPA/Flash Attention/SageAttention), and automatic model management with configurable diffusion parameters for fine-grained control over speech synthesis.
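Supporting multiple attention backends usually means probing which implementations are available and falling back gracefully. The sketch below shows that pattern under stated assumptions: the backend names, priority order, and function are illustrative, not ComfyUI-VibeVoice's actual code.

```python
import importlib.util

# Illustrative priority order; SDPA and eager ship with PyTorch itself,
# while SageAttention and Flash Attention are optional installs.
PREFERRED = ["sage_attention", "flash_attn", "sdpa", "eager"]
BUILTIN = {"sdpa", "eager"}

def pick_backend(requested="auto"):
    """Return the requested backend if its package is importable,
    otherwise fall back through PREFERRED to a built-in backend."""
    candidates = PREFERRED if requested == "auto" else [requested] + PREFERRED
    for name in candidates:
        if name in BUILTIN or importlib.util.find_spec(name) is not None:
            return name
    return "eager"

backend = pick_backend()
# On an install without the optional packages this resolves to 'sdpa'.
```

The fallback keeps workflows portable: the same graph runs on machines with or without the accelerated attention kernels installed.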