TTS-Audio-Suite and ComfyUI-VibeVoice

These are **competitors**: both provide TTS capabilities for ComfyUI. TTS-Audio-Suite offers broader multi-engine support (RVC, Echo-TTS, Qwen3-TTS, etc.), while VibeVoice specializes in expressive, long-form, multi-speaker conversational audio; choose based on your specific TTS requirements.

| | TTS-Audio-Suite | ComfyUI-VibeVoice |
|---|---|---|
| Overall score | 68 (Established) | 50 (Established) |
| Maintenance | 25/25 | 2/25 |
| Adoption | 10/25 | 10/25 |
| Maturity | 15/25 | 15/25 |
| Community | 18/25 | 23/25 |
| Stars | 774 | 563 |
| Forks | 71 | 105 |
| Downloads | — | — |
| Commits (30d) | 55 | 0 |
| Language | Python | Python |
| License | — | MIT |
| Flags | No package, no dependents | Stale 6m, no package, no dependents |

About TTS-Audio-Suite

diodiogod/TTS-Audio-Suite

A ComfyUI custom node integration for multi-engine, multi-language Text-to-Speech and Voice Conversion. Supports RVC, Echo-TTS, Qwen3-TTS, Cozy Voice 3, Step Audio EditX, IndexTTS-2, Chatterbox (classic and 23-language multilingual), F5-TTS, Higgs Audio 2, and VibeVoice, with unlimited text length, SRT timing, character support, and many audio tools.

Implements a modular node-based architecture within ComfyUI that abstracts 12 TTS/voice conversion engines behind unified interfaces, enabling workflows to swap engines or chain operations (transcription → subtitle timing → synthesis → voice conversion) without graph restructuring. Provides advanced subtitle authoring through SRT generation from plain text using readability algorithms, per-segment parameter switching via inline tags such as `[seed:24]`, and character/language switching within single text blocks, bridging traditional NLP workflows with real-time audio generation at unlimited text lengths.
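The inline-tag mechanism described above can be sketched as a small stateful parser: each tag updates the parameters that apply to the text following it. This is a minimal illustration only; the function name, tag grammar, and return shape are assumptions, not TTS-Audio-Suite's actual API.

```python
import re

# Matches inline tags of the form [key:value], e.g. [seed:24].
TAG_RE = re.compile(r"\[(\w+):([^\]]+)\]")

def split_segments(text):
    """Split text into (params, segment) pairs; each inline tag
    updates the parameters applied to the text that follows it."""
    segments = []
    params = {}   # parameters accumulated from tags seen so far
    pos = 0
    for m in TAG_RE.finditer(text):
        chunk = text[pos:m.start()].strip()
        if chunk:
            # snapshot current params for this segment
            segments.append((dict(params), chunk))
        params[m.group(1)] = m.group(2)
        pos = m.end()
    tail = text[pos:].strip()
    if tail:
        segments.append((dict(params), tail))
    return segments

# First segment keeps default params; text after the tag gets seed=24.
print(split_segments("Hello there. [seed:24] A different take."))
```

A real implementation would also validate keys, coerce values (e.g. `seed` to an integer), and handle character-switch tags, but the snapshot-per-segment pattern is the core idea.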

About ComfyUI-VibeVoice

wildminder/ComfyUI-VibeVoice

ComfyUI custom node for VibeVoice TTS: expressive, long-form, multi-speaker conversational audio.

Integrates Microsoft's VibeVoice model directly into ComfyUI workflows for multi-speaker dialogue generation, supporting voice cloning via reference audio and hybrid zero-shot voice generation. Features 4-bit LLM quantization, multiple attention backends (eager/SDPA/Flash Attention/SageAttention), and automatic model management with configurable diffusion parameters for fine-grained control over speech synthesis.
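Supporting several attention backends usually means probing for optional packages and falling back to what PyTorch ships with. The sketch below illustrates that selection logic under stated assumptions: the module names, priority order, and `"auto"` convention are illustrative, not ComfyUI-VibeVoice's actual code.

```python
import importlib.util

# Illustrative priority order: fastest optional backends first,
# then PyTorch built-ins. Package names here are assumptions.
PREFERRED = ["sage_attention", "flash_attn", "sdpa", "eager"]

def pick_attention_backend(requested="auto"):
    """Return the requested backend, or the best available one for 'auto'."""
    if requested != "auto":
        return requested  # honor an explicit user choice
    for name in PREFERRED:
        if name in ("sdpa", "eager"):
            return name  # built into PyTorch; always available
        if importlib.util.find_spec(name) is not None:
            return name  # optional acceleration package is installed
    return "eager"

print(pick_attention_backend())         # best backend on this machine
print(pick_attention_backend("eager"))  # explicit override wins
```

The same probe-then-fall-back pattern applies to 4-bit quantization: attempt to load the quantized weights only when the supporting library is importable, otherwise load full precision.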

Scores updated daily from GitHub, PyPI, and npm data.