VibeVoice-ComfyUI and ComfyUI-GPT_SoVITS
These projects are competitors: both provide text-to-speech synthesis inside ComfyUI. VibeVoice wraps Microsoft's multi-speaker model, while GPT-SoVITS emphasizes voice cloning, so users typically choose between them based on whether they prioritize multi-speaker synthesis or speaker adaptation.
About VibeVoice-ComfyUI
Enemyx-net/VibeVoice-ComfyUI
A comprehensive ComfyUI integration for Microsoft's VibeVoice text-to-speech model, enabling high-quality single and multi-speaker voice synthesis directly within your ComfyUI workflows.
Supports voice cloning from audio samples, LoRA fine-tuning adapters, and multi-speaker conversations with up to 4 distinct voices using speaker labels. The implementation features embedded VibeVoice code with adaptive transformer compatibility, configurable quantization (4-bit/8-bit) for VRAM optimization, and cross-platform GPU support including Apple Silicon via MPS. Operates as a self-contained ComfyUI custom node with automatic text chunking, pause tag insertion, and memory management controls for complex generative workflows.
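To illustrate the speaker-label input described above, a multi-speaker script might look like the following. The exact `[1]:` label syntax and `[pause]` tag shown here are assumptions for illustration; consult the node's README for the precise conventions it accepts.

```
[1]: Welcome to the show. Today we have two guests with us.
[2]: Thanks for having me. [pause]
[3]: Great to be here.
[1]: Let's get started.
```

Each labeled line is synthesized with the voice assigned to that speaker slot, and the node's automatic text chunking splits long passages before inference.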
About ComfyUI-GPT_SoVITS
AIFSH/ComfyUI-GPT_SoVITS
A ComfyUI custom node for GPT-SoVITS: voice cloning and TTS directly in ComfyUI.
Integrates GPT-SoVITS voice synthesis into ComfyUI's node-based workflow, supporting multi-speaker inference and fine-tuning via SRT subtitle files for precise speaker control. Automatically downloads pre-trained models from Hugging Face, with ffmpeg as the only external dependency. Enables seamless composition with other ComfyUI nodes for end-to-end audio generation pipelines.
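For reference, SRT is a standard plain-text subtitle format of numbered cues with start/end timestamps; how the node maps cues to individual speakers is described in its README. A minimal two-cue file looks like this:

```
1
00:00:00,000 --> 00:00:02,500
Hello, this is the first speaker's line.

2
00:00:02,500 --> 00:00:05,000
And this is the second speaker's reply.
```

The timestamps give the node per-line timing and segmentation, which is what enables the precise speaker control mentioned above.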