ComfyUI and ComfyUI-RMBG
About ComfyUI
Comfy-Org/ComfyUI
The most powerful and modular diffusion model GUI, API, and backend with a graph/nodes interface.
Supports diverse generative models across image, video, audio, and 3D modalities, with intelligent memory management and GPU offloading for low-VRAM systems. The architecture uses an asynchronous queue system with incremental execution, re-computing only the workflow nodes that have changed, and integrates LoRAs, ControlNets, and model merging capabilities. It is extensible through custom nodes and external API providers while keeping core functionality fully offline.
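The incremental-execution idea above can be sketched as a per-node cache keyed by a hash of the node's parameters and upstream outputs: only nodes whose key changed since the last run are re-executed. This is a minimal illustration under assumed data structures (the `Node` class and `run_graph` function here are hypothetical), not ComfyUI's actual implementation.

```python
import hashlib
import json

class Node:
    def __init__(self, name, op, inputs):
        self.name = name      # node identifier
        self.op = op          # callable that computes the node's output
        self.inputs = inputs  # names of upstream nodes

def run_graph(nodes, params, cache):
    """Execute nodes in dependency order, skipping unchanged ones.

    `nodes` is assumed to be topologically sorted; `cache` maps a node
    name to (input_hash, cached_output) and persists across runs.
    """
    results, executed = {}, []
    for node in nodes:
        upstream = [results[i] for i in node.inputs]
        key = hashlib.sha256(json.dumps(
            [node.name, params.get(node.name), upstream],
            sort_keys=True, default=str).encode()).hexdigest()
        if cache.get(node.name, (None,))[0] == key:
            results[node.name] = cache[node.name][1]  # cache hit: reuse
        else:
            results[node.name] = node.op(params.get(node.name), *upstream)
            cache[node.name] = (key, results[node.name])
            executed.append(node.name)
    return results, executed
```

Changing one node's parameter re-executes only that node and anything downstream of it; untouched branches of the graph are served from the cache.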
About ComfyUI-RMBG
1038lab/ComfyUI-RMBG
A ComfyUI custom node designed for advanced image background removal and object, face, clothing, and fashion segmentation, utilizing multiple models including RMBG-2.0, INSPYRENET, BEN, BEN2, BiRefNet, SDMatte, SAM, SAM2, SAM3, and GroundingDINO.
Provides real-time background replacement, enhanced edge detection, and advanced matting capabilities through SAM2/SAM3 text-prompted segmentation and SDMatte alpha channel refinement. The node ecosystem includes utility components such as mask enhancement, image stitching, object removal, and latent-space conditioning for seamless integration into ComfyUI image generation workflows. The multi-model architecture enables selective deployment: speed-optimized variants (BiRefNet_lite) or specialized outputs (toon rendering, dynamic matting) can be chosen depending on use case requirements.
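At its core, background replacement composites the foreground over a new background using the soft alpha matte that a segmentation or matting model predicts. The models themselves are out of scope here; this sketch assumes the matte has already been computed and shows only the compositing step (the `replace_background` helper is a hypothetical name, not part of the node's API).

```python
import numpy as np

def replace_background(image, matte, background):
    """Composite a foreground over a new background using a soft matte.

    image, background: float arrays in [0, 1], shape (H, W, 3)
    matte: float array in [0, 1], shape (H, W); 1.0 = foreground
    """
    alpha = matte[..., None]  # add a channel axis so it broadcasts over RGB
    return alpha * image + (1.0 - alpha) * background
```

A soft (fractional) matte rather than a hard binary mask is what makes edges such as hair blend smoothly, which is why alpha refinement stages like SDMatte matter for quality.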