pollockjj/ComfyUI-MultiGPU
This custom_node for ComfyUI adds one-click "Virtual VRAM" for any UNet and CLIP loader, as well as MultiGPU integration in WanVideoWrapper, managing the offload/block swap of layers to DRAM *or* VRAM to maximize the latent space available on your card. It also includes nodes for loading entire components (UNet, CLIP, VAE) directly onto the device you choose.
DisTorch2 uses a layer-distribution architecture to split model weights across CPU DRAM and multiple GPUs via byte-precise or ratio-based allocation, letting users offload static model components while reserving maximum VRAM for latent-space computation. It supports both `.safetensors` and GGUF model formats with three allocation modes: simple virtual VRAM sliders, expert byte/ratio specifications (e.g., `cuda:0,2.5gb;cpu,*`), and fraction-based distribution. It is compatible with all ComfyUI checkpoint, CLIP, VAE, ControlNet, and video-generation loaders, including WanVideoWrapper.
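To illustrate the expert allocation format, here is a minimal sketch of a parser for strings like `cuda:0,2.5gb;cpu,*` (semicolon-separated `device,budget` pairs, where `*` means "take the remainder"). This is only an assumption-laden illustration of the string syntax shown above, not the node's actual parsing code:

```python
def parse_allocations(spec: str) -> dict:
    """Parse a DisTorch2-style expert allocation string, e.g.
    'cuda:0,2.5gb;cpu,*', into {device: budget_in_bytes_or_'*'}.

    Illustrative sketch only; unit names and rounding behavior
    are assumptions, not taken from the repository's source.
    """
    units = {"kb": 2**10, "mb": 2**20, "gb": 2**30}
    allocations = {}
    for entry in spec.split(";"):
        # rpartition keeps device names like 'cuda:0' intact
        device, _, budget = entry.rpartition(",")
        budget = budget.strip().lower()
        if budget == "*":
            allocations[device.strip()] = "*"  # remainder goes here
        else:
            number, unit = budget[:-2], budget[-2:]
            allocations[device.strip()] = int(float(number) * units[unit])
    return allocations
```

For the example spec, `cuda:0` receives a fixed 2.5 GiB budget and `cpu` absorbs whatever is left of the model.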
823 stars. Actively maintained with 14 commits in the last 30 days.
Stars: 823
Forks: 62
Language: Python
License: GPL-3.0
Category:
Last pushed: Mar 17, 2026
Commits (30d): 14
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/pollockjj/ComfyUI-MultiGPU"
The endpoint is open to everyone at 100 requests/day with no key required; a free key raises the limit to 1,000 requests/day.
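For scripted access, the curl call above can be reproduced in Python. The helper below only builds the request URL from the endpoint path shown in the curl example; the `category` segment (`diffusion`) and `owner/name` repo form are taken from that example, and the response schema is not assumed:

```python
import urllib.parse

# Base path taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, repo: str) -> str:
    """Build the quality-API URL for a category and an 'owner/name' repo.

    The repo segment is left unescaped so its '/' separator survives;
    the category is percent-encoded as a precaution.
    """
    return f"{BASE}/{urllib.parse.quote(category)}/{repo}"
```

The resulting URL can then be fetched with any HTTP client, e.g. `urllib.request.urlopen(quality_url("diffusion", "pollockjj/ComfyUI-MultiGPU"))`.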
Related repositories
Comfy-Org/comfy-cli
Command Line Interface for Managing ComfyUI
runpod-workers/worker-comfyui
ComfyUI as a serverless API on RunPod
YanWenKun/ComfyUI-Docker
🐳Dockerfile for 🎨ComfyUI. | Container images and startup scripts
Comfy-Org/ComfyUI
The most powerful and modular diffusion model GUI, api and backend with a graph/nodes interface.
Acly/comfyui-tooling-nodes
Nodes for using ComfyUI as a backend for external tools. Send and receive images directly...