pollockjj/ComfyUI-MultiGPU

This custom_node for ComfyUI adds one-click "Virtual VRAM" for any UNet and CLIP loader, as well as MultiGPU integration in WanVideoWrapper, managing the offload/Block Swap of layers to DRAM *or* VRAM to maximize the latent space available on your card. It also includes nodes for loading entire components (UNet, CLIP, VAE) directly onto the device you choose.

Score: 62 / 100 (Established)

DisTorch2 uses a layer-distribution architecture to split model weights across CPU DRAM and multiple GPUs via byte-precise or ratio-based allocation, letting users offload static model components while reserving maximum VRAM for latent-space computation. It supports both `.safetensors` and GGUF model formats with three allocation modes: simple virtual VRAM sliders, expert byte/ratio specifications (e.g., `cuda:0,2.5gb;cpu,*`), and fraction-based distribution. It is compatible with all ComfyUI checkpoint, CLIP, VAE, ControlNet, and video-generation loaders, including WanVideoWrapper.
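The expert allocation syntax can be illustrated with a short parser. This is a minimal sketch, not the project's actual implementation: the grammar (semicolon-separated `device,amount` pairs, with `*` meaning "all remaining") is an assumption inferred from the single example `cuda:0,2.5gb;cpu,*`.

```python
# Hypothetical parser for a DisTorch2-style expert allocation string.
# Grammar assumed: "device,amount" pairs joined by ";", where amount is a
# byte size like "2.5gb" or "*" for whatever layers remain.

UNITS = {"kb": 1024, "mb": 1024**2, "gb": 1024**3}

def parse_allocation(spec):
    """Return a list of (device, bytes) pairs; None means 'all remaining'."""
    result = []
    for entry in spec.split(";"):
        # rsplit on the last comma: device names like "cuda:0" contain ":"
        device, amount = entry.rsplit(",", 1)
        if amount == "*":
            result.append((device, None))
        else:
            number, unit = amount[:-2], amount[-2:].lower()
            result.append((device, int(float(number) * UNITS[unit])))
    return result

print(parse_allocation("cuda:0,2.5gb;cpu,*"))
# → [('cuda:0', 2684354560), ('cpu', None)]
```

Under this reading, 2.5 GB on `cuda:0` is reserved for model layers and everything else spills to CPU DRAM.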

823 stars. Actively maintained with 14 commits in the last 30 days.

No package · No dependents

Maintenance: 20 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 16 / 25


Stars: 823
Forks: 62
Language: Python
License: GPL-3.0
Last pushed: Mar 17, 2026
Commits (30d): 14

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/pollockjj/ComfyUI-MultiGPU"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
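The curl call above can also be made from Python. This sketch only builds the documented endpoint URL; the shape of the JSON response is not specified on this page, so the fetch helper returns the parsed body as-is rather than assuming field names.

```python
# Sketch of calling the quality API from Python with the standard library.
# Only the endpoint URL is taken from the page; everything else is illustrative.
import json
from urllib.request import urlopen

BASE = "https://pt-edge.onrender.com/api/v1/quality/diffusion"

def quality_url(owner, repo):
    """Build the per-repo quality endpoint URL."""
    return f"{BASE}/{owner}/{repo}"

def fetch_quality(owner, repo):
    """Fetch and parse the quality data (requires network access)."""
    with urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)

print(quality_url("pollockjj", "ComfyUI-MultiGPU"))
# → https://pt-edge.onrender.com/api/v1/quality/diffusion/pollockjj/ComfyUI-MultiGPU
```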