D-Ogi/ComfyUI-Attention-Optimizer
Automatically benchmark and optimize attention in diffusion models. 1.5-2x speedup on RTX 4090.
Automatically benchmarks multiple attention backends (PyTorch SDPA, Flash Attention, SageAttention, xFormers) inside ComfyUI, detects the optimal implementation for the current GPU and model configuration, and applies the fastest one via model patching. Benchmark results are cached by model hash to avoid repeated overhead. Supports diverse architectures (SDXL, Flux, video models) and head dimensions, and provides granular backend selection with per-GPU recommendations and detailed performance reporting.
Stars: 27
Forks: 5
Language: Python
License: MIT
Category:
Last pushed: Feb 09, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/D-Ogi/ComfyUI-Attention-Optimizer"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000/day.
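The same endpoint can be called from Python instead of curl; a minimal standard-library sketch is below. The URL path is taken from the curl example above, but the JSON response schema is an assumption, so the parsed result is treated as an opaque dict.

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-endpoint URL for a repository."""
    return f"{BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    # Assumes the endpoint returns JSON (not documented above); callers
    # should inspect the keys of the returned dict themselves.
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.loads(resp.read().decode("utf-8"))

url = quality_url("diffusion", "D-Ogi", "ComfyUI-Attention-Optimizer")
```

With an API key, you would presumably pass it in a header or query parameter; the exact mechanism is not specified on this page.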
Higher-rated alternatives
- Comfy-Org/comfy-cli: Command-line interface for managing ComfyUI
- runpod-workers/worker-comfyui: ComfyUI as a serverless API on RunPod
- YanWenKun/ComfyUI-Docker: 🐳 Dockerfile for 🎨 ComfyUI | Container image and startup scripts
- Comfy-Org/ComfyUI: The most powerful and modular diffusion model GUI, API, and backend with a graph/nodes interface
- Acly/comfyui-tooling-nodes: Nodes for using ComfyUI as a backend for external tools. Send and receive images directly...