worker-comfyui and comfyui
About worker-comfyui
runpod-workers/worker-comfyui
ComfyUI as a serverless API on RunPod
Exposes ComfyUI workflows through RunPod's serverless API (`/run`, `/runsync`, `/status`) with flexible output handling: images are returned as base64 strings by default or uploaded directly to S3 buckets. Pre-built Docker images bundle popular diffusion models (FLUX.1, Stable Diffusion 3/XL) alongside the base ComfyUI installation, eliminating manual model setup. Accepts workflow JSON with optional base64-encoded input images and supports per-request API credentials for external integrations.
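A minimal sketch of what a request to such an endpoint could look like, assuming the input fields `workflow` and `images` described above; the endpoint ID, API key, workflow JSON, and image bytes are all placeholders, and the exact payload shape should be verified against the worker-comfyui README:

```python
import base64
import json

ENDPOINT_ID = "your-endpoint-id"  # placeholder: your RunPod serverless endpoint ID
API_KEY = "your-api-key"          # placeholder: your RunPod API key

# A ComfyUI workflow exported in API format (abbreviated placeholder).
workflow = {"3": {"class_type": "KSampler", "inputs": {}}}

# Optional input image, sent base64-encoded alongside the workflow.
png_bytes = b"\x89PNG\r\n\x1a\n"  # placeholder bytes; use real image data
payload = {
    "input": {
        "workflow": workflow,
        "images": [
            {"name": "input.png", "image": base64.b64encode(png_bytes).decode()}
        ],
    }
}

def submit(payload):
    """POST the payload to the blocking /runsync endpoint and return its JSON."""
    import requests  # third-party; pip install requests
    resp = requests.post(
        f"https://api.runpod.ai/v2/{ENDPOINT_ID}/runsync",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json=payload,
        timeout=600,
    )
    resp.raise_for_status()
    # Images come back as base64 strings unless S3 upload is configured.
    return resp.json()

# submit(payload)  # uncomment once real credentials are filled in
print(json.dumps(payload)[:40])
```

For fire-and-forget jobs, the same payload would go to `/run` instead, with the result polled later via `/status`.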
About comfyui
ai-dock/comfyui
ComfyUI Docker images for use in GPU cloud and local environments. Includes the AI-Dock base for authentication and an improved user experience.
Supports CUDA, ROCm, and CPU-only image variants with configurable startup arguments and automatic model downloads from HuggingFace/Civitai using token authentication. Includes a REST API wrapper with Swagger documentation alongside the core ComfyUI service; both are password-protected and manageable via supervisorctl. Provisioning scripts enable reproducible setups for specific model configurations such as SD3 and FLUX.1 without bundling models directly in the image.
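A configuration sketch of how such a container might be launched. The environment variable names (WEB_USER, WEB_PASSWORD, HF_TOKEN, CIVITAI_TOKEN, PROVISIONING_SCRIPT), the image name, and the port are assumptions based on common AI-Dock conventions; the provisioning URL is a placeholder. Verify all of them against the repository's documentation before use:

```shell
# Assumed variable names and image tag; check the ai-dock/comfyui docs.
docker run -d --gpus all \
  -p 8188:8188 \
  -e WEB_USER=admin \
  -e WEB_PASSWORD=changeme \
  -e HF_TOKEN=hf_xxx \
  -e CIVITAI_TOKEN=xxx \
  -e PROVISIONING_SCRIPT="https://example.com/provisioning.sh" \
  ghcr.io/ai-dock/comfyui:latest
```

Pointing PROVISIONING_SCRIPT at a versioned script is what makes a given model configuration (e.g. SD3 or FLUX.1) reproducible without baking the models into the image.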