River-Zhang/ICEdit
[NeurIPS 2025] Image editing is worth a single LoRA! 0.1% training data for fantastic image editing! Surpasses GPT-4o in ID persistence. MoE checkpoint released! Only 4 GB VRAM is enough to run!
Leverages in-context generation within large-scale diffusion transformers to enable instruction-based image editing through lightweight LoRA adapters, requiring only 0.5% of the training data used by prior SOTA methods. Supports multi-turn sequential edits and integrates with ComfyUI workflows, Gradio demos, and Hugging Face Spaces, with optimized variants including MoE-LoRA and GGUF quantization for resource-constrained inference (4–10 GB VRAM).
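As a rough illustration of the in-context recipe described above (a minimal sketch, not the repo's exact script): place the source image in the left half of a diptych, mask the right half, and let a fill model inpaint the edited version next to it. The LoRA repo ID below is a hypothetical placeholder; use the weights linked in the ICEdit README.

import torch
from diffusers import FluxFillPipeline
from PIL import Image

# Base fill model plus a lightweight editing LoRA (placeholder ID).
pipe = FluxFillPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Fill-dev", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights("your-org/icedit-lora")  # hypothetical; see the README
pipe.enable_model_cpu_offload()  # trades speed for low-VRAM operation

src = Image.open("input.png").convert("RGB").resize((512, 512))

# In-context diptych: left half holds the source, right half is masked out
# so the fill model generates the edited image beside the original.
diptych = Image.new("RGB", (1024, 512))
diptych.paste(src, (0, 0))
mask = Image.new("L", (1024, 512), 0)
mask.paste(255, (512, 0, 1024, 512))

instruction = "make the sky a vivid sunset"
prompt = (
    "A diptych with two side-by-side images of the same scene. "
    f"On the right, the scene is exactly the same as on the left but {instruction}."
)

result = pipe(prompt=prompt, image=diptych, mask_image=mask,
              height=512, width=1024, guidance_scale=30,
              num_inference_steps=28).images[0]
result.crop((512, 0, 1024, 512)).save("edited.png")  # keep the edited half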
Stars: 2,083
Forks: 114
Language: Python
License: —
Category: diffusion
Last pushed: Dec 19, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/River-Zhang/ICEdit"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
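For scripted access, a minimal Python sketch (assuming the endpoint returns a JSON payload mirroring the stats above; the keyed tier's auth mechanism is not documented here, so it is omitted):

import requests

url = "https://pt-edge.onrender.com/api/v1/quality/diffusion/River-Zhang/ICEdit"
resp = requests.get(url, timeout=10)
resp.raise_for_status()
stats = resp.json()  # assumed JSON body; inspect it to see the exact fields
print(stats)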
Related models
bytedance/InfiniteYou
🔥 [ICCV 2025 Highlight] InfiniteYou: Flexible Photo Recrafting While Preserving Your Identity
muzishen/IMAGHarmony
🧩 IMAGHarmony 🧩: Controllable image editing with consistent object quantity and layout. A...
AMAP-ML/FE2E
[CVPR 2026] Beyond Generation: Advancing Image Editing Priors for Depth and Normal Estimation
TencentARC/BrushNet
[ECCV 2024] The official implementation of paper "BrushNet: A Plug-and-Play Image Inpainting...
ermongroup/SDEdit
PyTorch implementation for SDEdit: Image Synthesis and Editing with Stochastic Differential Equations