River-Zhang/ICEdit

[NeurIPS 2025] Image editing is worth a single LoRA! 0.1% training data for fantastic image editing! Surpasses GPT-4o in ID persistence. MoE checkpoint released! Only 4GB VRAM is enough to run!

Score: 48 / 100 (Emerging)

Leverages in-context generation within large-scale diffusion transformers to enable instruction-based image editing through lightweight LoRA adapters, requiring only 0.5% of prior SOTA training data. Supports multi-turn sequential edits and integrates with ComfyUI workflows, Gradio demos, and Hugging Face Spaces, with optimized variants including MoE-LoRA and GGUF quantization for resource-constrained inference (4-10GB VRAM).

2,083 stars. No package; no dependents.

Maintenance: 6 / 25
Adoption: 10 / 25
Maturity: 15 / 25
Community: 17 / 25


Stars: 2,083
Forks: 114
Language: Python
License: (not listed)
Last pushed: Dec 19, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/River-Zhang/ICEdit"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
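The curl command above suggests the endpoint follows the pattern `/api/v1/quality/<category>/<owner>/<repo>`. A minimal Python sketch for calling it, assuming that URL pattern and a JSON response (the response field names are not documented here, so the result is returned as an untyped dict):

```python
import json
import urllib.request

# Base URL taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-score endpoint URL for a repository.

    The path layout (category/owner/repo) is an assumption inferred
    from the single example URL shown above.
    """
    return f"{API_BASE}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch the quality report and parse it as JSON (assumed format)."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)


# Reproduces the curl request from above without making the call yet:
url = quality_url("diffusion", "River-Zhang", "ICEdit")
```

Calling `fetch_quality("diffusion", "River-Zhang", "ICEdit")` would then issue the same request as the curl example, subject to the 100 requests/day anonymous limit.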