JIA-Lab-research/DreamOmni2

This project is the official implementation of "DreamOmni2: Multimodal Instruction-based Editing and Generation".

Quality score: 50 / 100 (Established)

Leverages a unified diffusion-based architecture with separate LoRA modules for editing and generation tasks, using multimodal encoders to process both text instructions and reference images for concrete object or abstract attribute guidance. Supports both subject-driven generation with identity/pose consistency and inpainting-aware editing that preserves non-edited regions while accepting visual references alongside natural language prompts. Available on Hugging Face with web demo interfaces and integrated with ComfyUI for production workflows.


No package · No dependents

Maintenance: 6 / 25
Adoption: 10 / 25
Maturity: 15 / 25
Community: 19 / 25


Stars: 2,273
Forks: 191
Language: Python
License: Apache-2.0
Last pushed: Oct 20, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/JIA-Lab-research/DreamOmni2"

Open to everyone: 100 requests/day with no key required; a free key raises the limit to 1,000/day.