lixiaowen-xw/DiffuEraser
DiffuEraser is a diffusion-based video inpainting model that achieves strong content completeness and temporal consistency while maintaining acceptable efficiency.
It builds on Stable Diffusion v1.5 with a dual-branch architecture combining a denoising UNet with an auxiliary BrushNet module, integrating temporal attention and prior conditioning to improve temporal consistency across video frames. It leverages temporal receptive-field expansion and video-diffusion smoothing for long-sequence inference, with post-processing mask blending to refine inpainted regions. The model is available on Hugging Face and ModelScope, with support for multi-resolution inference (360p–720p) and a two-stage training pipeline.
625 stars. No commits in the last 6 months.
Stars: 625
Forks: 59
Language: Python
License: Apache-2.0
Category:
Last pushed: Apr 17, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/lixiaowen-xw/DiffuEraser"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
jolibrain/joliGEN
Generative AI Image and Video Toolset with GANs and Diffusion for Real-World Applications
zhangmozhe/Deep-Exemplar-based-Video-Colorization
The source code of CVPR 2019 paper "Deep Exemplar-based Video Colorization".
naver-ai/StyleKeeper
Official Pytorch implementation of "StyleKeeper: Prevent Content Leakage using Negative Visual...
ali-vilab/AnyDoor
Official implementations for paper: Anydoor: zero-shot object-level image customization
ironjr/semantic-draw
Official code for the CVPR 2025 paper "SemanticDraw: Towards Real-Time Interactive Content...