TIGER-AI-Lab/AnyV2V

Code and data for "AnyV2V: A Tuning-Free Framework For Any Video-to-Video Editing Tasks" [TMLR 2024]

Quality score: 42 / 100 (Emerging)

Leverages image-to-video (I2V) diffusion models to reduce video editing to single-frame image editing, enabling diverse editing tasks (stylization, object manipulation, semantic changes) through plug-and-play integration with any image editing method. Uses latent space DDIM inversion and PnP guidance to propagate first-frame edits temporally while maintaining appearance and motion consistency across frames. Supports multiple I2V backbones (i2vgen-xl, ConsistI2V, SEINE) with modular architecture compatible with InstantStyle, InstructPix2Pix, and other image editors.
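The two-stage workflow described above can be sketched in pure Python. This is an illustrative outline only: the function names, the string stand-ins for frames and latents, and the default backbone argument are all assumptions for clarity, not the repository's actual API.

```python
# Illustrative sketch of the AnyV2V two-stage pipeline.
# All names below are hypothetical; frames/latents are stand-in strings.

def edit_first_frame(frame, image_editor):
    # Stage 1: apply any off-the-shelf image editor (e.g. InstructPix2Pix,
    # InstantStyle) to the first frame only.
    return image_editor(frame)

def ddim_invert(frames, i2v_model):
    # Stage 2a: DDIM-invert the source video into the I2V model's latent
    # space so its appearance and motion features can be reused.
    return [f"latent({f})" for f in frames]

def regenerate(edited_frame, latents, i2v_model):
    # Stage 2b: sample from the I2V model conditioned on the edited first
    # frame, injecting the inverted features (PnP guidance) at each step
    # to propagate the edit temporally.
    return [edited_frame] + [f"edited({z})" for z in latents[1:]]

def anyv2v(frames, image_editor, i2v_model="i2vgen-xl"):
    edited = edit_first_frame(frames[0], image_editor)
    latents = ddim_invert(frames, i2v_model)
    return regenerate(edited, latents, i2v_model)

# Example: a stand-in "stylization" editor applied to a 3-frame clip.
video = anyv2v(["f0", "f1", "f2"], image_editor=lambda f: f"styled({f})")
```

Because only Stage 1 touches the image editor, any editing method can be plugged in without retraining, which is what makes the framework tuning-free.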

649 stars. No commits in the last 6 months.

Badges: Stale (6 months) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 16 / 25


Stars: 649
Forks: 49
Language: Jupyter Notebook
License: MIT
Last pushed: Oct 29, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/generative-ai/TIGER-AI-Lab/AnyV2V"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
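The same request can be made from Python with the standard library. The endpoint path is taken from the curl command above; the helper name, the `fetch_quality` function, and the assumption that the response body is JSON are illustrative, not documented behavior of the API.

```python
import json
import urllib.request

# Base endpoint copied from the curl example on this page.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category, owner, repo):
    # Build the per-repository quality URL (hypothetical helper).
    return f"{BASE}/{category}/{owner}/{repo}"

def fetch_quality(url):
    # Perform the GET request; assumes a JSON response body.
    # Keyless access is rate-limited to 100 requests/day.
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

url = quality_url("generative-ai", "TIGER-AI-Lab", "AnyV2V")
# data = fetch_quality(url)  # uncomment to make the live request
```

With a free API key the limit rises to 1,000 requests/day; how the key is passed (header vs. query parameter) is not shown on this page, so consult the service's documentation.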