Advocate99/DiffGesture
[CVPR'2023] Taming Diffusion Models for Audio-Driven Co-Speech Gesture Generation
Uses a Diffusion Audio-Gesture Transformer to capture cross-modal audio-to-skeleton associations, with an annealed noise sampling strategy that preserves temporal coherence across frames. Classifier-free guidance provides a diversity-quality trade-off at sampling time (sketched below), and pretrained autoencoders from HA2G compute perceptual metrics on the TED Gesture and TED Expressive datasets. Generates skeleton sequences conditioned on audio and supports both short and long video synthesis.
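The classifier-free guidance step is easiest to see in code. Below is a minimal sketch of one denoising step, assuming an epsilon-predicting denoiser that accepts cond=None for its unconditional branch; guided_eps and all tensor shapes are illustrative, not DiffGesture's actual API.

```python
import torch

def guided_eps(model, x_t, t, audio_feat, guidance_scale=1.5):
    """Blend conditional and unconditional noise predictions (classifier-free guidance)."""
    eps_cond = model(x_t, t, cond=audio_feat)  # audio-conditioned prediction
    eps_uncond = model(x_t, t, cond=None)      # unconditional prediction
    # guidance_scale > 1 pushes samples toward the audio condition.
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)

# Smoke test with a stand-in denoiser; shapes are illustrative only.
if __name__ == "__main__":
    dummy = lambda x, t, cond: torch.zeros_like(x) if cond is None else torch.ones_like(x)
    x = torch.randn(2, 34, 27)  # (batch, frames, pose dims)
    a = torch.randn(2, 34, 32)  # (batch, frames, audio feature dims)
    print(guided_eps(dummy, x, t=10, audio_feat=a).shape)  # torch.Size([2, 34, 27])
```

A larger guidance_scale trades diversity for fidelity to the audio; a scale of 1.0 recovers plain conditional sampling.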
Stars: 261
Forks: 19
Language: Python
License: GPL-3.0
Category: Diffusion
Last pushed: Mar 18, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/Advocate99/DiffGesture"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
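The same request from Python, using only the standard library. The endpoint comes from the curl line above; the response schema is not documented here, so the sketch just pretty-prints whatever JSON is returned.

```python
import json
import urllib.request

# Endpoint from the curl example above; assumed to return JSON.
URL = "https://pt-edge.onrender.com/api/v1/quality/diffusion/Advocate99/DiffGesture"

with urllib.request.urlopen(URL, timeout=10) as resp:
    data = json.load(resp)  # parse the JSON body

print(json.dumps(data, indent=2))  # inspect the returned fields
```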
Related models
hao-ai-lab/FastVideo
A unified inference and post-training framework for accelerated video generation.
thu-ml/TurboDiffusion
TurboDiffusion: 100–200× Acceleration for Video Diffusion Models
ModelTC/LightX2V
Light Image Video Generation Inference Framework
PKU-YuanGroup/Helios
Helios: Real-Time Long Video Generation Model
PKU-YuanGroup/MagicTime
[TPAMI 2025🔥] MagicTime: Time-lapse Video Generation Models as Metamorphic Simulators