YoungSeng/DiffuseStyleGesture
DiffuseStyleGesture: Stylized Audio-Driven Co-Speech Gesture Generation with Diffusion Models (IJCAI 2023) | The DiffuseStyleGesture+ entry to the GENEA Challenge 2023 (ICMI 2023, Reproducibility Award)
Leverages diffusion models with WavLM audio embeddings to generate stylized full-body gestures conditioned on speech, with controllable style and intensity parameters. The codebase uses LMDB-based training pipelines on mocap datasets (ZEGGS, BEAT, TWH) and outputs motion as BVH files that can be visualized in Blender. Motion-matching (QPGesture) and multi-dataset training (UnifiedGesture) variants exist as downstream extensions, and pre-trained checkpoints are available for inference.
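A minimal sketch of the core idea (denoising gesture frames conditioned on per-frame audio features and a style vector), assuming a PyTorch-style epsilon-prediction setup; all class, tensor, and dimension names below are illustrative, not the repository's actual API:

import torch
import torch.nn as nn

class GestureDenoiser(nn.Module):
    # Predicts the noise added to a gesture clip, conditioned on per-frame
    # audio features (e.g. WavLM embeddings) and a style/intensity vector.
    def __init__(self, pose_dim=256, audio_dim=1024, style_dim=6, hidden=512):
        super().__init__()
        self.pose_proj = nn.Linear(pose_dim, hidden)
        self.audio_proj = nn.Linear(audio_dim, hidden)
        self.style_proj = nn.Linear(style_dim, hidden)
        self.time_embed = nn.Sequential(
            nn.Linear(1, hidden), nn.SiLU(), nn.Linear(hidden, hidden))
        layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=8,
                                           batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=4)
        self.out = nn.Linear(hidden, pose_dim)

    def forward(self, noisy_pose, t, audio, style):
        # noisy_pose: (B, T, pose_dim); audio: (B, T, audio_dim)
        # t: (B,) diffusion timesteps; style: (B, style_dim)
        h = self.pose_proj(noisy_pose) + self.audio_proj(audio)
        h = h + self.time_embed(t.float().unsqueeze(-1)).unsqueeze(1)
        h = h + self.style_proj(style).unsqueeze(1)
        return self.out(self.backbone(h))

# One DDPM-style training step: corrupt clean mocap frames at a random
# timestep, then regress the injected noise (epsilon-prediction loss).
model = GestureDenoiser()
pose = torch.randn(2, 80, 256)     # clean pose frames (placeholder data)
audio = torch.randn(2, 80, 1024)   # one WavLM feature vector per frame
style = torch.randn(2, 6)          # controllable style/intensity input
t = torch.randint(0, 1000, (2,))
betas = torch.linspace(1e-4, 0.02, 1000)
alpha_bar = torch.cumprod(1.0 - betas, dim=0)[t].view(-1, 1, 1)
noise = torch.randn_like(pose)
noisy = alpha_bar.sqrt() * pose + (1.0 - alpha_bar).sqrt() * noise
loss = nn.functional.mse_loss(model(noisy, t, audio, style), noise)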
Stars
206
Forks
31
Language
Python
License
MIT
Category
diffusion
Last pushed
Nov 20, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/YoungSeng/DiffuseStyleGesture"
Open to everyone at 100 requests/day with no key; a free key raises the limit to 1,000 requests/day.
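The same record can be fetched from any HTTP client; below is a minimal Python sketch. The response schema is not documented on this page, so the example simply prints the raw payload (assuming the endpoint returns JSON):

import requests

url = ("https://pt-edge.onrender.com/api/v1/quality/"
       "diffusion/YoungSeng/DiffuseStyleGesture")
resp = requests.get(url, timeout=10)   # no API key needed up to 100 req/day
resp.raise_for_status()                # fail loudly on rate limits or errors
print(resp.json())                     # repo-quality record (schema assumed)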
Related models
hao-ai-lab/FastVideo
A unified inference and post-training framework for accelerated video generation.
thu-ml/TurboDiffusion
TurboDiffusion: 100–200× Acceleration for Video Diffusion Models
ModelTC/LightX2V
Light Image Video Generation Inference Framework
PKU-YuanGroup/Helios
Helios: Real-Time Long Video Generation Model
PKU-YuanGroup/MagicTime
[TPAMI 2025🔥] MagicTime: Time-lapse Video Generation Models as Metamorphic Simulators