YoungSeng/DiffuseStyleGesture

DiffuseStyleGesture: Stylized Audio-Driven Co-Speech Gesture Generation with Diffusion Models (IJCAI 2023) | The DiffuseStyleGesture+ entry to the GENEA Challenge 2023 (ICMI 2023, Reproducibility Award)

Score: 50 / 100 (Established)

Leverages diffusion models with WavLM audio embeddings to generate stylized full-body gestures conditioned on speech, with controllable style and intensity parameters. Training uses LMDB-based data pipelines on mocap datasets (ZEGGS, BEAT, TWH), and output motion is written in BVH format compatible with Blender visualization. Downstream extensions cover motion matching (QPGesture) and multi-dataset training (UnifiedGesture), with pre-trained checkpoints available for inference.
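At sampling time, a model of this kind runs a standard DDPM reverse loop whose noise prediction is conditioned on the WavLM features and a style code. The sketch below illustrates that loop only; every name and dimension in it (denoiser, sample_gesture, the 256-dim pose vector) is an illustrative assumption, not the repository's actual code.

import numpy as np

T = 50                                   # diffusion steps (illustrative)
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def denoiser(x_t, t, audio_emb, style_vec):
    # Stand-in for the trained network: predicts the noise in x_t.
    # A real model would attend over the WavLM frames and style code.
    return np.zeros_like(x_t)

def sample_gesture(audio_emb, style_vec, frames=80, dims=256, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((frames, dims))        # start from pure noise
    for t in reversed(range(T)):                   # DDPM ancestral sampling
        eps = denoiser(x, t, audio_emb, style_vec)
        coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
        mean = (x - coef * eps) / np.sqrt(alphas[t])
        noise = rng.standard_normal(x.shape) if t > 0 else 0.0
        x = mean + np.sqrt(betas[t]) * noise
    return x                                       # per-frame pose vectors, later converted to BVH

audio_emb = np.zeros((80, 1024))                   # placeholder WavLM features
style_vec = np.array([1.0, 0.0, 0.0, 0.5])         # style one-hot + intensity
motion = sample_gesture(audio_emb, style_vec)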

No package · No dependents

Maintenance: 6 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 18 / 25

Stars: 206
Forks: 31
Language: Python
License: MIT
Last pushed: Nov 20, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/YoungSeng/DiffuseStyleGesture"

Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.
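To consume the endpoint programmatically, here is a minimal sketch using only the Python standard library; the field names read from the JSON ("score", "stars") are guesses at the payload shape, not documented fields.

import json
import urllib.request

URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "diffusion/YoungSeng/DiffuseStyleGesture")

with urllib.request.urlopen(URL, timeout=10) as resp:  # no API key required
    data = json.load(resp)

# "score" and "stars" are assumed field names; inspect the real payload first.
print(data.get("score"), data.get("stars"))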