jishengpeng/WavTokenizer

[ICLR 2025] SOTA discrete acoustic codec models with 40/75 tokens per second for audio language modeling

Quality score: 44 / 100 (Emerging)

Implements a dual-encoder architecture with bandwidth-scalable quantization that converts raw audio into discrete tokens while preserving semantic information for downstream language models. Supports multiple model sizes (small/medium/large) trained on diverse corpora ranging from speech to music, with pretrained checkpoints available on Hugging Face and PyTorch Lightning integration for custom training pipelines.
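
As a quick orientation, the encode/decode round trip looks roughly like the sketch below. It is adapted from the usage example in the upstream README; the import paths, from_pretrained0802, encode_infer, and bandwidth_id handling are assumptions carried over from that example and may differ across checkpoints or versions, and the config/checkpoint paths are placeholders.

import torch
import torchaudio
from encoder.utils import convert_audio          # repo-local helper (assumed path)
from decoder.pretrained import WavTokenizer      # repo-local model class (assumed path)

device = torch.device("cpu")

# Placeholder paths: point these at a downloaded config and checkpoint.
config_path = "./configs/wavtokenizer_config.yaml"
model_path = "./wavtokenizer_checkpoint.ckpt"

wavtokenizer = WavTokenizer.from_pretrained0802(config_path, model_path).to(device)

# Load an audio file and resample to 24 kHz mono, as the released checkpoints expect.
wav, sr = torchaudio.load("input.wav")
wav = convert_audio(wav, sr, 24000, 1).to(device)

# Encode to discrete tokens, then reconstruct audio from the quantized features.
bandwidth_id = torch.tensor([0])
features, discrete_codes = wavtokenizer.encode_infer(wav, bandwidth_id=bandwidth_id)
audio_out = wavtokenizer.decode(features, bandwidth_id=bandwidth_id)

torchaudio.save("reconstructed.wav", audio_out, sample_rate=24000,
                encoding="PCM_S", bits_per_sample=16)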

1,279 stars. No commits in the last 6 months.

Flags: stale for 6 months, no published package, no known dependents.

Score breakdown: Maintenance 0/25 + Adoption 10/25 + Maturity 16/25 + Community 18/25 = 44/100 overall.

Stars: 1,279
Forks: 111
Language: Python
License: MIT
Last pushed: Mar 02, 2025
Commits (last 30 days): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/voice-ai/jishengpeng/WavTokenizer"

Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.
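
The same data can be fetched programmatically. Below is a minimal sketch using only the Python standard library; it assumes the endpoint returns JSON (the response schema is not documented here, so the snippet just pretty-prints whatever comes back).

import json
import urllib.request

# Quality endpoint for this repository, as given in the curl example above.
URL = "https://pt-edge.onrender.com/api/v1/quality/voice-ai/jishengpeng/WavTokenizer"

with urllib.request.urlopen(URL, timeout=30) as resp:
    data = json.load(resp)

# Field names are not documented in this listing, so print the full payload.
print(json.dumps(data, indent=2))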