jishengpeng/WavTokenizer
[ICLR 2025] SOTA discrete acoustic codec models with 40/75 tokens per second for audio language modeling
Implements a dual-encoder architecture with bandwidth-scalable quantization that converts raw audio into discrete tokens while preserving the semantic information needed by downstream language models. The project ships multiple model sizes (small/medium/large) trained on corpora ranging from speech to music, with pretrained checkpoints on Hugging Face and PyTorch Lightning integration for custom training pipelines.
1,279 stars. No commits in the last 6 months.
Stars: 1,279
Forks: 111
Language: Python
License: MIT
Category:
Last pushed: Mar 02, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/voice-ai/jishengpeng/WavTokenizer"
Open to everyone: 100 requests/day with no key required; a free key raises the limit to 1,000/day.
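The same endpoint can be called programmatically. A minimal Python sketch using only the standard library, assuming the public no-key tier above and that the response is JSON (the response schema is not documented here, so it is returned as-is):

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality endpoint URL for a repository."""
    return f"{BASE}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch the quality record for a repo (schema assumed to be JSON)."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)


# Example (performs a network request):
#   data = fetch_quality("voice-ai", "jishengpeng", "WavTokenizer")
```

The `category`/`owner`/`repo` path segments mirror the curl example above; any other parameters the API may accept are not documented on this page.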
Higher-rated alternatives
shangeth/wavencoder
WavEncoder is a Python library for encoding audio signals, transforms for audio augmentation,...
fatchord/WaveRNN
WaveRNN Vocoder + TTS
kan-bayashi/ParallelWaveGAN
Unofficial Parallel WaveGAN (+ MelGAN & Multi-band MelGAN & HiFi-GAN & StyleMelGAN) with Pytorch
seungwonpark/melgan
MelGAN vocoder (compatible with NVIDIA/tacotron2)
rishikksh20/iSTFTNet-pytorch
iSTFTNet : Fast and Lightweight Mel-spectrogram Vocoder Incorporating Inverse Short-time Fourier...