archinetai/audio-diffusion-pytorch
Audio generation using diffusion models, in PyTorch.
Supports unconditional and text-conditional generation with T5 embeddings, diffusion-based upsampling/vocoding, and autoencoding with learnable latents. Built on dimension-agnostic U-Net and diffusion primitives via the `a-unet` library, with configurable noise schedules (V-diffusion) and sampling strategies. Integrates with Hugging Face transformers for text conditioning and supports custom encoders for latent compression.
2,094 stars and 1,314 monthly downloads. Used by 1 other package. No commits in the last 6 months. Available on PyPI.
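For orientation, a minimal usage sketch is shown below. It follows the pattern from the project's own README (`DiffusionModel`, `UNetV0`, `VDiffusion`, `VSampler`); the layer sizes and step counts here are illustrative, and exact argument names may differ between versions.

```python
import torch
from audio_diffusion_pytorch import DiffusionModel, UNetV0, VDiffusion, VSampler

# Unconditional waveform diffusion model (hyperparameters are illustrative)
model = DiffusionModel(
    net_t=UNetV0,            # dimension-agnostic U-Net built on a-unet
    in_channels=2,           # stereo audio in/out
    channels=[8, 32, 64, 128, 256, 512, 512, 1024, 1024],
    factors=[1, 4, 4, 4, 2, 2, 2, 2, 2],    # down/upsampling per layer
    items=[1, 2, 2, 2, 2, 2, 2, 4, 4],      # repeated blocks per layer
    attentions=[0, 0, 0, 0, 0, 0, 1, 1, 1], # attention in the deepest layers
    attention_heads=8,
    attention_features=64,
    diffusion_t=VDiffusion,  # V-diffusion objective
    sampler_t=VSampler,      # matching sampler
)

# Training step: the model returns the diffusion loss for a batch of waveforms
audio = torch.randn(1, 2, 2**18)  # [batch, channels, samples]
loss = model(audio)
loss.backward()

# Generation: start from noise and denoise for a chosen number of steps
noise = torch.randn(1, 2, 2**18)
sample = model.sample(noise, num_steps=50)  # more steps -> higher quality
```

Text-conditional generation follows the same pattern, with additional constructor flags for T5-based conditioning and a `text=` argument to `sample()`; see the upstream README for the exact options.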
Stars: 2,094
Forks: 178
Language: Python
License: MIT
Category: Diffusion
Last pushed: Jun 12, 2023
Monthly downloads: 1,314
Commits (30d): 0
Dependencies: 6
Reverse dependents: 1
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/archinetai/audio-diffusion-pytorch"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
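The same request can be made from Python; a minimal sketch is below. The response format is assumed to be JSON, and how an API key is passed is not documented here, so the snippet uses the keyless rate-limited access.

```python
import requests

# Fetch the quality/metrics record for this package (JSON response assumed)
url = "https://pt-edge.onrender.com/api/v1/quality/diffusion/archinetai/audio-diffusion-pytorch"
resp = requests.get(url, timeout=10)
resp.raise_for_status()
print(resp.json())
```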
Related models
huggingface/diffusers
🤗 Diffusers: State-of-the-art diffusion models for image, video, and audio generation in PyTorch.
bghira/SimpleTuner
A general fine-tuning kit geared toward image/video/audio diffusion models.
mcmonkeyprojects/SwarmUI
SwarmUI (formerly StableSwarmUI), A Modular Stable Diffusion Web-User-Interface, with an...
nateraw/stable-diffusion-videos
Create 🔥 videos with Stable Diffusion by exploring the latent space and morphing between text prompts
probabilists/azula
Diffusion models in PyTorch