Rayhane-mamah/Efficient-VDVAE
Official PyTorch and JAX implementation of "Efficient-VDVAE: Less is more"
A memory and compute-efficient hierarchical VAE that achieves faster convergence and training stability through optimized architecture design, with pre-trained checkpoints available across multiple image datasets from MNIST to 1024×1024 resolution. Implemented in both PyTorch and JAX to support different computational backends and optimization strategies. Demonstrates state-of-the-art likelihood-based generation performance measured in bits/dimension across benchmarks including CIFAR-10, ImageNet, CelebA, and FFHQ.
199 stars. No commits in the last 6 months.
Stars: 199
Forks: 26
Language: Python
License: MIT
Category:
Last pushed: Aug 15, 2022
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/Rayhane-mamah/Efficient-VDVAE"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000/day.
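The same endpoint can be queried from Python. A minimal sketch using only the standard library is below; note that the response schema is not documented here, so the code simply parses and returns whatever JSON the service sends back (the helper names `quality_url` and `fetch_quality` are illustrative, not part of the API).

```python
import json
from urllib.request import urlopen

# Base path taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/diffusion"


def quality_url(owner: str, repo: str) -> str:
    # Build the endpoint URL for a given GitHub owner/repo pair.
    return f"{API_BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    # Fetch and parse the JSON response; the schema is assumed, not documented.
    with urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)


# Example (performs a live network request; uncomment to run):
# print(json.dumps(fetch_quality("Rayhane-mamah", "Efficient-VDVAE"), indent=2))
```

Without a key this counts against the 100-requests/day quota, so cache responses rather than polling.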
Higher-rated alternatives
chaitanya100100/VAE-for-Image-Generation
Implemented Variational Autoencoder generative model in Keras for image generation and its...
jxhe/vae-lagging-encoder
PyTorch implementation of "Lagging Inference Networks and Posterior Collapse in Variational...
taldatech/soft-intro-vae-pytorch
[CVPR 2021 Oral] Official PyTorch implementation of Soft-IntroVAE from the paper "Soft-IntroVAE:...
lavinal712/AutoencoderKL
Train Your VAE: A VAE Training and Finetuning Script for SD/FLUX
zelaki/eqvae
[ICML'25] EQ-VAE: Equivariance Regularized Latent Space for Improved Generative Image Modeling.