jxhe/vae-lagging-encoder
PyTorch implementation of "Lagging Inference Networks and Posterior Collapse in Variational Autoencoders" (ICLR 2019)
Decouples encoder and decoder optimization to perform multiple inference network updates per iteration, mitigating posterior collapse without architectural changes. Includes training dynamics visualization via "posterior mean space" projections and supports both text generation (with greedy/beam/sampling strategies) and image modeling across multiple datasets (Yahoo, Yelp, Omniglot). Provides configurable KL annealing schedules and aggressive training modes to analyze VAE convergence behavior.
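The core idea (many inference-network updates per decoder update) can be sketched with a toy objective; this is an illustrative stand-in, not the repo's actual training code, and all names and the quadratic "ELBO" surrogate below are invented for illustration:

```python
# Toy illustration of the "aggressive" schedule: in the inner loop the
# encoder is updated until the objective stops improving, then the
# decoder takes a single step. The quadratic `elbo` is a stand-in for a
# real VAE objective (illustrative only, not the paper's model).

def elbo(enc, dec):
    # Surrogate objective: maximized when enc == dec == 0.
    return -(enc - dec) ** 2 - 0.1 * dec ** 2

def grad_enc(enc, dec):
    # d elbo / d enc
    return -2.0 * (enc - dec)

def grad_dec(enc, dec):
    # d elbo / d dec
    return 2.0 * (enc - dec) - 0.2 * dec

def aggressive_train(enc=5.0, dec=0.0, lr=0.1, outer_steps=50, inner_max=100):
    for _ in range(outer_steps):
        # Inner loop: keep stepping the encoder until improvement stalls.
        best = elbo(enc, dec)
        for _ in range(inner_max):
            enc += lr * grad_enc(enc, dec)
            cur = elbo(enc, dec)
            if cur <= best + 1e-6:
                break
            best = cur
        # One decoder step per outer iteration.
        dec += lr * grad_dec(enc, dec)
    return enc, dec
```

Under this surrogate both parameters converge near the optimum at zero; the point is the update schedule, not the model.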
186 stars. No commits in the last 6 months.
Stars: 186
Forks: 33
Language: Python
License: MIT
Category:
Last pushed: Dec 15, 2020
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/jxhe/vae-lagging-encoder"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
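The same endpoint can be queried from Python with the standard library; this sketch assumes the response body is JSON, which the page does not state:

```python
import json
import urllib.request

# Endpoint from the curl example above.
API_URL = ("https://pt-edge.onrender.com/api/v1/quality/"
           "diffusion/jxhe/vae-lagging-encoder")

def fetch_repo_stats(url=API_URL, timeout=10):
    """Fetch repo quality data; assumes a JSON response (unverified)."""
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return json.loads(resp.read().decode("utf-8"))
```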
Related models
chaitanya100100/VAE-for-Image-Generation
Implemented Variational Autoencoder generative model in Keras for image generation and its...
taldatech/soft-intro-vae-pytorch
[CVPR 2021 Oral] Official PyTorch implementation of Soft-IntroVAE from the paper "Soft-IntroVAE:...
Rayhane-mamah/Efficient-VDVAE
Official Pytorch and JAX implementation of "Efficient-VDVAE: Less is more"
lavinal712/AutoencoderKL
Train Your VAE: A VAE Training and Finetuning Script for SD/FLUX
zelaki/eqvae
[ICML'25] EQ-VAE: Equivariance Regularized Latent Space for Improved Generative Image Modeling.