chaitanya100100/VAE-for-Image-Generation
A Variational Autoencoder (VAE) generative model implemented in Keras for image generation, with latent-space visualization on the MNIST and CIFAR10 datasets
Implements encoder-decoder architectures tailored to input dimensionality: fully-connected networks for flattened MNIST images and convolutional/deconvolutional pairs for CIFAR10's spatial structure. Provides interactive latent space exploration tools for 2D and 3D visualizations with user-controlled sampling, plus automated image generation from random latent vectors. Built on TensorFlow/Keras backend with modular training scripts parameterized by latent dimensions and intermediate layer sizes.
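At the core of any such VAE is the reparameterization trick (sampling the latent vector while keeping the draw differentiable) and a KL regularizer pulling the posterior toward a standard normal prior. A minimal numpy sketch of those two pieces, independent of the repo's Keras code; the batch size and latent dimension here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(z_mean, z_log_var):
    # z = mu + sigma * eps with eps ~ N(0, I); the randomness lives in eps,
    # so gradients can flow through mu and log-variance.
    eps = rng.standard_normal(z_mean.shape)
    return z_mean + np.exp(0.5 * z_log_var) * eps

def kl_divergence(z_mean, z_log_var):
    # KL(q(z|x) || N(0, I)) per sample, summed over latent dimensions.
    return -0.5 * np.sum(1.0 + z_log_var - z_mean**2 - np.exp(z_log_var), axis=-1)

z_mean = np.zeros((4, 2))      # batch of 4, latent_dim = 2 (illustrative)
z_log_var = np.zeros((4, 2))
z = reparameterize(z_mean, z_log_var)
print(z.shape)                              # (4, 2)
print(kl_divergence(z_mean, z_log_var))     # all zeros: q equals the prior
```

In a trained model this KL term is added to a per-pixel reconstruction loss, and decoding a grid of `z` values is what produces the latent-space visualizations the repo provides.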
122 stars. No commits in the last 6 months.
Stars: 122
Forks: 24
Language: Python
License: MIT
Last pushed: Oct 22, 2018
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/chaitanya100100/VAE-for-Image-Generation"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
Related models
jxhe/vae-lagging-encoder
PyTorch implementation of "Lagging Inference Networks and Posterior Collapse in Variational...
taldatech/soft-intro-vae-pytorch
[CVPR 2021 Oral] Official PyTorch implementation of Soft-IntroVAE from the paper "Soft-IntroVAE:...
Rayhane-mamah/Efficient-VDVAE
Official Pytorch and JAX implementation of "Efficient-VDVAE: Less is more"
lavinal712/AutoencoderKL
Train Your VAE: A VAE Training and Finetuning Script for SD/FLUX
zelaki/eqvae
[ICML'25] EQ-VAE: Equivariance Regularized Latent Space for Improved Generative Image Modeling.