MahanVeisi8/VAE-MNIST-Variable-Latent-Size-Reconstruction-and-Visualization
Dive into the world of Variational Autoencoders (VAEs) with MNIST! 🎨✨ Explore variable latent sizes (2, 4, 16) to see how they affect reconstruction quality, latent-space visualizations, and performance metrics 📊 (MSE, SSIM, PSNR).
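Two of the metrics named in the description, MSE and PSNR, are simple to compute directly (SSIM needs a windowed comparison and is usually taken from a library such as scikit-image). A minimal NumPy sketch, not taken from the repository's own notebooks, illustrating how these scores compare an image to its reconstruction:

```python
import numpy as np

def mse(a: np.ndarray, b: np.ndarray) -> float:
    """Mean squared error between two images scaled to [0, 1]."""
    return float(np.mean((a - b) ** 2))

def psnr(a: np.ndarray, b: np.ndarray, max_val: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB; higher means a closer reconstruction."""
    err = mse(a, b)
    if err == 0.0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / err)

# Toy example: a 28x28 MNIST-sized image and a noisy "reconstruction".
rng = np.random.default_rng(0)
img = rng.random((28, 28))
recon = np.clip(img + rng.normal(0.0, 0.05, img.shape), 0.0, 1.0)
print(f"MSE:  {mse(img, recon):.5f}")
print(f"PSNR: {psnr(img, recon):.2f} dB")
```

In practice, larger latent sizes tend to lower MSE and raise PSNR on reconstructions, which is the trade-off the repository visualizes across latent dimensions 2, 4, and 16.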
No commits in the last 6 months.
Stars: 10
Forks: —
Language: Jupyter Notebook
License: —
Last pushed: Jan 22, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/MahanVeisi8/VAE-MNIST-Variable-Latent-Size-Reconstruction-and-Visualization"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000/day.
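The same endpoint can be called from Python instead of curl. A minimal sketch that builds the request URL for any owner/repo pair; the endpoint path is taken from the curl example above, and the response schema is not documented here, so the JSON fields are left unparsed:

```python
import urllib.parse

# Base path taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks"

def quality_url(owner: str, repo: str) -> str:
    """Build the per-repository quality-data URL, percent-encoding each segment."""
    return f"{BASE}/{urllib.parse.quote(owner)}/{urllib.parse.quote(repo)}"

url = quality_url(
    "MahanVeisi8",
    "VAE-MNIST-Variable-Latent-Size-Reconstruction-and-Visualization",
)
print(url)
# Fetch with e.g. urllib.request.urlopen(url) and json.load the body;
# mind the 100 requests/day limit noted above.
```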
Higher-rated alternatives
mseitzer/pytorch-fid
Compute FID scores with PyTorch.
Naresh1318/Adversarial_Autoencoder
A wizard's guide to Adversarial Autoencoders
ratschlab/aestetik
AESTETIK: Convolutional autoencoder for learning spot representations from spatial...
acids-ircam/RAVE
Official implementation of the RAVE model: a Realtime Audio Variational autoEncoder
jaanli/variational-autoencoder
Variational autoencoder implemented in tensorflow and pytorch (including inverse autoregressive flow)