xnought/vq-vae-explainer
Interactive VQ-VAE (Vector-Quantized Variational Autoencoder) in the browser
This interactive tool helps you understand how Vector-Quantized Variational Autoencoders (VQ-VAEs) work. You input an image, and it shows, step by step, how the VQ-VAE processes and reconstructs it, highlighting the "bottleneck" where information is compressed into discrete codes. It is aimed at students, researchers, and anyone learning about deep learning and generative models.
No commits in the last 6 months.
Use this if you want to visually and interactively explore the inner workings of a VQ-VAE model with your own images.
Not ideal if you are looking for a tool to train a VQ-VAE from scratch or to apply it to large datasets for practical image generation tasks.
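The "bottleneck" the description mentions is the vector-quantization step: each encoder output is snapped to its nearest entry in a learned codebook, so the decoder only ever sees a discrete code's embedding. A minimal NumPy sketch (illustrative toy sizes, not the repo's actual code or codebook) of that nearest-neighbor lookup:

```python
import numpy as np

# Toy codebook: 8 codes, each a 4-dim embedding (sizes chosen for illustration).
rng = np.random.default_rng(0)
codebook = rng.normal(size=(8, 4))

def quantize(z_e):
    """Replace each latent vector with its nearest codebook entry."""
    # Squared distance from every latent to every codebook entry: shape (N, K)
    d = ((z_e[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    indices = d.argmin(axis=1)        # discrete codes -- the "bottleneck"
    return codebook[indices], indices

z_e = rng.normal(size=(3, 4))         # stand-in for an encoder's output
z_q, codes = quantize(z_e)            # z_q is what the decoder would receive
```

During training the argmin is non-differentiable, so VQ-VAEs use a straight-through gradient estimator plus codebook and commitment losses; the explainer visualizes this lookup interactively.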
Stars: 8
Forks: —
Language: Jupyter Notebook
License: —
Last pushed: Oct 11, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/xnought/vq-vae-explainer"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
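If you prefer Python over curl, the same endpoint can be built and fetched with the standard library. This is a hedged sketch: only the URL pattern comes from the curl example above, and the response schema is not documented here, so the JSON is parsed generically.

```python
import json
from urllib.parse import quote
from urllib.request import urlopen

# Base path taken from the curl example above; "ml-frameworks" is the
# category segment shown there, not a parameter we invented.
BASE = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks"

def endpoint(owner: str, repo: str) -> str:
    """Build the quality-data URL for a given GitHub owner/repo."""
    return f"{BASE}/{quote(owner)}/{quote(repo)}"

url = endpoint("xnought", "vq-vae-explainer")
# Uncomment to fetch (counts against the 100 requests/day keyless limit):
# data = json.loads(urlopen(url).read())
```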
Higher-rated alternatives
Naresh1318/Adversarial_Autoencoder
A wizard's guide to Adversarial Autoencoders
mseitzer/pytorch-fid
Compute FID scores with PyTorch.
acids-ircam/RAVE
Official implementation of the RAVE model: a Realtime Audio Variational autoEncoder
ratschlab/aestetik
AESTETIK: Convolutional autoencoder for learning spot representations from spatial...
jaanli/variational-autoencoder
Variational autoencoder implemented in tensorflow and pytorch (including inverse autoregressive flow)