rohitrango/objects-that-sound
Unofficial implementation of Google DeepMind's paper `Objects that Sound`
Trains AVENet on AudioSet to learn joint audio-visual embeddings through unsupervised correspondence learning, enabling cross-modal retrieval without labeled data. The architecture encodes video frames and spectrograms into a shared embedding space, allowing queries like "find images matching this sound" or vice versa via Euclidean distance on learned representations. Implements the full training pipeline on a 46k-video subset with handling for real-world dataset challenges like temporal misalignment, class imbalance, and multi-label noise.
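The cross-modal retrieval described above reduces to nearest-neighbor search by Euclidean distance in the shared embedding space. The sketch below illustrates that retrieval step only; the embeddings, shapes, and the `retrieve_nearest` helper are illustrative assumptions, not code from this repository.

```python
import numpy as np

def retrieve_nearest(query_emb, candidate_embs, k=1):
    """Return indices of the k candidates closest to the query
    by Euclidean distance in the shared embedding space.
    (Illustrative helper; not part of the repo's API.)"""
    dists = np.linalg.norm(candidate_embs - query_emb, axis=1)
    return np.argsort(dists)[:k]

# Toy example: 3 image embeddings and one audio-query embedding.
images = np.array([[1.0, 0.0],
                   [0.0, 1.0],
                   [0.9, 0.1]])
audio_query = np.array([1.0, 0.05])

nearest = retrieve_nearest(audio_query, images, k=2)
print(nearest)  # indices of the two closest image embeddings
```

The same function answers both query directions ("images matching this sound" and "sounds matching this image"), since both modalities live in one embedding space.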
No commits in the last 6 months.
Stars: 83
Forks: 16
Language: Python
License: —
Category:
Last pushed: May 07, 2018
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/embeddings/rohitrango/objects-that-sound"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000/day.
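For scripted access, the endpoint URL from the curl example can be built per repository. This is a minimal sketch; the `quality_api_url` helper is hypothetical, and only the URL pattern shown above is assumed (the response schema is not documented here).

```python
def quality_api_url(owner: str, repo: str) -> str:
    """Build the quality-API endpoint URL for a given GitHub repo,
    following the path shown in the curl example above.
    (Hypothetical helper for illustration.)"""
    return (
        "https://pt-edge.onrender.com/api/v1/quality/embeddings/"
        f"{owner}/{repo}"
    )

print(quality_api_url("rohitrango", "objects-that-sound"))
```

The returned URL can then be fetched with any HTTP client (e.g. `urllib.request` or `curl`), subject to the rate limits noted above.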
Higher-rated alternatives
reppertj/earworm
Search for music by sonic similarity
prabal-rje/latentscore
Python library for generating ambient music from text descriptions. No GPU required. Turn text...
erdogant/KNRscore
KNRScore is a Python package for computing K-Nearest-Rank Similarity, a metric that quantifies...
drscotthawley/audio-algebra
alchemy with embeddings
FabianGroeger96/deep-embedded-music
Creation of an embedding space using unsupervised triplet loss and Tile2Vec that can be used for...