mdeff/fma
FMA: A Dataset For Music Analysis
Distributes 917 GiB of Creative Commons audio across tiered subsets (8K to 106K tracks) with hierarchical genre taxonomy and pre-computed librosa + Echonest features in CSV format. Includes Jupyter notebooks for baseline genre classification, feature extraction, and metadata analysis alongside pandas-compatible track/genre/feature tables. Targets music information retrieval (MIR) tasks and integrates with TensorFlow for neural network training on GPU infrastructure.
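The metadata tables load directly into pandas. A minimal sketch of the pattern, using a tiny inline stand-in for the dataset's `tracks.csv` (the real file ships in `fma_metadata.zip` and uses a multi-row column header; the column names below are illustrative, not the full schema):

```python
# Sketch: loading an FMA-style metadata table with pandas.
# tracks.csv uses a two-row column header, so fields are addressed
# by (group, name) pairs such as ("track", "genre_top").
import io
import pandas as pd

# Tiny inline stand-in for tracks.csv (illustrative columns only).
csv = io.StringIO(
    "track_id,track,track,set\n"
    ",title,genre_top,subset\n"
    "2,Food,Hip-Hop,small\n"
    "5,This World,Hip-Hop,small\n"
)
tracks = pd.read_csv(csv, index_col=0, header=[0, 1])

# Columns form a MultiIndex; filter to the small subset, then pick a field.
small = tracks[tracks[("set", "subset")] == "small"]
print(small[("track", "genre_top")].tolist())
```

The `header=[0, 1]` argument is what makes the two stacked header rows come out as a column MultiIndex instead of mangled flat names.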
2,569 stars. No commits in the last 6 months.
Stars
2,569
Forks
457
Language
Jupyter Notebook
License
MIT
Category
ML frameworks
Last pushed
Jan 05, 2023
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/mdeff/fma"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
Related frameworks
Natooz/MidiTok
MIDI / symbolic music tokenizers for Deep Learning models 🎶
salu133445/muspy
A toolkit for symbolic music generation
jacbz/Lofi
ML-supported lo-fi music generator
jisungk/deepjazz
Deep learning driven jazz generation using Keras & Theano!
icoxfog417/magenta_session
:musical_keyboard: Music Session with Google Magenta