aravpanwar/Embedding_Comparision
This repository provides a framework to benchmark the performance and efficiency of various Large Language Model (LLM) embedding models. Using an Apache Spark Analysis Report as a sample technical dataset, this project evaluates both local and API-based models on their retrieval accuracy (MRR, Recall@3) and computational latency.
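The two retrieval metrics named above can be sketched in a few lines. This is an illustrative implementation, not code taken from the repository; function names and the toy data are assumptions.

```python
# Minimal sketch of MRR and Recall@3 for a retrieval benchmark.
# `ranked_ids` is one model's ranking for a query; `relevant_id` is the gold document.

def mrr(ranked_ids, relevant_id):
    """Reciprocal rank of the first relevant hit (0 if it never appears)."""
    for rank, doc_id in enumerate(ranked_ids, start=1):
        if doc_id == relevant_id:
            return 1.0 / rank
    return 0.0

def recall_at_k(ranked_ids, relevant_id, k=3):
    """1 if the relevant document appears in the top k results, else 0."""
    return 1.0 if relevant_id in ranked_ids[:k] else 0.0

# Scores are averaged over queries: each entry pairs a ranking with its gold id.
results = [(["d2", "d7", "d1"], "d7"), (["d5", "d4", "d3"], "d9")]
mean_mrr = sum(mrr(r, g) for r, g in results) / len(results)              # 0.25
mean_recall3 = sum(recall_at_k(r, g) for r, g in results) / len(results)  # 0.5
```

Latency, the third axis the project measures, is typically just wall-clock time around each model's embedding call.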
Stars: —
Forks: —
Language: Python
License: —
Category: —
Last pushed: Jan 20, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/embeddings/aravpanwar/Embedding_Comparision"
Open to everyone: 100 requests/day with no API key; a free key raises the limit to 1,000/day.
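The same endpoint can be called from Python. This is a sketch assuming only the URL shape shown in the curl example; the JSON fields the endpoint returns are not documented here, so the response is decoded generically.

```python
import json
import urllib.request

# Base path taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality/embeddings"

def quality_url(owner: str, repo: str) -> str:
    """Build the per-repository quality endpoint URL."""
    return f"{BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str, timeout: float = 10.0) -> dict:
    """Fetch and decode the JSON payload for one repository (network call)."""
    with urllib.request.urlopen(quality_url(owner, repo), timeout=timeout) as resp:
        return json.load(resp)

# Example (performs a real request when run):
# data = fetch_quality("aravpanwar", "Embedding_Comparision")
```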
Higher-rated alternatives
embeddings-benchmark/mteb
MTEB: Massive Text Embedding Benchmark
yannvgn/laserembeddings
LASER multilingual sentence embeddings as a pip package
harmonydata/harmony
The Harmony Python library: a research tool for psychologists to harmonise data and...
embeddings-benchmark/results
Data for the MTEB leaderboard
fresh-stack/freshstack
This repository helps you evaluate your models on the FreshStack benchmark!