arc53/llm-price-compass
This project collects GPU benchmarks from various cloud providers and compares them against fixed per-token costs, helping you select GPUs and models for cost-effective LLM deployment.
Aggregates real-world GPU benchmark data across self-hosted infrastructure and managed LLM providers into standardized CSV datasets, enabling direct cost-per-token comparisons across deployment models. The platform normalizes performance metrics and pricing from heterogeneous sources—cloud GPUs, inference APIs, and on-premise hardware—into a unified comparison framework accessible via an interactive dashboard. Supports evaluation of specific model variants (e.g., Llama 3.1 8B/70B) with community-contributed benchmarks and pricing snapshots.
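The core normalization described above can be sketched as a small TypeScript function. The field names (`pricePerHourUsd`, `tokensPerSecond`) and the interface are assumptions for illustration, not the project's actual schema; the arithmetic is the standard conversion from hourly GPU pricing and measured throughput to cost per million tokens.

```typescript
// Hypothetical benchmark record; field names are illustrative only.
interface GpuBenchmark {
  provider: string;
  pricePerHourUsd: number;  // on-demand hourly GPU rate
  tokensPerSecond: number;  // measured generation throughput
}

// cost per 1M tokens = hourly price / (tokens per second * 3600) * 1e6
function costPerMillionTokens(b: GpuBenchmark): number {
  const tokensPerHour = b.tokensPerSecond * 3600;
  return (b.pricePerHourUsd / tokensPerHour) * 1_000_000;
}

// Example: a $3.60/hr GPU producing 100 tokens/s costs $10 per 1M tokens.
const example: GpuBenchmark = {
  provider: "example-cloud",
  pricePerHourUsd: 3.6,
  tokensPerSecond: 100,
};
console.log(costPerMillionTokens(example)); // 10
```

This is the same unit conversion whether the source is a cloud GPU, an inference API (which already quotes per-token prices), or on-premise hardware with an amortized hourly cost, which is what makes the direct comparison possible.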
224 stars. No commits in the last 6 months.
Stars: 224
Forks: 10
Language: TypeScript
License: MIT
Category:
Last pushed: Dec 16, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/arc53/llm-price-compass"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
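For programmatic access, the same endpoint can be called from TypeScript. This is a minimal sketch assuming Node 18+ (for the global `fetch`); the response shape is not documented here, so it is typed as `unknown`.

```typescript
// Base URL taken from the curl example above.
const API_BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools";

// Build the endpoint URL for a given owner/repo slug.
function qualityUrl(owner: string, repo: string): string {
  return `${API_BASE}/${encodeURIComponent(owner)}/${encodeURIComponent(repo)}`;
}

// Fetch the quality data; the JSON shape is an assumption left as `unknown`.
async function fetchQuality(owner: string, repo: string): Promise<unknown> {
  const res = await fetch(qualityUrl(owner, repo));
  if (!res.ok) throw new Error(`API returned ${res.status}`);
  return res.json();
}

// Usage: fetchQuality("arc53", "llm-price-compass").then(console.log);
```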
Higher-rated alternatives
isEmmanuelOlowe/llm-cost-estimator
Estimating hardware and cloud costs of LLMs and transformer projects
saqibameen/model-cost
Compare LLM API pricing from your terminal. Supports 300+ models across all major providers....
WilliamJlvt/llm_price_scraper
A simple Python Scraper to retrieve pricing information for Large Language Models (LLMs) from an...
quarkloop/llmcost
This repository is no longer actively maintained. Please use https://github.com/quarkloop/ai instead.
truefoundry/models
Community-maintained registry of AI/LLM model configurations - pricing, features, and limits...