isEmmanuelOlowe/llm-cost-estimator
Estimating hardware and cloud costs of LLMs and transformer projects
This tool helps machine learning practitioners quickly determine if a large language model (LLM) will fit on a specific GPU setup and estimate its running cost. You input a model from Hugging Face, and it outputs detailed memory usage, suitable GPU recommendations, performance projections, and cloud cost estimates. It's designed for anyone deploying or evaluating LLMs for various applications.
Use this if you need to evaluate the hardware feasibility and budget implications of running a large language model, whether for training or inference.
Not ideal if you need exact, real-world cost and performance figures: the results are analytical approximations and should be validated against actual workloads.
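To make the memory-estimation idea concrete, here is a minimal TypeScript sketch of the common rule of thumb such tools build on: weight memory ≈ parameter count × bytes per parameter, times an overhead factor for activations, KV cache, and framework buffers. This is an illustration of the general approach, not the repo's actual formula; the function names, the precision table, and the 20% overhead factor are all assumptions.

```typescript
// Illustrative sketch only — not the repo's implementation.
type Precision = "fp32" | "fp16" | "int8" | "int4";

// Bytes needed to store one parameter at each precision.
const BYTES_PER_PARAM: Record<Precision, number> = {
  fp32: 4,
  fp16: 2,
  int8: 1,
  int4: 0.5,
};

// Rough inference-memory estimate in GiB for a model of a given size.
// overheadFactor (~1.2) is an assumed allowance for activations,
// KV cache, and runtime buffers.
function estimateInferenceGiB(
  paramsBillions: number,
  precision: Precision,
  overheadFactor = 1.2
): number {
  const weightBytes = paramsBillions * 1e9 * BYTES_PER_PARAM[precision];
  return (weightBytes * overheadFactor) / 1024 ** 3;
}

// Example: a 7B model in fp16 ≈ 7e9 × 2 bytes × 1.2 ≈ 15.6 GiB,
// which is why such models are typically paired with 24 GB GPUs.
console.log(estimateInferenceGiB(7, "fp16").toFixed(1));
```

Under these assumptions, quantizing the same model to int4 roughly quarters the footprint, which is the kind of trade-off a fit-on-GPU check surfaces.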
Stars
21
Forks
6
Language
TypeScript
License
MIT
Category
Last pushed
Jan 15, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/isEmmanuelOlowe/llm-cost-estimator"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
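The curl call above can also be made programmatically. Below is a hedged TypeScript sketch for Node 18+ (which provides a global `fetch`); only the endpoint URL comes from the snippet above, while the helper names and the response handling are assumptions, and the JSON shape is left untyped since it is not documented here.

```typescript
// Sketch of programmatic access to the quality endpoint shown above.
// The base URL is taken from the curl example; everything else is illustrative.
const BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers";

// Build the per-repo endpoint URL.
function qualityUrl(owner: string, repo: string): string {
  return `${BASE}/${encodeURIComponent(owner)}/${encodeURIComponent(repo)}`;
}

// Fetch the quality data; the response schema is not documented here,
// so the result is returned as `unknown` for the caller to inspect.
async function fetchQuality(owner: string, repo: string): Promise<unknown> {
  const res = await fetch(qualityUrl(owner, repo));
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  return res.json();
}

// Usage (uncomment to run against the live endpoint):
// fetchQuality("isEmmanuelOlowe", "llm-cost-estimator").then(console.log);
```

With a free API key, the documented limit rises from 100 to 1,000 requests per day; how the key is passed (header vs. query parameter) is not specified here, so it is omitted from the sketch.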
Related models
truefoundry/models
Community-maintained registry of AI/LLM model configurations - pricing, features, and limits...
Mattbusel/LLMTokenStreamQuantEngine
A low-latency, C++-based simulation engine that ingests token streams from an LLM in real-time,...
VincenzoManto/llmtrim
A library for trimming tokens in encoding and decoding in LLM (Large Language Model)...
DarkFoot101/Smart-Product-Pricing
Built a multimodal pricing system combining numerical, text, and image features with...