Helicone/helicone

🧊 Open source LLM observability platform. One line of code to monitor, evaluate, and experiment. YC W23 🍓

81 / 100
Verified

Operates as a reverse proxy AI gateway that intercepts requests to 100+ LLM providers through a unified OpenAI-compatible API, enabling intelligent routing and automatic fallbacks. Built on a microservices architecture with a Cloudflare Workers proxy layer for request interception, Express-based collection server (Jawn), ClickHouse for analytics, and Supabase for application data. Integrates with OpenAI, Anthropic, Gemini, LangChain, Vercel AI SDK, and supports self-hosting via Docker or Helm with optional async logging through OpenLLMetry.
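Since the gateway exposes an OpenAI-compatible API, integration amounts to pointing an existing client at the proxy's base URL and attaching an auth header. The sketch below builds such a request in TypeScript; the gateway URL and the `Helicone-Auth` header name follow Helicone's documented pattern, but treat the exact endpoint, model name, and key placeholders as illustrative assumptions, not a verified snippet.

```typescript
// Minimal sketch: routing an OpenAI-style chat request through the proxy
// gateway by swapping the base URL. No SDK needed; any HTTP client works.
const GATEWAY_BASE = "https://oai.helicone.ai/v1"; // assumed gateway endpoint

function buildGatewayRequest(providerKey: string, heliconeKey: string) {
  return {
    url: `${GATEWAY_BASE}/chat/completions`,
    headers: {
      Authorization: `Bearer ${providerKey}`, // provider key, passed through to OpenAI
      "Helicone-Auth": `Bearer ${heliconeKey}`, // identifies your Helicone account
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "gpt-4o-mini", // hypothetical model choice
      messages: [{ role: "user", content: "Hello" }],
    }),
  };
}

// Usage: pass the built request to fetch(req.url, { method: "POST", ... }).
const req = buildGatewayRequest("PROVIDER_KEY", "HELICONE_KEY");
console.log(req.url);
```

Because only the base URL changes, the same pattern works with the OpenAI SDK by setting its `baseURL` option and adding the extra header via default headers.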

5,237 stars and 292 monthly downloads. Actively maintained with 7 commits in the last 30 days. Available on npm.

Maintenance 20 / 25
Adoption 16 / 25
Maturity 25 / 25
Community 20 / 25


Stars: 5,237
Forks: 494
Language: TypeScript
License: Apache-2.0
Last pushed: Mar 07, 2026
Monthly downloads: 292
Commits (30d): 7
Dependencies: 1

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/Helicone/helicone"

Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.