Helicone/helicone
🧊 Open source LLM observability platform. One line of code to monitor, evaluate, and experiment. YC W23 🍓
Operates as a reverse proxy AI gateway that intercepts requests to 100+ LLM providers through a unified OpenAI-compatible API, enabling intelligent routing and automatic fallbacks. Built on a microservices architecture with a Cloudflare Workers proxy layer for request interception, Express-based collection server (Jawn), ClickHouse for analytics, and Supabase for application data. Integrates with OpenAI, Anthropic, Gemini, LangChain, Vercel AI SDK, and supports self-hosting via Docker or Helm with optional async logging through OpenLLMetry.
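Because the gateway exposes an OpenAI-compatible API, an existing SDK client can be routed through it by overriding the base URL and attaching a Helicone auth header. A minimal sketch, assuming the openai npm package, the hosted oai.helicone.ai proxy endpoint, and OPENAI_API_KEY / HELICONE_API_KEY environment variables (a self-hosted gateway would use its own URL):

```ts
import OpenAI from "openai";

// Route OpenAI traffic through the Helicone proxy so each request is logged.
// The endpoint and header name below follow Helicone's proxy-style setup and
// are assumptions here; check the project docs for your deployment.
const client = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  baseURL: "https://oai.helicone.ai/v1",
  defaultHeaders: {
    "Helicone-Auth": `Bearer ${process.env.HELICONE_API_KEY}`,
  },
});

async function main() {
  const completion = await client.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [{ role: "user", content: "Hello from behind the proxy" }],
  });
  console.log(completion.choices[0].message.content);
}

main();
```

The application code stays a standard OpenAI call; observability comes from where the request is sent, which is why the project describes integration as "one line of code".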
5,237 stars and 292 monthly downloads. Actively maintained with 7 commits in the last 30 days. Available on npm.
Stars: 5,237
Forks: 494
Language: TypeScript
License: Apache-2.0
Category:
Last pushed: Mar 07, 2026
Monthly downloads: 292
Commits (30d): 7
Dependencies: 1
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/Helicone/helicone"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
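The same endpoint can be called programmatically. A minimal TypeScript sketch using the built-in fetch; the response schema is not documented here, so the JSON is simply printed:

```ts
// Fetch the quality record for Helicone/helicone from the public API
// (no key needed up to 100 requests/day, per the note above).
async function fetchQuality() {
  const res = await fetch(
    "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/Helicone/helicone"
  );
  if (!res.ok) throw new Error(`Request failed: ${res.status}`);
  const data = await res.json();
  console.log(data);
}

fetchQuality();
```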
Related tools
langfuse/langfuse
🪢 Open source LLM engineering platform: LLM Observability, metrics, evals, prompt management,...
Arize-ai/phoenix
AI Observability & Evaluation
Mirascope/mirascope
The LLM Anti-Framework
Agenta-AI/agenta
The open-source LLMOps platform: prompt playground, prompt management, LLM evaluation, and LLM...
algorithmicsuperintelligence/optillm
Optimizing inference proxy for LLMs