helicone and anchoring-ai

These tools are complements: Helicone provides monitoring and evaluation infrastructure for LLM applications, while Anchoring AI provides a no-code platform for building and hosting them, so teams would typically use both together in an LLM development workflow.

helicone: 81 (Verified)
  Maintenance 20/25 · Adoption 16/25 · Maturity 25/25 · Community 20/25
  Stars: 5,237 · Forks: 494 · Downloads: 292 · Commits (30d): 7
  Language: TypeScript · License: Apache-2.0
  No risk flags

anchoring-ai: 46 (Emerging)
  Maintenance 0/25 · Adoption 10/25 · Maturity 16/25 · Community 20/25
  Stars: 155 · Forks: 30 · Downloads: n/a · Commits (30d): 0
  Language: JavaScript · License: Apache-2.0
  Risk flags: Stale 6m, No Package, No Dependents

About helicone

Helicone/helicone

🧊 Open source LLM observability platform. One line of code to monitor, evaluate, and experiment. YC W23 🍓

Operates as a reverse-proxy AI gateway that intercepts requests to 100+ LLM providers through a unified OpenAI-compatible API, enabling intelligent routing and automatic fallbacks. Built on a microservices architecture with a Cloudflare Workers proxy layer for request interception, an Express-based collection server (Jawn), ClickHouse for analytics, and Supabase for application data. Integrates with OpenAI, Anthropic, Gemini, LangChain, and the Vercel AI SDK, and supports self-hosting via Docker or Helm with optional async logging through OpenLLMetry.
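The proxy pattern above means an existing OpenAI-compatible client only needs its base URL and one extra header changed to route through the gateway. A minimal sketch, assuming Helicone's documented OpenAI integration (the `oai.helicone.ai` endpoint and `Helicone-Auth` header); verify both against the current Helicone docs before use:

```python
# Sketch of Helicone's reverse-proxy integration: instead of calling the
# provider directly, the client targets Helicone's gateway, which logs the
# request and forwards it upstream. The endpoint and header name below are
# assumptions taken from Helicone's OpenAI integration docs.
HELICONE_BASE_URL = "https://oai.helicone.ai/v1"  # replaces https://api.openai.com/v1

def helicone_client_config(openai_key: str, helicone_key: str) -> dict:
    """Build the settings an OpenAI-compatible SDK needs to route
    traffic through the Helicone proxy."""
    return {
        "base_url": HELICONE_BASE_URL,
        "api_key": openai_key,  # provider key, passed through unchanged
        "default_headers": {
            # Authenticates the request to Helicone itself
            "Helicone-Auth": f"Bearer {helicone_key}",
        },
    }

# With the official openai SDK this would be used roughly as:
#   client = openai.OpenAI(**helicone_client_config(...))
config = helicone_client_config("sk-openai-demo", "sk-helicone-demo")
```

Because the gateway speaks the OpenAI wire format, this is the whole integration surface: no SDK swap, just a redirected base URL plus one auth header.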

About anchoring-ai

AnchoringAI/anchoring-ai

An open-source no-code tool for teams to collaborate on building, evaluating, and hosting applications leveraging GPT and other large language models. With it, you can easily build and share LLM-powered apps, manage your budget, and run batch jobs.

Features include prompt chain management with drag-and-drop composition, optimized response caching to reduce API costs, and modular extensibility for custom models and datasets. The architecture uses a Node.js/React frontend with a Python backend (Flask/SQLAlchemy), MySQL for persistence, Redis for caching and task queuing, and Celery for asynchronous batch processing. It integrates directly with LangChain for Python-based prompt chains and supports multiple LLM providers beyond GPT.
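The response-caching idea described above can be sketched as a cache keyed on the full request: identical (model, prompt, parameters) tuples are served from the cache instead of re-billing the API. This is a hypothetical illustration, not Anchoring AI's actual code; a plain dict stands in for the Redis store its backend uses, and all names are invented:

```python
import hashlib
import json

# In-memory stand-in for the Redis cache described in the architecture.
_cache: dict[str, str] = {}

def _cache_key(model: str, prompt: str, params: dict) -> str:
    # Canonical JSON so semantically identical requests hash identically.
    payload = json.dumps(
        {"model": model, "prompt": prompt, "params": params}, sort_keys=True
    )
    return hashlib.sha256(payload.encode()).hexdigest()

def cached_completion(model: str, prompt: str, params: dict, call_llm) -> str:
    """Return a cached response when available; otherwise invoke call_llm
    (any callable standing in for a provider client) and cache the result."""
    key = _cache_key(model, prompt, params)
    if key not in _cache:
        _cache[key] = call_llm(model, prompt, params)
    return _cache[key]

# Usage: the second identical request is served from cache, so the
# stand-in "LLM" is only invoked once.
calls = []
fake_llm = lambda m, p, kw: calls.append(p) or f"echo:{p}"
first = cached_completion("gpt-4", "hello", {"temperature": 0}, fake_llm)
second = cached_completion("gpt-4", "hello", {"temperature": 0}, fake_llm)
```

Hashing a canonical JSON serialization is the key design choice here: it makes the cache key order-insensitive in the params dict, so trivially reordered requests still hit the same entry.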

Scores updated daily from GitHub, PyPI, and npm data.