phoenix and openinspector

While both tools provide observability for LLM interactions, they sit at different layers: Phoenix is a comprehensive AI observability and evaluation platform, while OpenInspector offers localized, lightweight interception and tracing. Phoenix could consume OpenInspector's traces as a data source or pre-processing layer, suggesting a **complementary** relationship.

| | phoenix (Verified) | openinspector (Experimental) |
|---|---|---|
| Score | 94 | 22 |
| Maintenance | 25/25 | 13/25 |
| Adoption | 25/25 | 0/25 |
| Maturity | 25/25 | 9/25 |
| Community | 19/25 | 0/25 |
| Stars | 8,847 | |
| Forks | 753 | |
| Downloads | 1,013,605 | |
| Commits (30d) | 330 | 0 |
| Language | Jupyter Notebook | TypeScript |
| License | | Apache-2.0 |
| Risk flags | No risk flags | No Package, No Dependents |

About phoenix

Arize-ai/phoenix

AI Observability & Evaluation

Provides OpenTelemetry-based tracing, LLM-powered evaluation, versioned datasets, and experiment tracking across LLM frameworks (LangGraph, LlamaIndex, Claude/OpenAI agent SDKs) and providers. Features a web UI with prompt optimization playground, dataset management, and call replay capabilities. Runs locally, in notebooks, or containerized with Helm support, and integrates via auto-instrumentation through the OpenInference standard.
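To make the span-based tracing that Phoenix ingests concrete, here is a minimal, self-contained sketch of an OpenTelemetry-style span recorder. The `Span` and `Tracer` names and fields here are simplified stand-ins for illustration, not Phoenix's or OpenTelemetry's actual API; in practice Phoenix receives spans via the OpenInference auto-instrumentation packages.

```python
import time
from contextlib import contextmanager
from dataclasses import dataclass, field

# Hypothetical, simplified stand-in for an OpenTelemetry-style span.
@dataclass
class Span:
    name: str
    attributes: dict = field(default_factory=dict)
    start: float = 0.0
    end: float = 0.0

class Tracer:
    """Records finished spans, as a Phoenix-like collector would receive them."""
    def __init__(self):
        self.spans = []

    @contextmanager
    def span(self, name, **attributes):
        s = Span(name=name, attributes=attributes)
        s.start = time.perf_counter()
        try:
            yield s
        finally:
            s.end = time.perf_counter()
            self.spans.append(s)

tracer = Tracer()
with tracer.span("llm.call", model="gpt-4o", prompt_tokens=42):
    pass  # the instrumented LLM call would run here

print(tracer.spans[0].name)                 # llm.call
print(tracer.spans[0].attributes["model"])  # gpt-4o
```

Each `with` block becomes one timed, attributed span; auto-instrumentation does the equivalent wrapping around framework and provider calls so application code stays unchanged.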

About openinspector

as32608/openinspector

A lightweight, local-first observability proxy and dashboard designed to intercept, log, and trace LLM interactions. OpenInspector acts as a transparent middleman, offering full visibility into agentic workflows, tool executions, and latency metrics without requiring you to change a single line of your application code.
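The "transparent middleman" idea can be sketched in a few lines: wrap a call so every request is logged with its arguments and latency while callers keep using the same interface. The `intercept` and `complete` names below are hypothetical illustrations; OpenInspector itself does this interception at the HTTP-proxy level rather than in-process.

```python
import time

log = []  # captured interactions, as a dashboard would display them

def intercept(fn):
    """Hypothetical in-process analogue of a logging proxy: record
    function name, arguments, and latency around each call."""
    def wrapper(*args, **kwargs):
        t0 = time.perf_counter()
        result = fn(*args, **kwargs)
        log.append({
            "fn": fn.__name__,
            "args": args,
            "latency_s": time.perf_counter() - t0,
        })
        return result
    return wrapper

def complete(prompt):
    # Stand-in for a real LLM client call.
    return f"echo: {prompt}"

complete = intercept(complete)  # swap in the interceptor
reply = complete("hello")
print(reply)         # echo: hello
print(log[0]["fn"])  # complete
```

An HTTP proxy achieves the same effect by sitting between the client and the provider endpoint, which is why no application code changes are needed.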

Scores updated daily from GitHub, PyPI, and npm data.