maximhq/bifrost

Fastest enterprise AI gateway (50x faster than LiteLLM) with adaptive load balancing, cluster mode, guardrails, support for 1,000+ models, and <100 µs overhead at 5k RPS.

Score: 71/100 (Verified)

Built in Go with a modular plugin architecture, Bifrost offers semantic caching to reduce latency and costs through intelligent response deduplication, plus Model Context Protocol (MCP) support enabling AI models to invoke external tools and APIs. The gateway provides multi-layered governance—hierarchical budget controls, rate limiting, and access control via virtual keys—alongside comprehensive observability through native Prometheus metrics and distributed tracing. It integrates with HashiCorp Vault for secure credential management and supports SSO authentication, making it suitable for enterprise environments requiring strict security and auditability controls.

2,853 stars. Actively maintained with 419 commits in the last 30 days.

No package published; no known dependents.

Maintenance: 25/25
Adoption: 10/25
Maturity: 16/25
Community: 20/25

How are scores calculated?
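The overall score shown above appears to be the plain sum of the four category subscores, each capped at 25 (25 + 10 + 16 + 20 = 71 out of 100). A minimal sketch of that arithmetic, assuming simple additive scoring:

```python
# Category subscores as displayed on this page (each out of 25).
subscores = {
    "maintenance": 25,
    "adoption": 10,
    "maturity": 16,
    "community": 20,
}

# The overall score seems to be the sum of the four categories,
# giving a maximum of 100 (4 x 25). This is inferred from the
# numbers on this page, not from documented methodology.
overall = sum(subscores.values())
print(overall)  # → 71
```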

Stars: 2,853
Forks: 303
Language: Go
License: Apache-2.0
Last pushed: Mar 11, 2026
Commits (30d): 419

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/mlops/maximhq/bifrost"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
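A minimal sketch of consuming the endpoint from Python. The response field names below are assumptions mirroring the data displayed on this page, not a documented schema; check the actual response before relying on them:

```python
import json
from urllib.request import urlopen

URL = "https://pt-edge.onrender.com/api/v1/quality/mlops/maximhq/bifrost"

# Hypothetical payload shape, built from the fields shown on this page;
# the real API schema may differ.
sample = json.loads("""
{
  "score": 71,
  "scores": {"maintenance": 25, "adoption": 10, "maturity": 16, "community": 20},
  "stars": 2853,
  "forks": 303,
  "language": "Go",
  "license": "Apache-2.0"
}
""")

def summarize(payload: dict) -> str:
    """Format a one-line summary from a quality payload."""
    return (
        f"{payload['language']} repo, "
        f"{payload['stars']} stars, "
        f"score {payload['score']}/100"
    )

print(summarize(sample))  # → Go repo, 2853 stars, score 71/100

# Live call (uncomment to hit the rate-limited public endpoint;
# both network access and the schema above are assumptions):
# with urlopen(URL) as resp:
#     print(summarize(json.load(resp)))
```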