adrianliechti/wingman

Inference Hub for AI at Scale

Score: 54 / 100 (Established)

Supports multi-provider LLM integration (OpenAI, Anthropic, Gemini, Bedrock, local Ollama) with pluggable document processing pipelines (extractors, segmenters, retrievers) for RAG workflows. Offers modular architecture with built-in tools, Model Context Protocol (MCP) support for external tool servers, and load balancing/rate limiting across providers. Exposes OpenAI-compatible APIs with full OpenTelemetry observability and YAML-based configuration for chains, agents, and complex AI workflows.
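Since the project exposes OpenAI-compatible APIs, any OpenAI-style client or a plain curl request should work against a running instance. A minimal sketch, assuming a local wingman server on port 8080 and a configured model named `my-model` (the address and model name are assumptions, not documented here):

```shell
# Sketch only: host, port, and model name are hypothetical.
# The request body follows the standard OpenAI chat-completions shape,
# which the project's OpenAI-compatible API is stated to accept.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "my-model",
        "messages": [{"role": "user", "content": "Hello"}]
      }'
```

Because the surface is OpenAI-compatible, existing SDKs can typically be pointed at the instance by overriding their base URL.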

No package published · No dependents

Maintenance: 13 / 25
Adoption: 9 / 25
Maturity: 16 / 25
Community: 16 / 25


Stars: 73
Forks: 12
Language: Go
License: MIT
Last pushed: Mar 12, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/rag/adrianliechti/wingman"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.