py-sandy/llm-web-relay
A FastAPI gateway for local LLMs that adds intelligent web research, multilingual recency/how-to detection, time-anchored guidance, context injection, and OpenAI-compatible SSE streaming. Turn any local model into a recency-aware, context-enhanced assistant instantly.
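The description mentions OpenAI-compatible SSE streaming. A minimal sketch of parsing such a stream on the client side is shown below; the chunk shape follows the standard OpenAI `chat.completions` streaming convention, which is an assumption here — this repo's exact payloads are not documented on this page.

```python
# Sketch: parse OpenAI-style SSE "data:" lines into assistant text chunks.
# The chunk schema ({"choices":[{"delta":{"content": ...}}]}) is the common
# OpenAI convention, assumed rather than confirmed for llm-web-relay.
import json

def extract_deltas(sse_lines):
    """Yield assistant text chunks from OpenAI-style SSE 'data:' lines."""
    for line in sse_lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip comments/keep-alives
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break  # end-of-stream sentinel
        chunk = json.loads(payload)
        delta = chunk["choices"][0]["delta"].get("content")
        if delta:
            yield delta

# Example: two content chunks followed by the terminator.
stream = [
    'data: {"choices":[{"delta":{"content":"Hel"}}]}',
    'data: {"choices":[{"delta":{"content":"lo"}}]}',
    'data: [DONE]',
]
print("".join(extract_deltas(stream)))  # -> Hello
```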
Stars: 4
Forks: —
Language: Python
License: —
Category: —
Last pushed: Nov 20, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/rag/py-sandy/llm-web-relay"
Open to everyone: 100 requests/day with no key required. Get a free key for 1,000 requests/day.
Higher-rated alternatives
langbot-app/LangBot
Production-grade platform for building agentic IM bots across multiple messaging platforms. Provides agents, knowledge-base orchestration, a plugin system /...
open-webui/open-webui
User-friendly AI Interface (Supports Ollama, OpenAI API, ...)
cactus-compute/cactus
Low-latency AI engine for mobile devices & wearables
sigoden/aichat
All-in-one LLM CLI tool featuring Shell Assistant, Chat-REPL, RAG, AI Tools & Agents, with...
rudrankriyam/Foundation-Models-Framework-Example
Example apps for Foundation Models Framework in iOS 26 and macOS 26