skye-harris/hass_local_openai_llm
Home Assistant LLM integration for local OpenAI-compatible services (llama.cpp, vLLM, etc.)
Supports streaming responses, temperature tuning, parallel tool calling, and RAG via Weaviate for context-aware answers. The integration wraps OpenAI-compatible APIs with Home Assistant-specific features such as conversation-history trimming, emoji and thinking-tag stripping, image input/generation for tasks, and dynamic date/time role injection. Designed for Home Assistant's Assist and conversation agents, it requires a context window of at least 8k tokens and leverages compatible inference servers' native tool-calling capabilities.
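The emoji/thinking-tag stripping mentioned above can be sketched in a few lines of Python. This is an illustrative sketch only, not the repo's actual code; the function name and regex patterns are assumptions made for the example:

```python
import re

# Reasoning models often wrap chain-of-thought in <think>...</think> tags;
# the pattern below removes such blocks (tag name assumed for illustration).
THINK_TAG = re.compile(r"<think>.*?</think>", re.DOTALL)

# A rough emoji range covering common pictographs and emoticons
# (not exhaustive; real coverage would need more Unicode blocks).
EMOJI = re.compile("[\U0001F300-\U0001FAFF\U00002600-\U000027BF]+")

def clean_response(text: str) -> str:
    """Strip thinking-tag blocks and emoji from model output before
    handing it to the Assist pipeline (hypothetical helper)."""
    text = THINK_TAG.sub("", text)
    text = EMOJI.sub("", text)
    # Collapse leftover whitespace from the removed spans.
    return " ".join(text.split())
```

A voice assistant has no use for either artifact, so stripping them before text-to-speech keeps spoken replies clean.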
Stars: 100
Forks: 14
Language: Python
License: —
Category: —
Last pushed: Mar 11, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/rag/skye-harris/hass_local_openai_llm"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
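The endpoint above follows an owner/repo path scheme, so the URL can be built programmatically. A minimal Python sketch; the helper name is illustrative and the response schema is not documented here:

```python
BASE = "https://pt-edge.onrender.com/api/v1/quality/rag"

def rag_quality_url(owner: str, repo: str) -> str:
    """Build the quality-API URL for a GitHub owner/repo pair
    (hypothetical helper matching the curl example above)."""
    return f"{BASE}/{owner}/{repo}"
```

Fetching the URL with any HTTP client then returns the same data as the curl command, subject to the rate limits noted above.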
Higher-rated alternatives
langbot-app/LangBot
Production-grade platform for building agentic IM bots; a multi-platform intelligent bot development platform providing agents, knowledge-base orchestration, and a plugin system /...
open-webui/open-webui
User-friendly AI Interface (Supports Ollama, OpenAI API, ...)
cactus-compute/cactus
Low-latency AI engine for mobile devices & wearables
rudrankriyam/Foundation-Models-Framework-Example
Example apps for Foundation Models Framework in iOS 26 and macOS 26
sigoden/aichat
All-in-one LLM CLI tool featuring Shell Assistant, Chat-REPL, RAG, AI Tools & Agents, with...