skye-harris/hass_local_openai_llm

Home Assistant LLM integration for local OpenAI-compatible services (llama.cpp, vLLM, etc.)

Quality score: 42 / 100 (Emerging)

Supports streaming responses, temperature tuning, parallel tool calling, and RAG integration via Weaviate for context-aware responses. The integration wraps OpenAI-compatible APIs with Home Assistant-specific features such as conversation-history trimming, emoji and thinking-tag stripping, image input/generation for tasks, and dynamic date-time role injection. It is designed for Home Assistant's Assist and conversation agents, requires a context window of at least 8k tokens, and leverages the native tool-calling capabilities of compatible inference servers.
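The thinking-tag stripping mentioned above can be sketched as a small post-processing step. This is a minimal illustration, not the integration's actual code: the `<think>…</think>` tag name is an assumption (it is what many reasoning models emit), and the emoji range is a simplified subset.

```python
import re

# Assumed tag name; reasoning-capable models commonly wrap chain-of-thought
# in <think>...</think> blocks that should not reach the voice assistant.
THINK_RE = re.compile(r"<think>.*?</think>", re.DOTALL)

# Simplified emoji match: common pictographic Unicode blocks only.
EMOJI_RE = re.compile(r"[\U0001F300-\U0001FAFF\u2600-\u27BF]")

def clean_reply(text: str) -> str:
    """Strip thinking tags and emoji from a model reply before it is spoken."""
    text = THINK_RE.sub("", text)
    text = EMOJI_RE.sub("", text)
    return text.strip()

reply = "<think>User wants the light on.</think>Turning on the light \U0001F4A1"
print(clean_reply(reply))  # -> Turning on the light
```

A real integration would likely also handle unterminated tags in streamed chunks, which this one-shot version does not.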


No license · No package · No dependents
Maintenance: 13 / 25
Adoption: 9 / 25
Maturity: 5 / 25
Community: 15 / 25


Stars: 100
Forks: 14
Language: Python
License: none
Last pushed: Mar 11, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/rag/skye-harris/hass_local_openai_llm"

Open to everyone: 100 requests/day with no key required; a free key raises the limit to 1,000/day.
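The curl call above can also be made from Python. The helper below only builds the URL and fetches the JSON; the shape of the response body is an assumption, since the source does not document its fields.

```python
import json
import urllib.request

# Base endpoint taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality/rag"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-score endpoint URL for a given repository."""
    return f"{BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch the quality-score JSON (field names in the response are undocumented)."""
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)

print(quality_url("skye-harris", "hass_local_openai_llm"))
# -> https://pt-edge.onrender.com/api/v1/quality/rag/skye-harris/hass_local_openai_llm
```

Calling `fetch_quality(...)` counts against the daily request quota, so cache results where possible.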