IshaanLabs/azure-local-foundry-rag-demo
A fully local RAG application using Azure Local Foundry, LangChain, and ChromaDB. It runs optimized LLMs offline, enabling private semantic search and RetrievalQA workflows on your device. Designed to be simple, fast, and entirely cloud-independent.
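The description above names the standard local-RAG flow: embed documents, store them for semantic search, then feed retrieved passages to an LLM for RetrievalQA. A hypothetical, stdlib-only sketch of the "retrieve" half of that flow is shown below. The real project uses LangChain and ChromaDB with locally served models; here a toy bag-of-words embedding stands in so the flow is runnable anywhere, and none of the names below come from the repo itself.

```python
import re
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    """Toy embedding: lower-cased token counts (NOT a real embedding model)."""
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank documents by similarity to the query and return the top k."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

docs = [
    "ChromaDB stores embeddings for semantic search.",
    "Azure Local Foundry serves optimized models on-device.",
    "LangChain chains a retriever with an LLM for RetrievalQA.",
]
print(retrieve("how does semantic search work", docs, k=1))
```

In the actual app, `embed` would be a local embedding model, the document list would live in a ChromaDB collection, and the retrieved passages would be passed to the LLM as context; only the ranking step is illustrated here.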
Stars: —
Forks: —
Language: Jupyter Notebook
License: MIT
Category: —
Last pushed: Dec 04, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/rag/IshaanLabs/azure-local-foundry-rag-demo"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.
Higher-rated alternatives
run-llama/llama_index
LlamaIndex is the leading document agent and OCR platform
emarco177/documentation-helper
Reference implementation of a RAG-based documentation helper using LangChain, Pinecone, and Tavily.
janus-llm/janus-llm
Leveraging LLMs for modernization through intelligent chunking, iterative prompting and...
JetXu-LLM/llama-github
Llama-github is an open-source Python library that empowers LLM Chatbots, AI Agents, and...
Vasallo94/ObsidianRAG
RAG system to query your Obsidian notes using LangGraph and local LLMs (Ollama)