fahdmirza/doclingwithollama
Docling with Ollama - RAG on Local Files with Local Models
Integrates Docling's advanced document parsing with Ollama's local LLM inference and LlamaIndex for RAG pipelines, using HuggingFace embeddings for semantic search across uploaded documents. Provides a Streamlit web interface for interactive document chat without external API dependencies, supporting multiple file formats beyond PDFs through Docling's extensible parser architecture.
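The pipeline described above can be sketched with the official LlamaIndex integration packages (`llama-index-readers-docling`, `llama-index-llms-ollama`, `llama-index-embeddings-huggingface`). This is a minimal illustration, not the repo's actual code; the file path and model names are placeholder assumptions.

```python
def build_query_engine(path: str):
    """Index a local document with Docling + HuggingFace embeddings and
    answer queries with a local Ollama model. Sketch only; assumes the
    optional LlamaIndex integration packages are installed, so imports
    are kept local to the function."""
    from llama_index.core import VectorStoreIndex, Settings
    from llama_index.embeddings.huggingface import HuggingFaceEmbedding
    from llama_index.llms.ollama import Ollama
    from llama_index.readers.docling import DoclingReader

    # Local models only: no external API dependencies.
    # Model names below are example choices, not the repo's defaults.
    Settings.embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")
    Settings.llm = Ollama(model="llama3.2", request_timeout=120.0)

    # DoclingReader parses PDFs, DOCX, HTML, and other formats Docling supports.
    documents = DoclingReader().load_data(path)
    index = VectorStoreIndex.from_documents(documents)
    return index.as_query_engine()

# Usage (requires a running Ollama server with the model pulled,
# e.g. `ollama pull llama3.2`):
#   engine = build_query_engine("report.pdf")
#   response = engine.query("Summarize the key findings.")
```

The Streamlit layer in the repo wraps a pipeline like this in a chat UI; the sketch omits it to keep the example self-contained.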
No commits in the last 6 months.
Stars: 87
Forks: 18
Language: Python
License: Apache-2.0
Category:
Last pushed: Jan 01, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/rag/fahdmirza/doclingwithollama"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
Higher-rated alternatives
run-llama/llama_index
LlamaIndex is the leading document agent and OCR platform
emarco177/documentation-helper
Reference implementation of a RAG-based documentation helper using LangChain, Pinecone, and Tavily.
janus-llm/janus-llm
Leveraging LLMs for modernization through intelligent chunking, iterative prompting and...
JetXu-LLM/llama-github
Llama-github is an open-source Python library that empowers LLM Chatbots, AI Agents, and...
Vasallo94/ObsidianRAG
RAG system to query your Obsidian notes using LangGraph and local LLMs (Ollama)