ollama_pdf_rag and ask-my-pdf
These are **competitors**: both implement RAG pipelines for PDF interaction, but tonykipkemboi/ollama_pdf_rag emphasizes local, self-hosted inference while ask-my-pdf prioritizes in-browser execution; they represent different deployment architectures for the same use case.
About ollama_pdf_rag
tonykipkemboi/ollama_pdf_rag
A full-stack demo showcasing a local RAG (Retrieval Augmented Generation) pipeline to chat with your PDFs.
Implements a LangChain + ChromaDB vector pipeline with Ollama handling both embeddings and inference, eliminating cloud dependencies entirely. Offers three interfaces (a Next.js app backed by a REST API, Streamlit, and Jupyter notebooks) plus multi-PDF support, source-citation tracking, and multi-query retrieval. The architecture pairs a FastAPI backend for document ingestion and RAG queries with a React frontend, enabling both programmatic and interactive exploration of document collections.
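For orientation, here is a minimal sketch of that kind of local pipeline. It assumes the current LangChain packages (langchain, langchain-community, langchain-ollama, langchain-chroma) and a running Ollama server; the model names, chunk sizes, file path, and collection name are illustrative choices, not the repo's exact configuration.

```python
# Minimal local-RAG sketch: LangChain + Chroma + Ollama, no cloud calls.
# Assumes `pip install langchain langchain-community langchain-ollama langchain-chroma pypdf`
# and a local Ollama server with the referenced models pulled.
from langchain_community.document_loaders import PyPDFLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_ollama import OllamaEmbeddings, ChatOllama
from langchain.retrievers.multi_query import MultiQueryRetriever
from langchain_chroma import Chroma

# 1. Load the PDF and split it into overlapping chunks.
docs = PyPDFLoader("paper.pdf").load()
chunks = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100).split_documents(docs)

# 2. Embed the chunks with a local Ollama embedding model and index them in Chroma.
vectordb = Chroma.from_documents(
    chunks,
    embedding=OllamaEmbeddings(model="nomic-embed-text"),  # illustrative model choice
    collection_name="pdf_chat",
)

# 3. Multi-query retrieval: the LLM rephrases the question into several queries,
#    and the union of retrieved chunks becomes the answer context.
llm = ChatOllama(model="llama3")
retriever = MultiQueryRetriever.from_llm(retriever=vectordb.as_retriever(), llm=llm)

question = "What is the main contribution of this paper?"
context = "\n\n".join(d.page_content for d in retriever.invoke(question))
answer = llm.invoke(f"Answer using only this context:\n{context}\n\nQuestion: {question}")
print(answer.content)
```

Everything here runs against local processes (Ollama for models, Chroma for vectors), which is the deployment choice that distinguishes this project from the browser-based approach below.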
About ask-my-pdf
nico-martin/ask-my-pdf
A Webapp that uses Retrieval Augmented Generation (RAG) and Large Language Models to interact with a PDF directly in the browser.
Executes the entire RAG pipeline in the browser: PDF.js for text extraction, Transformers.js with all-MiniLM-L6-v2 for semantic embeddings, and an in-memory vector DB for cosine-similarity retrieval. Responses are generated by Gemma 2B/9B models compiled to WebAssembly/WebGPU with MLC LLM, with an optional fallback to Google's experimental Prompt API in supported browsers.
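The app itself does this in JavaScript with Transformers.js; purely to illustrate the embed-then-cosine-similarity retrieval step it describes, here is a small Python sketch that uses the same all-MiniLM-L6-v2 model via sentence-transformers. The sample chunks and helper function are hypothetical, not taken from the project.

```python
# Illustration of the embed-and-retrieve step only; the actual app runs this
# in the browser with Transformers.js and its own in-memory vector DB.
# Requires `pip install sentence-transformers numpy`.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# "In-memory vector DB": a list of chunks plus a matrix of normalized embeddings.
chunks = [
    "The invoice total is due within 30 days of receipt.",
    "Warranty claims must include the original purchase receipt.",
    "The device supports USB-C charging at up to 45 W.",
]
embeddings = model.encode(chunks, normalize_embeddings=True)  # shape: (n_chunks, 384)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k chunks most similar to the query by cosine similarity."""
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = embeddings @ q  # with normalized vectors, cosine similarity is a dot product
    top = np.argsort(scores)[::-1][:k]
    return [chunks[i] for i in top]

print(retrieve("How do I charge the device?"))
```

The retrieved chunks would then be passed as context to the in-browser Gemma model, which is the generation half of the pipeline described above.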