ollama_pdf_rag and ask-my-pdf

These are **competitors**: both implement RAG pipelines for PDF interaction, but tonykipkemboi/ollama_pdf_rag emphasizes local/self-hosted inference while ask-my-pdf prioritizes browser-based execution, representing different deployment architecture choices for the same use case.

| | ollama_pdf_rag | ask-my-pdf |
| --- | --- | --- |
| Score | 61 (Established) | 39 (Emerging) |
| Maintenance | 10/25 | 0/25 |
| Adoption | 10/25 | 9/25 |
| Maturity | 16/25 | 16/25 |
| Community | 25/25 | 14/25 |
| Stars | 496 | 106 |
| Forks | 189 | 12 |
| Downloads | n/a | n/a |
| Commits (30d) | 0 | 0 |
| Language | TypeScript | TypeScript |
| License | MIT | MIT |
| Flags | No package, no dependents | Stale 6m, no package, no dependents |

About ollama_pdf_rag

tonykipkemboi/ollama_pdf_rag

A full-stack demo showcasing a local RAG (Retrieval Augmented Generation) pipeline to chat with your PDFs.

Implements a LangChain + ChromaDB vector pipeline with Ollama for embeddings and inference, eliminating cloud dependencies entirely. Offers three distinct interfaces—Next.js with REST API, Streamlit, and Jupyter notebooks—plus multi-PDF support with source citation tracking and multi-query retrieval strategies. Architecture combines FastAPI backend for document ingestion and RAG queries with a modern React frontend, enabling both programmatic and interactive exploration of document collections.
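The multi-query retrieval strategy mentioned above can be sketched in plain Python. This is an illustrative toy, not the repo's actual code: the `CHUNKS` corpus, `retrieve`, and `multi_query_retrieve` names are hypothetical, and `SequenceMatcher` stands in for the cosine similarity over Ollama embeddings that ChromaDB would compute in the real pipeline.

```python
from difflib import SequenceMatcher

# Hypothetical pre-chunked PDF text; the real pipeline embeds chunks
# with Ollama and stores the vectors in ChromaDB.
CHUNKS = [
    "Retrieval Augmented Generation combines search with LLM answers.",
    "ChromaDB stores embeddings for fast similarity search.",
    "Ollama runs language models locally without cloud services.",
]

def score(query: str, chunk: str) -> float:
    """Stand-in for embedding cosine similarity."""
    return SequenceMatcher(None, query.lower(), chunk.lower()).ratio()

def retrieve(query: str, k: int = 2) -> list[str]:
    """Top-k chunks for a single query phrasing."""
    return sorted(CHUNKS, key=lambda c: score(query, c), reverse=True)[:k]

def multi_query_retrieve(queries: list[str], k: int = 2) -> list[str]:
    """Union the top-k hits of several rephrasings of one question,
    deduplicated in first-seen order -- the core of multi-query retrieval."""
    seen: dict[str, None] = {}
    for q in queries:
        for chunk in retrieve(q, k):
            seen.setdefault(chunk)
    return list(seen)

variants = [
    "What is retrieval augmented generation?",
    "How does RAG combine retrieval and generation?",
]
results = multi_query_retrieve(variants, k=1)
```

Rephrasing the user's question before retrieval widens recall: chunks that match only one phrasing still reach the context window, which is why the repo pairs it with source-citation tracking.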

About ask-my-pdf

nico-martin/ask-my-pdf

A Webapp that uses Retrieval Augmented Generation (RAG) and Large Language Models to interact with a PDF directly in the browser.

Executes the entire RAG pipeline in-browser: PDF.js for text extraction, Transformers.js with all-MiniLM-L6-v2 for semantic embeddings, and an in-memory vector DB for cosine-similarity retrieval. Generates responses with Gemma 2B/9B models compiled to WebAssembly/WebGPU via MLC LLM, with an optional fallback to Google's experimental Prompt API in supported browsers.
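The in-memory cosine-similarity store at the heart of this design is simple enough to sketch. A minimal Python model follows (the browser app does this in TypeScript with 384-dimensional all-MiniLM-L6-v2 vectors from Transformers.js; the `InMemoryVectorDB` class name and the tiny 3-dimensional vectors here are illustrative assumptions):

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors; 0.0 for a zero vector."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

class InMemoryVectorDB:
    """Toy stand-in for the app's in-browser vector store."""

    def __init__(self) -> None:
        self.entries: list[tuple[str, list[float]]] = []

    def add(self, text: str, vector: list[float]) -> None:
        self.entries.append((text, vector))

    def search(self, query_vec: list[float], k: int = 3) -> list[str]:
        # Rank every stored chunk by similarity to the query vector.
        ranked = sorted(
            self.entries,
            key=lambda e: cosine(query_vec, e[1]),
            reverse=True,
        )
        return [text for text, _ in ranked[:k]]

db = InMemoryVectorDB()
db.add("page 1: introduction", [1.0, 0.0, 0.0])
db.add("page 2: methods", [0.0, 1.0, 0.0])
db.add("page 3: results", [0.7, 0.7, 0.0])
top = db.search([0.9, 0.1, 0.0], k=2)  # nearest two chunks
```

A brute-force linear scan like this is fine at single-PDF scale, which is why the app can skip a real database entirely and keep everything client-side.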

Scores updated daily from GitHub, PyPI, and npm data.