ollama_pdf_rag and vector-search-nodejs

These projects are alternatives: both implement RAG pipelines for chatting with PDF documents. ollama_pdf_rag runs the full stack locally, using Ollama for embeddings and inference with a ChromaDB store, while vector-search-nodejs pairs LangChain with Couchbase for vector storage. They represent two architectural approaches to the same use case.

                    ollama_pdf_rag        vector-search-nodejs
Overall score       61 (Established)      24 (Experimental)
Maintenance         10/25                 10/25
Adoption            10/25                 5/25
Maturity            16/25                 1/25
Community           25/25                 8/25
Stars               496                   9
Forks               189                   1
Downloads
Commits (30d)       0                     0
Language            TypeScript            TypeScript
License             MIT                   none
Package             none                  none
Dependents          none                  none

About ollama_pdf_rag

tonykipkemboi/ollama_pdf_rag

A full-stack demo showcasing a local RAG (Retrieval Augmented Generation) pipeline to chat with your PDFs.

Implements a LangChain + ChromaDB vector pipeline with Ollama for embeddings and inference, eliminating cloud dependencies entirely. Offers three distinct interfaces—Next.js with REST API, Streamlit, and Jupyter notebooks—plus multi-PDF support with source citation tracking and multi-query retrieval strategies. Architecture combines FastAPI backend for document ingestion and RAG queries with a modern React frontend, enabling both programmatic and interactive exploration of document collections.
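The retrieve-then-generate flow described above can be sketched without the real dependencies. In this minimal, illustrative sketch a toy bag-of-words embedding stands in for Ollama embeddings, a plain in-memory class stands in for the ChromaDB store, and all names (`ToyVectorStore`, `build_prompt`) are hypothetical; the actual project uses LangChain components end to end.

```python
# Toy sketch of a local RAG flow: embed chunks, retrieve by similarity,
# then ground the LLM prompt in the retrieved context.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy embedding: bag-of-words term counts (Ollama would return a dense vector)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class ToyVectorStore:
    """Stands in for ChromaDB: stores (embedding, chunk) pairs, retrieves top-k."""
    def __init__(self):
        self.entries = []

    def add(self, chunk: str):
        self.entries.append((embed(chunk), chunk))

    def top_k(self, query: str, k: int = 2):
        scored = [(cosine(embed(query), e), c) for e, c in self.entries]
        return [c for s, c in sorted(scored, reverse=True)[:k] if s > 0]

def build_prompt(question: str, store: ToyVectorStore) -> str:
    """RAG step: retrieve relevant chunks, then constrain the model to them."""
    context = "\n".join(store.top_k(question))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

# Index a few "PDF" chunks, then build a grounded prompt for the local model.
store = ToyVectorStore()
for chunk in ["The invoice total is 42 dollars.",
              "Payment is due within 30 days.",
              "Contact support at the help desk."]:
    store.add(chunk)

prompt = build_prompt("What is the invoice total?", store)
print(prompt)
```

In the real pipeline the prompt would be sent to an Ollama-served model, and the multi-query retrieval strategy would issue several reformulations of the question before merging the retrieved chunks.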

About vector-search-nodejs

couchbase-examples/vector-search-nodejs

A RAG demo using LangChain that allows you to chat with your uploaded PDF documents.

Scores are updated daily from GitHub, PyPI, and npm data.