ollama_pdf_rag and vector-search-nodejs
These projects are competitors: both implement RAG pipelines for chatting with PDF documents, with ollama_pdf_rag using Ollama for fully local inference and vector-search-nodejs using LangChain with Couchbase for vector storage, offering alternative architectural approaches to the same use case.
About ollama_pdf_rag
tonykipkemboi/ollama_pdf_rag
A full-stack demo showcasing a local RAG (Retrieval Augmented Generation) pipeline to chat with your PDFs.
Implements a LangChain + ChromaDB vector pipeline with Ollama for embeddings and inference, eliminating cloud dependencies entirely. Offers three distinct interfaces—Next.js with REST API, Streamlit, and Jupyter notebooks—plus multi-PDF support with source citation tracking and multi-query retrieval strategies. Architecture combines FastAPI backend for document ingestion and RAG queries with a modern React frontend, enabling both programmatic and interactive exploration of document collections.
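The core of the retrieval step described above is a nearest-neighbor search over embedded document chunks, which ChromaDB performs for the repo. A minimal stand-in sketch of that idea, with hand-written toy vectors in place of real Ollama embeddings (the chunk texts, vectors, and `retrieve` helper are all hypothetical, for illustration only):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Hypothetical document chunks with pre-computed "embeddings".
# In the real pipeline these vectors come from an Ollama embedding model
# and are stored in ChromaDB.
chunks = {
    "Invoices are due within 30 days.": [0.9, 0.1, 0.0],
    "The warranty covers two years.":   [0.1, 0.8, 0.2],
    "Returns require a receipt.":       [0.0, 0.2, 0.9],
}

def retrieve(query_embedding, k=1):
    """Return the k chunk texts most similar to the query embedding."""
    ranked = sorted(chunks, key=lambda c: cosine(query_embedding, chunks[c]),
                    reverse=True)
    return ranked[:k]

print(retrieve([0.85, 0.15, 0.05]))  # nearest to the "invoices" chunk
```

In the actual projects, the retrieved chunks are then passed to the LLM as context for answering the user's question; this sketch only illustrates the similarity-search step that a vector store replaces with an indexed, persistent search.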
About vector-search-nodejs
couchbase-examples/vector-search-nodejs
A RAG demo using LangChain that allows you to chat with your uploaded PDF documents.