Interactive-RAG and Google-Cloud-RAG-Langchain

These are complementary tools that serve different use cases within the MongoDB-LangChain RAG ecosystem: one provides an interactive, parameter-tuning interface for experimenting with RAG systems, while the other demonstrates a production-ready integration pattern using Google Cloud infrastructure.

| Metric | Interactive-RAG | Google-Cloud-RAG-Langchain |
| --- | --- | --- |
| Maintenance | 6/25 | 0/25 |
| Adoption | 8/25 | 7/25 |
| Maturity | 16/25 | 9/25 |
| Community | 17/25 | 18/25 |
| Stars | 42 | 26 |
| Forks | 11 | 12 |
| Commits (30d) | 0 | 0 |
| Language | JavaScript | TypeScript |
| License | Apache-2.0 | Apache-2.0 |
| Package | none published, no dependents | none published, no dependents (stale 6m) |

About Interactive-RAG

ranfysvalle02/Interactive-RAG

An interactive RAG agent built with LangChain and MongoDB Atlas. Manage your knowledge base, switch embedding models, and tune retrieval parameters on-the-fly through a conversational interface.

Leverages MongoDB's document model to store text, metadata, and multiple embedding vectors in self-contained JSON documents, eliminating fragmented data architectures and enabling A/B testing of embedding models without migration. Integrates Firecrawl for LLM-ready web scraping and LangChain's `RecursiveCharacterTextSplitter` for semantic chunking, and exposes runtime tuning of the `min_rel_score` and `num_sources` retrieval parameters through conversational commands. Supports atomic document updates and session-based knowledge isolation, treating the knowledge base as a mutable entity rather than a static index.
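The self-contained document shape described above can be sketched as follows. This is a minimal illustration, not the repo's actual schema: field names (`embeddings`, `vector_index`) and dimensions are assumptions, and the retrieval helper builds a standard Atlas `$vectorSearch` aggregation stage against whichever model's vector field is currently active, so switching embedding models requires no migration.

```javascript
// Illustrative document: text, metadata, and one embedding per model,
// all in a single JSON document (field names are assumptions).
const doc = {
  text: "MongoDB Atlas supports vector search on indexed embedding fields.",
  metadata: { source: "https://example.com/page", session: "demo-session" },
  embeddings: {
    "text-embedding-3-small": [0.01, -0.12 /* ... more dims */],
    "all-MiniLM-L6-v2": [0.33, 0.08 /* ... more dims */],
  },
};

// Runtime-tunable retrieval settings, mirroring the conversational commands.
const settings = {
  min_rel_score: 0.75,
  num_sources: 4,
  model: "text-embedding-3-small",
};

// Build an Atlas aggregation pipeline targeting the active model's field.
function buildSearchStages(queryVector, { min_rel_score, num_sources, model }) {
  return [
    {
      $vectorSearch: {
        index: "vector_index",             // assumed index name
        path: `embeddings.${model}`,       // per-model vector field
        queryVector,
        numCandidates: num_sources * 10,
        limit: num_sources,
      },
    },
    { $addFields: { score: { $meta: "vectorSearchScore" } } },
    { $match: { score: { $gte: min_rel_score } } },  // enforce min_rel_score
  ];
}

console.log(buildSearchStages([0.1, 0.2], settings)[0].$vectorSearch.path);
// -> embeddings.text-embedding-3-small
```

Because each model's vectors live under their own field, A/B testing is just a change to `settings.model` plus a matching search index definition.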

About Google-Cloud-RAG-Langchain

mongodb-developer/Google-Cloud-RAG-Langchain

RAG Chat Assistant with MongoDB Atlas, Google Cloud, and LangChain

Implements vector search on MongoDB Atlas using Google Cloud's Vertex AI embeddings and the Gemini LLM, processing PDF documents through an automated embedding pipeline. An Angular frontend talks to an Express.js backend that orchestrates retrieval and generation through LangChain, with a toggle between full RAG and retrieval-only modes for context-aware question answering. Demonstrates end-to-end semantic search by storing vectorized documents in Atlas and running Euclidean-distance similarity queries against indexed embedding fields.
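The togglable RAG vs. retrieval-only flow can be sketched as below. This is a hypothetical outline of the control flow only: `retrieve` and `generate` are stand-ins for the real Atlas vector search and Gemini calls (the repo wires these through LangChain and Vertex AI), so the sketch runs standalone.

```javascript
// Stand-in for an Atlas $vectorSearch over Vertex AI embeddings.
async function retrieve(question) {
  return [
    { text: "Chunk about MongoDB Atlas vector indexes.", score: 0.91 },
    { text: "Chunk about Euclidean-distance similarity.", score: 0.84 },
  ];
}

// Stand-in for a Gemini call with retrieved context stuffed into the prompt.
async function generate(question, contextChunks) {
  const context = contextChunks.map((c) => c.text).join("\n");
  return `Answer to "${question}" grounded in ${contextChunks.length} chunks:\n${context}`;
}

// The toggle the frontend exposes: rag=false returns the raw matched chunks
// (retrieval-only mode); rag=true runs the full retrieve-then-generate pipeline.
async function answer(question, { rag }) {
  const chunks = await retrieve(question);
  if (!rag) {
    return { sources: chunks };
  }
  return { answer: await generate(question, chunks), sources: chunks };
}
```

In the actual app this branch would live in an Express.js route handler, with the Angular frontend sending the toggle state alongside the question.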

Scores updated daily from GitHub, PyPI, and npm data.