amazon-bedrock-rag and rag-using-langchain-amazon-bedrock-and-opensearch
These repositories are complementary: the first uses Bedrock's managed Knowledge Bases service for turnkey RAG, while the second provides a flexible, open-source alternative that uses LangChain to orchestrate Bedrock LLMs with self-managed OpenSearch vector storage.
About amazon-bedrock-rag
aws-samples/amazon-bedrock-rag
Fully managed RAG solution implemented using Knowledge Bases for Amazon Bedrock
Implements RAG with dual data sources (S3 documents and web crawling), using Amazon OpenSearch Serverless for vector storage and automatic document chunking with Titan Embeddings. Provides a complete Q&A chatbot application with multi-turn conversation support, a model selection UI, and citation tracking, deployed via AWS CDK with API Gateway access controls and built-in security hardening.
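With the managed Knowledge Bases approach, retrieval and generation collapse into a single Bedrock Agent Runtime call. A minimal sketch of the request shape is below; `KB_ID` and `MODEL_ARN` are hypothetical placeholders, not values from the repo, and the actual `boto3` call is shown commented since it requires AWS credentials:

```python
# Sketch of a Knowledge Bases RetrieveAndGenerate request (assumed placeholders).
KB_ID = "EXAMPLEKBID"  # hypothetical knowledge base ID
MODEL_ARN = "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-v2"  # example model

def build_rag_request(question: str) -> dict:
    """Build the payload for a bedrock-agent-runtime retrieve_and_generate call."""
    return {
        "input": {"text": question},
        "retrieveAndGenerateConfiguration": {
            "type": "KNOWLEDGE_BASE",
            "knowledgeBaseConfiguration": {
                "knowledgeBaseId": KB_ID,
                "modelArn": MODEL_ARN,
            },
        },
    }

request = build_rag_request("What is our refund policy?")
# With AWS credentials configured, the managed service handles chunking,
# retrieval from OpenSearch Serverless, and answer generation:
# import boto3
# client = boto3.client("bedrock-agent-runtime")
# response = client.retrieve_and_generate(**request)
# answer, citations = response["output"]["text"], response["citations"]
print(request["retrieveAndGenerateConfiguration"]["type"])
```

The citation tracking the app surfaces comes from the `citations` field of the same response, so the client code never touches the vector store directly.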
About rag-using-langchain-amazon-bedrock-and-opensearch
aws-samples/rag-using-langchain-amazon-bedrock-and-opensearch
RAG with langchain using Amazon Bedrock and Amazon OpenSearch
Implements semantic search by generating Titan embeddings for documents stored in OpenSearch's vector engine, then uses LangChain to retrieve relevant context and augment prompts sent to Bedrock foundation models. Supports pluggable model selection across providers (Anthropic Claude, AI21 Jurassic) via command-line parameters, with optional multi-tenant isolation for data filtering during retrieval.
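The retrieve-then-augment flow described above can be sketched without any AWS dependencies: rank stored document vectors by cosine similarity against the query vector, then splice the top hit into the prompt. The toy in-memory vectors below stand in for Titan embeddings and the OpenSearch k-NN index; none of the names are from the repo's code:

```python
import math

# Toy corpus; in the repo these vectors come from Titan Embeddings
# and live in OpenSearch's vector (k-NN) engine.
CORPUS = {
    "Amazon Bedrock is a managed service for foundation models.": [0.9, 0.1, 0.0],
    "OpenSearch supports approximate k-NN vector search.": [0.1, 0.9, 0.1],
    "LangChain orchestrates retrieval and prompt construction.": [0.0, 0.2, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def retrieve(query_vec, k=1):
    """Return the k documents most similar to the query vector."""
    ranked = sorted(CORPUS, key=lambda d: cosine(CORPUS[d], query_vec), reverse=True)
    return ranked[:k]

def augment_prompt(question, query_vec):
    """Splice retrieved context into the prompt sent to the foundation model."""
    context = "\n".join(retrieve(query_vec))
    return f"Use the context to answer.\nContext:\n{context}\nQuestion: {question}"

# A query vector close to the OpenSearch document retrieves it as context.
print(augment_prompt("How does vector search work?", [0.1, 0.95, 0.05]))
```

In the actual repo, LangChain's vector-store retriever performs this ranking against OpenSearch, and a tenant filter can be applied at retrieval time for the multi-tenant isolation mode.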