amazon-bedrock-rag and rag-using-langchain-amazon-bedrock-and-opensearch

The two are complementary: the first uses Bedrock's managed Knowledge Bases service for turnkey RAG, while the second offers a flexible, open-source alternative that uses LangChain to orchestrate Bedrock LLMs with self-managed OpenSearch vector storage.

Metric        | amazon-bedrock-rag        | rag-using-langchain-amazon-bedrock-and-opensearch
------------- | ------------------------- | -------------------------------------------------
Maintenance   | 10/25                     | 0/25
Adoption      | 10/25                     | 10/25
Maturity      | 16/25                     | 16/25
Community     | 22/25                     | 21/25
Stars         | 195                       | 229
Forks         | 52                        | 45
Downloads     | -                         | -
Commits (30d) | 0                         | 0
Language      | JavaScript                | Python
License       | MIT-0                     | MIT-0
Flags         | No package, no dependents | Stale (6m), no package, no dependents

About amazon-bedrock-rag

aws-samples/amazon-bedrock-rag

Fully managed RAG solution implemented using Knowledge Bases for Amazon Bedrock

Implements RAG with dual data sources (S3 documents and web crawling), using Amazon OpenSearch Serverless for vector storage and automatic document chunking with Titan Embeddings. Provides a complete Q&A chatbot application with multi-turn conversation support, a model selection UI, and citation tracking. The stack is deployed via AWS CDK with API Gateway access controls and built-in security hardening.
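As a sketch of how a client might invoke the managed Knowledge Bases flow this repo builds on, the boto3 `bedrock-agent-runtime` client exposes a `retrieve_and_generate` call. The knowledge base ID and model ARN below are hypothetical placeholders, not values from the repo:

```python
def build_rag_request(question, kb_id, model_arn):
    """Build the request body for the RetrieveAndGenerate API, which
    retrieves chunks from a Bedrock knowledge base and generates an
    answer with the chosen foundation model in a single call."""
    return {
        "input": {"text": question},
        "retrieveAndGenerateConfiguration": {
            "type": "KNOWLEDGE_BASE",
            "knowledgeBaseConfiguration": {
                "knowledgeBaseId": kb_id,
                "modelArn": model_arn,
            },
        },
    }

request = build_rag_request(
    "What is our refund policy?",
    kb_id="EXAMPLEKBID",  # hypothetical knowledge base ID
    model_arn="arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-v2",
)

# With AWS credentials configured, the call would look like:
# import boto3
# client = boto3.client("bedrock-agent-runtime")
# response = client.retrieve_and_generate(**request)
# response["output"]["text"]   # generated answer
# response["citations"]        # citation data, as surfaced in the chatbot UI
```

Because the service handles chunking, embedding, and retrieval, the client never touches OpenSearch Serverless directly; that is the "fully managed" half of the comparison.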

About rag-using-langchain-amazon-bedrock-and-opensearch

aws-samples/rag-using-langchain-amazon-bedrock-and-opensearch

RAG with langchain using Amazon Bedrock and Amazon OpenSearch

Implements semantic search by generating Titan embeddings for documents stored in OpenSearch's vector engine, then uses LangChain to retrieve relevant context and augment prompts sent to Bedrock foundation models. Supports pluggable model selection across providers (Anthropic Claude, AI21 Jurassic) via command-line parameters, with optional multi-tenant isolation for data filtering during retrieval.
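Stripped of the LangChain and OpenSearch dependencies, the retrieve-then-augment flow this repo implements can be sketched in plain Python. The toy `embed` function below stands in for Titan embeddings, and the in-memory ranking stands in for OpenSearch's k-NN vector search; none of these names come from the repo itself:

```python
import math

def embed(text, dims=32):
    """Toy bag-of-words embedding (stand-in for Titan embeddings)."""
    vec = [0.0] * dims
    for word in text.lower().split():
        vec[sum(word.encode()) % dims] += 1.0
    return vec

def cosine(a, b):
    """Cosine similarity, the usual metric for vector retrieval."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def retrieve(query, documents, k=2):
    """Rank documents by similarity to the query (stand-in for k-NN search)."""
    q = embed(query)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def augment_prompt(query, context_docs):
    """Splice retrieved context into the prompt sent to the foundation model."""
    context = "\n".join(f"- {d}" for d in context_docs)
    return (
        "Use the following context to answer.\n"
        f"Context:\n{context}\n"
        f"Question: {query}"
    )

docs = [
    "Amazon Bedrock offers foundation models from several providers.",
    "OpenSearch supports k-NN vector search for semantic retrieval.",
    "The cafeteria serves lunch from noon to two.",
]
query = "How does vector search work?"
prompt = augment_prompt(query, retrieve(query, docs))
# `prompt` would then be sent to the selected Bedrock model
# (e.g. Anthropic Claude or AI21 Jurassic, chosen via CLI flag in the repo).
```

In the actual repo, LangChain wires these stages together and OpenSearch persists the vectors; the multi-tenant isolation mentioned above corresponds to filtering documents at the retrieval step.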

Scores updated daily from GitHub, PyPI, and npm data.