aws-samples/rag-with-amazon-opensearch-and-sagemaker

Question Answering Generative AI application with Large Language Models (LLMs) and Amazon OpenSearch Service

Quality score: 25 / 100 (Experimental)

Implements retrieval-augmented generation by storing document embeddings in OpenSearch and dynamically retrieving relevant passages to augment LLM prompts, addressing token limits and improving answer accuracy. Deploys SageMaker endpoints for both text generation and embedding creation, with infrastructure-as-code (CDK) for the full stack including OpenSearch clusters and credential management. Provides a complete end-to-end workflow from data ingestion through a Streamlit frontend, leveraging LangChain for orchestration.
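The flow described above can be sketched in a minimal, dependency-free form: embed the question, retrieve the closest stored passages, and splice them into the LLM prompt. This is an illustration only; the actual project uses LangChain with an OpenSearch vector index and SageMaker-hosted embedding and generation endpoints, all of which are stubbed out here.

```python
import math

def embed(text: str) -> list[float]:
    # Stub embedding: a bag-of-letters frequency vector. The real stack
    # would call a SageMaker embedding endpoint here.
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# "Ingestion": store (passage, embedding) pairs, standing in for the
# OpenSearch k-NN index.
documents = [
    "OpenSearch stores document embeddings for k-NN retrieval.",
    "SageMaker hosts the text-generation and embedding endpoints.",
    "Streamlit provides the question-answering frontend.",
]
index = [(doc, embed(doc)) for doc in documents]

def retrieve(question: str, k: int = 2) -> list[str]:
    # Rank stored passages by similarity to the question embedding.
    q = embed(question)
    ranked = sorted(index, key=lambda pair: cosine(q, pair[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

def build_prompt(question: str) -> str:
    # Augment the prompt with only the top-k passages, which is how
    # retrieval keeps the context inside the model's token limit.
    context = "\n".join(retrieve(question))
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"

print(build_prompt("Where are the document embeddings stored?"))
```

In the real deployment, `embed` and the final generation step are SageMaker endpoint invocations, and `index`/`retrieve` are an OpenSearch k-NN query orchestrated by LangChain.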

No commits in the last 6 months.

Status: Stale (6 months) · No Package · No Dependents

Score breakdown:
Maintenance 0 / 25
Adoption 7 / 25
Maturity 9 / 25
Community 9 / 25


Stars: 29
Forks: 3
Language: Python
License: MIT-0
Last pushed: Dec 03, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/vector-db/aws-samples/rag-with-amazon-opensearch-and-sagemaker"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.