rag-with-amazon-postgresql-using-pgvector-and-sagemaker and rag-with-amazon-opensearch-and-sagemaker

These are ecosystem siblings: both are reference implementations of RAG pipelines that use SageMaker for embeddings and LLM inference, but they demonstrate the pattern with different vector database backends (PostgreSQL with pgvector versus OpenSearch), letting users choose based on their existing infrastructure or requirements.

rag-with-amazon-postgresql-using-pgvector-and-sagemaker
Maintenance 0/25 · Adoption 6/25 · Maturity 9/25 · Community 15/25
Stars: 16 · Forks: 5 · Commits (30d): 0 · Language: Python · License: MIT-0
Flags: Archived · Stale 6m · No Package · No Dependents

rag-with-amazon-opensearch-and-sagemaker
Maintenance 0/25 · Adoption 7/25 · Maturity 9/25 · Community 9/25
Stars: 29 · Forks: 3 · Commits (30d): 0 · Language: Python · License: MIT-0
Flags: Stale 6m · No Package · No Dependents

About rag-with-amazon-postgresql-using-pgvector-and-sagemaker

aws-samples/rag-with-amazon-postgresql-using-pgvector-and-sagemaker

Question answering application with Large Language Models (LLMs) and Amazon PostgreSQL using pgvector

About rag-with-amazon-opensearch-and-sagemaker

aws-samples/rag-with-amazon-opensearch-and-sagemaker

Question answering generative AI application with Large Language Models (LLMs) and Amazon OpenSearch Service

Implements retrieval-augmented generation by storing document embeddings in OpenSearch and dynamically retrieving relevant passages to augment LLM prompts, addressing token limits and improving answer accuracy. Deploys SageMaker endpoints for both text generation and embedding creation, with infrastructure-as-code (CDK) for the full stack including OpenSearch clusters and credential management. Provides a complete end-to-end workflow from data ingestion through a Streamlit frontend, leveraging LangChain for orchestration.
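The retrieve-then-augment flow described above can be sketched in a few lines. This is an illustrative toy, not the repo's code: the bag-of-words `embed()` stands in for the SageMaker embedding endpoint, and the in-memory ranking stands in for an OpenSearch (or pgvector) k-NN query; only the overall pattern — embed the query, fetch the nearest passages, splice them into the prompt — mirrors the sample.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in embedder: bag-of-words term counts. The actual sample
    # calls a SageMaker embedding endpoint here.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Rank documents by similarity to the query; a vector database
    # (OpenSearch k-NN, pgvector) does this server-side at scale.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    # Augment the LLM prompt with retrieved context so answers come
    # from the documents rather than the model's parametric memory,
    # while keeping the prompt within token limits.
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "OpenSearch stores vector embeddings for k-NN search.",
    "pgvector adds vector similarity search to PostgreSQL.",
    "SageMaker hosts both the LLM and the embedding model.",
]
prompt = build_prompt("Where are embeddings stored?", docs)
```

In the sample itself, LangChain wires these steps together and the final prompt is sent to a SageMaker-hosted text-generation endpoint behind the Streamlit frontend.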

Scores updated daily from GitHub, PyPI, and npm data.