redis-applied-ai/aws-redis-bedrock-stack

Reference architecture, guides, and examples using Amazon Bedrock and Redis as a knowledge base for RAG.

Score: 22 / 100 (Experimental)

Implements end-to-end document ingestion with automatic chunking and embedding generation via Bedrock's foundation models, storing vectors in Redis Enterprise Cloud for semantic search during RAG operations. Additionally uses Redis as an LLM cache layer to reduce inference costs and latency. The stack integrates with AWS Secrets Manager for credential management and supports S3 as a document source, enabling agents to dynamically retrieve relevant context during LLM inference.
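The pipeline described above can be sketched in a few dozen lines of Python. This is a minimal illustration, not the stack's actual code: the fixed-size chunker, the toy hash-based embedder (standing in for a Bedrock embedding model), the in-memory vector index (standing in for Redis vector search), and the exact-match response cache (standing in for the Redis LLM cache layer) are all assumptions for demonstration.

```python
import hashlib
import math

def chunk(text: str, size: int = 200, overlap: int = 50) -> list[str]:
    """Split a document into fixed-size, overlapping character chunks."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

def embed(text: str, dim: int = 64) -> list[float]:
    """Toy deterministic bag-of-words embedding. A real stack would call a
    Bedrock embedding model via the AWS SDK instead."""
    vec = [0.0] * dim
    for token in text.lower().split():
        bucket = int(hashlib.md5(token.encode()).hexdigest(), 16) % dim
        vec[bucket] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

# In-memory stand-in for a Redis vector index.
index: list[tuple[str, list[float]]] = []

def ingest(doc: str) -> None:
    """Chunk a document, embed each chunk, and store the vectors."""
    for c in chunk(doc):
        index.append((c, embed(c)))

def search(query: str, k: int = 2) -> list[str]:
    """Return the k chunks most similar to the query (cosine similarity,
    since all vectors are unit-normalized)."""
    q = embed(query)
    scored = sorted(index, key=lambda e: -sum(a * b for a, b in zip(q, e[1])))
    return [text for text, _ in scored[:k]]

# Exact-match cache of LLM responses, standing in for the Redis cache layer.
llm_cache: dict[str, str] = {}

def answer(query: str) -> str:
    """Serve a cached response if present; otherwise retrieve context and
    (here, as a placeholder) synthesize an answer, then cache it."""
    if query in llm_cache:
        return llm_cache[query]
    context = search(query)
    response = f"LLM answer using {len(context)} retrieved chunks"  # placeholder for a Bedrock call
    llm_cache[query] = response
    return response
```

Repeated calls to `answer` with the same query skip both retrieval and inference, which is the cost- and latency-reduction role the description assigns to the Redis cache layer.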

No commits in the last 6 months.

No license · Stale (6 months) · No package published · No dependents
Maintenance: 0 / 25
Adoption: 6 / 25
Maturity: 1 / 25
Community: 15 / 25


Stars: 15
Forks: 5
Language: not listed
License: none
Last pushed: Oct 21, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/vector-db/redis-applied-ai/aws-redis-bedrock-stack"

Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000/day.