KomangAndika/Improved-RAG-Architecture

Improved RAG Architecture using semantic chunker, query input rewriter, and prompt engineering

Score: 11 / 100 (Experimental)

This helps developers build more effective Retrieval-Augmented Generation (RAG) applications without needing to run large language models locally. It takes user queries and source documents, processes them using advanced techniques, and outputs more accurate, contextually relevant answers. Developers who want to integrate powerful AI question-answering capabilities into their applications would use this.

No commits in the last 6 months.

Use this if you are a developer building a RAG application and want to leverage external LLM APIs and sophisticated text processing for better accuracy and retrieval.

Not ideal if you need extremely fast document chunking or prefer to run all components, including the language models, entirely on local hardware.
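The three techniques the description names can be sketched in miniature. This is a hypothetical illustration, not code from the repository: the embedding is a toy bag-of-words set, the query rewriter is a placeholder, and the thresholds and function names are all assumptions.

```python
# Hypothetical sketch of semantic chunking, query rewriting, and prompt
# templating. Embeddings and the LLM call are stubbed with toy functions;
# a real pipeline would use an embedding model and an external LLM API.

def embed(text):
    # Toy "embedding": a bag-of-words set with punctuation stripped.
    return set(text.lower().replace(".", "").split())

def similarity(a, b):
    # Jaccard similarity between two bag-of-words sets.
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def semantic_chunk(sentences, threshold=0.2):
    # Merge adjacent sentences into one chunk while they stay semantically similar.
    chunks, current = [], [sentences[0]]
    for sent in sentences[1:]:
        if similarity(embed(current[-1]), embed(sent)) >= threshold:
            current.append(sent)
        else:
            chunks.append(" ".join(current))
            current = [sent]
    chunks.append(" ".join(current))
    return chunks

def rewrite_query(query):
    # Placeholder rewriter: a real system would ask an LLM to expand or
    # clarify the query before retrieval.
    return query.strip().rstrip("?") + "?"

def build_prompt(query, retrieved_chunks):
    # Simple prompt template grounding the answer in retrieved context.
    context = "\n".join(f"- {c}" for c in retrieved_chunks)
    return f"Answer using only this context:\n{context}\n\nQuestion: {rewrite_query(query)}"

sentences = [
    "RAG retrieves documents before generation.",
    "RAG grounds generation in retrieved documents.",
    "Bananas are yellow.",
]
chunks = semantic_chunk(sentences)
print(len(chunks))  # the unrelated sentence lands in its own chunk
print(build_prompt("what is RAG", chunks[:1]))
```

The point of semantic chunking over fixed-size chunking is visible even in this toy: the two related sentences merge into one chunk while the unrelated one is split off, so retrieval returns coherent units of meaning.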

AI-application-development NLP-engineering Information-retrieval Generative-AI API-integration
No License · Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 3 / 25
Maturity 8 / 25
Community 0 / 25


Stars: 4
Forks:
Language: Jupyter Notebook
License: None
Category: local-rag-stacks
Last pushed: Sep 30, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/rag/KomangAndika/Improved-RAG-Architecture"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
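The same endpoint can be called from Python. The URL pattern is taken from the curl command above; the response field names used here (`maintenance`, `adoption`, `maturity`, `community`, `score`) are assumptions based on the scores shown on this page, so the parsing is run against a hypothetical sample payload rather than a live response.

```python
import json
from urllib.parse import quote

def quality_url(owner, repo):
    # Build the API URL shown above for an arbitrary repository.
    return f"https://pt-edge.onrender.com/api/v1/quality/rag/{quote(owner)}/{quote(repo)}"

# Hypothetical response shape; the real payload's field names may differ.
sample = json.loads("""
{"score": 11, "maintenance": 0, "adoption": 3, "maturity": 8, "community": 0}
""")

url = quality_url("KomangAndika", "Improved-RAG-Architecture")
print(url)
total = sample["maintenance"] + sample["adoption"] + sample["maturity"] + sample["community"]
print(total)  # the four sub-scores sum to the overall score on this page
```

Fetching the live payload is then one `urllib.request.urlopen(url)` (or `requests.get(url)`) away; the no-key tier allows 100 requests per day.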