Shaivpidadi/refrag
REFRAG: LLM-powered representations for better RAG retrieval. Improve precision, reduce context size, same speed.
Implements micro-chunking (16-32 tokens) with fast encoder-only indexing and query-time compression policies that dynamically mark top-ranked chunks as RAW and lower-ranked ones as compressed keywords. It is model-agnostic and integrates with any LLM via context preparation; it supports sentence-transformers embeddings and currently uses in-memory storage, with vector DB support planned.
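The micro-chunking and RAW-vs-keyword policy described above can be sketched roughly as follows. This is a hedged illustration only, not the repository's actual API: the names `micro_chunk`, `compress_to_keywords`, and `prepare_context` are hypothetical, and the keyword compressor here is a naive frequency heuristic standing in for whatever the project uses.

```python
# Illustrative sketch of a REFRAG-style context-preparation policy:
# top-ranked micro-chunks are kept RAW, lower-ranked ones are reduced
# to a few keywords. All function names are hypothetical.
import re
from collections import Counter

def micro_chunk(text, max_tokens=24):
    """Split text into micro-chunks of roughly 16-32 whitespace tokens."""
    tokens = text.split()
    return [" ".join(tokens[i:i + max_tokens])
            for i in range(0, len(tokens), max_tokens)]

def compress_to_keywords(chunk, k=5):
    """Naive stand-in compressor: keep the k most frequent longer words."""
    words = re.findall(r"[a-zA-Z]{4,}", chunk.lower())
    return " ".join(word for word, _ in Counter(words).most_common(k))

def prepare_context(ranked_chunks, raw_top_n=2):
    """Chunks ranked above raw_top_n stay RAW; the rest become keywords."""
    parts = []
    for rank, chunk in enumerate(ranked_chunks):
        if rank < raw_top_n:
            parts.append(f"[RAW] {chunk}")
        else:
            parts.append(f"[KW] {compress_to_keywords(chunk)}")
    return "\n".join(parts)
```

Because the result is plain text, it can be prepended to any LLM prompt, which is consistent with the model-agnostic "context preparation" integration the description mentions.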
Stars: 26
Forks: 8
Language: Python
License: MIT
Category:
Last pushed: Dec 29, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/rag/Shaivpidadi/refrag"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
Marker-Inc-Korea/AutoRAG: An open-source framework for Retrieval-Augmented Generation (RAG) evaluation &...
IntelLabs/RAG-FiT: Framework for enhancing LLMs for RAG tasks using fine-tuning.
jxzhangjhu/Awesome-LLM-RAG: A curated list of advanced retrieval-augmented generation (RAG) in large language models.
coree/awesome-rag: A curated list of retrieval-augmented generation (RAG) in large language models.
IntelLabs/fastRAG: Efficient retrieval augmentation and generation framework.