IntelLabs/fastRAG
Efficient Retrieval Augmentation and Generation Framework
**Status:** Archived

**Technical Summary:** Built on Haystack v2, fastRAG provides optimized RAG components including ColBERT with PLAID indexing for token-level late interaction, Fusion-in-Decoder for multi-document generation, and REPLUG for improved decoding. It integrates with Intel hardware acceleration (IPEX, Optimum-Intel, Optimum-Habana) and alternative inference backends like ONNX Runtime, OpenVINO, and Llama-CPP, enabling efficient LLM inference on Xeon processors and Gaudi accelerators. The framework bundles quantized embedders, sparse rerankers, and vector stores (FAISS, Qdrant, Elasticsearch) as drop-in Haystack components.
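The "token-level late interaction" that ColBERT performs can be illustrated without the library itself. The sketch below is a minimal, self-contained NumPy implementation of the MaxSim scoring rule (for each query token embedding, take the maximum similarity over all document token embeddings, then sum); it is an illustration of the general technique, not fastRAG's or ColBERT's actual code, and the function name is my own.

```python
import numpy as np

def maxsim_score(query_vecs: np.ndarray, doc_vecs: np.ndarray) -> float:
    """ColBERT-style late-interaction score between one query and one document.

    query_vecs: (num_query_tokens, dim) token embeddings
    doc_vecs:   (num_doc_tokens, dim) token embeddings
    """
    # Normalize rows so the dot product is cosine similarity.
    q = query_vecs / np.linalg.norm(query_vecs, axis=1, keepdims=True)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    sim = q @ d.T  # (num_query_tokens, num_doc_tokens) similarity matrix
    # MaxSim: best-matching document token per query token, summed.
    return float(sim.max(axis=1).sum())
```

Because scoring keeps per-token embeddings rather than a single pooled vector, indexes grow large; PLAID is the indexing scheme fastRAG's ColBERT component uses to make this retrieval efficient at scale.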
- **Stars:** 1,768
- **Forks:** 165
- **Language:** Python
- **License:** Apache-2.0
- **Category:**
- **Last pushed:** Jan 12, 2026
- **Commits (30d):** 0
Get this data via API:

```shell
curl "https://pt-edge.onrender.com/api/v1/quality/rag/IntelLabs/fastRAG"
```
Open to everyone: 100 requests/day with no key required; a free key raises the limit to 1,000/day.
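The same endpoint can be called from Python using only the standard library. This is a sketch assuming the path layout shown in the curl example above; the shape of the JSON response is not documented here, so the code returns it as an untyped dict.

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/rag"

def build_url(owner: str, repo: str) -> str:
    # Path layout taken from the curl example: /{owner}/{repo}.
    return f"{API_BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    # Anonymous access is rate-limited to 100 requests/day.
    with urllib.request.urlopen(build_url(owner, repo)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    print(fetch_quality("IntelLabs", "fastRAG"))
```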
Higher-rated alternatives
- **Marker-Inc-Korea/AutoRAG**: An Open-Source Framework for Retrieval-Augmented Generation (RAG) Evaluation &...
- **IntelLabs/RAG-FiT**: Framework for enhancing LLMs for RAG tasks using fine-tuning.
- **jxzhangjhu/Awesome-LLM-RAG**: A curated list of advanced retrieval augmented generation (RAG) in Large Language Models
- **coree/awesome-rag**: A curated list of retrieval-augmented generation (RAG) in large language models
- **ibm-self-serve-assets/Blended-RAG**: Improving RAG (Retriever-Augmented Generation) Accuracy with Semantic Search and...