electricpipelines/barq
Dabarqus is a fast RAG engine that runs everywhere.
Built in C++ with zero external dependencies, Dabarqus bundles vector search, configurable embedding models, and LLM inference into a single binary deployable across Windows, macOS, and Linux with optional NVIDIA CUDA acceleration. The system organizes data into portable semantic indexes ("memory banks") queryable via REST API, CLI, or native Python/JavaScript SDKs, with integrated LLM inference compatible with ChatGPT, Ollama, and other providers.
No commits in the last 6 months.
Stars: 59
Forks: 7
Language: —
License: —
Category: —
Last pushed: Jan 30, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/vector-db/electricpipelines/barq"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
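The endpoint above follows a simple path pattern: a base URL, a category segment, then the repository owner and name. A minimal Python sketch for building that URL, assuming the category segment ("vector-db" here) varies per repository; the response schema is not documented on this card, so the fetched JSON is printed as-is:

```python
import json
import urllib.request


def quality_url(category: str, owner: str, repo: str) -> str:
    # Path pattern taken from the curl example above.
    base = "https://pt-edge.onrender.com/api/v1/quality"
    return f"{base}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str) -> dict:
    # Fetch the quality data for a repo; the response is assumed
    # to be JSON, as suggested by the REST-style endpoint.
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)


print(quality_url("vector-db", "electricpipelines", "barq"))
```

Rate limits apply as noted above (100 requests/day anonymously), so cache responses rather than polling.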
Higher-rated alternatives
notadev-iamaura/OneRAG
Production-ready RAG Framework (Python/FastAPI). 1-line config swaps: 6 Vector DBs (Weaviate,...
pinecone-io/canopy
Retrieval Augmented Generation (RAG) framework and context engine powered by Pinecone
teilomillet/raggo
A lightweight, production-ready RAG (Retrieval Augmented Generation) library in Go.
MERakram/Advanced-RAG-monorepo
🚀 Production-ready modular RAG monorepo: Local LLM inference (vLLM) • Hybrid retrieval with...
balavenkatesh3322/rag-doctor
🩺 Agentic RAG pipeline failure diagnosis tool. Tells you why your RAG failed — chunk...