LLM-RAG-Architecture and dotnet-rag-api
These are ecosystem siblings: one provides a generalizable RAG architecture reference implementation, while the other is a specialized .NET 8 API implementation that could adopt, or be compared against, that architecture pattern.
About LLM-RAG-Architecture
matt-bentley/LLM-RAG-Architecture
Production-grade Retrieval Augmented Generation (RAG) architecture using Open Source components
Implements hybrid search that combines dense embeddings (BAAI/bge-small-en-v1.5) with BM25 sparse vectors via Reciprocal Rank Fusion in Qdrant, plus cross-encoder reranking for result quality. Built on .NET with Semantic Kernel orchestration, it integrates FastAPI Python services for embeddings and reranking, supports multiple LLM backends (Azure OpenAI, OpenAI, Ollama), and uses PdfPig-based document extraction strategies.
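To make the fusion step concrete, here is a minimal Python sketch of the Reciprocal Rank Fusion formula that merges a dense-embedding ranking with a BM25 ranking. This is an illustration of the general RRF technique (with the conventional smoothing constant k = 60), not code from either repository; in the actual architecture, Qdrant can perform this fusion server-side.

```python
def reciprocal_rank_fusion(rankings, k=60):
    """Fuse several ranked lists of document IDs into one ranking.

    rankings: list of lists, each ordered best-first.
    k: smoothing constant; 60 is the value from the original RRF paper.
    Each document scores sum(1 / (k + rank)) across the lists it appears in.
    """
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    # Highest fused score first.
    return sorted(scores, key=scores.get, reverse=True)

dense = ["d3", "d1", "d2"]   # hypothetical dense-embedding ranking
sparse = ["d1", "d4", "d3"]  # hypothetical BM25 ranking
fused = reciprocal_rank_fusion([dense, sparse])
# d1 and d3 appear high in both lists, so they rise to the top.
```

Because RRF works on ranks rather than raw scores, it needs no score normalization between the dense and sparse retrievers, which is why it is a common choice for hybrid search.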
About dotnet-rag-api
Argha713/dotnet-rag-api
A production-ready RAG (Retrieval-Augmented Generation) API built with .NET 8. Upload documents, ask questions, and get AI-powered answers with source citations and streaming support.