gc-qa-rag and rag-evaluation

| | gc-qa-rag | rag-evaluation |
|---|---|---|
| Overall score | 54 (Established) | 29 (Experimental) |
| Maintenance | 10/25 | 0/25 |
| Adoption | 9/25 | 6/25 |
| Maturity | 15/25 | 8/25 |
| Community | 20/25 | 15/25 |
| Stars | 71 | 17 |
| Forks | 24 | 4 |
| Downloads | | |
| Commits (30d) | 0 | 0 |
| Language | Python | Jupyter Notebook |
| License | MIT | None |
| Flags | No Package, No Dependents | No License, Stale 6m, No Package, No Dependents |

About gc-qa-rag

GrapeCity-AI/gc-qa-rag

A RAG (Retrieval-Augmented Generation) knowledge-base solution based on advanced pre-generated QA pairs.

This system helps organizations transform unstructured documents, such as product manuals or forum posts, into a high-quality, searchable question-and-answer knowledge base. It ingests various document types (PDF, Word, Markdown) and processes them into precise QA pairs, summaries, and related questions, which can then power an intelligent chatbot. It is aimed at support teams, customer service managers, and anyone who needs to quickly find answers within large volumes of organizational content.

knowledge-management customer-support technical-documentation information-retrieval enterprise-search
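The pre-generation idea described above can be sketched in a few lines: instead of indexing raw document chunks, the knowledge base stores LLM-generated questions and matches the user's query against those questions, returning the paired answer. Everything below (the sample QA pairs, the Jaccard scorer, the `answer` helper) is illustrative and not the project's actual API; the real system generates the pairs with an LLM and retrieves via a vector store.

```python
# Illustrative sketch of retrieval over pre-generated QA pairs.
# Names and data are hypothetical, not gc-qa-rag's real API.

def tokenize(text):
    return set(text.lower().split())

def jaccard(a, b):
    # Simple token-overlap similarity as a stand-in for vector search
    sa, sb = tokenize(a), tokenize(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

# QA pairs that would normally be pre-generated by an LLM from the docs
qa_index = [
    {"q": "How do I reset my password?",
     "a": "Open Settings > Account and choose 'Reset password'."},
    {"q": "Which file formats can be imported?",
     "a": "PDF, Word, and Markdown documents are supported."},
]

def answer(query, index=qa_index):
    # Match the query against the stored *questions*, not raw chunks
    best = max(index, key=lambda pair: jaccard(query, pair["q"]))
    return best["a"]

print(answer("what formats can I import"))
```

Matching a query to pre-generated questions rather than to raw passages is the core design choice: questions phrased by an LLM tend to sit much closer to user queries than document prose does.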

About rag-evaluation

0xshre/rag-evaluation

A question-answering RAG system that retrieves relevant passages from a custom ChromaDB vector store and then uses an LLM to generate the answer.

This project helps evaluate and improve question-answering systems built with Retrieval-Augmented Generation (RAG). You feed in documents and questions, and it generates answers while also producing a detailed report on how accurate and relevant those answers are. It is aimed at data scientists and AI engineers who are developing or fine-tuning RAG-based chatbots or knowledge-retrieval tools.

ai-development natural-language-processing knowledge-retrieval ml-evaluation question-answering-systems
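As a minimal sketch of what such an evaluation report might compute, the snippet below scores generated answers against references with token-level F1, one common QA metric. The function names, the metric choice, and the example data are assumptions for illustration; the project's actual metrics may differ.

```python
# Illustrative RAG answer-quality report using token-level F1
# against reference answers; not rag-evaluation's real interface.
from collections import Counter

def token_f1(prediction, reference):
    pred, ref = prediction.lower().split(), reference.lower().split()
    common = Counter(pred) & Counter(ref)   # multiset intersection
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

def evaluate(examples):
    # examples: [{"question", "generated", "reference"}, ...]
    scores = [token_f1(e["generated"], e["reference"]) for e in examples]
    return {"mean_f1": sum(scores) / len(scores), "per_question": scores}

report = evaluate([
    {"question": "Who wrote the report?",
     "generated": "the report was written by the QA team",
     "reference": "the QA team wrote the report"},
])
print(round(report["mean_f1"], 3))
```

Per-question scores let you spot which retrievals fail, while the mean gives a single number to track while tuning the chunking, retriever, or prompt.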

Scores are updated daily from GitHub, PyPI, and npm data.