InternLM/HuixiangDou

HuixiangDou: Overcoming Group Chat Scenarios with LLM-based Technical Assistance

Score: 63 / 100 (Established)

Implements a three-stage pipeline (preprocess, rejection, response) to filter out irrelevant group-chat messages and generate contextual responses. Retrieval is hybrid: dense embeddings for documents, sparse methods for code, and knowledge graphs. Integrates with multiple LLM providers (DeepSeek, InternLM, GLM) and chat platforms (WeChat, Lark, Read the Docs), and supports multimodal input, including document OCR, image-text retrieval, and coreference resolution for context understanding.

2,481 stars and 12 monthly downloads. Available on PyPI.

Maintenance 6 / 25
Adoption 13 / 25
Maturity 25 / 25
Community 19 / 25


Stars: 2,481
Forks: 182
Language: Python
License: BSD-3-Clause
Last pushed: Nov 24, 2025
Monthly downloads: 12
Commits (30d): 0
Dependencies: 37

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/rag/InternLM/HuixiangDou"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
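The same endpoint can be called from Python using only the standard library. The URL pattern is taken from the curl command above; the shape of the returned JSON (e.g. a `stars` or `score` field) is an assumption, so inspect the payload before relying on specific keys.

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/rag"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-score endpoint URL for a given repository."""
    return f"{BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the quality-score JSON for a repository.

    Field names in the response are not documented here; this just
    returns whatever JSON object the endpoint serves.
    """
    with urllib.request.urlopen(quality_url(owner, repo), timeout=10) as resp:
        return json.load(resp)
```

For example, `fetch_quality("InternLM", "HuixiangDou")` requests the same data shown on this page.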