RAG-Based-LLM-Chatbot and openvino-llm-chatbot-rag
These are competitors: both are complete RAG chatbot implementations that independently combine a locally run LLM with retrieval and embedding components. They differ primarily in inference stack (a local Llama 3.2 model vs. OpenVINO-accelerated inference) rather than filling complementary roles.
RAG-Based-LLM-Chatbot
Maintenance: 0/25 · Adoption: 6/25 · Maturity: 9/25 · Community: 17/25
openvino-llm-chatbot-rag
Maintenance: 0/25 · Adoption: 4/25 · Maturity: 1/25 · Community: 16/25
RAG-Based-LLM-Chatbot
Stars: 17 · Forks: 10 · Downloads: — · Commits (30d): 0 · Language: Python · License: MIT
openvino-llm-chatbot-rag
Stars: 7 · Forks: 7 · Downloads: — · Commits (30d): 0 · Language: Python · License: —
RAG-Based-LLM-Chatbot: Stale 6m · No Package · No Dependents
openvino-llm-chatbot-rag: Stale 6m · No Package · No Dependents · No License
About RAG-Based-LLM-Chatbot
GURPREETKAURJETHRA/RAG-Based-LLM-Chatbot
RAG Based LLM Chatbot Built using Open Source Stack (Llama 3.2 Model, BGE Embeddings, and Qdrant running locally within a Docker Container)
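The stack described above follows the standard RAG flow: embed the corpus, store vectors in Qdrant, retrieve the nearest neighbors for a query, and pass them to the LLM. A minimal sketch of the retrieval step, using tiny hand-written vectors in place of BGE embeddings and a brute-force cosine search in place of Qdrant (all texts and vectors here are illustrative, not from the repo):

```python
import math

# Toy stand-ins for BGE embeddings: in the real stack each text is embedded
# by a model; these tiny hand-written vectors are illustrative only.
DOCS = {
    "Qdrant runs locally in a Docker container.": [0.9, 0.1, 0.2],
    "Llama 3.2 generates the final answer.":      [0.1, 0.9, 0.3],
    "BGE embeddings encode text into vectors.":   [0.2, 0.3, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, k=2):
    """Brute-force nearest-neighbor search; Qdrant does this at scale
    with approximate indexes instead of a full scan."""
    ranked = sorted(DOCS, key=lambda d: cosine(query_vec, DOCS[d]), reverse=True)
    return ranked[:k]

# A query vector close to the "Qdrant" document.
print(retrieve([0.8, 0.2, 0.1], k=1)[0])
# → Qdrant runs locally in a Docker container.
```

Swapping the toy pieces for a real embedding model and a Qdrant collection changes the calls, not the shape of the flow.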
About openvino-llm-chatbot-rag
yas-sim/openvino-llm-chatbot-rag
LLM chatbot example using OpenVINO with RAG (Retrieval Augmented Generation).
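Both projects share the same generation step: the retrieved passages are spliced into the prompt before it reaches the model (Llama 3.2 in one, an OpenVINO-compiled LLM in the other). An engine-agnostic sketch of that prompt assembly; the template wording and the `generate` stub are assumptions, not either repo's actual code:

```python
def build_rag_prompt(question, passages):
    """Assemble a RAG prompt: retrieved context first, then the question.
    The template wording is illustrative, not taken from either repo."""
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

def generate(prompt):
    """Stand-in for the LLM call: a local Llama 3.2 model in one project,
    an OpenVINO-compiled model in the other."""
    return "(model output would appear here)"

prompt = build_rag_prompt(
    "Where does the vector database run?",
    ["Qdrant runs locally in a Docker container."],
)
print(generate(prompt))
```

Only `generate` differs between the two stacks; the retrieval-then-prompt pattern is identical, which is why the page classifies them as direct competitors.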
Scores updated daily from GitHub, PyPI, and npm data.