Medical-RAG-Chatbot and End-to-End-Medical-Chatbot

These projects are competitors: both implement medical RAG chatbots using LangChain and LLM-based question answering, but they differ in vector store (FAISS vs. Pinecone) and LLM source (Mistral via HuggingFace vs. unspecified), making them alternative solutions for the same use case.

Metric           Medical-RAG-Chatbot        End-to-End-Medical-Chatbot
Overall score    32 (Emerging)              23 (Experimental)
Maintenance      10/25                      10/25
Adoption         1/25                       4/25
Maturity         9/25                       9/25
Community        12/25                      0/25
Stars            1                          8
Forks            1
Downloads
Commits (30d)    0                          0
Language         Python
License          MIT
Package          No package, no dependents  No package, no dependents

About Medical-RAG-Chatbot

Ratnesh-181998/Medical-RAG-Chatbot

Medical RAG Question-Answering System built using LangChain, FAISS vector store, PyPDF, and Streamlit. Powered by Mistral open-source LLMs (HuggingFace) with custom context-aware chains. Includes a production-grade LLMOps/AIOps pipeline using Docker, Jenkins CI/CD, Aqua Trivy security scanning, and automated deployment on AWS App Runner.
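The core of such a pipeline is similarity search over embedded document chunks. The sketch below illustrates that retrieval step with a toy bag-of-words embedding in plain Python; it is a stand-in for the repo's actual FAISS index and HuggingFace embeddings, and all names here (`toy` vocabulary builder, `retrieve`) are illustrative, not the project's API.

```python
import math
import re

def tokenize(text):
    """Lowercase word tokens (toy substitute for a real tokenizer)."""
    return re.findall(r"[a-z]+", text.lower())

def build_vocab(texts):
    """Assign each distinct word a vector index."""
    vocab = {}
    for t in texts:
        for w in tokenize(t):
            vocab.setdefault(w, len(vocab))
    return vocab

def embed(text, vocab):
    """L2-normalized bag-of-words vector (stand-in for a learned
    sentence embedding from a HuggingFace model)."""
    counts = [0.0] * len(vocab)
    for w in tokenize(text):
        if w in vocab:
            counts[vocab[w]] += 1.0
    norm = math.sqrt(sum(x * x for x in counts)) or 1.0
    return [x / norm for x in counts]

def retrieve(query, chunks, k=1):
    """Rank chunks by cosine similarity to the query, which is what a
    FAISS similarity search does at scale."""
    vocab = build_vocab(chunks + [query])
    q = embed(query, vocab)
    scored = sorted(
        ((sum(a * b for a, b in zip(q, embed(c, vocab))), c) for c in chunks),
        reverse=True,
    )
    return [c for _, c in scored[:k]]

chunks = [
    "Aspirin is used to reduce fever and relieve mild pain.",
    "The mitochondria is the powerhouse of the cell.",
    "Ibuprofen is a nonsteroidal anti-inflammatory drug.",
]
print(retrieve("What reduces fever and pain?", chunks, k=1))
# → ['Aspirin is used to reduce fever and relieve mild pain.']
```

In the real pipeline, the retrieved chunks are then injected into the prompt of the context-aware LangChain chain, and the LLM answers from that context rather than from its parametric memory.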

About End-to-End-Medical-Chatbot

mdzaheerjk/End-to-End-Medical-Chatbot

Medical Chatbot using Retrieval-Augmented Generation (RAG) to answer medical queries. PDFs are converted into embeddings and stored in Pinecone. LangChain retrieves context for LLM responses. Built with Flask and deployable on AWS using Docker and GitHub Actions for scalable access.
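Before embedding, the extracted PDF text is split into overlapping chunks so that context is not lost at chunk boundaries. The sketch below shows that splitting step with a simple character-window splitter; it is similar in spirit to LangChain's text splitters but is an assumption-laden illustration, not the library's actual algorithm.

```python
def split_text(text, chunk_size=100, overlap=20):
    """Split text into overlapping character windows.

    Each chunk starts `chunk_size - overlap` characters after the
    previous one, so consecutive chunks share `overlap` characters.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks

# Pretend PDF text (illustrative placeholder, not the repo's data).
pages = "Patient exhibits elevated temperature. " * 10
chunks = split_text(pages, chunk_size=120, overlap=30)
print(len(chunks))  # → 5 chunks sent on for embedding
```

Each resulting chunk would then be embedded and upserted into the Pinecone index, with the chunk text stored as metadata so LangChain can return it as retrieval context.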

Scores updated daily from GitHub, PyPI, and npm data.