Medical-RAG-Chatbot and End-to-End-Medical-Chatbot
These projects are alternatives for the same use case: both implement medical RAG chatbots using LangChain and LLM-based question answering, but they differ in vector store (FAISS vs. Pinecone) and LLM source (Mistral via HuggingFace vs. unspecified).
About Medical-RAG-Chatbot
Ratnesh-181998/Medical-RAG-Chatbot
Medical RAG question-answering system built with LangChain, a FAISS vector store, PyPDF, and Streamlit. Powered by open-source Mistral LLMs from HuggingFace with custom context-aware chains. Includes a production-grade LLMOps/AIOps pipeline using Docker, Jenkins CI/CD, Aqua Trivy security scanning, and automated deployment on AWS App Runner.
About End-to-End-Medical-Chatbot
mdzaheerjk/End-to-End-Medical-Chatbot
Medical Chatbot using Retrieval-Augmented Generation (RAG) to answer medical queries. PDFs are converted into embeddings and stored in Pinecone. LangChain retrieves context for LLM responses. Built with Flask and deployable on AWS using Docker and GitHub Actions for scalable access.
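The "LangChain retrieves context for LLM responses" step in both repos amounts to stuffing retrieved chunks into a question-answering prompt before it is sent to the model. A minimal sketch of that assembly, with a hypothetical template (neither repository's exact prompt wording is public in this description):

```python
def build_prompt(question: str, context_chunks: list[str]) -> str:
    """Assemble a context-stuffed QA prompt from retrieved chunks.

    The template text is an illustrative stand-in, not either repo's
    actual prompt.
    """
    context = "\n\n".join(f"[{i + 1}] {c}" for i, c in enumerate(context_chunks))
    return (
        "Answer the medical question using only the context below. "
        "If the context is insufficient, say you don't know.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )


prompt = build_prompt(
    "What is ibuprofen used for?",
    ["Ibuprofen is an anti-inflammatory drug used for pain relief."],
)
```

The resulting string is what the chain would send to the LLM; numbering the chunks makes it easy for the model to cite which passage supports its answer.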