daniel-sampaio-goncalves/KELLMA
KELLMA Search is a full‑stack Python application designed to streamline exploration of PubMed Central (PMC) biomedical literature. It combines a Streamlit interface with a modular, multi‑stage backend pipeline, local LLM inference through Ollama, and parallelized execution. The system generates persistent reports that include direct article quotes.
Stars: —
Forks: —
Language: Python
License: Apache-2.0
Category: —
Last pushed: Jan 30, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/rag/daniel-sampaio-goncalves/KELLMA"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
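For programmatic use, the curl call above can be wrapped in a small Python client. This is a minimal sketch using only the standard library; it assumes the endpoint returns JSON (the response schema and the mechanism for passing an API key are not documented here, so only the keyless request path is shown):

```python
import json
from urllib.parse import quote
from urllib.request import Request, urlopen

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/rag"

def quality_url(owner: str, repo: str) -> str:
    # Build the endpoint URL for a given owner/repo pair,
    # percent-encoding each path segment.
    return f"{API_BASE}/{quote(owner)}/{quote(repo)}"

def fetch_quality(owner: str, repo: str) -> dict:
    # Fetch and decode the JSON payload for one repository.
    # Assumes the API responds with JSON; no key is sent (free tier).
    req = Request(quality_url(owner, repo),
                  headers={"Accept": "application/json"})
    with urlopen(req, timeout=10) as resp:
        return json.load(resp)

if __name__ == "__main__":
    print(quality_url("daniel-sampaio-goncalves", "KELLMA"))
```

Under the free tier's 100 requests/day limit, batch lookups should be throttled or cached client-side.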
Higher-rated alternatives
run-llama/llama_index
LlamaIndex is the leading document agent and OCR platform
emarco177/documentation-helper
Reference implementation of a RAG-based documentation helper using LangChain, Pinecone, and Tavily.
janus-llm/janus-llm
Leveraging LLMs for modernization through intelligent chunking, iterative prompting and...
JetXu-LLM/llama-github
Llama-github is an open-source Python library that empowers LLM Chatbots, AI Agents, and...
Vasallo94/ObsidianRAG
RAG system to query your Obsidian notes using LangGraph and local LLMs (Ollama)