JetXu-LLM/llama-github
Llama-github is an open-source Python library that enables LLM chatbots, AI agents, and auto-dev tools to conduct agentic RAG over actively selected public GitHub projects: it retrieves relevant code, augments it through LLMs, and generates context for any coding question, streamlining the development of sophisticated AI-driven applications.
It implements multi-threaded repository-pool caching to minimize GitHub API consumption and supports flexible LLM provider integration (LangChain-compatible models, custom embedders, and rerankers). Built on asynchronous processing, it uses LLM-powered query analysis to decompose complex questions into effective search strategies, then synthesizes the retrieved code snippets and issues into contextual answers via language models.
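The repository-pool caching idea can be illustrated with a small self-contained sketch. This is not llama-github's actual implementation; the class and fetch function below are hypothetical, showing only the general pattern of a thread-safe cache that prevents repeated API calls for the same repository:

```python
import threading

class RepoPool:
    """Illustrative sketch (not llama-github's real code) of a thread-safe
    repository cache that avoids redundant GitHub API calls."""

    def __init__(self, fetch):
        self._fetch = fetch          # function that would hit the GitHub API
        self._cache = {}             # full_name -> repo data
        self._lock = threading.Lock()

    def get(self, full_name):
        # Fast path: serve from cache under the lock.
        with self._lock:
            if full_name in self._cache:
                return self._cache[full_name]
        # Slow path: fetch outside the lock, then store the first result.
        data = self._fetch(full_name)
        with self._lock:
            return self._cache.setdefault(full_name, data)

# Simulated fetcher so the sketch runs offline; counts "API" calls.
calls = []
def fake_fetch(name):
    calls.append(name)
    return {"full_name": name}

pool = RepoPool(fake_fetch)
pool.get("JetXu-LLM/llama-github")
pool.get("JetXu-LLM/llama-github")   # second call is served from the cache
print(len(calls))                     # → 1
```

The `setdefault` on the slow path keeps the cache consistent even if two threads fetch the same repository concurrently: both fetches may run, but only the first stored result is ever returned.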
318 stars and 239 monthly downloads. Available on PyPI.
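Installation is via pip (`pip install llama-github`). A quick-start sketch based on the project's README follows; it requires a GitHub personal access token (and optionally an OpenAI key), so it is shown for illustration, and the exact parameter names may have changed since:

```python
from llama_github import GithubRAG

# Initialize with your credentials (values below are placeholders).
github_rag = GithubRAG(
    github_access_token="your_github_access_token",
    openai_api_key="your_openai_api_key",  # optional, depending on mode
)

# Retrieve GitHub-grounded context for a coding question.
context = github_rag.retrieve_context("How to create a NumPy array in Python?")
print(context)
```

The returned context is intended to be fed into a downstream LLM prompt rather than shown to end users directly.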
Stars
318
Forks
22
Language
Python
License
Apache-2.0
Category
Last pushed
Nov 21, 2025
Monthly downloads
239
Commits (30d)
0
Dependencies
14
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/rag/JetXu-LLM/llama-github"
Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000/day.
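The same endpoint can be consumed programmatically and the JSON inspected before depending on a library. The response shape below is an assumption for illustration (field names are hypothetical; check the live endpoint for the actual schema):

```python
import json

# Hypothetical response body; the real endpoint's field names may differ.
sample = json.loads("""{
  "repo": "JetXu-LLM/llama-github",
  "stars": 318,
  "monthly_downloads": 239,
  "license": "Apache-2.0"
}""")

# Example policy check: require some real adoption before adopting a dependency.
healthy = sample["stars"] >= 100 and sample["monthly_downloads"] > 0
print(healthy)  # → True
```

In a real client you would fetch the body with `urllib.request.urlopen` or `requests.get` and apply the same parsing.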
Related tools
run-llama/llama_index
LlamaIndex is the leading document agent and OCR platform
emarco177/documentation-helper
Reference implementation of a RAG-based documentation helper using LangChain, Pinecone, and Tavily.
janus-llm/janus-llm
Leveraging LLMs for modernization through intelligent chunking, iterative prompting and...
Vasallo94/ObsidianRAG
RAG system to query your Obsidian notes using LangGraph and local LLMs (Ollama)
curiousily/ragbase
Completely local RAG: chat with your PDF documents using an open LLM, with a UI that uses...