LazyAGI/LazyLLM
The easiest (and laziest) way to build multi-agent LLM applications.
Provides modular components (agents, retrievers, embedders, fine-tuning) that assemble into multi-agent applications using a dataflow approach, with automatic deployment to diverse platforms (bare metal, Kubernetes, cloud) via lightweight gateways. Unifies APIs across heterogeneous backends, including online models (OpenAI), local models (LLaMA, InternLM), inference engines (vLLM, LightLLM), and databases (vector, relational, document), enabling runtime model and framework switching without code changes. Supports in-app model fine-tuning with automatic framework selection and optimization, so performance can improve iteratively through feedback loops.
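The dataflow idea above can be sketched in plain Python. This is a hypothetical illustration, not LazyLLM's actual API: stages share one call interface, so swapping an online model for a local one is a one-line change that leaves the pipeline code untouched.

```python
# Hypothetical sketch (NOT LazyLLM's real API): a dataflow-style pipeline
# that unifies heterogeneous backends behind one callable interface.
from typing import Callable


class Module:
    """Minimal stand-in for a pipeline stage (retriever, LLM, etc.)."""

    def __init__(self, fn: Callable[[str], str]):
        self.fn = fn

    def __call__(self, x: str) -> str:
        return self.fn(x)


def pipeline(*stages: Module) -> Callable[[str], str]:
    """Compose stages left-to-right into a single callable dataflow."""

    def run(x: str) -> str:
        for stage in stages:
            x = stage(x)
        return x

    return run


# Two interchangeable "models" exposing the same interface:
online_llm = Module(lambda prompt: f"[openai] {prompt}")
local_llm = Module(lambda prompt: f"[internlm] {prompt}")

retriever = Module(lambda q: f"{q} + retrieved context")

# Switching backends is a one-line change; the pipeline stays the same.
app = pipeline(retriever, online_llm)
print(app("What is LazyLLM?"))
```

Because every stage is just a callable, the same composition works whether a stage wraps an HTTP client for a hosted model or an in-process local model.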
3,747 stars. Actively maintained with 10 commits in the last 30 days.
Stars: 3,747
Forks: 364
Language: Python
License: Apache-2.0
Category:
Last pushed: Mar 12, 2026
Commits (30d): 10
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/rag/LazyAGI/LazyLLM"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
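The curl call above can also be made from Python's standard library. This is a minimal sketch: the URL pattern is taken from the example, but the response schema is not documented here, so the payload is decoded as generic JSON. The helper names (`quality_url`, `fetch_quality`) are illustrative.

```python
# Minimal sketch of calling the quality API with Python's stdlib.
# Only the URL pattern is taken from the curl example above; the
# response fields are not documented here, so we decode generic JSON.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/rag"


def quality_url(owner: str, repo: str) -> str:
    """Build the per-repository endpoint URL."""
    return f"{BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the JSON payload for one repository."""
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)


# Example (requires network access; subject to the 100 requests/day limit):
# data = fetch_quality("LazyAGI", "LazyLLM")
```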
Related tools
WangRongsheng/awesome-LLM-resources
🧑‍🚀 The world's best collection of LLM resources (multimodal generation, agents, coding assistance, AI paper review, data processing, model training, model inference, o1 models, MCP, small language models, vision-language models)
katanaml/sparrow
Structured data extraction and instruction calling with ML, LLM and Vision LLM
luhengshiwo/LLMForEverybody
LLM knowledge sharing that anyone can understand; essential reading before LLM interviews in spring/autumn campus recruiting, so you can talk fluently with interviewers
SylphAI-Inc/AdalFlow
AdalFlow: The library to build & auto-optimize LLM applications.
PacktPublishing/LLM-Engineers-Handbook
The LLM's practical guide: From the fundamentals to deploying advanced LLM and RAG apps to AWS...