anything-llm and llm-app

These are complementary tools: Mintplex-Labs/anything-llm provides the all-in-one interface and orchestration layer for local LLM deployment, while pathwaycom/llm-app supplies the real-time data integration and pipeline infrastructure needed to feed dynamic, continuously synced sources into those LLM applications.

|                | anything-llm (Verified) | llm-app (Established) |
|----------------|-------------------------|-----------------------|
| Overall score  | 71                      | 65                    |
| Maintenance    | 25/25                   | 10/25                 |
| Adoption       | 10/25                   | 14/25                 |
| Maturity       | 16/25                   | 25/25                 |
| Community      | 20/25                   | 16/25                 |
| Stars          | 56,148                  | 56,145                |
| Forks          | 6,071                   | 1,375                 |
| Downloads      | n/a                     | 89                    |
| Commits (30d)  | 78                      | 0                     |
| Language       | JavaScript              | Jupyter Notebook      |
| License        | MIT                     | MIT                   |

No package / no dependents. No risk flags.

About anything-llm

Mintplex-Labs/anything-llm

The all-in-one AI productivity accelerator. On-device and privacy-first, with no annoying setup or configuration.

Supports document-based RAG through pluggable vector databases (Pinecone, Weaviate, Qdrant, Chroma, etc.) and works with 20+ LLM providers, from local runtimes (llama.cpp, Ollama) to cloud-based services, plus built-in no-code agent flows and MCP compatibility for extensible tool integrations. Runs as a self-contained desktop app or Docker instance with native multi-user access control and document pipelines requiring zero configuration.
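The "pluggable vector database" pattern mentioned above can be sketched in a few lines of plain Python. This is an illustrative toy, not AnythingLLM's actual interfaces (all class and method names here are hypothetical): the application codes against one small store interface, so swapping Pinecone, Qdrant, Chroma, or a local index never touches the retrieval logic.

```python
import math
from typing import Protocol

class VectorStore(Protocol):
    """Hypothetical minimal interface a RAG app could code against."""
    def add(self, doc_id: str, vector: list[float], text: str) -> None: ...
    def query(self, vector: list[float], k: int = 3) -> list[str]: ...

class InMemoryStore:
    """Local stand-in for a real backend (Pinecone, Qdrant, Chroma, ...)."""
    def __init__(self) -> None:
        self._docs: dict[str, tuple[list[float], str]] = {}

    def add(self, doc_id: str, vector: list[float], text: str) -> None:
        self._docs[doc_id] = (vector, text)

    def query(self, vector: list[float], k: int = 3) -> list[str]:
        def cosine(a: list[float], b: list[float]) -> float:
            dot = sum(x * y for x, y in zip(a, b))
            na = math.sqrt(sum(x * x for x in a))
            nb = math.sqrt(sum(y * y for y in b))
            return dot / (na * nb) if na and nb else 0.0
        # rank stored documents by cosine similarity to the query vector
        ranked = sorted(self._docs.values(),
                        key=lambda dv: cosine(vector, dv[0]),
                        reverse=True)
        return [text for _, text in ranked[:k]]

store: VectorStore = InMemoryStore()
store.add("a", [1.0, 0.0], "doc about llamas")
store.add("b", [0.0, 1.0], "doc about kafka")
print(store.query([0.9, 0.1], k=1))  # -> ['doc about llamas']
```

A real deployment would implement the same interface over a hosted backend's client library; the retrieval code above would be unchanged.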

About llm-app

pathwaycom/llm-app

Ready-to-run cloud templates for RAG, AI pipelines, and enterprise search with live data. 🐳Docker-friendly.⚡Always in sync with Sharepoint, Google Drive, S3, Kafka, PostgreSQL, real-time data APIs, and more.

Built on the Pathway Live Data framework (a Python library with Rust engine), these templates eliminate infrastructure fragmentation by bundling vector indexing via usearch, full-text search via Tantivy, and HTTP API serving into a single unified runtime—no separate vector database, cache, or API framework needed. The pipelines automatically synchronize with multiple data sources and expose REST endpoints that trigger real-time re-indexing on any data change, enabling production RAG applications that scale to millions of documents without manual orchestration.
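The "re-indexing on any data change" behavior described above can be illustrated with a small pure-Python toy. This is a conceptual sketch only, not Pathway's actual API (Pathway builds on streaming tables and connectors in its `pathway` library): every write to the source immediately rebuilds the search index, so queries always see current data.

```python
class LivePipeline:
    """Toy illustration of change-triggered re-indexing (not Pathway's API)."""
    def __init__(self) -> None:
        self._source: dict[str, str] = {}      # doc_id -> text
        self._index: dict[str, set[str]] = {}  # token -> doc_ids
        self.rebuilds = 0                      # how many times we re-indexed

    def _reindex(self) -> None:
        # rebuild the inverted index from the current source state
        self._index = {}
        for doc_id, text in self._source.items():
            for token in text.lower().split():
                self._index.setdefault(token, set()).add(doc_id)
        self.rebuilds += 1

    def upsert(self, doc_id: str, text: str) -> None:
        # any change to the source triggers an index rebuild
        self._source[doc_id] = text
        self._reindex()

    def search(self, token: str) -> list[str]:
        return sorted(self._index.get(token.lower(), set()))

p = LivePipeline()
p.upsert("1", "orders streamed from Kafka")
p.upsert("2", "files synced from Google Drive")
print(p.search("kafka"))  # -> ['1']
```

Pathway's engine does this incrementally in Rust rather than rebuilding from scratch, which is what lets the same pattern scale to millions of documents.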

Scores updated daily from GitHub, PyPI, and npm data.