anything-llm and localcloud

|                | anything-llm              | localcloud                              |
|----------------|---------------------------|-----------------------------------------|
| Score          | 71 (Verified)             | 24 (Experimental)                       |
| Maintenance    | 25/25                     | 2/25                                    |
| Adoption       | 10/25                     | 8/25                                    |
| Maturity       | 16/25                     | 9/25                                    |
| Community      | 20/25                     | 5/25                                    |
| Stars          | 56,148                    | 46                                      |
| Forks          | 6,071                     | 2                                       |
| Downloads      | n/a                       | n/a                                     |
| Commits (30d)  | 78                        | 0                                       |
| Language       | JavaScript                | Go                                      |
| License        | MIT                       | Apache-2.0                              |
| Flags          | No package, no dependents | Stale (6 mo), no package, no dependents |

About anything-llm

Mintplex-Labs/anything-llm

The all-in-one AI productivity accelerator. On-device and privacy-first, with no annoying setup or configuration.

Supports document-based RAG through pluggable vector databases (Pinecone, Weaviate, Qdrant, Chroma, etc.) and works with 20+ LLM providers, from local runtimes (llama.cpp, Ollama) to cloud services, plus built-in no-code agent flows and MCP compatibility for extensible tool integrations. Runs as a self-contained desktop app or Docker instance, with native multi-user access control and document pipelines that require zero configuration.
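The Docker deployment mentioned above can be sketched roughly as follows. The image name, port, and storage path here are assumptions drawn from common AnythingLLM setups, not details stated on this page; check the project's own Docker documentation before relying on them.

```shell
# Run AnythingLLM as a single self-contained container.
# Image name, port, and volume path are assumptions, not verified here.
docker pull mintplexlabs/anythingllm:latest
docker run -d \
  --name anythingllm \
  -p 3001:3001 \
  -v anythingllm_storage:/app/server/storage \
  mintplexlabs/anythingllm:latest
# The web UI should then be reachable at http://localhost:3001
```

The named volume keeps uploaded documents and vector data across container restarts, which matters for the document pipelines the blurb describes.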

About localcloud

localcloud-sh/localcloud

Stop paying for AI APIs during development. LocalCloud runs everything locally: GPT-level models, databases, all free.

Deploys containerized PostgreSQL, MongoDB, vector databases, Redis, and quantized AI models through a single CLI tool with Docker orchestration, removing per-service setup work regardless of your application language. Provides standard service APIs (S3-compatible storage, job queues), built-in tunneling for local-to-remote demos, and non-interactive setup flags for integration with AI coding assistants. Targets developers prototyping full-stack AI applications, testing pipelines, and teams that want privacy-preserving development without infrastructure overhead.
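Because the storage service is described as S3-compatible, standard S3 tooling should work by pointing it at the local endpoint. This is a hedged sketch using the AWS CLI; the endpoint URL, port, and credentials are placeholders, not documented localcloud values.

```shell
# Talk to an S3-compatible local endpoint with the stock AWS CLI.
# Endpoint, port, and credentials below are placeholders (assumptions),
# not values documented by localcloud.
export AWS_ACCESS_KEY_ID=localdev
export AWS_SECRET_ACCESS_KEY=localdev

aws --endpoint-url http://localhost:9000 s3 mb s3://scratch      # create a bucket
aws --endpoint-url http://localhost:9000 s3 cp notes.txt s3://scratch/
aws --endpoint-url http://localhost:9000 s3 ls s3://scratch/
```

The point of S3 compatibility is exactly this: existing SDKs and CLIs work unchanged once `--endpoint-url` (or the SDK equivalent) is redirected to the local service.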

Scores updated daily from GitHub, PyPI, and npm data.