anything-llm and localcloud
About anything-llm
Mintplex-Labs/anything-llm
The all-in-one AI productivity accelerator. On-device and privacy-first, with no annoying setup or configuration.
Supports document-based RAG through pluggable vector databases (Pinecone, Weaviate, Qdrant, Chroma, etc.) and works with 20+ LLM providers, from local runtimes (llama.cpp, Ollama) to cloud-based services, plus built-in no-code agent flows and MCP compatibility for extensible tool integrations. Runs as a self-contained desktop app or Docker instance with native multi-user access control and document pipelines that require zero configuration.
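Once the Docker instance is up, it can also be scripted against through its developer API. The snippet below is a minimal, hypothetical Python sketch: the port (3001), the workspace slug, and the /api/v1/workspace/<slug>/chat route reflect common defaults but are assumptions here; verify the exact routes and payload shape against the API docs your own instance serves.

```python
# Hypothetical sketch: querying a running AnythingLLM instance over its
# developer API. Port, endpoint path, and payload shape are assumptions;
# check the Swagger/API docs exposed by your instance before relying on them.
import requests

BASE_URL = "http://localhost:3001/api/v1"  # assumed default Docker port mapping
API_KEY = "YOUR_API_KEY"                   # generated in the AnythingLLM settings UI
WORKSPACE = "my-docs"                      # slug of an existing workspace (assumed)

resp = requests.post(
    f"{BASE_URL}/workspace/{WORKSPACE}/chat",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"message": "Summarize the uploaded documents.", "mode": "chat"},
    timeout=60,
)
resp.raise_for_status()
# The chat response text is assumed to arrive under a "textResponse" key.
print(resp.json().get("textResponse"))
```

Because the documents are embedded into whichever vector database the workspace is configured with, the same chat call works unchanged whether the backing store is Chroma, Qdrant, or a hosted service.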
About localcloud
localcloud-sh/localcloud
Stop paying for AI APIs during development. LocalCloud runs everything locally - GPT-level models, databases, all free.
Deploys containerized PostgreSQL, MongoDB, vector databases, Redis, and quantized AI models through a single CLI tool with Docker orchestration, eliminating per-service setup work regardless of application language. Provides standard service APIs (S3-compatible storage, job queues), built-in tunneling for demoing local work remotely, and non-interactive setup flags for smooth integration with AI coding assistants. Targets developers prototyping full-stack AI applications, testing pipelines, and teams seeking privacy-preserving development without infrastructure overhead.
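Since the provisioned services speak standard protocols, ordinary client libraries work against them with no LocalCloud-specific SDK. The sketch below is a hypothetical Python example assuming typical Docker defaults (Postgres on localhost:5432, a MinIO-style S3 endpoint on localhost:9000, placeholder credentials); the actual hosts, ports, and credentials come from the connection details the CLI prints for your project.

```python
# Hypothetical sketch: connecting to services LocalCloud has provisioned
# locally. All hostnames, ports, and credentials below are assumptions
# (common Docker defaults); substitute the values your CLI setup reports.
import boto3
import psycopg2

# PostgreSQL running as a local container (assumed default port/credentials).
conn = psycopg2.connect(
    host="localhost",
    port=5432,
    dbname="localcloud",
    user="localcloud",
    password="localcloud",
)
with conn.cursor() as cur:
    cur.execute("SELECT version();")
    print(cur.fetchone()[0])
conn.close()

# S3-compatible object storage (e.g., a MinIO container) on an assumed port.
s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:9000",
    aws_access_key_id="minioadmin",
    aws_secret_access_key="minioadmin",
)
for bucket in s3.list_buckets().get("Buckets", []):
    print(bucket["Name"])
```

The design point this illustrates: because the storage layer is S3-compatible and the databases are stock images, code prototyped against LocalCloud can later point at production endpoints by changing only connection settings.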