byte5ai/palaia
Palaia — Local, crash-safe memory for AI agents. Semantic vector search (fastembed/OpenAI/Ollama). SQLite + sqlite-vec or PostgreSQL + pgvector. MCP server for Claude Desktop & Cursor. Multi-agent. Auto-capture.
4 stars and 5,733 monthly downloads. Available on PyPI.
Stars: 4
Forks: 2
Language: Python
License: MIT
Category: (none listed)
Last pushed: Apr 03, 2026
Monthly downloads: 5,733
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/embeddings/byte5ai/palaia"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
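The same endpoint can be queried from Python. A minimal sketch using only the standard library is below; note that the response schema is not documented on this page, so the code just returns the parsed JSON as-is rather than assuming particular fields:

```python
import json
import urllib.request

# Base path of the stats API shown in the curl example above
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/embeddings"

def build_url(owner: str, repo: str) -> str:
    # Construct the per-repository endpoint, e.g. .../byte5ai/palaia
    return f"{API_BASE}/{owner}/{repo}"

def fetch_stats(owner: str, repo: str) -> dict:
    # Plain GET; no API key is needed within the free 100 requests/day tier
    with urllib.request.urlopen(build_url(owner, repo)) as resp:
        return json.load(resp)
```

Calling `fetch_stats("byte5ai", "palaia")` issues one request against the free quota and returns whatever JSON the API serves for this repository.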
Related tools
ddickmann/vllm-factory: Production inference for encoder models - ColBERT, GLiNER, ColPali, embeddings etc. - as vLLM...
j33pguy/magi: MAGI — Multi-Agent Graph Intelligence. Universal memory server for AI agents. MCP + gRPC + REST...
LLMSystems/TensorrtServer: A high-performance deep learning model inference server based on TensorRT, supporting fast...
abdullah85398/embedding-server: A high-performance, self-hosted, model-agnostic embedding service designed for LLM applications,...
thetenzinwoser/recall-mcp: Local semantic search MCP server for markdown docs and Granola meeting transcripts. No API keys,...