divagr18/memlayer
Plug-and-play memory for LLMs in 3 lines of code. Add persistent, intelligent, human-like memory and recall to any model in minutes.
Implements a hybrid vector + knowledge graph architecture, using ChromaDB for fast semantic search and NetworkX for entity-relationship traversal. Three operation modes (LOCAL/ONLINE/LIGHTWEIGHT) trade off accuracy, startup time, and cost by varying the salience-filtering approach: ML-based sentence transformers, LLM embeddings, or lightweight keyword extraction. Works across all major LLM providers (OpenAI, Claude, Gemini, Ollama, LMStudio), with multi-tier search (Fast/Balanced/Deep) that automatically adjusts retrieval depth to query complexity.
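To make the hybrid design concrete, here is a minimal sketch of a vector + knowledge-graph memory store built on the same libraries the project uses (ChromaDB for semantic recall, NetworkX for relationship hops). The class and method names are illustrative assumptions, not memlayer's actual API:

```python
# Hypothetical sketch of the hybrid vector + knowledge-graph pattern
# described above. Illustrates the architecture only; this is NOT
# memlayer's real API.
import chromadb
import networkx as nx

class HybridMemory:
    def __init__(self):
        # Vector store: embedding-based semantic search over raw memories.
        self.collection = chromadb.Client().get_or_create_collection("memories")
        # Knowledge graph: (subject, relation, object) triples for traversal.
        self.graph = nx.DiGraph()

    def remember(self, memory_id: str, text: str, triples: list[tuple[str, str, str]]):
        # Index the text for semantic recall...
        self.collection.add(ids=[memory_id], documents=[text])
        # ...and record entity relationships for graph hops.
        for subj, rel, obj in triples:
            self.graph.add_edge(subj, obj, relation=rel, source=memory_id)

    def recall(self, query: str, n_results: int = 3, hops: int = 1):
        # Semantic pass: nearest stored documents by embedding similarity.
        hits = self.collection.query(query_texts=[query], n_results=n_results)
        docs = hits["documents"][0]
        # Graph pass: entities within `hops` of any entity named in the
        # query (naive substring matching, for brevity).
        related: set[str] = set()
        for node in self.graph.nodes:
            if node.lower() in query.lower():
                lengths = nx.single_source_shortest_path_length(
                    self.graph, node, cutoff=hops
                )
                related |= set(lengths) - {node}
        return docs, related

mem = HybridMemory()
mem.remember("m1", "Ada prefers dark mode", [("Ada", "prefers", "dark mode")])
docs, entities = mem.recall("What does Ada like?")
```

In memlayer the equivalent store sits behind the three operation modes which, per the description above, differ in the salience-filtering approach used to decide what is worth persisting in the first place.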
261 stars and 875 monthly downloads. Available on PyPI.
Stars: 261
Forks: 32
Language: Python
License: MIT
Category: vector-db
Last pushed: Feb 02, 2026
Monthly downloads: 875
Commits (30d): 0
Dependencies: 11
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/vector-db/divagr18/memlayer"
Open to everyone: 100 requests/day with no key needed; a free key raises the limit to 1,000 requests/day.
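The same endpoint can be called from any HTTP client; a minimal Python sketch (the response schema is not shown in this listing, so the code just prints the parsed JSON):

```python
# Fetch the same quality data in Python. Assumes the endpoint returns
# JSON; the exact response schema is not documented here.
import requests

url = "https://pt-edge.onrender.com/api/v1/quality/vector-db/divagr18/memlayer"
resp = requests.get(url, timeout=10)
resp.raise_for_status()  # surface rate-limit or server errors
print(resp.json())
```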
Related tools
topoteretes/cognee
Knowledge Engine for AI Agent Memory in 6 lines of code
verygoodplugins/automem
AutoMem is a graph-vector memory service that gives AI assistants durable, relational memory:
CortexReach/memory-lancedb-pro
Enhanced LanceDB memory plugin for OpenClaw — Hybrid Retrieval (Vector + BM25), Cross-Encoder...
CaviraOSS/OpenMemory
Local persistent memory store for LLM applications including Claude Desktop, GitHub Copilot,...
verygoodplugins/mcp-automem
AutoMem is a graph-vector memory service that gives AI assistants durable, relational memory: