divagr18/memlayer

Plug-and-play memory for LLMs in 3 lines of code. Add persistent, intelligent, human-like memory and recall to any model in minutes.

Overall score: 65 / 100 (Established)

Implements a hybrid vector + knowledge graph architecture using ChromaDB and NetworkX, combining fast semantic search with entity relationship traversal. Supports three operation modes (LOCAL/ONLINE/LIGHTWEIGHT) that trade off accuracy, startup time, and cost by varying the salience filtering approach, from ML-based sentence transformers to LLM embeddings to lightweight keyword extraction. Works across all major LLM providers (OpenAI, Claude, Gemini, Ollama, LMStudio) with multi-tier search (Fast/Balanced/Deep) that automatically adjusts retrieval depth based on query complexity.
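The hybrid idea above can be sketched in plain Python. This is not memlayer's actual API: the class, method names, and toy bag-of-words vectors are invented here purely to illustrate how semantic ranking and a one-hop entity graph can be combined in a single lookup.

```python
# Illustrative sketch of a hybrid vector + knowledge-graph memory.
# NOT memlayer's real API; all names here are hypothetical, and
# Counter-based bag-of-words vectors stand in for real embeddings.
import math
from collections import Counter, defaultdict


class HybridMemory:
    def __init__(self):
        self.docs = {}                 # doc_id -> stored text
        self.graph = defaultdict(set)  # entity -> related entities

    def add(self, doc_id, text, entities=()):
        """Store a memory and link all co-mentioned entities."""
        self.docs[doc_id] = text
        for a in entities:
            for b in entities:
                if a != b:
                    self.graph[a].add(b)

    @staticmethod
    def _vec(text):
        return Counter(text.lower().split())

    @staticmethod
    def _cosine(a, b):
        dot = sum(a[t] * b[t] for t in a)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    def search(self, query, entity=None, k=2):
        """Rank memories semantically, optionally widened by one graph hop."""
        q = self._vec(query)
        ranked = sorted(
            self.docs,
            key=lambda d: self._cosine(q, self._vec(self.docs[d])),
            reverse=True,
        )
        neighbors = sorted(self.graph.get(entity, ())) if entity else []
        return ranked[:k], neighbors


mem = HybridMemory()
mem.add("m1", "alice prefers dark roast coffee", entities=("alice", "coffee"))
mem.add("m2", "bob likes green tea", entities=("bob", "tea"))
hits, related = mem.search("coffee preference", entity="alice")
# hits ranks m1 first; related lists entities linked to "alice"
```

The real library replaces the toy vectors with ChromaDB embeddings and the adjacency dict with a NetworkX graph, but the retrieval pattern, semantic ranking merged with relationship traversal, is the same shape.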

261 stars and 875 monthly downloads. Available on PyPI.

Maintenance: 10 / 25
Adoption: 17 / 25
Maturity: 22 / 25
Community: 16 / 25


Stars: 261
Forks: 32
Language: Python
License: MIT
Last pushed: Feb 02, 2026
Monthly downloads: 875
Commits (30d): 0
Dependencies: 11

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/vector-db/divagr18/memlayer"
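The same endpoint can be called from Python with the standard library. Only the URL above is from this page; the helper names and the assumption that the response is JSON are mine.

```python
# Query the quality API with stdlib only.
# The endpoint URL is from the page; fetch_quality assumes a JSON
# response body, which is an assumption, not documented here.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the endpoint URL for a repo's quality report."""
    return f"{BASE}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch and decode the quality report (requires network access)."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)
```

For example, `fetch_quality("vector-db", "divagr18", "memlayer")` hits the same URL as the curl command above.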

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.