rexdivakar/HippocampAI
HippocampAI — Autonomous Memory Engine for LLM Agents
Implements a hybrid retrieval architecture combining vector search, BM25 lexical matching, and knowledge graph reasoning with real-time entity extraction on every store operation. Supports both lightweight core library deployment and multi-tenant SaaS architecture with Celery background processing, Redis caching (50-100x speedup), and integrations for OpenAI, Anthropic, Groq, and local LLM providers. Features advanced capabilities including temporal queries, memory consolidation via sleep-phase architecture, graph-aware RRF fusion retrieval, and procedural memory for self-optimizing prompts.
Available on PyPI.
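The "graph-aware RRF fusion" mentioned above builds on plain Reciprocal Rank Fusion, which merges ranked result lists from multiple retrievers (e.g. vector search and BM25). A minimal sketch of the base algorithm, not HippocampAI's actual implementation (the constant k=60 is the conventional default):

```python
def rrf_fuse(rankings, k=60):
    """Combine several ranked result lists via Reciprocal Rank Fusion.

    Each document's fused score is the sum of 1/(k + rank) over every
    list it appears in, so items ranked highly by multiple retrievers
    rise to the top.
    """
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    # Highest fused score first
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical memory IDs returned by two retrievers:
vector_hits = ["m3", "m1", "m7"]   # dense vector search
bm25_hits = ["m3", "m9", "m1"]     # lexical BM25 search
fused = rrf_fuse([vector_hits, bm25_hits])
```

Here `m3` wins because both retrievers rank it first; a graph-aware variant would additionally boost documents connected in the knowledge graph.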
Stars: 61
Forks: 7
Language: Python
License: —
Category:
Last pushed: Feb 13, 2026
Commits (30d): 0
Dependencies: 11
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/agents/rexdivakar/HippocampAI"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Related agents
kayba-ai/agentic-context-engine
🧠 Make your agents learn from experience. Now available as a hosted solution at kayba.ai
MemMachine/MemMachine
Universal memory layer for AI Agents. It provides scalable, extensible, and interoperable memory...
knowns-dev/knowns
The memory layer for AI-native development — giving AI persistent understanding of your software...
Dicklesworthstone/cass_memory_system
Procedural memory for AI coding agents: transforms scattered session history into persistent,...
MehulG/memX
A real-time shared memory layer for multi-agent LLM systems.