MemMachine and memv
The two projects are complementary rather than competing: MemMachine provides a general-purpose memory-infrastructure layer, while memv offers specialized temporal and structured memory that could slot into MemMachine's extensible architecture as a memory backend or plugin.
About MemMachine
MemMachine/MemMachine
Universal memory layer for AI Agents. It provides scalable, extensible, and interoperable memory storage and retrieval to streamline AI agent state management for next-generation autonomous systems.
Supports three distinct memory types—episodic (graph-based conversational history), profile (SQL-stored user facts), and working memory (session context)—enabling agents to maintain persistent state across restarts. Provides client SDKs, RESTful APIs, and a native Model Context Protocol (MCP) server for integration with Claude Desktop and other clients. Integrates natively with LangChain, LangGraph, CrewAI, LlamaIndex, and other AI frameworks, with Neo4j and SQL backends handling different memory persistence needs.
About memv
vstorm-co/memv
Structured, temporal memory for AI agents.
Extracts facts only when the model fails to predict them, reducing noise through prediction error rather than upfront scoring. Implements bi-temporal validity tracking (event time vs. transaction time) with hybrid retrieval combining vector similarity, BM25 text search, and reciprocal rank fusion. Supports SQLite for local development and PostgreSQL with pgvector for production, integrating with PydanticAI, LangGraph, LlamaIndex, CrewAI, and AutoGen.
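Reciprocal rank fusion, which memv uses to merge its vector-similarity and BM25 result lists, is a standard technique: each document earns 1/(k + rank) from every list it appears in, and the summed scores produce the fused ordering. A minimal sketch of the general algorithm (not memv's actual API; the memory IDs and k=60 default are illustrative assumptions):

```python
from collections import defaultdict

def reciprocal_rank_fusion(rankings, k=60):
    """Fuse ranked lists: each doc scores the sum of 1/(k + rank)
    over every list it appears in; higher total ranks first."""
    scores = defaultdict(float)
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical memory IDs from two retrievers:
vector_hits = ["m3", "m1", "m7"]   # dense vector similarity
bm25_hits = ["m3", "m9", "m1"]     # keyword (BM25) search
print(reciprocal_rank_fusion([vector_hits, bm25_hits]))
# → ['m3', 'm1', 'm9', 'm7']
```

Because rank positions rather than raw scores are combined, the fusion needs no score normalization across the heterogeneous retrievers, which is why it pairs well with mixing vector and text search.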