remete618/widemem-ai
Next-gen AI memory layer with importance scoring, temporal decay, hierarchical memory, and YMYL prioritization
Implements a three-tier hierarchical store (facts → summaries → themes) with automatic LLM-powered conflict resolution that batches contradictions into single API calls, cutting cost while keeping memories consistent. Runs locally on SQLite + FAISS, with pluggable LLM providers (OpenAI, Anthropic, Ollama) and vector stores (Qdrant), plus confidence-aware retrieval modes (fast/balanced/deep) that detect and flag low-confidence answers instead of hallucinating.
Available on PyPI.
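The importance-scoring-with-temporal-decay idea from the description can be illustrated with a generic sketch. This is the standard recency-weighted scoring technique, not widemem-ai's actual code; the function name, field names, and the half-life parameter are all assumptions:

```python
import time

def decayed_score(importance, created_at, half_life_days=30.0, now=None):
    """Exponentially decay a memory's base importance with age.

    importance: base score in [0, 1] assigned when the memory is written.
    created_at / now: UNIX timestamps in seconds.
    half_life_days: age at which the score halves (assumed parameter).
    """
    now = time.time() if now is None else now
    age_days = max(0.0, now - created_at) / 86400.0
    return importance * 0.5 ** (age_days / half_life_days)

# A 30-day-old memory at a 30-day half-life keeps exactly half its importance.
now = time.time()
print(decayed_score(0.8, now - 30 * 86400, half_life_days=30.0, now=now))  # → 0.4
```

Ranking retrieval candidates by such a score lets fresh, high-importance facts outrank stale ones without ever deleting them, which fits the facts → summaries → themes tiering described above.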
Stars: 2
Forks: 1
Language: Python
License: —
Category:
Last pushed: Mar 12, 2026
Monthly downloads: 137
Commits (30d): 0
Dependencies: 4
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/embeddings/remete618/widemem-ai"
Open to everyone: 100 requests/day with no key needed; a free key raises the limit to 1,000/day.
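The same endpoint can be fetched from Python with the standard library. The sketch below assumes only what the curl example shows (the base URL and the owner/repo path); the shape of the JSON response is not documented here:

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/embeddings"

def quality_url(owner, repo):
    """Build the per-repo endpoint URL used in the curl example."""
    return f"{API_BASE}/{owner}/{repo}"

def fetch_quality(owner, repo):
    """Fetch and decode the JSON payload (response fields unknown here)."""
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)

print(quality_url("remete618", "widemem-ai"))
# https://pt-edge.onrender.com/api/v1/quality/embeddings/remete618/widemem-ai
```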
Related tools
aiming-lab/SimpleMem
SimpleMem: Efficient Lifelong Memory for LLM Agents
zilliztech/GPTCache
Semantic cache for LLMs. Fully integrated with LangChain and llama_index.
zilliztech/memsearch
A Markdown-first memory system, a standalone library for any AI agent. Inspired by OpenClaw.
ascottbell/maasv
Memory Architecture as a Service — cognition layer for AI assistants. 3-signal retrieval,...
TeleAI-UAGI/telemem
TeleMem is a high-performance drop-in replacement for Mem0, featuring semantic deduplication,...