prosperitypirate/codexfi
Persistent memory for OpenCode AI agents: embedded LanceDB + Voyage AI embeddings, packaged as a Bun plugin with a CLI and web dashboard.
Automatically extracts and stores typed facts (architecture, errors, preferences, progress) after each assistant turn, then injects semantically relevant memories back into the system prompt via vector search. All data stays local in LanceDB, and extraction providers are pluggable (Anthropic, xAI, Google). Includes built-in deduplication, contradiction handling, privacy filters, and a web dashboard for memory browsing and cost tracking.
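The store-then-retrieve loop described above can be sketched in a few lines. This is a toy illustration only: the type name, function names, and the in-memory cosine-similarity store are hypothetical stand-ins for codexfi's actual LanceDB table and Voyage AI embedding calls.

```typescript
// Hypothetical sketch of the memory loop: store typed facts after each
// assistant turn, then retrieve the closest ones for the next system prompt.
// Real codexfi uses LanceDB + Voyage embeddings; vectors here are hand-made.
type Fact = {
  kind: "architecture" | "error" | "preference" | "progress";
  text: string;
  vector: number[];
};

const store: Fact[] = [];

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// After each turn: persist an extracted fact, skipping exact duplicates
// (a crude stand-in for codexfi's semantic deduplication).
function remember(fact: Fact): void {
  if (!store.some((f) => f.text === fact.text)) store.push(fact);
}

// Before the next turn: rank facts by similarity to the query embedding
// and render the top-k as a system-prompt fragment.
function recall(queryVector: number[], k = 2): string {
  return store
    .map((f) => ({ f, score: cosine(f.vector, queryVector) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k)
    .map(({ f }) => `[${f.kind}] ${f.text}`)
    .join("\n");
}

remember({ kind: "preference", text: "User prefers Bun over Node", vector: [1, 0, 0] });
remember({ kind: "error", text: "Build fails without VOYAGE_API_KEY", vector: [0, 1, 0] });
remember({ kind: "preference", text: "User prefers Bun over Node", vector: [1, 0, 0] }); // deduped

console.log(recall([0.9, 0.1, 0], 1));
```

The real plugin replaces the hand-made vectors with Voyage AI embeddings and the array scan with a LanceDB vector index, but the shape of the loop is the same.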
Available on npm.
Stars: 6
Forks: 1
Language: TypeScript
License: MIT
Category:
Last pushed: Mar 09, 2026
Monthly downloads: 622
Commits (30d): 0
Dependencies: 4
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/embeddings/prosperitypirate/codexfi"
Open to everyone: 100 requests/day with no key required. Get a free key for 1,000/day.
Related tools
aiming-lab/SimpleMem
SimpleMem: Efficient Lifelong Memory for LLM Agents
zilliztech/GPTCache
Semantic cache for LLMs. Fully integrated with LangChain and llama_index.
zilliztech/memsearch
A Markdown-first memory system, a standalone library for any AI agent. Inspired by OpenClaw.
ascottbell/maasv
Memory Architecture as a Service — cognition layer for AI assistants. 3-signal retrieval,...
TeleAI-UAGI/telemem
TeleMem is a high-performance drop-in replacement for Mem0, featuring semantic deduplication,...