marcominerva/KernelMemoryService
A lightweight implementation of Kernel Memory as a Service
Supports conversational RAG by tracking chat history per ConversationId and automatically reformulating follow-up questions before embedding, enabling context-aware answers without repeated clarification. Defaults to Azure OpenAI for embeddings/generation with file system storage, but supports pluggable backends across the Kernel Memory ecosystem for flexible deployment scenarios.
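The description above mentions tracking chat history per ConversationId and reformulating follow-up questions before embedding. The sketch below illustrates that general technique only; all names are hypothetical and do not reflect the actual KernelMemoryService API, which would typically delegate the rewrite step to an LLM rather than concatenate history.

```python
# Sketch of conversation-aware question reformulation (illustrative only;
# function and variable names are invented, not the KernelMemoryService API).
from collections import defaultdict

# Per-conversation chat history: ConversationId -> list of (question, answer).
_history = defaultdict(list)

def record_turn(conversation_id, question, answer):
    """Store a completed question/answer turn for later reformulation."""
    _history[conversation_id].append((question, answer))

def reformulate(conversation_id, question):
    """Rewrite a follow-up question into a standalone one by folding in
    prior turns, so the embedding captures the conversational context."""
    turns = _history[conversation_id]
    if not turns:
        # First question in the conversation: nothing to add.
        return question
    context = " ".join(f"Q: {q} A: {a}" for q, a in turns)
    # A real implementation would ask an LLM to produce a self-contained
    # rewrite; prepending history is the simplest stand-in for the idea.
    return f"{context} Follow-up: {question}"

record_turn("c1", "What is Kernel Memory?", "A RAG service by Microsoft.")
print(reformulate("c1", "Does it support Azure OpenAI?"))
```

The point is that the follow-up question alone ("Does it support Azure OpenAI?") would embed poorly without the antecedent; carrying the history into the query text is what makes retrieval context-aware.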
No commits in the last 6 months.
Stars: 42
Forks: 5
Language: C#
License: MIT
Category:
Last pushed: Jun 12, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/embeddings/marcominerva/KernelMemoryService"
Open to everyone: 100 requests/day with no key needed; a free key raises the limit to 1,000/day.
Higher-rated alternatives
aiming-lab/SimpleMem
SimpleMem: Efficient Lifelong Memory for LLM Agents
zilliztech/GPTCache
Semantic cache for LLMs. Fully integrated with LangChain and llama_index.
zilliztech/memsearch
A Markdown-first memory system, a standalone library for any AI agent. Inspired by OpenClaw.
RichmondAlake/memorizz
MemoRizz: A Python library serving as a memory layer for AI applications. Leverages popular...
TeleAI-UAGI/telemem
TeleMem is a high-performance drop-in replacement for Mem0, featuring semantic deduplication,...