llmware and llm-kelt

The two projects are complementary: llmware provides a unified RAG and fine-tuning framework, while llm-kelt specializes in the upstream context management (facts storage and feedback collection) that can feed into llmware's fine-tuning pipeline.

| Metric | llmware | llm-kelt |
| --- | --- | --- |
| Overall score | 84 (Verified) | 22 (Experimental) |
| Maintenance | 17/25 | 13/25 |
| Adoption | 17/25 | 0/25 |
| Maturity | 25/25 | 9/25 |
| Community | 25/25 | 0/25 |
| Stars | 14,864 | n/a |
| Forks | 2,964 | n/a |
| Downloads | 1,177 | n/a |
| Commits (30d) | 12 | 0 |
| Language | Python | Python |
| License | Apache-2.0 | Apache-2.0 |
| Risk flags | None | No Package, No Dependents |

About llmware

llmware-ai/llmware

Unified framework for building enterprise RAG pipelines with small, specialized models

Brings together prepackaged quantized models (50+ specialized for RAG tasks such as extraction, classification, and summarization) and a modular RAG pipeline with multi-format document parsing, vector embedding with multiple backends (ChromaDB, Milvus), and hybrid query capabilities (text, semantic, and metadata filters). The unified ModelCatalog interface abstracts over diverse inference engines (GGUF, OpenVINO, ONNX Runtime, Hugging Face), so the same code can run on-device across CPUs, GPUs, and NPUs on Windows, Mac, and Linux. Prompt objects orchestrate end-to-end knowledge retrieval and generation, automatically batching sources to fit model context windows while tracking provenance for fact-checking against source materials.
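The source-batching idea described above can be sketched in plain Python. This is a hand-rolled illustration of the pattern (packing passages into context-sized batches while keeping a provenance record per passage), not llmware's actual implementation; the function name, field names, and the character-based budget are all assumptions for the sketch.

```python
def batch_sources(sources, max_chars=2000):
    """Pack source passages into batches that fit a model's context
    window (approximated here by a character budget), keeping a
    provenance record per passage so generated answers can later be
    fact-checked against their source documents.

    `sources` is a list of (doc_name, page, text) tuples.
    Illustrative sketch only; not llmware's real API.
    """
    batches = []
    current_text, current_provenance, used = [], [], 0
    for doc_name, page, text in sources:
        # Start a new batch when the next passage would overflow the budget.
        if used + len(text) > max_chars and current_text:
            batches.append({"context": "\n".join(current_text),
                            "provenance": current_provenance})
            current_text, current_provenance, used = [], [], 0
        current_text.append(text)
        current_provenance.append({"doc": doc_name, "page": page})
        used += len(text)
    if current_text:
        batches.append({"context": "\n".join(current_text),
                        "provenance": current_provenance})
    return batches
```

With a 2,000-character budget, a 1,500-character passage followed by an 800-character one lands in separate batches, and each batch carries the document and page references needed for downstream fact-checking.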

About llm-kelt

llm-works/llm-kelt

Framework for collecting and managing LLM context: facts storage, feedback collection, RAG retrieval, and LoRA fine-tuning
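As a rough sketch of what "facts storage plus feedback collection feeding fine-tuning" can look like, here is a minimal in-memory store. All class, method, and field names are hypothetical and this is not llm-kelt's actual API; it only illustrates the workflow the tagline describes.

```python
from dataclasses import dataclass, field

@dataclass
class FactStore:
    """Minimal sketch: facts are stored under an id, user feedback is
    attached to model answers, and human-approved question/answer
    pairs can be exported as LoRA training examples.
    (Illustrative only; not llm-kelt's real interface.)"""
    facts: dict = field(default_factory=dict)
    feedback: list = field(default_factory=list)

    def add_fact(self, fact_id, text):
        self.facts[fact_id] = text

    def record_feedback(self, question, answer, approved):
        self.feedback.append({"question": question,
                              "answer": answer,
                              "approved": approved})

    def export_training_pairs(self):
        # Keep only answers a human approved; these become
        # instruction/response pairs for LoRA fine-tuning.
        return [(f["question"], f["answer"])
                for f in self.feedback if f["approved"]]
```

The design point is the handoff: rejected answers are retained as signal but excluded from the export, so only vetted pairs reach the fine-tuning stage.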

Scores are updated daily from GitHub, PyPI, and npm data.