llmware and End-to-End-LLM-Projects
These are complements: llmware provides a production-ready framework for building RAG pipelines, while End-to-End-LLM-Projects offers educational reference implementations demonstrating similar RAG, function-calling, and agent patterns that practitioners could integrate into an llmware-based system.
About llmware
llmware-ai/llmware
Unified framework for building enterprise RAG pipelines with small, specialized models
Brings together prepackaged quantized models (50+ specialized for RAG tasks like extraction, classification, and summarization) and a modular RAG pipeline with multi-format document parsing, vector embedding with multiple backends (ChromaDB, Milvus), and hybrid query capabilities (text, semantic, metadata filters). The unified ModelCatalog interface abstracts over diverse inference engines—GGUF, OpenVINO, ONNX Runtime, Hugging Face—enabling the same code to run on-device across CPUs, GPUs, and NPUs on Windows, Mac, and Linux. Prompt objects orchestrate end-to-end knowledge retrieval and generation, automatically batching sources to fit model context windows while tracking provenance for fact-checking against source materials.
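The source-batching behavior described above can be sketched conceptually. This is not llmware's actual API; `batch_sources` is a hypothetical helper illustrating the idea of packing retrieved passages into context-window-sized batches while keeping provenance (which source each passage came from):

```python
# Hypothetical sketch (not llmware's API): group (source_id, text)
# passages into batches whose combined token estimate fits within a
# model context window, keeping provenance for later fact-checking.

def batch_sources(passages, context_window, reserve=50):
    """Pack passages into batches under context_window - reserve tokens.
    Token count is approximated by whitespace word count; a real
    framework would use the model's own tokenizer."""
    budget = context_window - reserve
    batches, current, used = [], [], 0
    for source_id, text in passages:
        tokens = len(text.split())
        if current and used + tokens > budget:
            batches.append(current)       # close the full batch
            current, used = [], 0
        # record provenance alongside the text
        current.append({"source": source_id, "text": text, "tokens": tokens})
        used += tokens
    if current:
        batches.append(current)
    return batches

passages = [
    ("doc1.pdf", "alpha " * 60),
    ("doc2.pdf", "beta " * 60),
    ("doc3.pdf", "gamma " * 60),
]
# budget = 160 tokens; doc1 + doc2 fit, doc3 starts a second batch
batches = batch_sources(passages, context_window=200, reserve=40)
```

Each batch would then be sent as one prompt, and the per-passage `source` field makes it possible to check generated claims back against the originating documents.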
About End-to-End-LLM-Projects
pd2871/End-to-End-LLM-Projects
This repo contains code for developing LLM-based projects with LangChain and LlamaIndex. It currently covers RAG, function calling, and agents and tools for interacting with data.