raptor and RAPTOR
These are unrelated projects that happen to share the same acronym. The first is the research implementation of a tree-based retrieval-augmentation technique for LLMs; the second is a media analysis and knowledge extraction platform. They are distinct solutions to different problems, and only the first is part of a RAG pipeline.
About raptor
parthsarthi03/raptor
The official implementation of RAPTOR: Recursive Abstractive Processing for Tree-Organized Retrieval
Builds hierarchical tree structures through recursive summarization of document chunks, enabling multi-level retrieval that captures both fine-grained and abstract information. Designed with pluggable abstractions for summarization, QA, and embedding models, allowing integration of custom LLMs (Llama, Mistral, Gemma) and embedding backends (SBERT) beyond the default OpenAI implementation. Supports persisting and reloading constructed trees for efficient reuse across queries.
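The recursive summarization idea can be illustrated with a minimal, self-contained sketch. The `summarize` and `group` functions below are hypothetical stand-ins for the pluggable summarization and embedding/clustering components mentioned above; they are not the library's actual API.

```python
def summarize(chunks):
    # Stand-in for an LLM summarizer: in RAPTOR this would be a
    # pluggable model (e.g. GPT, Llama, Mistral); here we just join.
    return " ".join(chunks)[:200]

def group(nodes, size=2):
    # Stand-in for embedding-based clustering: fixed-size grouping
    # instead of real semantic clusters.
    return [nodes[i:i + size] for i in range(0, len(nodes), size)]

def build_tree(chunks):
    """Recursively summarize groups of chunks until one root remains.

    Returns a list of levels: level 0 holds the raw chunks, the last
    level holds the single root summary. Retrieval can then match a
    query against nodes at every level, capturing both fine-grained
    and abstract information.
    """
    levels = [list(chunks)]
    while len(levels[-1]) > 1:
        levels.append([summarize(cluster) for cluster in group(levels[-1])])
    return levels

tree = build_tree(["chunk a", "chunk b", "chunk c", "chunk d"])
# 4 leaves collapse into 2 intermediate summaries, then 1 root
```

Persisting the levels (as the real implementation does with its tree objects) lets the structure be reused across queries instead of rebuilt each time.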
About RAPTOR
DHT-AI-Studio/RAPTOR
RAPTOR (Rapid AI-Powered Text and Object Recognition) is an AI-native Content Insight Engine that transforms passive media storage into an intelligent knowledge platform through automated analysis, semantic search, and actionable insights. The project claims to reduce manual tagging by 85% and to make content discovery 10x faster.
Built on a Kubernetes-native architecture with LLM orchestration and vector database integration for multi-modal content analysis (video, audio, images, text), RAPTOR uses a plugin-based processor system enabling flexible integration with multiple language models. The framework exposes RESTful APIs for semantic search, automated metadata generation, and entity recognition, while leveraging Redis clustering for distributed caching and MLflow for model lifecycle management.
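A plugin-based processor system like the one described typically boils down to a registry that maps media types to handlers. The sketch below is a hypothetical illustration of that pattern; the names (`register`, `analyze`, `PROCESSORS`) are illustrative and not RAPTOR's actual API.

```python
from typing import Callable, Dict

# Registry mapping a media type to its processor plugin.
PROCESSORS: Dict[str, Callable[[bytes], dict]] = {}

def register(media_type: str):
    """Decorator that registers a processor for a media type."""
    def wrap(fn: Callable[[bytes], dict]) -> Callable[[bytes], dict]:
        PROCESSORS[media_type] = fn
        return fn
    return wrap

@register("text")
def text_processor(payload: bytes) -> dict:
    # A real processor would invoke an LLM for metadata generation
    # and entity recognition; this returns placeholder metadata.
    text = payload.decode("utf-8")
    return {"type": "text", "length": len(text)}

def analyze(media_type: str, payload: bytes) -> dict:
    """Dispatch a payload to whichever plugin handles its media type."""
    if media_type not in PROCESSORS:
        raise ValueError(f"no processor for {media_type!r}")
    return PROCESSORS[media_type](payload)
```

New modalities (video, audio, images) plug in by registering another handler, which is what makes this style of architecture flexible across multiple models.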