chonkie and chonkify
These are **competitors**: both prepare documents for RAG pipelines, but by different means. Chonkie is a mature, production-ready chunking and ingestion library, while Chonkify focuses on extractive compression, keeping a document's most informative segments instead of splitting all of it.
About chonkie
chonkie-inc/chonkie
🦛 CHONK docs with Chonkie ✨ — The lightweight ingestion library for fast, efficient and robust RAG pipelines
Provides pluggable chunking strategies (recursive, semantic, code-aware, and LLM-based) with composable pipeline workflows that chain multiple chunkers and refineries together. Integrates with 32+ tools across tokenizers (GPT-2, BPE), embeddings (OpenAI, Sentence Transformers), vector databases, and LLMs, and supports 56 languages out of the box through modular dependency installation.
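To make "recursive chunking" concrete, here is a minimal sketch of the general technique in plain Python: split on the coarsest separator that exists, merge pieces up to a token budget, and fall back to finer separators for oversized pieces. The separator hierarchy and the word-count tokenizer are assumptions for illustration, not Chonkie's actual implementation or API.

```python
def count_tokens(text: str) -> int:
    # Crude stand-in for a real tokenizer (e.g. GPT-2 BPE): one token per word.
    return len(text.split())

# Assumed separator hierarchy: paragraphs -> lines -> sentences -> words.
SEPARATORS = ["\n\n", "\n", ". ", " "]

def recursive_chunk(text: str, max_tokens: int = 64, level: int = 0) -> list[str]:
    """Split text so every chunk fits the token budget, preferring
    coarse boundaries (paragraphs) over fine ones (words)."""
    if count_tokens(text) <= max_tokens or level >= len(SEPARATORS):
        return [text]
    sep = SEPARATORS[level]
    pieces = [p for p in text.split(sep) if p.strip()]
    if len(pieces) == 1:  # separator absent here; try the next, finer one
        return recursive_chunk(text, max_tokens, level + 1)
    chunks, current = [], ""
    for piece in pieces:
        candidate = (current + sep + piece) if current else piece
        if count_tokens(candidate) <= max_tokens:
            current = candidate  # piece still fits; keep accumulating
        else:
            if current:
                chunks.append(current)
            if count_tokens(piece) <= max_tokens:
                current = piece  # piece fits on its own; start a new chunk
            else:
                # Single piece exceeds the budget: recurse with a finer separator.
                chunks.extend(recursive_chunk(piece, max_tokens, level + 1))
                current = ""
    if current:
        chunks.append(current)
    return chunks
```

The point of the hierarchy is that chunks break at the most natural boundary available, so a paragraph is only ever split mid-sentence when no coarser split can satisfy the budget.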
About chonkify
thom-heinrich/chonkify
Extractive document compression for RAG and agent pipelines. +69% vs LLMLingua, +175% vs LLMLingua2 on information recovery. Compiled wheels, try it out.
Splits documents into units, scores each unit with 768-dimensional embeddings, and selects the highest-ranked segments to fit a token budget while maximizing factual recovery. This matters for quantitative research and reasoning traces, where exact facts outweigh fluent paraphrasing. Supports multiple embedding backends, including Azure OpenAI, OpenAI-compatible APIs, and fully offline local SentenceTransformers, with a CLI and Python API for RAG pipelines and agent-memory systems. Ships as compiled extension modules for performance-sensitive workloads on Linux, Windows, and macOS.
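The selection step described above (score units, keep the best within a token budget, preserve reading order) can be sketched in a few lines. This is an illustrative toy, not Chonkify's code: a bag-of-words vector stands in for the 768-dimensional neural embedding, the whole-document vector stands in for whatever relevance target is used, and `compress` is a hypothetical function name.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real system would use dense vectors.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def compress(units: list[str], token_budget: int) -> list[str]:
    """Keep the units most similar to the whole document, within a token
    budget, returned in original order so the extract still reads coherently."""
    doc = embed(" ".join(units))
    # Rank every unit by similarity to the document-level vector.
    scored = sorted(enumerate(units),
                    key=lambda iu: cosine(embed(iu[1]), doc), reverse=True)
    kept, used = [], 0
    for idx, unit in scored:  # greedy fill under the budget
        cost = len(unit.split())
        if used + cost <= token_budget:
            kept.append((idx, unit))
            used += cost
    return [u for _, u in sorted(kept)]  # restore original document order
```

Because selection is extractive, every kept segment is verbatim source text, which is why exact figures survive compression that a paraphrasing summarizer might distort.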