chonkie and chonkify

These are **competitors**: both prepare documents for RAG pipelines. Chonkie is a mature, production-ready chunking and ingestion library, while Chonkify focuses on extractive compression as an alternative approach to retaining information under a token budget.

|  | chonkie | chonkify |
| --- | --- | --- |
| Score | 83 (Verified) | 39 (Emerging) |
| Maintenance | 25/25 | 13/25 |
| Adoption | 15/25 | 4/25 |
| Maturity | 25/25 | 9/25 |
| Community | 18/25 | 13/25 |
| Stars | 3,829 | 5 |
| Forks | 256 | 2 |
| Downloads | | |
| Commits (30d) | 53 | 0 |
| Language | Python | Python |
| License | MIT | |
| Flags | No risk flags | No package, no dependents |

About chonkie

chonkie-inc/chonkie

🦛 CHONK docs with Chonkie ✨ — The lightweight ingestion library for fast, efficient and robust RAG pipelines

Provides pluggable chunking strategies (recursive, semantic, code-aware, and LLM-based) with composable pipeline workflows that chain multiple chunkers and refineries together. Integrates with 32+ tools across tokenizers (GPT-2, BPE), embeddings (OpenAI, Sentence Transformers), vector databases, and LLMs, and supports 56 languages out of the box through modular dependency installation.
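As an illustration of the recursive strategy named above, here is a minimal sketch in plain Python. This is not Chonkie's actual API: the function names are hypothetical, and a whitespace word count stands in for a real tokenizer.

```python
# Sketch of recursive chunking: split on the coarsest separator first,
# then recurse on oversized pieces with progressively finer separators.
# Hypothetical names; a whitespace split stands in for a real tokenizer.

def count_tokens(text: str) -> int:
    """Stand-in token counter: whitespace-separated words."""
    return len(text.split())

def recursive_chunk(text, max_tokens, separators=("\n\n", "\n", ". ")):
    """Return chunks that each fit the token budget where possible."""
    if count_tokens(text) <= max_tokens or not separators:
        return [text]
    sep, rest = separators[0], separators[1:]
    chunks = []
    for piece in text.split(sep):
        if not piece.strip():
            continue
        if count_tokens(piece) <= max_tokens:
            chunks.append(piece.strip())
        else:
            # Piece is still over budget: retry with the next, finer separator.
            chunks.extend(recursive_chunk(piece, max_tokens, rest))
    return chunks

doc = "First paragraph about hippos.\n\nSecond paragraph. It has two sentences."
print(recursive_chunk(doc, max_tokens=5))
```

The real library layers tokenizer choice, overlap, and refinement steps on top of this basic recursion; the sketch only shows the core splitting logic.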

About chonkify

thom-heinrich/chonkify

Extractive document compression for RAG and agent pipelines. +69% vs LLMLingua, +175% vs LLMLingua2 on information recovery. Compiled wheels, try it out.

Splits documents into units, scores each unit with 768-dimensional embeddings, and selects the highest-ranked segments to stay within a token budget while maximizing factual recovery. This matters for quantitative research and reasoning traces, where exact facts outweigh fluent paraphrasing. Supports multiple embedding backends, including Azure OpenAI, OpenAI-compatible APIs, and fully offline local SentenceTransformers, with a CLI and Python API for RAG pipelines and agent memory systems. Ships as compiled extension modules for performance-sensitive workloads on Linux, Windows, and macOS.
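The score-and-select idea can be sketched as budget-constrained extractive selection. This is only an illustration of the technique, not Chonkify's code: the 768-dimensional embeddings are replaced by toy bag-of-words vectors, and all names are hypothetical.

```python
# Sketch of extractive compression: rank document units by embedding
# similarity to a query, greedily keep the best that fit a token budget,
# then restore original order. Toy bag-of-words vectors stand in for
# real embeddings; names are hypothetical, not Chonkify's API.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy stand-in for a sentence embedding: lowercase word counts."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def compress(units, query, token_budget):
    """Keep the highest-scoring units within the budget, in document order."""
    q = embed(query)
    ranked = sorted(range(len(units)),
                    key=lambda i: cosine(embed(units[i]), q), reverse=True)
    kept, used = set(), 0
    for i in ranked:
        cost = len(units[i].split())  # whitespace words as a stand-in count
        if used + cost <= token_budget:
            kept.add(i)
            used += cost
    return [units[i] for i in sorted(kept)]

units = [
    "Revenue grew 12% in Q3.",
    "The weather was pleasant that quarter.",
    "Net margin reached 8% on cost cuts.",
]
print(compress(units, query="revenue margin growth", token_budget=12))
```

Selecting whole original segments rather than paraphrasing is what preserves exact figures; the off-topic weather sentence is ranked last and dropped once the budget is spent.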

Scores updated daily from GitHub, PyPI, and npm data.