tokenizers and language-tokenizer
These libraries compete in the same space: Hugging Face's tokenizers is a production-grade, widely adopted implementation of state-of-the-art tokenization across many languages, while language-tokenizer pursues similar goals but shows little adoption and no recent maintenance.
About tokenizers
huggingface/tokenizers
💥 Fast State-of-the-Art Tokenizers optimized for Research and Production
Implemented in Rust with Python/Node.js/Ruby bindings, it supports BPE, WordPiece, and Unigram tokenization algorithms with integrated normalization that tracks character-level alignment to original text. The library handles full preprocessing pipelines including truncation, padding, and special token injection, enabling both vocabulary training and inference through a unified modular API.
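To make the BPE algorithm mentioned above concrete, here is a minimal pure-Python sketch of its core training loop: repeatedly find the most frequent adjacent symbol pair in the corpus and merge it into a new symbol. This is an illustration of the technique only, not the library's Rust implementation; the corpus and merge count are made up for the example.

```python
from collections import Counter

def most_frequent_pair(words):
    """Count adjacent symbol pairs across the corpus, weighted by word frequency."""
    pairs = Counter()
    for symbols, freq in words.items():
        for a, b in zip(symbols, symbols[1:]):
            pairs[(a, b)] += freq
    return max(pairs, key=pairs.get)

def merge_pair(words, pair):
    """Rewrite every word, fusing each occurrence of `pair` into one symbol."""
    merged = {}
    for symbols, freq in words.items():
        out, i = [], 0
        while i < len(symbols):
            if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == pair:
                out.append(symbols[i] + symbols[i + 1])
                i += 2
            else:
                out.append(symbols[i])
                i += 1
        merged[tuple(out)] = freq
    return merged

# Toy corpus: word -> frequency, each word pre-split into characters.
words = {tuple("lower"): 5, tuple("lowest"): 2, tuple("low"): 7}
merges = []
for _ in range(3):  # learn three merge rules
    pair = most_frequent_pair(words)
    merges.append(pair)
    words = merge_pair(words, pair)

print(merges)  # learned merge rules, most frequent first
```

The learned merge list is the trained vocabulary artifact; at inference time the same merges are replayed in order on new text, which is why training and encoding can share one pipeline as the library's unified API does.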
About language-tokenizer
mazebrr/language-tokenizer
🧩 Tokenize text efficiently across multiple languages using our robust library, combining Unicode and NLP techniques for accurate text analysis.