taabishhh/LLM_Preprocessing
This project implements Byte Pair Encoding (BPE) tokenization and trains a Word2Vec model to generate word embeddings from a text corpus. The implementation uses Apache Hadoop for distributed processing and includes evaluation metrics for selecting an optimal embedding dimensionality.
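To illustrate the core idea behind BPE, here is a minimal, self-contained Scala sketch of the merge loop: count adjacent symbol pairs across the corpus, then greedily merge the most frequent pair. This is an illustrative sketch only, not the repository's actual (Hadoop-distributed) implementation; the object and method names are hypothetical.

```scala
// Minimal single-machine sketch of BPE training, assuming whitespace-pretokenized
// words. Illustrative only; not the repository's distributed implementation.
object BpeSketch {
  // Count adjacent symbol pairs across all tokenized words.
  def pairCounts(words: Seq[Vector[String]]): Map[(String, String), Int] =
    words.flatMap(w => w.zip(w.drop(1))).groupBy(identity).view.mapValues(_.size).toMap

  // Merge every occurrence of `pair` into a single symbol in each word.
  def merge(words: Seq[Vector[String]], pair: (String, String)): Seq[Vector[String]] =
    words.map { w =>
      val out = Vector.newBuilder[String]
      var i = 0
      while (i < w.length) {
        if (i + 1 < w.length && w(i) == pair._1 && w(i + 1) == pair._2) {
          out += (pair._1 + pair._2); i += 2
        } else { out += w(i); i += 1 }
      }
      out.result()
    }

  def main(args: Array[String]): Unit = {
    // Each word starts as a sequence of single characters.
    var words: Seq[Vector[String]] = Seq("low", "lower", "lowest").map(_.map(_.toString).toVector)
    // Greedily apply the two most frequent merges.
    for (_ <- 1 to 2) {
      val best = pairCounts(words).maxBy(_._2)._1
      words = merge(words, best)
    }
    println(words.map(_.mkString(" ")).mkString(" | "))
  }
}
```

In the distributed setting described above, the pair-counting step maps naturally onto a MapReduce job (emit each adjacent pair, sum counts in the reducer), which is presumably where Hadoop comes in.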
No commits in the last 6 months.
Stars: 1
Forks: —
Language: Scala
License: Apache-2.0
Category: —
Last pushed: Aug 26, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/taabishhh/LLM_Preprocessing"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
Higher-rated alternatives
eliben/go-sentencepiece: Go implementation of the SentencePiece tokenizer
sefineh-ai/Amharic-Tokenizer: Syllable-aware BPE tokenizer for the Amharic language (አማርኛ); fast, accurate, trainable
mdabir1203/BPE_Tokenizer_Visualizer: A visualizer showing how an LLM's BPE tokenizer works
U4RASD/r-bpe: R-BPE: Improving BPE-Tokenizers with Token Reuse
jmaczan/bpe-tokenizer: Byte-Pair Encoding tokenizer for training large language models on huge datasets