KoBERT and KoBERT-Transformers
The monologg version is a community-maintained port that adapts the original SKTBrain KoBERT model to Hugging Face's Transformers library; it complements the original rather than replacing it.
About KoBERT
SKTBrain/KoBERT
Korean BERT pre-trained cased (KoBERT)
Pretrained on 5M Korean Wikipedia sentences with SentencePiece tokenization, yielding a compact 8,002-token vocabulary (92M parameters vs. 110M for multilingual BERT). Supports PyTorch, ONNX, and MXNet-Gluon with ready-to-use model loading APIs. Outperforms Google's multilingual BERT baseline on Korean sentiment analysis and named entity recognition tasks.
About KoBERT-Transformers
monologg/KoBERT-Transformers
KoBERT on 🤗 Huggingface Transformers 🤗 (with Bug Fixed)