KoBERT and KoBERT-Transformers

The monologg version is a community-maintained port that adapts the original SKTBrain KoBERT model to work with Hugging Face's Transformers library, making it a complement rather than a replacement.

| Metric | KoBERT | KoBERT-Transformers |
|---|---|---|
| Overall score | 53 (Established) | 47 (Emerging) |
| Maintenance | 2/25 | 0/25 |
| Adoption | 10/25 | 10/25 |
| Maturity | 16/25 | 16/25 |
| Community | 25/25 | 21/25 |
| Stars | 1,407 | 212 |
| Forks | 380 | 45 |
| Downloads | | |
| Commits (30d) | 0 | 0 |
| Language | Python | Python |
| License | Apache-2.0 | Apache-2.0 |
| Status flags | Stale 6m, No Package, No Dependents | Archived, Stale 6m, No Package, No Dependents |

About KoBERT

SKTBrain/KoBERT

Korean BERT pre-trained cased (KoBERT)

Pretrained on 5M Korean Wikipedia sentences with SentencePiece tokenization, yielding a compact 8,002-token vocabulary and a 92M-parameter model (vs. 110M for multilingual BERT). Supports PyTorch, ONNX, and MXNet-Gluon frameworks with ready-to-use model loading APIs. Demonstrates superior Korean NLP performance on sentiment analysis and named entity recognition tasks compared to Google's multilingual BERT baseline.
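A minimal sketch of the original SKTBrain loading API, assuming the `kobert` package and its `get_pytorch_kobert_model` helper as shown in the repository README; both names are assumptions here and may have changed, so verify against the current repo.

```python
def load_kobert():
    """Download the SKTBrain KoBERT weights and vocabulary.

    Requires network access and the package installed
    (assumed: pip install kobert, per the SKTBrain README).
    """
    # Imports are deferred so this sketch can be inspected without
    # torch or kobert installed.
    import torch
    from kobert import get_pytorch_kobert_model  # assumed import path

    model, vocab = get_pytorch_kobert_model()
    # Toy token IDs in the style of the README example; a real
    # pipeline would run the SentencePiece tokenizer first.
    input_ids = torch.LongTensor([[31, 51, 99], [15, 5, 0]])
    attention_mask = torch.LongTensor([[1, 1, 1], [1, 1, 0]])
    sequence_output, pooled_output = model(input_ids, attention_mask)
    return sequence_output, pooled_output


# Example usage (network required):
#   sequence_output, pooled_output = load_kobert()
#   sequence_output has shape (batch, seq_len, 768)
```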

About KoBERT-Transformers

monologg/KoBERT-Transformers

KoBERT on 🤗 Huggingface Transformers 🤗 (with Bug Fixed)
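Because the monologg port targets the standard Transformers API, loading it should look like any other Hub model. The sketch below assumes the model ID `monologg/kobert` and that the custom SentencePiece tokenizer needs `trust_remote_code=True`; check the repository README for the currently supported entry points.

```python
def load_kobert_transformers(model_id: str = "monologg/kobert"):
    """Load the ported KoBERT through Hugging Face Transformers.

    Requires network access and `pip install transformers`.
    The model ID and the trust_remote_code requirement are
    assumptions based on the port's README, not guarantees.
    """
    # Deferred import so the sketch is inspectable without transformers.
    from transformers import AutoModel, AutoTokenizer

    # KoBERT ships a custom SentencePiece tokenizer, so the port may
    # register its tokenizer class via remote code.
    tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
    model = AutoModel.from_pretrained(model_id)
    return tokenizer, model


# Example usage (network required):
#   tokenizer, model = load_kobert_transformers()
#   inputs = tokenizer("한국어 문장을 인코딩합니다.", return_tensors="pt")
#   hidden = model(**inputs).last_hidden_state  # (1, seq_len, 768)
```

Routing everything through `AutoTokenizer`/`AutoModel` is the point of the port: downstream code written for any BERT-family checkpoint works unchanged.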

Scores updated daily from GitHub, PyPI, and npm data.