Quasar-Kim/kc-moe
A pretrained MoE (Mixture-of-Experts) model trained on a Korean comments dataset
This project provides a pre-trained language model designed for understanding Korean online comments and other informal text. It takes raw Korean text, especially news comments, and produces representations that capture meaning even in the presence of slang and colloquialisms. It is aimed at data scientists, NLP engineers, and researchers working with user-generated Korean text.
No commits in the last 6 months.
Use this if you need to analyze or process large volumes of Korean online comments or other informal digital text, where standard models trained on formal language might struggle.
Not ideal if your primary task involves highly formal or technical Korean documents, as its strength lies in handling informal language.
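If the checkpoint is published on the Hugging Face Hub, loading it for feature extraction might look like the sketch below. The Hub ID is an assumption taken from the GitHub repo name, and custom MoE architectures often require trust_remote_code; check the project README for the actual loading instructions.

    # Minimal sketch, assuming a Hub checkpoint named "Quasar-Kim/kc-moe" (hypothetical ID)
    from transformers import AutoTokenizer, AutoModel

    model_id = "Quasar-Kim/kc-moe"  # assumption: mirrors the GitHub repo name
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    # MoE variants with custom layers may need trust_remote_code=True here
    model = AutoModel.from_pretrained(model_id)

    text = "이 기사 진짜 대박이네 ㅋㅋ"  # an informal Korean comment
    inputs = tokenizer(text, return_tensors="pt")
    outputs = model(**inputs)
    print(outputs.last_hidden_state.shape)  # (batch, seq_len, hidden_dim) token embeddings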
Stars: 8
Forks: —
Language: Python
License: —
Category: transformers
Last pushed: Jan 19, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/Quasar-Kim/kc-moe"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
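The same endpoint can be queried programmatically. This is a minimal sketch using the Python requests library; the response schema is not documented here, so the code only prints the raw JSON rather than assuming particular field names.

    import requests

    # Query the quality endpoint for this repo (same URL as the curl example above)
    url = "https://pt-edge.onrender.com/api/v1/quality/transformers/Quasar-Kim/kc-moe"
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()  # raise on HTTP errors (e.g. rate limiting at 100 req/day)
    data = resp.json()
    print(data)  # inspect the payload to see the actual schema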
Higher-rated alternatives
SKTBrain/KoBERT
Korean BERT pre-trained cased (KoBERT)
monologg/KoELECTRA
Pretrained ELECTRA Model for Korean
monologg/KoBERT-Transformers
KoBERT on 🤗 Huggingface Transformers 🤗 (with Bug Fixed)
VinAIResearch/PhoBERT
PhoBERT: Pre-trained language models for Vietnamese (EMNLP-2020 Findings)
KB-AI-Research/KB-ALBERT
A Korean ALBERT model specialized for the economics/finance domain, provided by KB Kookmin Bank