CLUEbenchmark/CLUECorpus2020

Large-scale Pre-training Corpus for Chinese: 100 GB (Chinese pre-training corpus)

Score: 53/100 (Established)

Extracted from Common Crawl and cleaned to a high quality standard, the corpus ships with a specialized simplified-Chinese vocabulary (8,021 tokens) optimized for NLP tasks, which reduces token overhead compared to Google's multilingual vocabulary. The data is pre-formatted for direct use in BERT and masked-language-model pre-training, and the authors' experiments demonstrate competitive performance on CLUE benchmark tasks with equivalent or smaller data volumes.
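A minimal sketch of using such a compact vocabulary, assuming it is distributed as a standard BERT-style vocab file that Hugging Face's BertTokenizer can load directly; the filename vocab_clue.txt is a hypothetical path, not confirmed by this page:

# A minimal sketch, assuming the corpus's 8,021-token simplified-Chinese
# vocabulary is a standard BERT-style vocab file (one token per line).
# "vocab_clue.txt" is a hypothetical filename, not confirmed by this page.
from transformers import BertTokenizer

tokenizer = BertTokenizer(vocab_file="vocab_clue.txt")
print(tokenizer.vocab_size)                # expected around 8,021 with this vocab
print(tokenizer.tokenize("中文预训练语料"))  # WordPiece tokens over the compact vocab

A smaller vocabulary shrinks the embedding matrix and shortens token sequences for Chinese text, which is the overhead reduction the description refers to.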


No package · No dependents
Score breakdown (each component out of 25; the four components sum to the overall 53/100):
Maintenance: 10/25
Adoption: 10/25
Maturity: 16/25
Community: 17/25


Stars: 1,002
Forks: 83
Language:
License: MIT
Last pushed: Feb 06, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/CLUEbenchmark/CLUECorpus2020"

Open to everyone: 100 requests/day with no API key. A free key raises the limit to 1,000 requests/day.
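A minimal sketch of calling the same endpoint from Python with the requests library; the shape of the returned JSON is not documented on this page, so the example just pretty-prints whatever the API returns:

# A minimal sketch of fetching this repo's quality data with Python.
# The response field names are undocumented here, so we only pretty-print.
import json
import requests

url = "https://pt-edge.onrender.com/api/v1/quality/nlp/CLUEbenchmark/CLUECorpus2020"
resp = requests.get(url, timeout=10)  # anonymous access: 100 requests/day
resp.raise_for_status()               # fail loudly on rate limiting or errors
print(json.dumps(resp.json(), indent=2, ensure_ascii=False))

ensure_ascii=False keeps any Chinese text in the response readable instead of escaping it to \uXXXX sequences.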