google-research/bert

TensorFlow code and pre-trained models for BERT

Archived · 51 / 100 · Established
Provides 24 compact model variants (2-12 layers, 128-768 hidden units) optimized for resource-constrained environments, validated on GLUE benchmarks, with support for knowledge distillation from larger teachers. Introduces Whole Word Masking during pre-training, which masks all WordPiece tokens of a word together to make masked-token prediction harder, and offers TensorFlow Hub integration for streamlined fine-tuning on downstream NLP tasks such as text classification and semantic similarity.
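
The repo implements Whole Word Masking in create_pretraining_data.py. A minimal sketch of the core idea follows, assuming WordPiece tokens where continuation pieces start with "##"; the function name and masking rate are illustrative, and the repo's 80/10/10 mask/keep/replace rule and per-sequence prediction cap are omitted.

import random

def whole_word_mask(tokens, mask_prob=0.15, mask_token="[MASK]"):
    """Mask all WordPiece tokens of a chosen word together."""
    # Group token indices into words: a piece starting with "##"
    # belongs to the same word as the piece before it.
    words = []
    for i, tok in enumerate(tokens):
        if tok.startswith("##") and words:
            words[-1].append(i)
        else:
            words.append([i])

    masked, labels = list(tokens), {}
    for word in words:
        if random.random() < mask_prob:
            for i in word:
                labels[i] = masked[i]   # remember the original piece
                masked[i] = mask_token  # mask every piece of the word
    return masked, labels

# "embedding" splits into "em", "##bed", "##ding": with whole word
# masking either all three pieces are masked or none of them is.
print(whole_word_mask(["the", "em", "##bed", "##ding", "is", "here"], 0.5))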

39,918 stars. No commits in the last 6 months.

Archived · Stale (6 months) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 25 / 25

The four components sum to the overall score: 0 + 10 + 16 + 25 = 51 / 100.

Stars: 39,918
Forks: 9,711
Language: Python
License: Apache-2.0
Last pushed: Jul 23, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/google-research/bert"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
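
A minimal sketch of the same request from Python, using only the standard library; the response schema is not documented here, so the example just pretty-prints whatever JSON the endpoint returns.

import json
import urllib.request

URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "nlp/google-research/bert")

# No API key is needed at the free tier (100 requests/day).
with urllib.request.urlopen(URL, timeout=10) as resp:
    data = json.load(resp)

print(json.dumps(data, indent=2))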