google-research/electra

ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators

Score: 48 / 100 (Emerging)

Employs a replaced-token-detection objective using a generator-discriminator setup inspired by GANs: a smaller transformer generates plausible token replacements, and the main model learns to identify the fakes, enabling efficient pre-training on a single GPU while achieving competitive downstream performance. Supports fine-tuning on classification (GLUE), QA (SQuAD), and sequence tagging tasks, with an alternative "Electric" variant that frames training as energy-based cloze modeling and enables pseudo-likelihood scoring for text re-ranking. Built on TensorFlow 1.15 with TFRecord-based pre-training pipelines; includes pre-trained checkpoints (Small/Base/Large) alongside code for continued pre-training from released weights.
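Below is a minimal sketch of the replaced-token-detection objective described above, written in plain NumPy with toy values. Everything here (the uniform "generator", the random "discriminator" scores, the variable names) is illustrative and not taken from the ELECTRA codebase; it only shows the data flow: corrupt some positions, label which tokens differ from the originals, and apply binary cross-entropy at every position.

import numpy as np

rng = np.random.default_rng(0)

# Toy setup: a 10-token vocabulary and one short input sequence.
VOCAB_SIZE = 10
tokens = np.array([3, 7, 1, 4, 9, 2])        # original token IDs
mask = rng.random(tokens.shape) < 0.15       # ~15% of positions get corrupted

# Stand-in "generator": a uniform sampler over the vocabulary, playing the
# role of the small masked-LM transformer that proposes plausible replacements.
proposals = rng.integers(0, VOCAB_SIZE, size=tokens.shape)
corrupted = np.where(mask, proposals, tokens)

# Discriminator targets: 1 only where the token actually differs from the
# original (a sampled replacement that matches the original counts as real).
labels = (corrupted != tokens).astype(np.float32)

# Stand-in "discriminator": per-token probabilities that each token is fake.
# In ELECTRA these come from the main transformer; here they are random.
p_fake = rng.random(tokens.shape)

# Replaced-token-detection loss: binary cross-entropy over EVERY position,
# not just the corrupted ones -- the source of the objective's efficiency.
eps = 1e-8
rtd_loss = -np.mean(labels * np.log(p_fake + eps)
                    + (1 - labels) * np.log(1 - p_fake + eps))
print("corrupted:", corrupted, "labels:", labels, "loss:", float(rtd_loss))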

2,371 stars. No commits in the last 6 months.

Flags: Stale (6 months) · No package published · No known dependents
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 22 / 25

Stars: 2,371
Forks: 349
Language: Python
License: Apache-2.0
Last pushed: Mar 23, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/google-research/electra"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
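For scripted use, here is a minimal Python equivalent of the curl call above, using only the standard library. The endpoint is assumed to return JSON; its exact response schema is not documented on this page, so the script simply pretty-prints whatever comes back.

import json
import urllib.request

URL = "https://pt-edge.onrender.com/api/v1/quality/nlp/google-research/electra"

# Fetch the quality record and decode the JSON body.
with urllib.request.urlopen(URL, timeout=10) as resp:
    data = json.load(resp)

print(json.dumps(data, indent=2))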