lonePatient/Bert-Multi-Label-Text-Classification
This repo contains a PyTorch implementation of a pretrained BERT model for multi-label text classification.
Supports both the BERT and XLNet transformer architectures with configurable fine-tuning strategies, including learning rate scheduling and training monitors, and runs in multi-GPU environments. The codebase provides end-to-end pipelines for data preprocessing, model training with per-label AUC evaluation, and inference, leveraging the Hugging Face `transformers` library (v2.5.1) for pretrained model loading and tokenization via a WordPiece vocabulary.
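The per-label AUC evaluation mentioned in the description can be sketched in plain Python. This is an illustrative reimplementation, not the repo's own metric code: the scores compute AUC per label column from sigmoid outputs, and the toy data below is invented for the example.

```python
def label_auc(y_true, y_score):
    """AUC via the Mann-Whitney statistic: the fraction of
    (positive, negative) pairs ranked correctly, ties counting half."""
    pos = [s for t, s in zip(y_true, y_score) if t == 1]
    neg = [s for t, s in zip(y_true, y_score) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy multi-label output: rows are samples, columns are labels;
# scores are per-label sigmoid probabilities, targets are 0/1 per label.
targets = [[1, 0, 1], [0, 1, 0], [1, 1, 0], [0, 0, 1]]
scores  = [[0.9, 0.2, 0.8], [0.1, 0.7, 0.3], [0.8, 0.6, 0.2], [0.2, 0.1, 0.9]]

n_labels = len(targets[0])
per_label = [label_auc([row[j] for row in targets],
                       [row[j] for row in scores])
             for j in range(n_labels)]
# Each label is scored independently, which is what distinguishes
# multi-label evaluation from single-label (softmax) accuracy.
```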
924 stars. No commits in the last 6 months.
| Stars | Forks | Language | License | Category | Last pushed | Commits (30d) |
|-------|-------|----------|---------|----------|-------------|---------------|
| 924   | 208   | Python   | MIT     |          | Apr 18, 2023 | 0            |
Get this data via API
```shell
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/lonePatient/Bert-Multi-Label-Text-Classification"
```
Open to everyone: 100 requests/day with no key. A free key raises the limit to 1,000/day.
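The curl call above is a plain HTTP GET, so it maps directly to a few lines of Python. This is a minimal sketch: the `repo_quality_url` helper is mine, and the assumption that the endpoint returns JSON is not documented on this page.

```python
import json
from urllib.request import urlopen

BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"

def repo_quality_url(owner: str, name: str) -> str:
    """Build the quality-API URL for a repo (endpoint shown above)."""
    return f"{BASE}/{owner}/{name}"

url = repo_quality_url("lonePatient", "Bert-Multi-Label-Text-Classification")
# data = json.load(urlopen(url))  # presumably JSON with the stats shown above;
#                                 # the response schema is an assumption here
```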
Related models
mim-solutions/bert_for_longer_texts
BERT classification model for processing texts longer than 512 tokens. Text is first divided...
OctoberChang/X-Transformer
X-Transformer: Taming Pretrained Transformers for eXtreme Multi-label Text Classification
QData/LaMP
ECML 2019: Graph Neural Networks for Multi-Label Classification
illiterate/BertClassifier
BERT Chinese text classification model implemented in PyTorch.
GT4SD/zero-shot-bert-adapters
Implementation of Z-BERT-A: a zero-shot pipeline for unknown intent detection.