PathologyFoundation/plip
Pathology Language and Image Pre-Training (PLIP) is the first vision-and-language foundation model for pathology AI (published in Nature Medicine). PLIP is a large-scale pre-trained model that extracts visual and language features from pathology images and their text descriptions. It is a fine-tuned version of the original CLIP model.
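Because PLIP is a fine-tuned CLIP, it can be loaded with the standard CLIP classes from Hugging Face `transformers`. A minimal sketch, assuming the checkpoint is published on the Hub under the id `vinid/plip` (check the repository's README for the actual checkpoint name):

```python
# Hedged sketch: loading PLIP with the standard CLIP classes.
# The hub id "vinid/plip" is an assumption; verify it against the repo's README.
PLIP_MODEL_ID = "vinid/plip"

def load_plip():
    """Load the PLIP model and its processor (downloads weights on first call)."""
    from transformers import CLIPModel, CLIPProcessor  # deferred: heavy import
    model = CLIPModel.from_pretrained(PLIP_MODEL_ID)
    processor = CLIPProcessor.from_pretrained(PLIP_MODEL_ID)
    return model, processor
```

Once loaded, `model.get_image_features(...)` and `model.get_text_features(...)` give the joint-embedding vectors for images and captions, exactly as with any CLIP checkpoint.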
34 / 100
Emerging
373 stars. No commits in the last 6 months.
No License · Stale 6m · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 8 / 25
Community: 16 / 25
Stars: 373
Forks: 37
Language: Python
License: —
Category: —
Last pushed: Sep 20, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/PathologyFoundation/plip"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.
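The same endpoint can be queried from Python with only the standard library. A minimal sketch, assuming the URL pattern shown in the curl command above (the JSON response schema is not documented here, so field names are not assumed):

```python
import json
import urllib.request

# Base URL taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(ecosystem: str, owner: str, repo: str) -> str:
    """Build the quality-report endpoint URL for a repository."""
    return f"{API_BASE}/{ecosystem}/{owner}/{repo}"

def fetch_quality(ecosystem: str, owner: str, repo: str) -> dict:
    """Fetch the quality report as parsed JSON (subject to the daily rate limit)."""
    with urllib.request.urlopen(quality_url(ecosystem, owner, repo)) as resp:
        return json.load(resp)

# Example: the URL for this repository's report.
url = quality_url("transformers", "PathologyFoundation", "plip")
```

With an API key, the higher 1,000/day limit would presumably require passing the key with the request; how the key is sent (header or query parameter) is not specified here.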