CLIP Vision-Language Transformer Models
Three CLIP vision-language models are tracked. The highest-rated is jmisilo/clip-gpt-captioning, scoring 39/100 with 118 stars.
Get all 3 projects as JSON:

```sh
curl "https://pt-edge.onrender.com/api/v1/datasets/quality?domain=transformers&subcategory=clip-vision-language&limit=20"
```
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000 requests/day.
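The same query can be made programmatically. The sketch below builds the documented URL from its query parameters and fetches the JSON; the response shape and the `X-API-Key` header name are assumptions, not documented here, so check the API's own docs before relying on them.

```python
import json
import urllib.parse
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/datasets/quality"

def build_url(domain: str, subcategory: str, limit: int = 20) -> str:
    """Assemble the quality-dataset query URL from its parameters."""
    query = urllib.parse.urlencode(
        {"domain": domain, "subcategory": subcategory, "limit": limit}
    )
    return f"{BASE}?{query}"

def fetch_models(api_key=None):
    """Fetch the tracked models as parsed JSON.

    The response structure is an assumption (some JSON document of
    model records); the header name below is hypothetical.
    """
    req = urllib.request.Request(build_url("transformers", "clip-vision-language"))
    if api_key:
        req.add_header("X-API-Key", api_key)  # hypothetical header name
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)

print(build_url("transformers", "clip-vision-language"))
```

Building the URL with `urllib.parse.urlencode` keeps the parameters correctly escaped, which matters once values contain spaces or other reserved characters.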
| # | Model | Score | Tier |
|---|---|---|---|
| 1 | jmisilo/clip-gpt-captioning: CLIPxGPT Captioner is an image-captioning model based on OpenAI's CLIP and GPT-2. | 39 | Emerging |
| 2 | leaderj1001/CLIP: Connecting Text and Image (Learning Transferable Visual Models From...) | | Emerging |
| 3 | PathologyFoundation/plip: Pathology Language and Image Pre-Training (PLIP) is the first vision and... | | Experimental |