kesimeg/turkish-clip
Training OpenAI's CLIP model for Turkish using a pretrained ResNet and DistilBERT
This project matches images against free-form Turkish text descriptions, even when the exact object never appeared in the training data. You provide an image and several Turkish descriptions, and the model scores how well each description matches the image. This is useful for anyone working with Turkish image content who needs to find or categorize images by concept rather than by fixed object tags.
No commits in the last 6 months.
Use this if you need to search, filter, or categorize images based on descriptive Turkish phrases, rather than just pre-defined tags or object labels.
Not ideal if your primary need is object detection or classification over a fixed set of categories, or if your text queries are not in Turkish.
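The matching described above reduces to CLIP's core scoring step: L2-normalize the image and text embeddings, take cosine similarities, and softmax over the candidate descriptions. A minimal NumPy sketch, with random vectors standing in for the outputs of the repo's ResNet image encoder and DistilBERT text encoder (the shapes and names here are illustrative assumptions, not this repo's API):

```python
import numpy as np

def match_scores(image_emb, text_embs):
    """CLIP-style matching: cosine similarity of L2-normalized
    embeddings, softmaxed over the candidate descriptions."""
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    logits = txt @ img                      # one similarity per description
    exp = np.exp(logits - logits.max())     # numerically stable softmax
    return exp / exp.sum()

# Stand-ins for encoder outputs (in the repo these would come from
# the pretrained ResNet image encoder and DistilBERT text encoder).
rng = np.random.default_rng(0)
image_emb = rng.normal(size=512)
text_embs = rng.normal(size=(3, 512))       # e.g. three Turkish captions
probs = match_scores(image_emb, text_embs)  # one probability per caption
```

The highest-probability caption is the best conceptual match for the image; because the text side is free-form, no retraining is needed to add new categories.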
Stars: 10
Forks: —
Language: Jupyter Notebook
License: —
Category: —
Last pushed: Mar 22, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/kesimeg/turkish-clip"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
jmisilo/clip-gpt-captioning
CLIPxGPT Captioner is an image captioning model based on OpenAI's CLIP and GPT-2.
leaderj1001/CLIP
CLIP: Connecting Text and Image (Learning Transferable Visual Models From Natural Language Supervision)
PathologyFoundation/plip
Pathology Language and Image Pre-Training (PLIP) is the first vision and language foundation...
Lahdhirim/CV-image-captioning-clip-gpt2
Image caption generation using a hybrid CLIP-GPT2 architecture. CLIP encodes the image while...