OpenAI-CLIP and simple-clip

                 OpenAI-CLIP        simple-clip
Score            53 (Established)   50 (Established)
Maintenance      6/25               0/25
Adoption         10/25              8/25
Maturity         16/25              25/25
Community        21/25              17/25
Stars            720                42
Forks            104                8
Downloads        n/a                n/a
Commits (30d)    0                  0
Language         Jupyter Notebook   Jupyter Notebook
License          MIT                MIT

Flags: No Package · No Dependents · Stale 6m

About OpenAI-CLIP

moein-shariatnia/OpenAI-CLIP

Simple implementation of OpenAI CLIP model in PyTorch.

This project helps researchers and engineers build models that understand images and text together. Given a collection of images and their descriptive captions, it trains a model that can connect what is seen in a picture with what is said in a sentence. This is useful for tasks such as searching images with text queries or classifying images from natural-language labels.
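At the core of any CLIP implementation, including this one, is a symmetric contrastive objective: image and text embeddings of matched pairs are pulled together while mismatched pairs in the batch are pushed apart. A minimal sketch in PyTorch, assuming the encoders have already produced fixed-size embeddings (the function name and toy batch below are illustrative, not taken from this repository):

```python
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired image/text embeddings.

    Embeddings are L2-normalized so dot products are cosine similarities;
    matched pairs lie on the diagonal of the (batch, batch) similarity matrix.
    """
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = image_emb @ text_emb.T / temperature   # (batch, batch) similarities
    targets = torch.arange(len(logits))             # i-th image matches i-th caption
    loss_i2t = F.cross_entropy(logits, targets)     # image -> text direction
    loss_t2i = F.cross_entropy(logits.T, targets)   # text -> image direction
    return (loss_i2t + loss_t2i) / 2

# toy batch of 4 random image/text embedding pairs
imgs = torch.randn(4, 256)
txts = torch.randn(4, 256)
loss = clip_contrastive_loss(imgs, txts)
```

The temperature of 0.07 follows the common CLIP default; in practice it is usually a learned parameter rather than a constant.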

multimodal-AI image-retrieval natural-language-processing computer-vision data-labeling

About simple-clip

filipbasara0/simple-clip

A minimal but effective implementation of CLIP (Contrastive Language-Image Pretraining) in PyTorch

This project helps machine learning engineers and researchers quickly train models that understand both images and text. You supply a large dataset of images paired with their descriptions, and it produces a trained model that links visual content with natural language. The model can then perform tasks such as zero-shot image classification without any task-specific training.
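The zero-shot classification this description refers to works by embedding each candidate class as a text prompt and picking the prompt most similar to the image embedding. A minimal sketch, assuming a trained CLIP-style model has already produced the embeddings (the helper name and random toy vectors below are illustrative, not this repository's API):

```python
import torch
import torch.nn.functional as F

def zero_shot_classify(image_embs, class_text_embs, class_names, temperature=0.07):
    """Assign each image the class whose text embedding is most similar to it."""
    image_embs = F.normalize(image_embs, dim=-1)
    class_text_embs = F.normalize(class_text_embs, dim=-1)
    logits = image_embs @ class_text_embs.T / temperature
    probs = logits.softmax(dim=-1)                 # per-image class probabilities
    best = probs.argmax(dim=-1)
    return [class_names[i] for i in best]

# toy example: an "image" embedding constructed to lie near the dog prompt
torch.manual_seed(0)
dog_text = torch.randn(256)
cat_text = torch.randn(256)
image = dog_text + 0.1 * torch.randn(256)          # small perturbation of dog_text
names = ["a photo of a dog", "a photo of a cat"]
preds = zero_shot_classify(image[None], torch.stack([dog_text, cat_text]), names)
# preds[0] is "a photo of a dog", since the image embedding is near that prompt
```

No gradient step is taken here: the classifier is defined entirely by the text prompts, which is what makes the approach "zero-shot".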

computer-vision natural-language-processing zero-shot-learning image-classification model-pretraining

Scores updated daily from GitHub, PyPI, and npm data.