clifs and clip-image-search

Both implement CLIP-based search over visual content, but they target different modalities: clifs searches video frames, while clip-image-search searches static images. Because both serve the same underlying use case (multimodal retrieval), they are best viewed as **competitors** rather than complements or siblings.

| Metric | clifs | clip-image-search |
| --- | --- | --- |
| Overall score | 43 (Emerging) | 41 (Emerging) |
| Maintenance | 0/25 | 0/25 |
| Adoption | 10/25 | 10/25 |
| Maturity | 16/25 | 16/25 |
| Community | 17/25 | 15/25 |
| Stars | 480 | 264 |
| Forks | 52 | 25 |
| Downloads | — | — |
| Commits (30d) | 0 | 0 |
| Language | JavaScript | Python |
| License | Apache-2.0 | MIT |
| Flags | Stale 6m · No Package · No Dependents | Stale 6m · No Package · No Dependents |

About clifs

johanmodin/clifs

Contrastive Language-Image Forensic Search enables free-text search through videos using OpenAI's CLIP model.

Extracts frame-level features from videos using CLIP's image encoder and matches them against text queries processed through CLIP's text encoder, ranking results by cosine similarity above a configurable threshold. The system pre-encodes all video frames during indexing for fast retrieval, with a Django web server providing the search interface. Supports GPU acceleration via Docker Compose and handles diverse queries including object detection and OCR tasks without fine-tuning.
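The retrieval step described above — ranking pre-encoded frame embeddings against a text-query embedding by cosine similarity, with a configurable threshold — can be sketched as follows. This is a minimal illustration using NumPy with toy vectors standing in for real CLIP embeddings; `rank_frames` and the threshold value are illustrative, not clifs's actual API:

```python
import numpy as np

def rank_frames(frame_embeddings, query_embedding, threshold=0.3):
    """Rank pre-encoded video frames against a query embedding by
    cosine similarity, keeping only frames above the threshold."""
    # Normalize both sides so the dot product equals cosine similarity
    frames = frame_embeddings / np.linalg.norm(frame_embeddings, axis=1, keepdims=True)
    query = query_embedding / np.linalg.norm(query_embedding)
    sims = frames @ query
    # Keep frames above the threshold, sorted by descending similarity
    keep = np.where(sims >= threshold)[0]
    order = keep[np.argsort(-sims[keep])]
    return [(int(i), float(sims[i])) for i in order]

# Toy 4-dim "embeddings" standing in for real CLIP frame vectors
frames = np.array([[1.0, 0.0, 0.0, 0.0],
                   [0.7, 0.7, 0.0, 0.0],
                   [0.0, 1.0, 0.0, 0.0]])
query = np.array([1.0, 0.0, 0.0, 0.0])
print(rank_frames(frames, query, threshold=0.5))  # frames 0 and 1 match
```

Because the frame embeddings are computed once at indexing time, a query at search time costs only one text-encoder pass plus a matrix–vector product, which is what makes the Django search interface responsive.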

About clip-image-search

kingyiusuen/clip-image-search

Search images with a text or image query, using OpenAI's pretrained CLIP model.
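Accepting either a text or an image query works because CLIP's two encoders project into a shared embedding space, so one pre-encoded image index serves both query types. A minimal sketch of that dispatch, with toy stand-in encoders (all names here are illustrative, not the project's actual API):

```python
import numpy as np

# Toy stand-ins for CLIP's two encoders; both map into the same shared
# 2-dim "embedding space", which is what lets one index serve both
# text and image queries.
def encode_text(text: str) -> np.ndarray:
    vocab = {"dog": [1.0, 0.0], "cat": [0.0, 1.0]}
    v = np.array(vocab.get(text, [0.5, 0.5]))
    return v / np.linalg.norm(v)

def encode_image(image: np.ndarray) -> np.ndarray:
    v = image.mean(axis=(0, 1))  # collapse pixels to a 2-dim "embedding"
    return v / np.linalg.norm(v)

def search(query, gallery: np.ndarray) -> np.ndarray:
    # Dispatch on query type; ranking logic is identical either way
    q = encode_text(query) if isinstance(query, str) else encode_image(query)
    sims = gallery @ q
    return np.argsort(-sims)

# Gallery of two pre-encoded, normalized image embeddings
gallery = np.array([[1.0, 0.0], [0.0, 1.0]])
print(search("dog", gallery))                              # text query
print(search(np.ones((4, 4, 2)) * [0.0, 1.0], gallery))    # image query
```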

Scores are updated daily from GitHub, PyPI, and npm data.