clifs and clip-image-search
Both implement CLIP-based search over visual content, but they target different modalities: one searches video frames while the other searches static images. Since both serve the same underlying use case (multimodal retrieval with free-text queries), they are best viewed as **competitors** rather than complements or siblings.
About clifs
johanmodin/clifs
Contrastive Language-Image Forensic Search (CLIFS) allows free-text search through videos using OpenAI's CLIP model.
Extracts frame-level features from videos using CLIP's image encoder and matches them against text queries processed through CLIP's text encoder, ranking results by cosine similarity above a configurable threshold. The system pre-encodes all video frames during indexing for fast retrieval, with a Django web server providing the search interface. It supports GPU acceleration via Docker Compose and handles diverse queries, including ones that resemble object detection and OCR, without fine-tuning.
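The ranking step described above can be sketched as follows. This is a minimal illustration, not clifs's actual code: the NumPy arrays stand in for embeddings produced by CLIP's image and text encoders, and the function names and default threshold are hypothetical.

```python
import numpy as np

def rank_frames(frame_embeddings: np.ndarray,
                query_embedding: np.ndarray,
                threshold: float = 0.25) -> list[tuple[int, float]]:
    """Rank pre-encoded frames by cosine similarity to a query embedding,
    keeping only matches at or above the threshold.

    frame_embeddings: (n_frames, dim) array, one row per indexed frame
    query_embedding:  (dim,) array from the text encoder
    """
    # Normalize rows and the query so the dot product is cosine similarity.
    frames = frame_embeddings / np.linalg.norm(
        frame_embeddings, axis=1, keepdims=True)
    query = query_embedding / np.linalg.norm(query_embedding)
    sims = frames @ query

    # Sort by descending similarity, then drop sub-threshold frames.
    order = np.argsort(-sims)
    return [(int(i), float(sims[i])) for i in order if sims[i] >= threshold]
```

Because all frames are encoded once at indexing time, a query at search time costs only one text-encoder pass plus this matrix-vector product.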
About clip-image-search
kingyiusuen/clip-image-search
Search images with a text or image query, using OpenAI's pretrained CLIP model.