johanmodin/clifs
Contrastive Language-Image Forensic Search (CLIFS) enables free-text search through video content using OpenAI's CLIP model
Extracts frame-level features from videos using CLIP's image encoder and matches them against text queries processed through CLIP's text encoder, ranking results by cosine similarity above a configurable threshold. The system pre-encodes all video frames during indexing for fast retrieval, with a Django web server providing the search interface. Supports GPU acceleration via Docker Compose and handles diverse queries including object detection and OCR tasks without fine-tuning.
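The core retrieval step described above can be sketched in a few lines: pre-encoded frame embeddings are compared against an encoded text query by cosine similarity, filtered by a threshold, and ranked. This is a minimal illustration, not CLIFS's actual code; the random vectors stand in for real CLIP image/text-encoder outputs, and the function and variable names are hypothetical.

```python
import math
import random

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def rank_frames(frame_embeds, query_embed, threshold=0.25):
    """Rank frame indices by similarity to the query embedding,
    keeping only frames above the configurable threshold."""
    scored = [(i, cosine(f, query_embed)) for i, f in enumerate(frame_embeds)]
    hits = [(i, s) for i, s in scored if s >= threshold]
    return sorted(hits, key=lambda t: -t[1])

# Stand-in embeddings; a real index would store CLIP image-encoder
# features computed once per frame at indexing time.
random.seed(0)
frames = [[random.gauss(0, 1) for _ in range(512)] for _ in range(100)]
# A query vector deliberately close to frame 42's embedding.
query = [x + 0.1 * random.gauss(0, 1) for x in frames[42]]
print(rank_frames(frames, query, threshold=0.5)[0][0])  # → 42
```

In the real system the expensive part (image encoding) happens once at indexing time, so each search only needs one text-encoder pass plus these cheap dot products.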
480 stars. No commits in the last 6 months.
Stars: 480
Forks: 52
Language: Python
License: Apache-2.0
Category:
Last pushed: Mar 15, 2022
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/embeddings/johanmodin/clifs"
Open to everyone: 100 requests/day with no key needed; a free key raises the limit to 1,000/day.
Related tools
kingyiusuen/clip-image-search
Search images with a text or image query, using OpenAI's pretrained CLIP model.
aws-samples/amazon-sagemaker-clip-search
Build a machine learning (ML) powered search engine prototype to retrieve and recommend products...
NTUYWANG103/clip-image-search
This code implements a versatile image search engine leveraging the CLIP model and FAISS,...