Queryable and CLIP-Finder2
Both tools are independent implementations of on-device semantic image search for iOS built on CLIP-family models (OpenAI's CLIP and Apple's MobileCLIP). They are direct alternatives offering similar functionality with different model choices and user interfaces, rather than complementary components.
About Queryable
mazzzystar/Queryable
Run OpenAI's CLIP and Apple's MobileCLIP models on iOS to search photos.
Implements dual-encoder architecture with separate image and text encoders exported as Core ML models, enabling semantic similarity matching through vector comparison rather than keyword matching. Processes photo libraries entirely on-device using Apple's optimized MobileCLIP model, with pre-computed image embeddings cached locally to minimize latency on repeated queries. Targets iOS via Xcode and Core ML framework, with community ports available for Android and macOS.
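The dual-encoder matching described above can be sketched in a few lines: the text encoder maps a query to a vector, and search reduces to cosine similarity against the pre-computed image embeddings. This is an illustrative Python sketch of the technique, not either project's actual code (both apps are written in Swift against Core ML); the array shapes and helper names are assumptions.

```python
import numpy as np

def normalize(v):
    """L2-normalize vectors along the last axis so dot product = cosine similarity."""
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

# Stand-in for the cached, pre-computed image embeddings; in the apps these
# come from running a Core ML image encoder once over the photo library.
# The 512-dim size is an assumption typical of CLIP-family models.
image_embeddings = normalize(np.random.rand(1000, 512).astype(np.float32))

def search(text_embedding, image_embeddings, top_k=5):
    """Rank cached image embeddings by cosine similarity to the text query."""
    query = normalize(text_embedding)
    scores = image_embeddings @ query  # cosine similarity between unit vectors
    return np.argsort(scores)[::-1][:top_k]  # indices of best-matching photos
```

Because the image embeddings are computed and cached ahead of time, each new query costs only one text-encoder pass plus a matrix-vector product, which is what keeps repeated searches fast on-device.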
About CLIP-Finder2
fguzman82/CLIP-Finder2
CLIP-Finder enables semantic offline search of gallery photos using natural-language descriptions or the camera. Built on Apple's MobileCLIP-S0 architecture, it targets fast on-device performance and accurate media retrieval.