Queryable vs. CLIP-Finder2

Both tools are independent implementations of semantic image search on iOS using CLIP-family models (OpenAI's CLIP and Apple's MobileCLIP). They are direct competitors offering similar functionality with different model choices and user interfaces, rather than complementary components.

Score comparison:

                 Queryable        CLIP-Finder2
  Overall        49 (Emerging)    38 (Emerging)
  Maintenance    0/25             0/25
  Adoption       10/25            9/25
  Maturity       16/25            16/25
  Community      23/25            13/25
  Stars          2,924            90
  Forks          450              11
  Commits (30d)  0                0
  Language       Swift            Swift
  License        MIT              MIT

Both projects are flagged Stale (6 months), with no published package and no known dependents.

About Queryable

mazzzystar/Queryable

Run OpenAI's CLIP and Apple's MobileCLIP models on iOS to search photos.

Implements dual-encoder architecture with separate image and text encoders exported as Core ML models, enabling semantic similarity matching through vector comparison rather than keyword matching. Processes photo libraries entirely on-device using Apple's optimized MobileCLIP model, with pre-computed image embeddings cached locally to minimize latency on repeated queries. Targets iOS via Xcode and Core ML framework, with community ports available for Android and macOS.
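The dual-encoder search step described above reduces to ranking cached image embeddings by cosine similarity against a query embedding. The sketch below illustrates that ranking in plain Swift; the function and type names are hypothetical and not taken from either app's codebase, and in the real apps the embeddings would be produced by the Core ML image and text encoders rather than hard-coded arrays.

```swift
import Foundation

// Cosine similarity between two embedding vectors of equal length.
func cosineSimilarity(_ a: [Float], _ b: [Float]) -> Float {
    precondition(a.count == b.count, "embeddings must have the same dimension")
    var dot: Float = 0, normA: Float = 0, normB: Float = 0
    for i in a.indices {
        dot += a[i] * b[i]
        normA += a[i] * a[i]
        normB += b[i] * b[i]
    }
    let denom = sqrt(normA) * sqrt(normB)
    return denom > 0 ? dot / denom : 0
}

/// Rank cached image embeddings (photo ID -> embedding) against a query
/// embedding and return the top-k photo IDs. In a CLIP-style pipeline the
/// query embedding comes from the text encoder, and the cache holds
/// pre-computed image-encoder outputs.
func topMatches(query: [Float], cache: [String: [Float]], k: Int) -> [String] {
    cache
        .map { (id: $0.key, score: cosineSimilarity(query, $0.value)) }
        .sorted { $0.score > $1.score }
        .prefix(k)
        .map { $0.id }
}
```

Pre-computing and caching the image embeddings is what keeps repeated queries fast: only the short text query is encoded at search time, and ranking is a linear scan of dot products.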

About CLIP-Finder2

fguzman82/CLIP-Finder2

CLIP-Finder enables semantic offline search of gallery photos using natural-language descriptions or live camera input as the query. It is built on Apple's MobileCLIP-S0 model, chosen for fast on-device inference and accurate media retrieval.

Scores updated daily from GitHub, PyPI, and npm data.