mazzzystar/Queryable
Run OpenAI's CLIP and Apple's MobileCLIP models on iOS to search photos.
Implements dual-encoder architecture with separate image and text encoders exported as Core ML models, enabling semantic similarity matching through vector comparison rather than keyword matching. Processes photo libraries entirely on-device using Apple's optimized MobileCLIP model, with pre-computed image embeddings cached locally to minimize latency on repeated queries. Targets iOS via Xcode and Core ML framework, with community ports available for Android and macOS.
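The retrieval step described above can be sketched as a nearest-neighbor search: the text encoder maps a query to a vector, which is compared by cosine similarity against precomputed image embeddings. The sketch below is illustrative only; the array shapes, names, and random stand-in vectors are assumptions, not Queryable's actual Swift/Core ML code.

```python
import numpy as np

def cosine_similarity(query: np.ndarray, matrix: np.ndarray) -> np.ndarray:
    """Cosine similarity between one query vector and each row of a matrix."""
    query = query / np.linalg.norm(query)
    matrix = matrix / np.linalg.norm(matrix, axis=1, keepdims=True)
    return matrix @ query

# Hypothetical cached image embeddings (N photos x D dims), standing in for
# the on-device cache Queryable builds from its image encoder.
rng = np.random.default_rng(0)
image_embeddings = rng.normal(size=(1000, 512))

# Stand-in for the text encoder's output for the user's query.
text_embedding = rng.normal(size=512)

# Rank photos by similarity; the top-k indices are the candidate matches.
scores = cosine_similarity(text_embedding, image_embeddings)
top_k = np.argsort(scores)[::-1][:5]
```

Because the image embeddings are computed once and cached, each new query costs only one text-encoder pass plus this vector comparison, which is what keeps repeated searches fast on-device.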
2,924 stars. No commits in the last 6 months.
Stars: 2,924
Forks: 450
Language: Swift
License: MIT
Category:
Last pushed: Jan 04, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/embeddings/mazzzystar/Queryable"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
unum-cloud/UForm
Pocket-Sized Multimodal AI for content understanding and generation across multilingual texts,...
rom1504/clip-retrieval
Easily compute clip embeddings and build a clip retrieval system with them
Ubaida-M-Yusuf/Makimus-AI
AI-powered media search — find images and videos using natural language or visual queries
s-emanuilov/litepali
LitePali is a minimal, efficient implementation of ColPali for image retrieval and indexing,...
HEGOM61ita/OffGallery
AI cataloger for photographic images · Lightroom-compatible: automatic tags, metadata...