mazzzystar/Queryable

Run OpenAI's CLIP and Apple's MobileCLIP models on iOS to search photos.

Score: 49 / 100 (Emerging)

Implements a dual-encoder architecture with separate image and text encoders exported as Core ML models, enabling semantic similarity matching through vector comparison rather than keyword matching. Processes photo libraries entirely on-device using Apple's optimized MobileCLIP model, with pre-computed image embeddings cached locally to minimize latency on repeated queries. Targets iOS via Xcode and the Core ML framework, with community ports available for Android and macOS.
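In this design, answering a query reduces to encoding the query text once and comparing the resulting vector against the cached image embeddings. The Swift sketch below illustrates that comparison step only; the function and parameter names are illustrative rather than the project's actual API, and it assumes embeddings are kept in memory as [Float] arrays keyed by photo identifier.

import Foundation

// Dot product of two equal-length embedding vectors.
func dot(_ a: [Float], _ b: [Float]) -> Float {
    var sum: Float = 0
    for i in 0..<min(a.count, b.count) { sum += a[i] * b[i] }
    return sum
}

// Cosine similarity: dot product divided by the product of the vector norms.
func cosineSimilarity(_ a: [Float], _ b: [Float]) -> Float {
    dot(a, b) / (dot(a, a).squareRoot() * dot(b, b).squareRoot())
}

// Rank cached image embeddings against a text-query embedding,
// returning photo identifiers ordered from best to worst match.
func rankPhotos(queryEmbedding: [Float], cachedEmbeddings: [String: [Float]]) -> [String] {
    cachedEmbeddings
        .map { (id: $0.key, score: cosineSimilarity(queryEmbedding, $0.value)) }
        .sorted { $0.score > $1.score }
        .map { $0.id }
}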

2,924 stars. No commits in the last 6 months.

Flags: Stale (6 months) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 23 / 25
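The overall score matches the sum of the four category scores: 0 + 10 + 16 + 23 = 49.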


Stars: 2,924
Forks: 450
Language: Swift
License: MIT
Last pushed: Jan 04, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/embeddings/mazzzystar/Queryable"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
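For app or script integration, the same endpoint can be called from Swift. This is a minimal sketch assuming a command-line context; the response schema is not documented above, so it simply prints the raw JSON body.

import Foundation

// Fetch the quality data for mazzzystar/Queryable from the public endpoint.
let url = URL(string: "https://pt-edge.onrender.com/api/v1/quality/embeddings/mazzzystar/Queryable")!

let task = URLSession.shared.dataTask(with: url) { data, _, error in
    if let error = error {
        print("Request failed: \(error)")
    } else if let data = data, let body = String(data: data, encoding: .utf8) {
        print(body)  // Raw JSON; parse with Codable once the schema is known.
    }
}
task.resume()

// Keep the command-line process alive until the request completes.
RunLoop.main.run(until: Date().addingTimeInterval(10))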