eastriverlee/LLM.swift
LLM.swift is a simple, readable library for interacting with large language models locally on macOS, iOS, watchOS, tvOS, and visionOS.
It lets Apple developers run LLMs directly on their users' devices: point it at a local model file (or a model hosted on Hugging Face), integrate it into an app, and feed it user input to generate AI responses. It is aimed at app developers who want to add offline AI capabilities to their Apple applications.
Use this if you are an Apple developer who wants to embed large language models directly into macOS, iOS, or other Apple OS apps and run inference locally.
Not ideal if you are not building for Apple platforms, or if you want to integrate with cloud-based LLM APIs rather than run models on device.
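For context, using the library generally follows a load-then-generate pattern. The sketch below is illustrative, not verbatim from the repo: the `HuggingFaceModel` initializer, the `preprocess` and `getCompletion(from:)` calls, and the example model name and quantization level are based on the project's README at one point in time and may have changed, so verify against the current documentation before relying on them.

```swift
import LLM  // the LLM.swift package, added via Swift Package Manager

// Illustrative sketch: fetch a quantized GGUF model from Hugging Face
// and generate a reply entirely on device. The repo name, quantization
// level, and chat template here are examples, not recommendations.
let model = HuggingFaceModel(
    "TheBloke/TinyLlama-1.1B-Chat-v0.3-GGUF",   // example model repo
    .Q2_K,                                       // smaller quantization: faster, less accurate
    template: .chatML("You are a helpful assistant.")
)
if let bot = try? await LLM(from: model) {
    // Wrap the raw input in the model's chat template, then generate.
    let question = bot.preprocess("What is the capital of France?", [])
    let answer = await bot.getCompletion(from: question)
    print(answer)
}
```

The same flow works with a bundled `.gguf` file instead of a Hugging Face download, which is the typical choice when shipping a fully offline app.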
Stars: 829
Forks: 111
Language: C++
License: MIT
Category:
Last pushed: Dec 06, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/eastriverlee/LLM.swift"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000/day.
Related models
- containers/ramalama: RamaLama is an open-source developer tool that simplifies the local serving of AI models from...
- runpod-workers/worker-vllm: The RunPod worker template for serving our large language model endpoints. Powered by vLLM.
- beehive-lab/GPULlama3.java: GPU-accelerated Llama3.java inference in pure Java using TornadoVM.
- gitkaz/mlx_gguf_server: This is a FastAPI based LLM server. Load multiple LLM models (MLX or llama.cpp) simultaneously...
- Scottcjn/llama-cpp-power8: AltiVec/VSX optimized llama.cpp for IBM POWER8