eleiton/ollama-intel-arc

Use Intel Arc series GPUs to run Ollama, Stable Diffusion, Whisper, and Open WebUI for image generation, speech recognition, and interaction with Large Language Models (LLMs).

Score: 47 / 100 (Emerging)

Leverages Intel Extension for PyTorch (IPEX) and IPEX-LLM to optimize all components—Ollama, ComfyUI, SD.Next, and Whisper—for Intel Arc GPU acceleration on Linux. Deployed as modular Docker containers via Podman Compose, enabling independent scaling of LLM inference, image generation, and ASR workloads while maintaining a unified Open WebUI frontend for cross-service interaction.


No package. No dependents.

Maintenance: 13 / 25
Adoption: 10 / 25
Maturity: 9 / 25
Community: 15 / 25


Stars: 277
Forks: 28
Language: Dockerfile
License: Apache-2.0
Last pushed: Mar 16, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/eleiton/ollama-intel-arc"

Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000 requests/day.
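For scripted access, the same endpoint can be built programmatically. A minimal Python sketch, assuming only the URL pattern visible in the curl example above (the `diffusion` segment is assumed to be a fixed category slug; the response schema is not documented here, so no fields are parsed):

```python
# Build the pt-edge quality-API URL for a given repository.
# Path pattern taken from the curl example above; "diffusion" is
# assumed to be a fixed category segment of the endpoint.
BASE = "https://pt-edge.onrender.com/api/v1/quality/diffusion"

def quality_url(owner: str, repo: str) -> str:
    """Return the API endpoint URL for the given GitHub owner/repo."""
    return f"{BASE}/{owner}/{repo}"

print(quality_url("eleiton", "ollama-intel-arc"))
# → https://pt-edge.onrender.com/api/v1/quality/diffusion/eleiton/ollama-intel-arc
```

The resulting URL can then be fetched with any HTTP client (e.g. `curl` as shown above, or `urllib.request` in Python) to retrieve the JSON payload.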