eleiton/ollama-intel-arc
Make use of an Intel Arc Series GPU to run Ollama, Stable Diffusion, Whisper, and Open WebUI for image generation, speech recognition, and interaction with Large Language Models (LLMs).
Leverages Intel Extension for PyTorch (IPEX) and IPEX-LLM to optimize all components—Ollama, ComfyUI, SD.Next, and Whisper—for Intel Arc GPU acceleration on Linux. Deployed as modular Docker containers via Podman Compose, enabling independent scaling of LLM inference, image generation, and ASR workloads while maintaining a unified Open WebUI frontend for cross-service interaction.
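The modular container layout described above can be sketched as a minimal Podman Compose file. This is a hedged illustration only: the image tags, ports, and device mappings below are assumptions for the sketch, not the repository's actual configuration.

```yaml
# Illustrative sketch, not the repo's real compose file:
# image tags, ports, and env vars are assumed.
services:
  ollama:
    image: intelanalytics/ipex-llm-inference-cpp-xpu:latest  # hypothetical IPEX-LLM image
    devices:
      - /dev/dri:/dev/dri        # expose the Intel Arc GPU to the container
    ports:
      - "11434:11434"            # default Ollama API port
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434  # point the UI at the Ollama service
    ports:
      - "3000:8080"
    depends_on:
      - ollama
```

Additional containers for ComfyUI, SD.Next, and Whisper would follow the same pattern, each scaled or restarted independently while Open WebUI remains the single frontend.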
Stars: 277
Forks: 28
Language: Dockerfile
License: Apache-2.0
Category:
Last pushed: Mar 16, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/eleiton/ollama-intel-arc"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
LykosAI/StabilityMatrix
Multi-Platform Package Manager for Stable Diffusion
AbdBarho/stable-diffusion-webui-docker
Easy Docker setup for Stable Diffusion with user-friendly UI
ashleykleynhans/stable-diffusion-docker
Docker image for Stable Diffusion WebUI with ControlNet, After Detailer, Dreambooth, Deforum and...
runpod-workers/worker-a1111
Automatic1111 serverless worker.
mrhan1993/Fooocus-API
FastAPI powered API for Fooocus