mspronesti/llm.sycl
llm.c, but in SYCL/Intel oneAPI!
A port of llm.c for machine learning engineers and researchers who want to accelerate large language model training. It takes the existing GPT-2 implementation and enables it to run on SYCL-compatible hardware, including Intel, NVIDIA, and AMD GPUs, making the training loop portable across GPU vendors via SYCL/Intel oneAPI.
No commits in the last 6 months.
Use this if you need to train large language models like GPT-2 more efficiently across different GPU architectures, particularly those supported by SYCL/Intel oneAPI.
Not ideal if you are looking for a high-level API for general machine learning tasks or if you are not comfortable with command-line compilation and execution of deep learning kernels.
Stars: 8
Forks: —
Language: C++
License: MIT
Category: —
Last pushed: Aug 05, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/mspronesti/llm.sycl"
Open to everyone: 100 requests/day with no key needed; a free key raises the limit to 1,000/day.
Higher-rated alternatives
Blaizzy/mlx-vlm
MLX-VLM is a package for inference and fine-tuning of Vision Language Models (VLMs) on your Mac...
b4rtaz/distributed-llama
Distributed LLM inference. Connect home devices into a powerful cluster to accelerate LLM...
armbues/SiLLM
SiLLM simplifies the process of training and running Large Language Models (LLMs) on Apple...
microsoft/batch-inference
Dynamic batching library for Deep Learning inference. Tutorials for LLM, GPT scenarios.
armbues/SiLLM-examples
Examples for using the SiLLM framework for training and running Large Language Models (LLMs) on...