Yash-1335/qwen600
🚀 A fast CUDA inference engine for the Qwen3-0.6B model, built with minimal dependencies for learning and practice.
Stars: —
Forks: 1
Language: CUDA
License: MIT
Category: —
Last pushed: Mar 19, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/Yash-1335/qwen600"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.
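The curl call above can also be reproduced from Python. This is a minimal sketch, not an official client: the endpoint path is taken from the curl example, while the JSON response assumption and the function names here (`quality_url`, `fetch_quality`) are illustrative only; the key mechanism for the higher rate limit is not documented on this page.

```python
# Sketch of querying the pt-edge quality endpoint from Python.
# Assumes the API returns a JSON body; schema is not documented here.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"


def quality_url(owner: str, repo: str) -> str:
    """Build the per-repository quality endpoint URL (hypothetical helper)."""
    return f"{BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the response, assuming it is JSON."""
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Network call; counts against the 100 requests/day anonymous limit.
    print(fetch_quality("Yash-1335", "qwen600"))
```

Swapping the owner/repo arguments lets the same helper query any repository listed on the site, e.g. `fetch_quality("yassa9", "qwen600")`.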
Higher-rated alternatives
QwenLM/Qwen
The official repo of Qwen (通义千问) chat & pretrained large language model proposed by Alibaba Cloud.
LLM-Red-Team/qwen-free-api
🚀...
QwenLM/Qwen-VL
The official repo of Qwen-VL (通义千问-VL) chat & pretrained large vision language model proposed by...
willbnu/Qwen-3.5-16G-Vram-Local
Configs, launchers, benchmarks, and tooling for running Qwen3.5 GGUF models locally with...
yassa9/qwen600
Static suckless single batch CUDA-only qwen3-0.6B mini inference engine