Pogud/MegaQwen
🚀 Achieve faster Qwen3-0.6B inference with the MegaQwen CUDA megakernel, delivering 531 tok/s decode on an RTX 3090, 3.9× faster than HuggingFace.
Stars: —
Forks: —
Language: Cuda
License: —
Category: —
Last pushed: Mar 13, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/Pogud/MegaQwen"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.
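The endpoint path follows an `owner/repo` pattern, as the curl example above shows. A minimal Python sketch that builds the URL and fetches the raw JSON (the response schema is not documented on this page, so the sketch makes no assumptions about its fields):

```python
# Build and query the pt-edge quality endpoint for a given repo.
# NOTE: the response schema is undocumented here, so we return the
# parsed JSON as-is rather than assuming any particular fields.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"

def quality_url(owner: str, repo: str) -> str:
    """URL for a repo, matching the owner/repo pattern in the curl example."""
    return f"{BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and parse the JSON response for owner/repo."""
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    print(quality_url("Pogud", "MegaQwen"))
```

With the keyless tier capped at 100 requests/day, cache responses locally if you poll many repositories.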
Higher-rated alternatives
QwenLM/Qwen
The official repo of Qwen (通义千问, "Tongyi Qianwen"), the chat & pretrained large language model proposed by Alibaba Cloud.
QwenLM/Qwen-VL
The official repo of Qwen-VL (通义千问-VL), the chat & pretrained large vision-language model proposed by...
LLM-Red-Team/qwen-free-api
🚀...
willbnu/Qwen-3.5-16G-Vram-Local
Configs, launchers, benchmarks, and tooling for running Qwen3.5 GGUF models locally with...
Architect2040/metalQwen3
💻 Implement Qwen3 transformer model on macOS using Metal GPU for accelerated, efficient...