QwenLM/Qwen
The official repo of the Qwen (通义千问) chat and pretrained large language models, proposed by Alibaba Cloud.
Supports multiple quantization schemes (Int4/Int8, GPTQ) and context lengths up to 32K tokens, with chat variants fine-tuned via SFT and RLHF for tool use, code generation, and agent capabilities. Models range from 1.8B to 72B parameters across base and instruction-tuned variants, with Q-LoRA fine-tuning support. Available on Hugging Face and ModelScope with deployment examples via vLLM and FastChat, plus OpenAI-compatible API integration.
Stars
20,703
Forks
1,745
Language
Python
License
Apache-2.0
Category
Last pushed
Mar 05, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/QwenLM/Qwen"
Open to everyone: 100 requests/day with no key needed; a free key raises the limit to 1,000/day.
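The same endpoint can also be queried from Python's standard library; a minimal sketch, where `quality_url` and `fetch_quality` are illustrative names and the response is assumed to be JSON (the API's schema is not documented here):

```python
import json
from urllib import request

# Base path taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"


def quality_url(owner: str, repo: str) -> str:
    """Build the quality-API URL for a given GitHub owner/repo pair."""
    return f"{BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch the quality record for a repo and decode the JSON body.

    Assumes the endpoint returns a JSON object; adjust parsing if the
    real schema differs.
    """
    with request.urlopen(quality_url(owner, repo), timeout=10) as resp:
        return json.loads(resp.read().decode("utf-8"))


print(quality_url("QwenLM", "Qwen"))
```

Keeping URL construction separate from the network call makes the helper easy to reuse for the related repos listed below.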
Related tools
LLM-Red-Team/qwen-free-api
🚀...
QwenLM/Qwen-VL
The official repo of Qwen-VL (通义千问-VL) chat & pretrained large vision language model proposed by...
willbnu/Qwen-3.5-16G-Vram-Local
Configs, launchers, benchmarks, and tooling for running Qwen3.5 GGUF models locally with...
QwenLM/qwen.cpp
C++ implementation of Qwen-LM
yassa9/qwen600
Static suckless single batch CUDA-only qwen3-0.6B mini inference engine