miguefuentes1985/vllm-qwen3.5-nvfp4-5090
Run Qwen3.5-35B MoE model on RTX 5090 with vLLM using NVFP4 quantization for fast, efficient text generation and extended context length support.
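The serving setup the description implies can be sketched with vLLM's `serve` CLI. A minimal sketch, assuming an NVFP4 checkpoint and a 32K context window; the model ID, quantization method, and context length below are illustrative assumptions, not values taken from this repo:

```shell
# Sketch: serve an NVFP4-quantized Qwen3.5 MoE checkpoint with vLLM on one RTX 5090.
# The model ID (Qwen/Qwen3.5-35B-NVFP4), the quantization method, and --max-model-len
# are hypothetical; substitute the launcher settings this repo actually ships.
vllm serve Qwen/Qwen3.5-35B-NVFP4 \
  --quantization modelopt \
  --max-model-len 32768 \
  --gpu-memory-utilization 0.90
```

NVFP4 checkpoints exported with NVIDIA Model Optimizer usually carry their quantization config in the checkpoint itself, in which case vLLM can detect it and the `--quantization` flag may be omitted.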
Stars: —
Forks: —
Language: Jinja
License: —
Category: —
Last pushed: Mar 19, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/miguefuentes1985/vllm-qwen3.5-nvfp4-5090"
Open to everyone: 100 requests/day with no key required; a free key raises the limit to 1,000/day.
Higher-rated alternatives
QwenLM/Qwen
The official repo of Qwen (通义千问) chat & pretrained large language model proposed by Alibaba Cloud.
LLM-Red-Team/qwen-free-api
🚀...
QwenLM/Qwen-VL
The official repo of Qwen-VL (通义千问-VL) chat & pretrained large vision language model proposed by...
willbnu/Qwen-3.5-16G-Vram-Local
Configs, launchers, benchmarks, and tooling for running Qwen3.5 GGUF models locally with...
yassa9/qwen600
Static suckless single batch CUDA-only qwen3-0.6B mini inference engine