nshkrdotcom/vllm

vLLM: a high-throughput, memory-efficient LLM inference engine with PagedAttention, continuous batching, CUDA/HIP optimizations, quantization (GPTQ, AWQ, INT4, INT8, FP8), tensor and pipeline parallelism, an OpenAI-compatible API, multi-GPU/TPU/Neuron support, prefix caching, and multi-LoRA serving.

Overall score: 19 / 100 (Experimental)
No package published; no dependents.

Score breakdown:
- Maintenance: 10 / 25
- Adoption: 0 / 25
- Maturity: 9 / 25
- Community: 0 / 25

The four 25-point categories sum to the overall score: 10 + 0 + 9 + 0 = 19 / 100.

Stars:
Forks:
Language: Elixir
License: MIT
Last pushed: Feb 07, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/nshkrdotcom/vllm"

Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.
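
A minimal Python sketch of the same request is below. It assumes only that the endpoint returns JSON; the payload's field names are not documented on this page, so the sketch prints the raw response rather than hard-coding keys.

import json
import urllib.request

# Quality-score endpoint from the curl example above.
URL = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/nshkrdotcom/vllm"

def fetch_quality_report(url: str = URL) -> dict:
    # Fetch the endpoint and parse the body as JSON
    # (assumption: the API returns a JSON document).
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)

if __name__ == "__main__":
    report = fetch_quality_report()
    # Pretty-print the raw payload; inspect the output to learn
    # the actual field names before relying on them.
    print(json.dumps(report, indent=2))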