asprenger/ray_vllm_inference
A simple service that integrates vLLM with Ray Serve for fast and scalable LLM serving.
Score: 39 / 100 (Emerging)
No commits in the last 6 months (flagged "Stale 6m"). No package published and no known dependents.
Maintenance: 0 / 25
Adoption: 9 / 25
Maturity: 16 / 25
Community: 14 / 25
(The overall score is the sum of the four subscores: 0 + 9 + 16 + 14 = 39.)
Stars: 78
Forks: 11
Language: Python
License: Apache-2.0
Category: —
Last pushed: Apr 06, 2024
Commits (30d): 0
Get this data via API:

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/asprenger/ray_vllm_inference"

Open to everyone: 100 requests/day with no key needed; a free key raises the limit to 1,000 requests/day.
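For programmatic access, here is a minimal Python sketch against the same endpoint. It assumes the endpoint returns a JSON body; the response fields are not documented here, so the code prints the whole payload rather than assuming specific keys.

import requests

# Hypothetical sketch: fetch the quality report for this repository.
# The response is assumed to be JSON; its field names are not documented here.
url = "https://pt-edge.onrender.com/api/v1/quality/transformers/asprenger/ray_vllm_inference"
resp = requests.get(url, timeout=10)
resp.raise_for_status()  # fail loudly on HTTP errors (e.g. rate limiting)
print(resp.json())       # inspect the full payload instead of guessing keys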
Higher-rated alternatives

PaddlePaddle/FastDeploy (76): High-performance inference and deployment toolkit for LLMs and VLMs based on PaddlePaddle.
mlc-ai/mlc-llm (65): Universal LLM deployment engine with ML compilation.
ServerlessLLM/ServerlessLLM (60): Serverless LLM serving for everyone.
skyzh/tiny-llm (57): A course on LLM inference serving on Apple Silicon for systems engineers: build a tiny...
AXERA-TECH/ax-llm (56): Explore LLM model deployment based on AXera's AI chips.