mlc-llm and llm-deploy

These tools appear to be **competitors**: both target LLM deployment and serving. MLC LLM offers a universal engine built on ML compilation for broad hardware coverage, while `llm-deploy` focuses on specific inference backends such as TensorRT-LLM and vLLM.

| | mlc-llm | llm-deploy |
| --- | --- | --- |
| Overall score | 65 (Established) | 24 (Experimental) |
| Maintenance | 20/25 | 13/25 |
| Adoption | 10/25 | 6/25 |
| Maturity | 16/25 | 1/25 |
| Community | 19/25 | 4/25 |
| Stars | 22,185 | 22 |
| Forks | 1,960 | 1 |
| Downloads | — | — |
| Commits (30d) | 15 | 0 |
| Language | Python | Python |
| License | Apache-2.0 | None |
| Package | None published | None published |
| Dependents | None | None |

About mlc-llm

mlc-ai/mlc-llm

Universal LLM Deployment Engine with ML Compilation

Compiles LLMs to optimized machine code via TVM's ML compilation framework, then executes them through MLCEngine—a unified inference runtime supporting diverse backends (CUDA, ROCm, Metal, WebGPU, OpenCL) across GPUs, mobile devices, and browsers. Exposes OpenAI-compatible REST and language-specific APIs (Python, JavaScript, iOS, Android) from the same compiled engine, enabling model-agnostic deployment without framework lock-in.
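
As a concrete illustration of that unified API, here is a minimal sketch of running a chat completion through MLCEngine's Python interface, following the pattern in MLC LLM's quickstart. The model ID below is an illustrative prebuilt MLC model, and the snippet assumes the `mlc-llm` package is installed with a working GPU backend.

```python
from mlc_llm import MLCEngine

# Illustrative prebuilt model from the mlc-ai Hugging Face org.
model = "HF://mlc-ai/Llama-3-8B-Instruct-q4f16_1-MLC"

# MLCEngine loads the compiled model and selects an available
# backend (CUDA, ROCm, Metal, ...) at runtime.
engine = MLCEngine(model)

# The engine exposes an OpenAI-style chat completion interface.
for response in engine.chat.completions.create(
    messages=[{"role": "user", "content": "Summarize what ML compilation is."}],
    model=model,
    stream=True,
):
    for choice in response.choices:
        print(choice.delta.content or "", end="", flush=True)
print()

engine.terminate()
```

The same compiled artifact can also back a standalone OpenAI-compatible REST server (started via `mlc_llm serve`), so existing OpenAI clients can point at it without code changes.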

About llm-deploy

lix19937/llm-deploy

AI infrastructure for LLM inference, covering TensorRT-LLM and vLLM


Scores are updated daily from GitHub, PyPI, and npm data.