rtp-llm and PowerInfer

| Metric         | rtp-llm                       | PowerInfer                    |
| -------------- | ----------------------------- | ----------------------------- |
| Overall score  | 70 (Verified)                 | 54 (Established)              |
| Maintenance    | 22/25                         | 10/25                         |
| Adoption       | 10/25                         | 10/25                         |
| Maturity       | 16/25                         | 16/25                         |
| Community      | 22/25                         | 18/25                         |
| Stars          | 1,065                         | 8,808                         |
| Forks          | 159                           | 501                           |
| Downloads      |                               |                               |
| Commits (30d)  | 163                           | 0                             |
| Language       | Cuda                          | C++                           |
| License        | Apache-2.0                    | MIT                           |
| Package        | none published, no dependents | none published, no dependents |

About rtp-llm

alibaba/rtp-llm

RTP-LLM: Alibaba's high-performance LLM inference engine for diverse applications.

This is a high-performance engine for deploying large language models (LLMs) in real-world applications. It takes a trained LLM, including multimodal models that accept images as well as text, and efficiently generates responses for large numbers of concurrent users. It is designed for engineers and AI product managers who run LLM-powered services, such as AI assistants or smart search features, at scale.

AI-application-deployment LLM-serving AI-platform-operations conversational-AI enterprise-search
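To make the serving workflow concrete, here is a minimal sketch of the kind of OpenAI-style chat-completion request an LLM serving engine like this typically accepts. This is purely illustrative: the model name and endpoint path are assumptions, not rtp-llm's documented API, and no request is actually sent.

```python
import json

def build_chat_request(model: str, prompt: str, max_tokens: int = 128) -> str:
    """Serialize an OpenAI-style chat-completion request body as JSON.

    A client would POST this body to the engine's HTTP endpoint
    (e.g. a hypothetical /v1/chat/completions route).
    """
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }
    return json.dumps(body)

# "qwen-7b-chat" is a placeholder model name, not a verified deployment.
payload = build_chat_request("qwen-7b-chat", "Summarize this ticket for me.")
print(payload)
```

A real deployment would send this payload with an HTTP client and stream the generated tokens back to each user; the serving engine's job is to batch many such requests onto the GPU efficiently.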

About PowerInfer

Tiiny-AI/PowerInfer

High-speed Large Language Model Serving for Local Deployment

PowerInfer helps you run large AI language models directly on your personal computer using a single consumer-grade graphics card, making them faster and more accessible. It takes a model file and your input, then rapidly generates responses, allowing individuals or small businesses to use powerful AI locally without needing expensive server hardware. This is ideal for researchers, developers, or anyone needing to run LLMs privately and quickly on their own machine.

AI-on-device local-LLM-deployment personal-AI consumer-AI edge-AI
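PowerInfer's speed on a single consumer GPU comes from exploiting activation sparsity: a small set of "hot" neurons fires on most inputs and is kept on the GPU, while the long tail of "cold" neurons stays on the CPU. The sketch below is a conceptual illustration of that hot/cold partitioning, not PowerInfer's actual implementation; the activation statistics and GPU budget are simulated.

```python
import random

# Simulate per-neuron activation frequencies with a heavy-tailed (power-law-like)
# distribution, mimicking the skew PowerInfer exploits. Values are made up.
random.seed(0)
freqs = [random.paretovariate(2.0) for _ in range(1000)]

def partition_neurons(freqs: list[float], gpu_budget: int):
    """Place the most frequently activated neurons on the GPU, the rest on CPU.

    Returns (hot, cold) as lists of neuron indices, hottest first.
    """
    order = sorted(range(len(freqs)), key=lambda i: freqs[i], reverse=True)
    hot = order[:gpu_budget]    # resident on the GPU
    cold = order[gpu_budget:]   # offloaded to the CPU
    return hot, cold

hot, cold = partition_neurons(freqs, gpu_budget=100)
print(f"GPU-resident neurons: {len(hot)}, CPU-resident neurons: {len(cold)}")
```

Because activations are heavy-tailed, a small GPU budget covers most of the work at inference time, which is why a single consumer card suffices for models that would otherwise need server hardware.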

Scores updated daily from GitHub, PyPI, and npm data.