rookiemann/llama-cpp-python-py314-cuda131-wheel
GPU-accelerated llama-cpp-python 0.3.16 wheel for Python 3.14 (CUDA 13.1, Windows)
Overall score: 21 / 100 (Experimental)
No package published; no dependents.

Maintenance: 10 / 25
Adoption: 0 / 25
Maturity: 11 / 25
Community: 0 / 25
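The overall score above can be reproduced from the four category scores. A minimal sketch, assuming the overall rating is simply the sum of the four 25-point sub-scores (the page shows the values but does not document the formula):

```python
# Sub-scores as shown on the page; each category is out of 25.
subscores = {
    "Maintenance": 10,
    "Adoption": 0,
    "Maturity": 11,
    "Community": 0,
}

# Assumption: overall = plain sum of the four categories (max 4 * 25 = 100).
overall = sum(subscores.values())
print(f"{overall} / 100")
```

With the values on this page the sum is 21, matching the displayed 21 / 100, which is consistent with (but does not prove) the plain-sum assumption.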
Stars: —
Forks: —
Language: —
License: MIT
Category: —
Last pushed: Feb 07, 2026
Commits (30d): 0
Get this data via the API:
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/rookiemann/llama-cpp-python-py314-cuda131-wheel"
Open to everyone: 100 requests/day with no key; a free key raises the limit to 1,000/day.
Higher-rated alternatives:
- beehive-lab/GPULlama3.java (51): GPU-accelerated Llama3.java inference in pure Java using TornadoVM.
- gitkaz/mlx_gguf_server (50): A FastAPI-based LLM server that loads multiple LLM models (MLX or llama.cpp) simultaneously...
- srgtuszy/llama-cpp-swift (44): Swift bindings for the llama.cpp library.
- JackZeng0208/llama.cpp-android-tutorial (40): llama.cpp tutorial for Android phones.
- awinml/llama-cpp-python-bindings (37): Run fast LLM inference using llama.cpp in Python.