llama-cpp-python-wheels and llama-cpp-python-py314-cuda131-wheel

                    llama-cpp-python-wheels    llama-cpp-python-py314-cuda131-wheel
Maintenance         6/25                       10/25
Adoption            7/25                       0/25
Maturity            13/25                      11/25
Community           8/25                       0/25
Stars               40                         -
Forks               3                          -
Downloads           -                          -
Commits (30d)       0                          0
Language            -                          -
License             MIT                        MIT
Package             none                       none
Dependents          none                       none

About llama-cpp-python-wheels

dougeeai/llama-cpp-python-wheels

Pre-built wheels for llama-cpp-python across platforms and CUDA versions

This project provides pre-built wheels for llama-cpp-python, the Python bindings for llama.cpp used to run large language models (LLMs) efficiently on local hardware. It spares you the complex, error-prone step of compiling the library yourself: you get ready-to-use installation files matched to your NVIDIA GPU, CUDA version, and Python version, so you can quickly deploy and experiment with LLMs. It is aimed at AI developers, researchers, and data scientists who want to run powerful LLMs on their own machines.
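Picking the right wheel comes down to matching the tags embedded in its filename against your interpreter and platform. As a minimal sketch, the snippet below splits a wheel filename into its standard components (per the wheel filename convention: distribution, version, python tag, ABI tag, platform tag) and checks the python tag against the running interpreter. The example filename is an assumption modeled on the repos' descriptions, not a file these projects are confirmed to ship.

```python
import sys

def parse_wheel_tags(filename):
    """Split a wheel filename into its standard components:
    (distribution, version, python tag, abi tag, platform tag).
    Wheel distribution names are normalized to underscores, so the
    first two hyphen-separated fields are name and version."""
    stem = filename[: -len(".whl")]
    parts = stem.split("-")
    dist, version = parts[0], parts[1]
    # The last three fields are always python tag, abi tag, platform tag.
    py_tag, abi_tag, plat_tag = parts[-3], parts[-2], parts[-1]
    return dist, version, py_tag, abi_tag, plat_tag

def matches_interpreter(filename):
    """True if the wheel's python tag matches the running CPython."""
    _, _, py_tag, _, _ = parse_wheel_tags(filename)
    current = "cp{}{}".format(*sys.version_info[:2])
    return py_tag == current

# Hypothetical filename, assumed from the project descriptions
# (llama-cpp-python 0.3.16, Python 3.14, Windows x86-64):
name = "llama_cpp_python-0.3.16-cp314-cp314-win_amd64.whl"
print(parse_wheel_tags(name))
# -> ('llama_cpp_python', '0.3.16', 'cp314', 'cp314', 'win_amd64')
```

A wheel whose python tag does not match your interpreter (or whose platform tag does not match your OS and architecture) will be rejected by pip at install time, which is why these repos publish one wheel per Python/CUDA/platform combination.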

AI Development · Machine Learning Engineering · Large Language Models · GPU Acceleration · Python Development

About llama-cpp-python-py314-cuda131-wheel

rookiemann/llama-cpp-python-py314-cuda131-wheel

GPU-accelerated llama-cpp-python 0.3.16 wheel for Python 3.14 (CUDA 13.1, Windows)

Scores updated daily from GitHub, PyPI, and npm data.