llcuda/llcuda

CUDA 12-first backend inference for Unsloth on Kaggle — Optimized for small GGUF models (1B-5B) on dual Tesla T4 GPUs (15GB each, SM 7.5)

Quality score: 35 / 100 (Emerging)

This project helps data scientists, machine learning engineers, and researchers efficiently run small to medium-sized AI language models (1B-5B parameters) on Kaggle's dual Tesla T4 GPU environments. It takes a pre-trained or fine-tuned language model in GGUF format and provides a fast inference engine, outputting generated text responses. You would use this if you're working with AI models and need optimized performance and resource allocation on Kaggle.

Use this if you need to run small GGUF language models (1B-5B parameters) quickly and efficiently on Kaggle's dual Tesla T4 GPUs, especially if you also want to use one GPU for visualization while the other handles model inference.

Not ideal if you are working with very large language models (e.g., >70B parameters) or if you are not operating within a Kaggle dual Tesla T4 GPU environment.
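The split described above (one GPU for inference, one for visualization) can be sketched with the standard CUDA device-pinning mechanism. This is a generic CUDA technique, not llcuda's own API, which may expose its own GPU-selection options:

```shell
# Sketch only: CUDA_VISIBLE_DEVICES is the standard CUDA way to restrict
# which GPUs a process can see; llcuda's actual GPU-selection API may differ.
export CUDA_VISIBLE_DEVICES=0          # the inference process sees only GPU 0
echo "inference GPU(s): $CUDA_VISIBLE_DEVICES"
# A separate visualization process started with CUDA_VISIBLE_DEVICES=1
# would then have Kaggle's second T4 to itself.
```

Each process sees its assigned device as device 0, so no code changes are needed inside either process.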

Tags: AI-inference, Kaggle-competitions, machine-learning-operations, natural-language-processing, GPU-optimization
No package published; no dependents.
Maintenance: 10 / 25
Adoption: 4 / 25
Maturity: 13 / 25
Community: 8 / 25


Stars: 8
Forks: 1
Language: Jupyter Notebook
License: MIT
Last pushed: Feb 01, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/llcuda/llcuda"

Open to everyone: 100 requests/day with no API key; a free key raises the limit to 1,000/day.
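The endpoint follows a simple path pattern: list name, then repository owner, then repository name. A minimal sketch of building the URL for any repository — the `quality_url` function below is a hypothetical helper inferred from the single example above, not part of the API:

```shell
# Hypothetical helper: assemble the quality-endpoint URL for one repository.
# Only the list/owner/repo path pattern is taken from the curl example above.
quality_url() {
  local list_name="$1" owner="$2" repo="$3"
  echo "https://pt-edge.onrender.com/api/v1/quality/${list_name}/${owner}/${repo}"
}

quality_url transformers llcuda llcuda
# → https://pt-edge.onrender.com/api/v1/quality/transformers/llcuda/llcuda
```

The URL it prints can be passed straight to `curl` as in the example above.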