caiomadeira/llama2-psp
Llama 2 inference in C on the PlayStation Portable (PSP).
This project runs a compact large language model (LLM) directly on the PSP: you type a text prompt on the device and it generates a short story-style completion. It is aimed at retro gaming enthusiasts, hobbyists, and anyone curious about pushing classic portable hardware with modern AI.
No commits in the last 6 months.
Use this if you want to experiment with running a basic AI text generator on a PlayStation Portable for novelty or educational purposes.
Not ideal if you need a high-performance, feature-rich, or general-purpose AI text generation tool.
Stars: 20
Forks: —
Language: C++
License: —
Category: —
Last pushed: Aug 30, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/caiomadeira/llama2-psp"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000 requests/day.
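Beyond the raw curl call, the same endpoint can be queried programmatically. A minimal Python sketch, using only the standard library; the endpoint path comes from the curl example above, but the shape of the response body (assumed here to be JSON) is an assumption, not documented on this page:

```python
import json
import urllib.request

# Base path taken from the curl example on this page.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"

def quality_url(owner: str, repo: str) -> str:
    # Build the endpoint URL for a given GitHub owner/repo pair.
    return f"{API_BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    # Fetch the quality record. Assumes the API returns a JSON object;
    # the actual response schema is not shown on this page.
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)
```

For example, `fetch_quality("caiomadeira", "llama2-psp")` would hit the same URL as the curl command above.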
Higher-rated alternatives
beehive-lab/GPULlama3.java: GPU-accelerated Llama3.java inference in pure Java using TornadoVM.
gitkaz/mlx_gguf_server: A FastAPI-based LLM server that loads multiple LLM models (MLX or llama.cpp) simultaneously...
srgtuszy/llama-cpp-swift: Swift bindings for the llama-cpp library.
JackZeng0208/llama.cpp-android-tutorial: llama.cpp tutorial for Android phones.
awinml/llama-cpp-python-bindings: Run fast LLM inference using llama.cpp in Python.