thansen0/fastllm.cpp
A low-latency, fault-tolerant API for accessing LLMs, written in C++ using llama.cpp.
No commits in the last 6 months.
Stars: 11
Forks: —
Language: C++
License: Unlicense
Category:
Last pushed: Jun 14, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/thansen0/fastllm.cpp"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
Higher-rated alternatives
beehive-lab/GPULlama3.java
GPU-accelerated Llama3.java inference in pure Java using TornadoVM.
gitkaz/mlx_gguf_server
This is a FastAPI based LLM server. Load multiple LLM models (MLX or llama.cpp) simultaneously...
srgtuszy/llama-cpp-swift
Swift bindings for llama-cpp library
RhinoDevel/mt_llm
Pure C wrapper library to use llama.cpp with Linux and Windows as simple as possible.
JackZeng0208/llama.cpp-android-tutorial
llama.cpp tutorial on Android phone