withcatai/node-llama-cpp
Run AI models locally on your machine with Node.js bindings for llama.cpp. Enforce a JSON schema on the model output at the generation level.
1,942 stars and 4,219,393 monthly downloads. Used by 5 other packages. Actively maintained with 3 commits in the last 30 days. Available on npm.
Stars: 1,942
Forks: 176
Language: TypeScript
License: MIT
Last pushed: Mar 12, 2026
Monthly downloads: 4,219,393
Commits (30d): 3
Dependencies: 28
Reverse dependents: 5
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/withcatai/node-llama-cpp"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000/day.
Related projects
ludwig-ai/ludwig
Low-code framework for building custom LLMs, neural networks, and other AI models
bentoml/OpenLLM
Run any open-source LLMs, such as DeepSeek and Llama, as OpenAI-compatible API endpoints in the cloud.
mudler/LocalAI
🤖 The free, Open Source alternative to OpenAI, Claude and others. Self-hosted and...
SciSharp/LLamaSharp
A C#/.NET library to run LLM (🦙LLaMA/LLaVA) on your local device efficiently.
zhudotexe/kani
kani (カニ) is a highly hackable microframework for tool-calling language models. (NLP-OSS @ EMNLP 2023)