node-llama-cpp and llama-swap
These are complements: node-llama-cpp provides the local LLM inference engine for Node.js applications, while llama-swap manages dynamic model switching across compatible servers, allowing you to use them together to run and swap between multiple models locally.
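To illustrate the model-swapping half of that claim, here is a minimal sketch that sends two OpenAI-style chat requests from Node.js to a local llama-swap instance; changing the model field in the request is what prompts llama-swap to stop one backend and start another. The port (8080) and the model names ("llama-8b", "qwen-coder") are assumptions and must match your own llama-swap configuration.

```ts
// Sketch: swapping models through llama-swap's OpenAI-compatible proxy.
// Assumes llama-swap is listening on localhost:8080 with models named
// "llama-8b" and "qwen-coder" in its config (both names are placeholders).

type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

async function chat(model: string, messages: ChatMessage[]): Promise<string> {
    const res = await fetch("http://localhost:8080/v1/chat/completions", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ model, messages })
    });
    if (!res.ok) throw new Error(`llama-swap returned ${res.status}`);
    const data = await res.json();
    return data.choices[0].message.content;
}

// First request: llama-swap loads the "llama-8b" backend if it isn't running yet.
console.log(await chat("llama-8b", [
    { role: "user", content: "Summarize llama.cpp in one sentence." }
]));

// Second request: the different model name makes llama-swap swap backends.
console.log(await chat("qwen-coder", [
    { role: "user", content: "Write a TypeScript hello world." }
]));
```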
node-llama-cpp scores: Maintenance 16/25, Adoption 25/25, Maturity 25/25, Community 20/25
llama-swap scores: Maintenance 20/25, Adoption 10/25, Maturity 16/25, Community 19/25
node-llama-cpp stats: Stars 1,942, Forks 176, Downloads 4,219,393, Commits (30d) 3, Language TypeScript, License MIT
llama-swap stats: Stars 2,775, Forks 205, Downloads —, Commits (30d) 17, Language Go, License MIT
node-llama-cpp risk flags: none
llama-swap risk flags: No Package, No Dependents
About node-llama-cpp
withcatai/node-llama-cpp
Run AI models locally on your machine with Node.js bindings for llama.cpp. Enforce a JSON schema on the model output at the generation level.
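A minimal sketch of what that looks like in practice, based on the node-llama-cpp v3-style API (getLlama, loadModel, LlamaChatSession, createGrammarForJsonSchema). The model path and the schema below are placeholders, so check the library's current documentation before relying on the exact calls.

```ts
import path from "node:path";
import { getLlama, LlamaChatSession } from "node-llama-cpp";

// Load a local GGUF model in-process (the path is a placeholder).
const llama = await getLlama();
const model = await llama.loadModel({
    modelPath: path.resolve("models/some-model.gguf")
});
const context = await model.createContext();
const session = new LlamaChatSession({ contextSequence: context.getSequence() });

// Constrain generation so the output must conform to this JSON schema.
const grammar = await llama.createGrammarForJsonSchema({
    type: "object",
    properties: {
        title: { type: "string" },
        positive: { type: "boolean" }
    }
});

const raw = await session.prompt(
    "Review: 'Great library, easy to set up.' Summarize it as JSON.",
    { grammar }
);
const parsed = grammar.parse(raw); // parsed object matching the schema above
console.log(parsed);
```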
About llama-swap
mostlygeek/llama-swap
Reliable model swapping for any local OpenAI/Anthropic-compatible server: llama.cpp, vLLM, etc.
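Because llama-swap exposes standard OpenAI-style endpoints, existing clients can point at it unchanged. The sketch below complements the raw fetch example above by using the openai npm package; the base URL, API key placeholder, and model name are assumptions that depend on your local setup.

```ts
import OpenAI from "openai";

// llama-swap typically does not require an API key, but the SDK wants a value.
const client = new OpenAI({
    baseURL: "http://localhost:8080/v1", // assumed llama-swap address
    apiKey: "local"
});

const completion = await client.chat.completions.create({
    model: "qwen-coder", // must match a model name in your llama-swap config
    messages: [{ role: "user", content: "Explain model swapping in one sentence." }]
});

console.log(completion.choices[0].message.content);
```

The same pattern applies when llama-swap is fronting vLLM or another backend, since the client only ever talks to the proxy.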
Scores updated daily from GitHub, PyPI, and npm data.