Atome-FE/llama-node
Believe in AI democratization. llama for Node.js, backed by llama-rs, llama.cpp, and rwkv.cpp; runs locally on your laptop CPU. Supports llama/alpaca/gpt4all/vicuna/rwkv models.
Archived · 867 stars. No commits in the last 6 months.
Stars: 867
Forks: 65
Language: Rust
License: Apache-2.0
Category:
Last pushed: Aug 03, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/Atome-FE/llama-node"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
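For scripted access, the same endpoint can be called from code. A minimal Python sketch, assuming only the URL shape shown in the curl example above (the JSON response fields are not documented here, so this only builds and fetches the URL):

```python
import urllib.request
from urllib.parse import quote

BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"

def quality_url(owner: str, repo: str) -> str:
    # Build the per-repository API URL, following the path
    # shape from the curl example (owner/repo at the end).
    return f"{BASE}/{quote(owner)}/{quote(repo)}"

def fetch_quality(owner: str, repo: str) -> bytes:
    # Fetch the raw response body; parse as JSON once you
    # know the field names the API actually returns.
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return resp.read()

print(quality_url("Atome-FE", "llama-node"))
```

How an authenticated request (for the 1,000/day tier) is passed — header, query parameter, or otherwise — is not specified above, so it is omitted here.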
Higher-rated alternatives
muxi-ai/onellm
Unified interface for interacting with various LLMs: hundreds of models, caching, fallback...
mgonzs13/llama_ros
llama.cpp (GGUF LLMs) and llava.cpp (GGUF VLMs) for ROS 2
docusealco/rllama
Ruby FFI bindings for llama.cpp to run open-source LLMs such as GPT-OSS, Qwen 3, Gemma 3, and...
Rin313/StegLLM
Offline LLM text steganography program.