tpoisonooo/llama.onnx
LLaMa/RWKV ONNX models, quantization, and test cases
Converts LLaMa and RWKV models to ONNX format with mixed-precision quantization support, enabling CPU inference without PyTorch dependencies. Provides pre-converted fp32/fp16 models and standalone demos optimized for resource-constrained devices, including memory pooling to run on 2GB RAM systems. Designed for cross-platform deployment across hybrid hardware (FPGA/NPU/GPU) with validated numerical precision (max error 0.002 vs. PyTorch).
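The "validated numerical precision (max error 0.002)" claim refers to comparing ONNX outputs element-wise against PyTorch reference outputs. A minimal sketch of such a check in plain Python (the function name, sample values, and tolerance handling below are illustrative, not taken from the repo):

```python
def max_abs_error(a, b):
    # Element-wise maximum absolute difference between two flat sequences.
    return max(abs(x - y) for x, y in zip(a, b))

# Illustrative logits standing in for PyTorch (reference) vs. ONNX outputs.
ref = [0.1234, -0.5678, 0.9012]
onnx_out = [0.1230, -0.5660, 0.9020]

err = max_abs_error(ref, onnx_out)
assert err <= 0.002, f"precision regression: {err}"
print(f"max abs error: {err:.4f}")
```

In practice the same comparison would run over full logit tensors for identical prompts through both backends.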
366 stars. No commits in the last 6 months.
Stars: 366
Forks: 29
Language: Python
License: GPL-3.0
Category: (not listed)
Last pushed: Jul 06, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/tpoisonooo/llama.onnx"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
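The same endpoint can be queried programmatically. A minimal Python sketch that builds the URL from the curl example above; note the JSON field names in the sample payload are assumptions, since the API's response schema is not documented here:

```python
import json

# Base endpoint of the pt-edge quality API (path taken from the curl example).
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(ecosystem: str, repo: str) -> str:
    """Build the API URL for a repo slug, mirroring the curl example."""
    return f"{BASE}/{ecosystem}/{repo}"

url = quality_url("transformers", "tpoisonooo/llama.onnx")

# A live request would be e.g. urllib.request.urlopen(url).read();
# here we parse a sample payload offline. The field names are assumed,
# not documented by the API.
sample = '{"stars": 366, "forks": 29, "last_pushed": "2023-07-06"}'
data = json.loads(sample)
print(data["stars"], data["last_pushed"])
```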
Higher-rated alternatives
hkproj/pytorch-llama
LLaMA 2 implemented from scratch in PyTorch
4AI/LS-LLaMA
A Simple but Powerful SOTA NER Model | Official Code For Label Supervised LLaMA Finetuning
ayaka14732/llama-2-jax
JAX implementation of the Llama 2 model
harleyszhang/lite_llama
A lightweight LLaMA-style LLM inference framework built on Triton kernels.
luchangli03/export_llama_to_onnx
Exports LLaMA models to ONNX.