hogeheer499-commits/strix-halo-guide
57 t/s LLM inference on AMD Ryzen AI MAX+ 395 — the complete optimization guide for Strix Halo
Stars: 3
Forks: —
Language: —
License: —
Category: —
Last pushed: Mar 12, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/hogeheer499-commits/strix-halo-guide"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
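The curl call above can also be made from Python. A minimal sketch, assuming only the endpoint shown on this page (the helper names and the use of the standard-library `urllib` are illustrative; the shape of the returned JSON is not documented here):

```python
# Sketch of calling the pt-edge quality API shown above.
# Only the endpoint URL comes from this page; everything else is illustrative.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks"

def quality_url(owner_repo: str) -> str:
    """Build the API URL for a given owner/repo slug."""
    return f"{BASE}/{owner_repo}"

def fetch_quality(owner_repo: str) -> dict:
    """Fetch the repo's quality record (free tier: 100 requests/day)."""
    with urllib.request.urlopen(quality_url(owner_repo)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Same repo as the curl example above; performs a live request.
    print(fetch_quality("hogeheer499-commits/strix-halo-guide"))
```

With a free API key (1,000 requests/day), authentication would presumably go in a header or query parameter, but the page does not document that, so it is omitted here.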
Higher-rated alternatives
NVIDIA/TransformerEngine
A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit and 4-bit...
mlcommons/inference
Reference implementations of MLPerf® inference benchmarks
datamade/usaddress
:us: a python library for parsing unstructured United States address strings into address components
GRAAL-Research/deepparse
Deepparse is a state-of-the-art library for parsing multinational street addresses using deep learning
mlcommons/training
Reference implementations of MLPerf® training benchmarks