Zzzxkxz/cuda-fp8-ampere
🚀 Accelerate FP8 GEMM on the RTX 3090 Ti: lightweight FP8 storage plus efficient tensor-core compute for high throughput on hardware without native FP8 support.
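Ampere GPUs such as the RTX 3090 Ti have no native FP8 tensor-core path, so an FP8 GEMM on them typically stores operands as 8-bit values and decodes each byte to a wider type before the multiply. As a minimal sketch of what that decode involves (illustrative only, not this repo's actual kernel code), the standard FP8 E4M3FN layout is 1 sign bit, 4 exponent bits (bias 7), and 3 mantissa bits, with a single NaN pattern and no infinities:

```python
def fp8_e4m3_to_float(b: int) -> float:
    """Decode one FP8 E4M3FN byte: 1 sign, 4 exponent (bias 7), 3 mantissa bits."""
    s = -1.0 if b & 0x80 else 1.0
    e = (b >> 3) & 0x0F
    m = b & 0x07
    if e == 0x0F and m == 0x07:
        # E4M3FN reserves only this mantissa pattern for NaN; there is no infinity.
        return float("nan")
    if e == 0:
        # Subnormal: no implicit leading 1, fixed exponent 2^-6.
        return s * (m / 8.0) * 2.0 ** -6
    # Normal: implicit leading 1, biased exponent.
    return s * (1.0 + m / 8.0) * 2.0 ** (e - 7)

# Examples: 0x38 encodes 1.0, 0x7E is the largest normal value (448.0).
print(fp8_e4m3_to_float(0x38))  # → 1.0
print(fp8_e4m3_to_float(0x7E))  # → 448.0
```

On the GPU this decode is done in hardware-friendly form (e.g. byte-wise conversion to FP16 in registers) so the FP16 tensor cores can consume the widened operands; the Python above only shows the bit-level meaning of each FP8 byte.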
Score: 22 / 100
Experimental · No Package · No Dependents
Maintenance: 13 / 25
Adoption: 0 / 25
Maturity: 9 / 25
Community: 0 / 25
Stars: —
Forks: —
Language: CUDA
License: MIT
Category: llm-tools
Last pushed: Mar 19, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/Zzzxkxz/cuda-fp8-ampere"
Open to everyone: 100 requests/day with no key required; a free key raises the limit to 1,000/day.
Higher-rated alternatives
ggml-org/ggml (68): Tensor library for machine learning
onnx/ir-py (55): Efficient in-memory representation for ONNX, in Python
SandAI-org/MagiCompiler (49): A plug-and-play compiler that delivers free-lunch optimizations for both inference and training.
bytedance/lightseq (46): LightSeq: A High Performance Library for Sequence Processing and Generation
R-D-BioTech-Alaska/Qelm (45): Qelm - Quantum Enhanced Language Model