Prasukj7-arch/PTQ_QAT_Model_Training
ResNet18 model optimization for CIFAR-10 using Post-Training Quantization and Quantization-Aware Training (PTQ/QAT) to reduce model size and speed up inference.
No commits in the last 6 months.
Stars: 1
Forks: —
Language: TypeScript
License: —
Category: —
Last pushed: Oct 10, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/Prasukj7-arch/PTQ_QAT_Model_Training"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
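For programmatic use, the curl command above can be wrapped in a small client. This is a minimal sketch, assuming the endpoint follows the `/{collection}/{owner}/{repo}` path layout shown in the example and returns JSON (the response schema is not documented here, so the raw payload is returned as-is):

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def build_url(collection: str, owner: str, repo: str) -> str:
    """Construct the repo-quality endpoint URL (path layout taken from the curl example)."""
    return f"{API_BASE}/{collection}/{owner}/{repo}"


def fetch_quality(collection: str, owner: str, repo: str) -> dict:
    """Fetch the JSON payload for a repository.

    Assumes the API returns JSON; the schema is undocumented on this page,
    so the parsed dict is returned unchanged for the caller to inspect.
    """
    with urllib.request.urlopen(build_url(collection, owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Reproduces the URL from the curl example above.
    print(build_url("ml-frameworks", "Prasukj7-arch", "PTQ_QAT_Model_Training"))
```

Note the free tier's 100 requests/day limit when calling this in a loop; an API key raises it to 1,000/day.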
Higher-rated alternatives
open-mmlab/mmengine
OpenMMLab Foundational Library for Training Deep Learning Models
Xilinx/brevitas
Brevitas: neural network quantization in PyTorch
fastmachinelearning/qonnx
QONNX: Arbitrary-Precision Quantized Neural Networks in ONNX
google/qkeras
QKeras: a quantization deep learning library for Tensorflow Keras
tensorflow/model-optimization
A toolkit to optimize ML models for deployment for Keras and TensorFlow, including quantization...