uncertainty-baselines and equine
About uncertainty-baselines
google/uncertainty-baselines
High-quality implementations of standard and SOTA methods on a variety of tasks.
This project offers standardized, high-quality implementations of baseline and state-of-the-art methods for assessing and improving the reliability of machine learning models across a variety of tasks. Given training data and a model configuration, it reports performance metrics such as accuracy, calibration error, and negative log-likelihood. It is aimed at machine learning researchers and practitioners who need to evaluate model robustness and uncertainty in a consistent, reproducible way.
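To make the metrics concrete, here is a minimal, library-independent sketch of expected calibration error (ECE), one common way to quantify the "calibration error" mentioned above: predictions are grouped into confidence bins, and the metric is the weighted average gap between each bin's mean confidence and its actual accuracy. The binning scheme and toy data are illustrative assumptions, not uncertainty-baselines' own API.

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: weighted average |mean confidence - accuracy| over confidence bins."""
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        # Place each prediction in confidence bin [i/n_bins, (i+1)/n_bins).
        idx = min(int(conf * n_bins), n_bins - 1)
        bins[idx].append((conf, ok))
    total = len(confidences)
    ece = 0.0
    for b in bins:
        if not b:
            continue
        avg_conf = sum(c for c, _ in b) / len(b)
        accuracy = sum(ok for _, ok in b) / len(b)
        # Each bin contributes its confidence/accuracy gap, weighted by size.
        ece += (len(b) / total) * abs(avg_conf - accuracy)
    return ece

# Toy example: four predictions, their confidences, and whether each was correct.
confs = [0.95, 0.90, 0.60, 0.55]
hits = [1, 1, 0, 1]
print(round(expected_calibration_error(confs, hits), 4))  # -> 0.3
```

A perfectly calibrated model (e.g. 90% of predictions made at 0.9 confidence are correct) would score an ECE of 0.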
About equine
mit-ll-responsible-ai/equine
Establishing Quantified Uncertainty in Neural Networks
When working with machine learning models that classify or label data, it is crucial to understand not just what the model predicts, but also how confident it is and whether the input even resembles the data it was trained on. equine wraps an existing deep neural network and returns enhanced predictions: calibrated probabilities for each label, plus a score indicating how similar the input is to the data the model learned from. Data scientists and machine learning engineers who need to build more trustworthy, transparent AI systems will find this invaluable.
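As a rough illustration of the second idea, here is a hypothetical, library-independent sketch of a distance-based out-of-distribution (OOD) score in the spirit of prototype-based uncertainty methods: each class is summarized by the mean of its training embeddings, and an input far from every prototype gets a score near 0 while one near a prototype scores near 1. The embeddings, the exponential mapping, and the scale parameter are all illustrative assumptions, not equine's actual API.

```python
import math

def class_prototypes(embeddings_by_class):
    """Compute the mean embedding (prototype) for each class."""
    protos = {}
    for label, vecs in embeddings_by_class.items():
        dim = len(vecs[0])
        protos[label] = [sum(v[d] for v in vecs) / len(vecs) for d in range(dim)]
    return protos

def in_distribution_score(x, prototypes, scale=1.0):
    """Map distance to the nearest prototype into (0, 1]; 1 = on a prototype."""
    nearest = min(math.dist(x, p) for p in prototypes.values())
    return math.exp(-nearest / scale)

# Toy 2-D embeddings for two classes (hypothetical data).
train = {
    "cat": [[0.0, 0.0], [0.2, 0.0]],
    "dog": [[3.0, 3.0], [3.0, 3.2]],
}
protos = class_prototypes(train)
print(round(in_distribution_score([0.1, 0.0], protos), 3))    # near "cat" -> 1.0
print(round(in_distribution_score([10.0, 10.0], protos), 3))  # far from all -> 0.0
```

In a real system the embeddings would come from the wrapped network's penultimate layer, and the score would be reported alongside the calibrated class probabilities.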