aitor-martinez-seras/OoD_on_SNNs

Explainable Out-of-Distribution Detection Approach for Spiking Neural Networks (Code for "A Novel Out-of-Distribution Detection Approach for Spiking Neural Networks: Design, Fusion, Performance Evaluation and Explainability")

Score: 12 / 100 (Experimental)

This project helps machine learning practitioners evaluate how well their Spiking Neural Networks (SNNs) can detect data that is 'out-of-distribution' (OoD). It takes trained SNN models and various image datasets (both in-distribution and OoD) as input, and provides performance metrics and visual explanations (attribution maps) showing why a sample was flagged as OoD. The primary users are researchers and engineers working with SNNs who need to ensure their models are reliable when encountering unexpected data.
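The evaluation described above typically boils down to assigning each sample an OoD score and summarizing detector quality with a threshold-free metric such as AUROC. The sketch below is a generic illustration under assumed conventions, not this repository's actual API: the distance-to-nearest-class-centroid score and the function names are placeholders.

```python
import numpy as np

def ood_score(x: np.ndarray, centroids: np.ndarray) -> float:
    """Illustrative OoD score (hypothetical, not this repo's method as-is):
    distance from a feature vector (e.g. per-class spike-count features)
    to the nearest training-class centroid. Larger = more likely OoD."""
    return float(np.min(np.linalg.norm(centroids - x, axis=1)))

def auroc(scores_in: np.ndarray, scores_ood: np.ndarray) -> float:
    """AUROC of a detector where a higher score means 'more OoD'.
    Pairwise (Mann-Whitney) estimate: fraction of (OoD, in-dist) pairs
    the detector ranks correctly, with ties counted as half-correct."""
    gt = (scores_ood[:, None] > scores_in[None, :]).mean()
    eq = (scores_ood[:, None] == scores_in[None, :]).mean()
    return float(gt + 0.5 * eq)
```

A perfect detector scores every OoD sample above every in-distribution sample (AUROC 1.0); a random one hovers around 0.5.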

No commits in the last 6 months.

Use this if you are developing or deploying Spiking Neural Networks and need to rigorously test their ability to identify data that significantly differs from their training examples.

Not ideal if you are working with traditional Artificial Neural Networks (ANNs) or need a general-purpose OoD detection library that isn't specifically tailored for SNNs.

Tags: Spiking Neural Networks · Out-of-Distribution Detection · Model Reliability · Explainable AI · Machine Learning Research
No License · Stale (6 months) · No Package · No Dependents
Maintenance 0 / 25
Adoption 4 / 25
Maturity 8 / 25
Community 0 / 25


Stars: 8
Forks: (not listed)
Language: Jupyter Notebook
License: none
Last pushed: Sep 22, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/aitor-martinez-seras/OoD_on_SNNs"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
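The same endpoint can also be queried from code. A minimal Python sketch using only the standard library; the URL pattern is taken from the curl command above, while the response structure is not documented here, so the fetch helper simply returns the parsed JSON as-is:

```python
import json
import urllib.request
from urllib.parse import quote

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def build_quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-score endpoint URL for a repository,
    following the pattern shown in the curl example."""
    return f"{API_BASE}/{quote(category)}/{quote(owner)}/{quote(repo)}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch the quality record as parsed JSON.
    No API key is needed for up to 100 requests/day."""
    url = build_quality_url(category, owner, repo)
    with urllib.request.urlopen(url) as resp:
        return json.loads(resp.read().decode("utf-8"))

if __name__ == "__main__":
    print(build_quality_url("ml-frameworks",
                            "aitor-martinez-seras", "OoD_on_SNNs"))
```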