zealscott/MIA
Source code for Cascading and Proxy Membership Inference Attacks. NDSS 2026.
This project evaluates the privacy risk of machine learning models by determining whether specific data points were used in their training. Given a deployed model and candidate data, it outputs a judgment on whether that data was part of the model's training set. It is aimed at privacy researchers and security auditors who need to assess a model's vulnerability to membership inference attacks.
No commits in the last 6 months.
Use this if you need to test the privacy robustness of a machine learning model against advanced membership inference attacks without requiring access to its original training data distribution.
Not ideal if you are looking for a tool to enhance model privacy or anonymize datasets, as this focuses solely on identifying privacy vulnerabilities.
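The repository implements the Cascading and Proxy attacks from the paper; those are beyond a short snippet, but the general idea of membership inference can be illustrated with a minimal confidence-threshold baseline. Everything below is a synthetic sketch (the distributions, threshold, and function names are illustrative assumptions, not code from this repository):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a model's confidence scores. Training-set members
# tend to receive higher confidence than non-members -- the signal a
# threshold-based membership inference attack exploits.
member_conf = rng.beta(8, 2, size=1000)      # synthetic member confidences
nonmember_conf = rng.beta(5, 5, size=1000)   # synthetic non-member confidences

def infer_membership(confidences, threshold=0.7):
    """Predict 'member' wherever the model's confidence exceeds the threshold."""
    return confidences > threshold

tpr = infer_membership(member_conf).mean()      # true positive rate
fpr = infer_membership(nonmember_conf).mean()   # false positive rate
```

A large gap between `tpr` and `fpr` indicates the model leaks membership information; the attacks in this repository refine that basic signal.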
Stars: 10
Forks: —
Language: Python
License: —
Category: —
Last pushed: Aug 17, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/zealscott/MIA"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
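For programmatic access, the endpoint above can be called from Python as well. A minimal sketch, assuming only the URL pattern shown in the curl command (the response schema is not documented here, so the JSON is printed rather than parsed into specific fields):

```python
from urllib.parse import quote

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    # Build the endpoint URL used in the curl example; path segments
    # are URL-escaped in case names contain special characters.
    return f"{BASE}/{quote(category)}/{quote(owner)}/{quote(repo)}"

url = quality_url("ml-frameworks", "zealscott", "MIA")
# Fetch with e.g. urllib.request.urlopen(url) and json.load the body;
# stay under the 100 requests/day limit when keyless.
```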
Higher-rated alternatives
google/scaaml
SCAAML: Side Channel Attacks Assisted with Machine Learning
pralab/secml
A Python library for Secure and Explainable Machine Learning
Koukyosyumei/AIJack
Security and Privacy Risk Simulator for Machine Learning (arXiv:2312.17667)
AI-SDC/SACRO-ML
Collection of tools and resources for managing the statistical disclosure control of trained...
oss-slu/mithridatium
Mithridatium is a research-driven project aimed at detecting backdoors and data poisoning in...