williamdevena/Defending-federated-learning-system
Implementation of client-reputation, gradient-checking, and homomorphic-encryption mechanisms to defend a federated learning system against data/model poisoning and reverse-engineering attacks.
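As a rough illustration of one defense named in the description, gradient checking can be as simple as rejecting client updates whose magnitude is anomalous. This is a minimal norm-based sketch, not the repository's actual implementation; the threshold and function names are assumptions.

```python
# Illustrative gradient-checking filter: drop client updates whose L2 norm
# exceeds a threshold, a common heuristic against model poisoning.
# The threshold value and helper names are assumptions, not from the repo.
import math

def l2_norm(update):
    """L2 norm of a flat list of gradient values."""
    return math.sqrt(sum(x * x for x in update))

def filter_updates(updates, max_norm=10.0):
    """Keep only updates whose norm is within the allowed bound."""
    return [u for u in updates if l2_norm(u) <= max_norm]

honest = [0.1, -0.2, 0.05]
poisoned = [50.0, -80.0, 120.0]   # abnormally large, likely poisoned
kept = filter_updates([honest, poisoned])
# Only the honest update survives the check.
```

Real systems combine such checks with reputation scores accumulated over rounds, so a client that repeatedly submits outliers is down-weighted rather than trusted again immediately.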
No commits in the last 6 months.
Stars: 17
Forks: 4
Language: Python
License: —
Category: —
Last pushed: Jan 11, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/williamdevena/Defending-federated-learning-system"
The endpoint is open to everyone at 100 requests/day with no key required; a free key raises the limit to 1,000/day.
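The same data can be fetched programmatically. The sketch below uses only the Python standard library; the JSON response schema is an assumption, so it simply decodes and prints whatever the endpoint returns.

```python
# Sketch of calling the public quality endpoint from Python (stdlib only).
# The URL path segments come from the curl example above; the shape of the
# returned JSON is not documented here, so we just decode it generically.
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def build_url(category: str, owner: str, repo: str) -> str:
    """Assemble the endpoint URL for a given repository."""
    return f"{API_BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch and decode the JSON quality record for a repository."""
    url = build_url(category, owner, repo)
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)

if __name__ == "__main__":
    data = fetch_quality(
        "ml-frameworks", "williamdevena", "Defending-federated-learning-system"
    )
    print(json.dumps(data, indent=2))
```

Keeping `build_url` separate from the network call makes the URL construction easy to test without spending any of the daily request quota.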
Higher-rated alternatives
tensorflow/privacy
Library for training machine learning models with privacy for training data
meta-pytorch/opacus
Training PyTorch models with differential privacy
tf-encrypted/tf-encrypted
A Framework for Encrypted Machine Learning in TensorFlow
awslabs/fast-differential-privacy
Fast, memory-efficient, scalable optimization of deep learning with differential privacy
sassoftware/dpmm
dpmm: a library for synthetic tabular data generation with rich functionality and end-to-end...