Guarding the Privacy of Label-Only Access to Neural Network Classifiers via iDP Verification

arXiv:2502.16519v1 Announce Type: new
Abstract: Neural networks are susceptible to privacy attacks that can extract private information about the training set. To cope, several training algorithms guarantee differential privacy (DP) by adding noise to their computation. However, DP requires adding noi…
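The abstract refers to the standard approach of achieving DP by adding calibrated noise to a computation. As a point of reference only (not the paper's iDP verification method), here is a minimal sketch of the classic Laplace mechanism; the function name `laplace_mechanism` and the example query values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Return a noisy, epsilon-DP estimate of `true_value`.

    Adds Laplace noise with scale = sensitivity / epsilon, the standard
    mechanism for making a numeric query differentially private.
    (Illustrative sketch; not the mechanism studied in the paper.)
    """
    scale = sensitivity / epsilon
    return true_value + np.random.laplace(loc=0.0, scale=scale)

# Example: privatize a counting query (sensitivity 1) with epsilon = 0.5.
private_count = laplace_mechanism(true_value=42.0, sensitivity=1.0, epsilon=0.5)
print(private_count)
```

The key design point the abstract alludes to is that the noise scale grows as epsilon shrinks, so stronger privacy comes at the cost of accuracy; this trade-off is the usual motivation for relaxations such as individual DP (iDP).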
