Property Inference Attacks on Deep Learning Models
Property inference attacks allow adversaries to infer statistical properties of a training dataset, given access to models trained on it. Although several examples of property inference have been presented in the literature, no formal definitions of these attacks have been proposed. Beyond the lack of formal definitions, existing attacks rely on impractical assumptions, such as training hundreds of shadow models for meta-classifier-based approaches, which makes them infeasible to scale to deep neural networks. We begin by formalizing property inference attacks within a general framework and by developing a notion of the usefulness of such properties. The goal of this work is to understand the potential and limitations of property inference attacks on deep neural networks.
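The meta-classifier approach mentioned above can be illustrated with a minimal sketch. This is not the authors' method; it is a toy example under assumed settings: the secret property is the label balance of the training set, the shadow and victim models are small logistic regressions, and model parameters serve as meta-features.

```python
# Toy property inference via a meta-classifier (illustrative sketch, not the
# paper's method). Secret property: the label ratio of a model's training set.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
D = 5  # input dimension of the shadow models (assumed)

def train_shadow(label_ratio, n=500):
    """Train one shadow model on data whose label balance is the secret property;
    return its parameters as a feature vector for the meta-classifier."""
    y = (rng.random(n) < label_ratio).astype(int)
    X = rng.normal(0.0, 1.0, (n, D)) + y[:, None]  # class-1 points shifted by +1
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    return np.concatenate([clf.coef_.ravel(), clf.intercept_])

# Meta-training set: many shadow models per candidate property value.
ratios = [0.2, 0.8]
feats, props = [], []
for idx, r in enumerate(ratios):
    for _ in range(50):
        feats.append(train_shadow(r))
        props.append(idx)

meta = LogisticRegression(max_iter=1000).fit(feats, props)

# Attack a fresh "victim" model whose training data had ratio 0.8.
victim = train_shadow(0.8)
print("inferred ratio:", ratios[meta.predict([victim])[0]])
```

Note that even this toy version trains 100 shadow models for a single binary property, which hints at why the approach becomes impractical for deep networks.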
- Vicente Ordóñez-Román (Chair)
- David Evans (Advisor)
- Yuan Tian
- Tom Fletcher