A Blessing of Dimensionality in Membership Inference through Regularization

Jasper Tan, Daniel LeJeune, Blake Mason, Hamid Javadi, Richard G. Baraniuk

Research output: Contribution to journal › Conference article › peer-review

Abstract

Is overparameterization a privacy liability? In this work, we study the effect that the number of parameters has on a classifier's vulnerability to membership inference attacks. We first demonstrate how the number of parameters of a model can induce a privacy-utility trade-off: increasing the number of parameters generally improves generalization performance at the expense of lower privacy. Remarkably, we then show that, when coupled with proper regularization, increasing the number of parameters of a model can simultaneously increase both its privacy and performance, thereby eliminating the privacy-utility trade-off. Theoretically, we demonstrate this curious phenomenon for logistic regression with ridge regularization in a bi-level feature ensemble setting. Pursuant to our theoretical exploration, we develop a novel leave-one-out analysis tool to precisely characterize the vulnerability of a linear classifier to the optimal membership inference attack. We empirically exhibit this "blessing of dimensionality" for neural networks on a variety of tasks using early stopping as the regularizer.
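To make the measured quantities concrete, the sketch below evaluates both sides of the trade-off on synthetic data: test accuracy (utility) and the AUC of a simple loss-threshold membership inference attack (privacy) for ridge-regularized logistic regression as the number of parameters grows. This is a minimal illustration, not the paper's experimental code: the synthetic task, the tanh random-feature lift, the fixed ridge strength, and the loss-threshold attacker are all illustrative assumptions.

```python
# A minimal sketch, NOT the paper's code: the synthetic data, the tanh
# random-feature lift, the fixed ridge strength, and the loss-threshold
# attacker are all illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic binary classification task (assumed setup).
n_train, n_test, d = 200, 200, 50
w_true = rng.normal(size=d)
X = rng.normal(size=(n_train + n_test, d))
y = (X @ w_true + rng.normal(size=n_train + n_test) > 0).astype(int)
X_tr, y_tr, X_te, y_te = X[:n_train], y[:n_train], X[n_train:], y[n_train:]

def per_example_loss(model, Z, y):
    """Per-example cross-entropy; training points (members) tend to have low loss."""
    p = np.clip(model.predict_proba(Z)[:, 1], 1e-12, 1 - 1e-12)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

for width in [10, 50, 200, 1000]:  # number of model parameters to sweep
    W = rng.normal(size=(d, width)) / np.sqrt(d)
    Z_tr, Z_te = np.tanh(X_tr @ W), np.tanh(X_te @ W)  # random-feature lift
    # Ridge (L2) regularization: sklearn's C is the inverse penalty strength.
    model = LogisticRegression(C=1.0, max_iter=5000).fit(Z_tr, y_tr)
    # Loss-threshold membership inference: rank members vs. non-members by loss.
    losses = np.concatenate([per_example_loss(model, Z_tr, y_tr),
                             per_example_loss(model, Z_te, y_te)])
    is_member = np.concatenate([np.ones(n_train), np.zeros(n_test)])
    mia_auc = roc_auc_score(is_member, -losses)  # 0.5 = private, 1.0 = leaky
    print(f"width={width:5d}  test_acc={model.score(Z_te, y_te):.3f}  "
          f"MIA_AUC={mia_auc:.3f}")
```

Sweeping the ridge strength (here, sklearn's `C`) alongside the width is one way to probe the paper's claim: with strong enough regularization, test accuracy can rise while the attack AUC stays near 0.5 as the parameter count grows.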

Original language: English (US)
Pages (from-to): 10968-10993
Number of pages: 26
Journal: Proceedings of Machine Learning Research
Volume: 206
State: Published - 2023
Event: 26th International Conference on Artificial Intelligence and Statistics, AISTATS 2023 - Valencia, Spain
Duration: Apr 25, 2023 - Apr 27, 2023

ASJC Scopus subject areas

  • Artificial Intelligence
  • Software
  • Control and Systems Engineering
  • Statistics and Probability
