TY - JOUR
T1 - A Blessing of Dimensionality in Membership Inference through Regularization
AU - Tan, Jasper
AU - LeJeune, Daniel
AU - Mason, Blake
AU - Javadi, Hamid
AU - Baraniuk, Richard G.
N1 - Funding Information:
This work was supported by NSF grants CCF-1911094, IIS-1838177, and IIS-1730574; ONR grants N00014-18-12571, N00014-20-1-2534, and MURI N00014-20-1-2787; AFOSR grant FA9550-22-1-0060; and a Vannevar Bush Faculty Fellowship (ONR grant N00014-18-1-2047).
Publisher Copyright:
Copyright © 2023 by the author(s)
PY - 2023
Y1 - 2023
AB - Is overparameterization a privacy liability? In this work, we study the effect that the number of parameters has on a classifier's vulnerability to membership inference attacks. We first demonstrate how the number of parameters of a model can induce a privacy-utility trade-off: increasing the number of parameters generally improves generalization performance at the expense of lower privacy. However, remarkably, we then show that if coupled with proper regularization, increasing the number of parameters of a model can actually simultaneously increase both its privacy and performance, thereby eliminating the privacy-utility trade-off. Theoretically, we demonstrate this curious phenomenon for logistic regression with ridge regularization in a bi-level feature ensemble setting. Pursuant to our theoretical exploration, we develop a novel leave-one-out analysis tool to precisely characterize the vulnerability of a linear classifier to the optimal membership inference attack. We empirically exhibit this “blessing of dimensionality” for neural networks on a variety of tasks using early stopping as the regularizer.
UR - http://www.scopus.com/inward/record.url?scp=85164374784&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85164374784&partnerID=8YFLogxK
M3 - Conference article
AN - SCOPUS:85164374784
SN - 2640-3498
VL - 206
SP - 10968
EP - 10993
JO - Proceedings of Machine Learning Research
JF - Proceedings of Machine Learning Research
T2 - 26th International Conference on Artificial Intelligence and Statistics, AISTATS 2023
Y2 - 25 April 2023 through 27 April 2023
ER -