TY - JOUR
T1 - The Implicit Regularization of Ordinary Least Squares Ensembles
AU - LeJeune, Daniel
AU - Javadi, Hamid
AU - Baraniuk, Richard G.
N1 - Funding Information:
We would like to thank Ryan Tibshirani for helpful discussions and the anonymous reviewers for their helpful feedback. This work was supported by NSF grants CCF-1911094, IIS-1838177, and IIS-1730574; ONR grants N00014-18-12571 and N00014-17-1-2551; AFOSR grant FA9550-18-1-0478; DARPA grant G001534-7500; and a Vannevar Bush Faculty Fellowship (ONR grant N00014-18-1-2047).
Publisher Copyright:
Copyright © 2020 by the author(s)
PY - 2020
Y1 - 2020
N2 - Ensemble methods that average over a collection of independent predictors that are each limited to a subsampling of both the examples and features of the training data command a significant presence in machine learning, such as the ever-popular random forest, yet the nature of the subsampling effect, particularly of the features, is not well understood. We study the case of an ensemble of linear predictors, where each individual predictor is fit using ordinary least squares on a random submatrix of the data matrix. We show that, under standard Gaussianity assumptions, when the number of features selected for each predictor is optimally tuned, the asymptotic risk of a large ensemble is equal to the asymptotic ridge regression risk, which is known to be optimal among linear predictors in this setting. In addition to eliciting this implicit regularization that results from subsampling, we also connect this ensemble to the dropout technique used in training deep (neural) networks, another strategy that has been shown to have a ridge-like regularizing effect.
UR - http://www.scopus.com/inward/record.url?scp=85161921784&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85161921784&partnerID=8YFLogxK
M3 - Conference article
AN - SCOPUS:85161921784
SN - 2640-3498
VL - 108
SP - 3525
EP - 3535
JO - Proceedings of Machine Learning Research
JF - Proceedings of Machine Learning Research
T2 - 23rd International Conference on Artificial Intelligence and Statistics, AISTATS 2020
Y2 - 26 August 2020 through 28 August 2020
ER -