
Fundamental Limits of Ridge-Regularized Empirical Risk Minimization in High Dimensions


Document length: 35 pages

Abstract: Empirical Risk Minimization (ERM) algorithms are widely used in a variety of estimation and prediction tasks in signal-processing and machine-learning applications. Despite their popularity, a theory that explains their statistical properties in modern regimes, where both the number of measurements and the number of unknown parameters are large, is only recently emerging. In this paper, we characterize for the first time the fundamental limits on the statistical accuracy of convex ERM for inference in high-dimensional generalized linear models. For a stylized setting with Gaussian features and problem dimensions that grow large at a proportional rate, we start with sharp performance characterizations and then derive tight lower bounds on the estimation and prediction error that hold over a wide class of loss functions and for any value of the regularization parameter. Our precise analysis has several attributes. First, it leads to a recipe for optimally tuning the loss function and the regularization parameter. Second, it allows us to precisely quantify the sub-optimality of popular heuristic choices: for instance, we show that optimally-tuned least-squares is (perhaps surprisingly) approximately optimal for standard logistic data, but the sub-optimality gap grows drastically as the signal strength increases. Third, we use the bounds to precisely assess the merits of ridge regularization as a function of the over-parameterization ratio. Notably, our bounds are expressed in terms of the Fisher information of random variables that are simple functions of the data distribution, thus making ties to corresponding bounds in classical statistics.
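For a concrete picture of the object being analyzed, the sketch below sets up ridge-regularized ERM in a generalized linear model: it minimizes (1/n) sum_i loss(y_i, x_i' beta) + (lam/2) * ||beta||^2 over beta, with Gaussian features and dimensions n, p of comparable size, as in the paper's setting. The specific choices here (logistic loss, plain gradient descent, the name ridge_erm, and all parameter values) are illustrative assumptions, not taken from the paper.

    # Minimal sketch of ridge-regularized ERM for a generalized linear model.
    # Assumptions (not from the paper): logistic loss, labels in {-1, +1},
    # plain gradient descent, hypothetical helper name ridge_erm.
    import numpy as np

    def ridge_erm(X, y, lam, steps=2000, lr=0.1):
        """Minimize (1/n) * sum_i log(1 + exp(-y_i * x_i @ beta))
        + (lam/2) * ||beta||^2 via gradient descent."""
        n, p = X.shape
        beta = np.zeros(p)
        for _ in range(steps):
            z = X @ beta
            # d/dz log(1 + exp(-y z)) = -y / (1 + exp(y z))
            grad_loss = X.T @ (-y / (1.0 + np.exp(y * z))) / n
            beta -= lr * (grad_loss + lam * beta)
        return beta

    # Synthetic high-dimensional example: p/n is a fixed constant.
    rng = np.random.default_rng(0)
    n, p = 400, 200                       # over-parameterization ratio p/n = 0.5
    beta_star = rng.normal(size=p) / np.sqrt(p)
    X = rng.normal(size=(n, p))           # Gaussian features, as in the paper
    probs = 1.0 / (1.0 + np.exp(-X @ beta_star))
    y = np.where(rng.random(n) < probs, 1.0, -1.0)   # standard logistic data

    beta_hat = ridge_erm(X, y, lam=0.1)
    print("estimation error:", np.linalg.norm(beta_hat - beta_star))

In the asymptotic regime the abstract describes, n and p grow together with p/n fixed, and the estimation error printed above concentrates around a deterministic limit; the paper's lower bounds constrain that limit over choices of the loss function and the regularization parameter lam.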
