Chapter 6

Iterative Algorithms

DOI: 10.1201/9781003158745-6

As discussed in Chapters 2 and 5, the model selection paradigm provides statistically optimal estimators, but at a prohibitive computational cost. A classical recipe is then to convexify the minimization problem arising from model selection, in order to obtain estimators that can be computed easily with standard tools from convex optimization. This approach has been successfully implemented in many settings, including the coordinate-sparse setting (Lasso estimator) and the group-sparse setting (Group-Lasso estimator). As illustrated on page 102, the bias introduced by the convexification is a recurrent issue with this approach, and for some complex problems, even if the minimization ...
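To make the convexification recipe concrete, the sketch below solves the Lasso problem with ISTA (iterative soft-thresholding), one standard iterative scheme from convex optimization; the function names and the toy data are illustrative, not taken from the book. Note how the recovered coefficients are slightly shrunk toward zero, an instance of the convexification bias mentioned above.

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of t * ||.||_1: shrink each coordinate toward zero.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def lasso_ista(X, y, lam, n_iter=500):
    """Minimize (1/2)||y - X b||^2 + lam * ||b||_1 by proximal
    gradient descent (ISTA) with constant step size 1/L."""
    p = X.shape[1]
    L = np.linalg.norm(X, 2) ** 2   # Lipschitz constant of the smooth part
    b = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ b - y)            # gradient of the quadratic term
        b = soft_threshold(b - grad / L, lam / L)  # proximal step
    return b

# Toy example: a coordinate-sparse signal recovered up to shrinkage bias.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 10))
beta = np.zeros(10)
beta[0], beta[1] = 3.0, -2.0
y = X @ beta + 0.1 * rng.standard_normal(50)
b_hat = lasso_ista(X, y, lam=2.0)
```

Each iteration alternates a gradient step on the quadratic data-fitting term with a soft-thresholding step, which is the closed-form proximal operator of the l1 penalty; this is what makes the convexified problem cheap to solve compared with the combinatorial model-selection search.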
