CHAPTER 8

RIDGE REGRESSION

Outline

8.1 Model Specification

8.2 Proposed Estimators

8.3 Bias, MSE, and Risk Expressions

8.4 Performance of the Estimators

8.5 Choice of Ridge Parameter

8.6 Problems

The area of shrinkage estimators came to the forefront of the statistical literature soon after Stein (1956) discovered that, under a quadratic loss function, the sample mean in a multivariate model is not admissible when the dimension exceeds two. The idea took a decade to settle into statistical methodology. Another class of shrinkage estimators appeared in the statistical literature in the 1970s, due to Hoerl and Kennard (1970). The methodology is an advancement for linear models. The idea is simple but its impact is great: the ordinary least squares estimates (OLSEs) are unbiased, but their covariance matrix depends on the design matrix X, which may be ill-conditioned; that is, some of the eigenvalues of X′X may be zero or near zero, which inflates the variances of the OLSEs so much as to render them useless. To overcome this problem, Hoerl and Kennard (1970) put forward the idea of using (X′X + kIₚ)⁻¹X′y instead of (X′X)⁻¹X′y for the estimation of the coefficients of the regression model of Chapter 7. These estimators are called “ridge regression estimators” (RREs), where k is the tuning (or biasing) parameter, traditionally known as the “ridge parameter.” In this chapter, we consider the regression model and apply the ridge regression methodology when the errors follow a multivariate t-distribution.
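To make the contrast concrete, the following is a minimal numerical sketch in Python (NumPy) comparing the OLSE with the RRE on a deliberately ill-conditioned design. The simulated data, the coefficient vector, and the ridge parameter k = 1 are illustrative assumptions, not values taken from the text.

```python
# Minimal sketch: OLSE vs. ridge regression estimator (RRE) on a
# nearly collinear design matrix. All data and the value of k here
# are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

n, p = 50, 3
# Build a nearly collinear design: the third column is almost a copy
# of the first, so X'X has an eigenvalue near zero.
X = rng.standard_normal((n, p))
X[:, 2] = X[:, 0] + 1e-4 * rng.standard_normal(n)

beta = np.array([1.0, 2.0, 3.0])          # assumed true coefficients
y = X @ beta + rng.standard_normal(n)

XtX = X.T @ X
Xty = X.T @ y

# OLSE: (X'X)^{-1} X'y -- unstable because X'X is nearly singular.
beta_ols = np.linalg.solve(XtX, Xty)

# RRE: (X'X + k I_p)^{-1} X'y with an illustrative ridge parameter k.
k = 1.0
beta_ridge = np.linalg.solve(XtX + k * np.eye(p), Xty)

print("eigenvalues of X'X:", np.linalg.eigvalsh(XtX))
print("OLSE:", beta_ols)      # typically wildly inflated coefficients
print("RRE: ", beta_ridge)    # shrunk estimates with far smaller variance
```

Adding kIₚ to X′X lifts every eigenvalue by k, so the inverse no longer blows up along the near-singular direction; this is the variance reduction that the RRE trades against the bias it introduces.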
