In linear regression, only the residual sum of squares (RSS) is minimized, whereas in ridge and lasso regression a penalty (also known as a shrinkage penalty) is applied to the coefficient values to regularize them, with the amount of shrinkage controlled by the tuning parameter λ.
When λ = 0, the penalty has no impact and ridge/lasso produce the same result as linear regression, whereas λ -> ∞ shrinks the coefficients towards zero:

Ridge: minimize  RSS + λ Σj βj²
Lasso: minimize  RSS + λ Σj |βj|

where the sum Σj runs over the p model coefficients β1, ..., βp (the intercept is not penalized).
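As a quick illustration of the effect of λ, the following sketch (not the book's example) fits ridge and lasso on synthetic data with scikit-learn, where the tuning parameter λ is exposed as the alpha argument; the dataset, alpha values, and variable names are arbitrary choices for demonstration.

# Minimal sketch (illustrative only): how the tuning parameter lambda
# (exposed as `alpha` in scikit-learn) shrinks ridge and lasso coefficients.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression, Ridge, Lasso

# Synthetic regression data, purely for demonstration
X, y = make_regression(n_samples=200, n_features=5, noise=10.0, random_state=0)

# Plain least squares: only RSS is minimized
ols = LinearRegression().fit(X, y)
print("OLS           :", np.round(ols.coef_, 2))

# Increasing alpha (lambda) shrinks coefficients towards zero;
# as alpha -> 0 the results approach the OLS solution, and the
# lasso can drive some coefficients exactly to zero.
for alpha in (0.01, 1.0, 100.0):
    ridge = Ridge(alpha=alpha).fit(X, y)
    lasso = Lasso(alpha=alpha).fit(X, y)
    print(f"Ridge a={alpha:<6}:", np.round(ridge.coef_, 2))
    print(f"Lasso a={alpha:<6}:", np.round(lasso.coef_, 2))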
Before we go deeper into ridge and lasso, it is worth understanding some concepts of Lagrangian ...