R Deep Learning Essentials - Second Edition
by Mark Hodnett, Joshua F. Wiley
Packt Publishing, August 2018

Content preview from R Deep Learning Essentials - Second Edition

L1 penalty

The basic concept of the L1 penalty, also known as the least absolute shrinkage and selection operator (Lasso; Hastie, Tibshirani, and Friedman, 2009), is that a penalty is used to shrink the weights toward zero. The penalty term is the sum of the absolute values of the weights, so some weights may be shrunk exactly to zero, which means that the Lasso can also be used as a form of variable selection. The strength of the penalty is controlled by a hyperparameter, lambda (λ), which multiplies the sum of the absolute weights; it can be set to a fixed value or, as with other hyperparameters, optimized using cross-validation or a similar approach.
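In other words, an L1-penalized model minimizes the usual loss plus λ times the sum of the absolute weights, that is, loss + λ * Σ|w_j|. As a minimal sketch (not code from the book), the following fits a Lasso regression in R using the glmnet package, where alpha = 1 selects the pure L1 penalty and the penalty strength λ is chosen by cross-validation; the package choice and the simulated data are assumptions for illustration only:

# A minimal Lasso sketch: glmnet's alpha = 1 gives the pure L1 penalty,
# and cv.glmnet picks the penalty strength lambda by cross-validation.
library(glmnet)

set.seed(42)
x <- matrix(rnorm(100 * 10), nrow = 100, ncol = 10)  # 10 candidate predictors
y <- 2 * x[, 1] - 1.5 * x[, 2] + rnorm(100)          # only the first two matter

cv_fit <- cv.glmnet(x, y, alpha = 1)   # cross-validate over a grid of lambda values

# Coefficients at the lambda with the lowest cross-validated error;
# the irrelevant predictors are typically shrunk exactly to zero.
coef(cv_fit, s = "lambda.min")

Setting alpha = 0 in the same call would apply the L2 (ridge) penalty instead, which shrinks the weights but does not set them exactly to zero.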

It is easier to describe Lasso if we use an ordinary least squares (OLS) regression model. In regression, ...


