As we have seen in previous chapters, one approach to preventing overfitting is to add penalties, that is, regularization. In general, our goal is to minimize the error of the model's output. Given an objective function, F, a supervised model optimizes F(y, f(x)), where f() maps the raw data inputs to predicted or expected y values. For auto-encoders, the objective becomes F(x, g(f(x))): the machine learns the weights and functional forms of the encoder f() and the decoder g() so as to minimize the discrepancy between x and its reconstruction, g(f(x)). If we want to use an overcomplete auto-encoder, where the hidden representation has more dimensions than the input, we must introduce some form of regularization to force the machine to learn a representation that does not simply mirror the input. For example, ...
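To make this concrete, the following sketch (assuming TensorFlow's Keras API) builds an overcomplete auto-encoder in which an L1 activity penalty on the hidden code acts as the regularizer, so the network cannot simply copy the input through a larger hidden layer. The layer sizes, penalty strength, and toy data are illustrative assumptions, not values taken from the text.

```python
# A minimal sketch, assuming TensorFlow/Keras is available, of an overcomplete
# auto-encoder regularized with an L1 activity penalty on the hidden code.
# Layer sizes and the penalty strength are illustrative assumptions.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers, regularizers

input_dim = 64    # dimensionality of x (assumed for illustration)
code_dim = 128    # overcomplete: the hidden code is larger than the input

inputs = keras.Input(shape=(input_dim,))
# f(): encoder producing the hidden representation, with an L1 sparsity penalty
code = layers.Dense(code_dim, activation="relu",
                    activity_regularizer=regularizers.l1(1e-4))(inputs)
# g(): decoder reconstructing x from the hidden code
reconstruction = layers.Dense(input_dim, activation="linear")(code)

autoencoder = keras.Model(inputs, reconstruction)
# F(x, g(f(x))): mean squared reconstruction error plus the regularization term
autoencoder.compile(optimizer="adam", loss="mse")

# Train on toy data: the target is the input itself
x_train = np.random.rand(1000, input_dim).astype("float32")
autoencoder.fit(x_train, x_train, epochs=5, batch_size=32, verbose=0)
```

Because the penalty is applied to the activations rather than the weights, the optimizer is pushed toward codes in which only a few hidden units are active for any given input, one common way of preventing an overcomplete auto-encoder from learning the identity mapping.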