Regularization is a technique used to reduce overfitting. It does this by adding an additional term to the model's error (loss) function, which measures the difference between the model output and the target value, to prevent the model weight parameters from taking extreme values during training. Three types of regularization are used in CNNs:
- L1 regularization: For each model weight, w, an additional term, λ|w|, is added to the model objective. This regularization encourages the weights to become sparse during optimization, with many driven to exactly zero.
- L2 regularization: For each model weight, w, an additional term, ½λw², is added to the model objective. This regularization keeps the weights small and diffuse (spread across many parameters) during optimization. L2 regularization can generally be expected to give superior performance over sparse L1 regularization; a minimal sketch of both penalties follows this list.
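To make the two penalties concrete, here is a minimal PyTorch sketch (not taken from the text) that adds an L1 or L2 term over all model weights to an ordinary classification loss. The tiny CNN, the 32×32 input size, and the regularization strength `lam` are assumptions chosen only for illustration.

```python
import torch
import torch.nn as nn

# Hypothetical CNN for 32x32 RGB images with 10 classes (illustrative only).
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.Flatten(),
    nn.Linear(8 * 32 * 32, 10),
)
criterion = nn.CrossEntropyLoss()
lam = 1e-4  # regularization strength (lambda), an assumed value

def regularized_loss(outputs, targets, kind="l2"):
    """Data loss plus an L1 or L2 penalty summed over all model weights."""
    data_loss = criterion(outputs, targets)
    if kind == "l1":
        # L1 term: lambda * sum(|w|) -- pushes many weights toward exactly zero (sparse)
        penalty = sum(p.abs().sum() for p in model.parameters())
    else:
        # L2 term: (1/2) * lambda * sum(w^2) -- keeps weights small and diffuse
        penalty = 0.5 * sum((p ** 2).sum() for p in model.parameters())
    return data_loss + lam * penalty

# Usage: inputs is a (N, 3, 32, 32) batch, targets is a (N,) tensor of class labels.
# loss = regularized_loss(model(inputs), targets, kind="l2"); loss.backward()
```

In practice, the L2 penalty is often applied implicitly through the optimizer's `weight_decay` argument rather than being added to the loss by hand, but the effect on the weights is the same idea as the explicit term above.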