5. Improving Model Accuracy

Overview

This chapter introduces regularization for neural networks. Regularization prevents a model from overfitting the training data, which helps it produce more accurate results when it is tested on new, unseen data. You will learn to apply different regularization techniques, including L1 regularization, L2 regularization, and dropout, to improve model performance and build robust models that generalize well. By the end of this chapter, you will also be able to implement grid search and random search in scikit-learn to find the optimal hyperparameters for your model.
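As a brief preview of the first topic, the following is a minimal sketch, not one of the chapter's exercises, of a Keras model that combines L2 weight regularization with dropout. The layer sizes, input dimension, penalty strength, dropout rate, and random data are illustrative assumptions.

```python
# Minimal sketch: L2 regularization plus dropout in a small Keras model.
# All sizes and values here are illustrative assumptions.
import numpy as np
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.regularizers import l2

model = Sequential()
# L2 penalty (lambda = 0.01) applied to this hidden layer's weights
model.add(Dense(32, activation='relu', input_dim=10,
                kernel_regularizer=l2(0.01)))
# Dropout randomly zeroes 20% of the previous layer's outputs during training
model.add(Dropout(0.2))
model.add(Dense(16, activation='relu', kernel_regularizer=l2(0.01)))
model.add(Dropout(0.2))
model.add(Dense(1, activation='sigmoid'))

model.compile(optimizer='adam', loss='binary_crossentropy',
              metrics=['accuracy'])

# Illustrative random data so the snippet runs end to end
X = np.random.rand(100, 10)
y = np.random.randint(2, size=100)
model.fit(X, y, epochs=5, batch_size=16, verbose=0)
```

The hyperparameter search covered at the end of the chapter can be sketched in a similar way by wrapping a Keras model so that scikit-learn's GridSearchCV can tune it (RandomizedSearchCV follows the same pattern for random search). The build function, parameter grid, and data below are illustrative assumptions; note that the keras.wrappers.scikit_learn wrapper shown here is not available in recent Keras releases, which may require the scikeras package instead.

```python
# Minimal sketch: grid search over a Keras model with scikit-learn.
# Parameter values and data are illustrative assumptions.
import numpy as np
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import GridSearchCV

def build_model(dropout_rate=0.2):
    # Small binary classifier whose dropout rate is exposed for tuning
    model = Sequential()
    model.add(Dense(32, activation='relu', input_dim=10))
    model.add(Dropout(dropout_rate))
    model.add(Dense(1, activation='sigmoid'))
    model.compile(optimizer='adam', loss='binary_crossentropy',
                  metrics=['accuracy'])
    return model

X = np.random.rand(100, 10)
y = np.random.randint(2, size=100)

# Wrap the Keras model so scikit-learn can treat it as an estimator
estimator = KerasClassifier(build_fn=build_model, epochs=5,
                            batch_size=16, verbose=0)
param_grid = {'dropout_rate': [0.0, 0.2, 0.5],
              'batch_size': [16, 32]}
grid = GridSearchCV(estimator, param_grid, cv=3)
grid_result = grid.fit(X, y)
print(grid_result.best_score_, grid_result.best_params_)
```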
