Deep Learning with TensorFlow - Second Edition by Md. Rezaul Karim, Giancarlo Zaccone


Tuning hyperparameters and advanced FFNNs

The flexibility of neural networks is also one of their main drawbacks: there are many hyperparameters to tweak. Even in a simple MLP, you can change the number of layers, the number of neurons per layer, and the type of activation function to use in each layer. You can also change the weight initialization logic, the dropout keep probability, and so on.
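For instance, a minimal sketch of such a configurable MLP, assuming tf.keras and MNIST-sized inputs (the build_mlp helper, its default values, and the choice of He initialization are illustrative, not prescriptions), might look like this:

import tensorflow as tf

def build_mlp(n_layers=2, n_neurons=100, activation='relu', dropout_rate=0.5):
    # Hypothetical helper exposing the main MLP hyperparameters as arguments.
    model = tf.keras.Sequential()
    model.add(tf.keras.layers.Flatten(input_shape=(28, 28)))  # e.g. MNIST-sized inputs
    for _ in range(n_layers):                        # number of hidden layers
        model.add(tf.keras.layers.Dense(
            n_neurons,                               # neurons per layer
            activation=activation,                   # activation function per layer
            kernel_initializer='he_normal'))         # weight initialization logic
        # Keras's Dropout layer takes the drop rate, that is, 1 - keep probability.
        model.add(tf.keras.layers.Dropout(dropout_rate))
    model.add(tf.keras.layers.Dense(10, activation='softmax'))  # 10-class output
    return model

Each argument of build_mlp corresponds to one of the hyperparameters listed above, which makes it straightforward to plug the function into a search procedure.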

Additionally, dealing with common problems in FFNNs, such as the vanishing gradient problem, and selecting the most suitable activation function, learning rate, and optimizer are of prime importance.

Tuning FFNN hyperparameters

Hyperparameters are parameters that are not directly learned within estimators. It is possible and recommended that you search the hyperparameter ...
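As a concrete illustration, a simple random search over a small hyperparameter space, assuming the build_mlp helper from the previous sketch and preloaded training and validation arrays (the search-space values, names, and trial budget are all illustrative), could be sketched as follows:

import random

# Hypothetical search space; the candidate values are illustrative only.
search_space = {
    'n_layers': [1, 2, 3],
    'n_neurons': [50, 100, 200],
    'activation': ['relu', 'tanh'],
    'dropout_rate': [0.2, 0.5],
}

def random_search(x_train, y_train, x_val, y_val, n_trials=10):
    # Assumes build_mlp() from the previous sketch and integer class labels.
    best_acc, best_params = 0.0, None
    for _ in range(n_trials):
        # Sample one random combination from the search space.
        params = {name: random.choice(values)
                  for name, values in search_space.items()}
        model = build_mlp(**params)
        model.compile(optimizer='adam',
                      loss='sparse_categorical_crossentropy',
                      metrics=['accuracy'])
        model.fit(x_train, y_train, epochs=5, verbose=0)
        _, acc = model.evaluate(x_val, y_val, verbose=0)
        if acc > best_acc:
            best_acc, best_params = acc, params
    return best_params, best_acc

Random search of this kind often finds good configurations faster than an exhaustive grid search, because it does not spend trials on every combination of values for unimportant hyperparameters.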
