© Umberto Michelucci 2022
U. Michelucci, Applied Deep Learning with TensorFlow 2, https://doi.org/10.1007/978-1-4842-8020-1_6

6. Hyper-Parameter Tuning

Umberto Michelucci
Dübendorf, Switzerland

Hyper-parameters can be loosely defined as parameters that do not change during training. Examples include the number of layers in an FFNN, the number of neurons in each layer, the activation functions, the learning rate,1 and so on.
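As a minimal sketch (not code from the book), the hyper-parameters listed above can be gathered in one place, separate from the weights and biases that training will learn. The dictionary keys and the helper function below are illustrative choices, not an established API.

```python
# Hypothetical example: hyper-parameters collected in a single dict,
# kept apart from the parameters that change during training.
hyperparams = {
    "n_layers": 3,          # number of hidden layers in the FFNN
    "n_neurons": 64,        # neurons per hidden layer
    "activation": "relu",   # activation function for the hidden layers
    "learning_rate": 1e-3,  # step size used by the optimizer
}

def build_layer_sizes(hp, n_inputs, n_outputs):
    """Return the layer widths of an FFNN described by the hyper-parameters."""
    return [n_inputs] + [hp["n_neurons"]] * hp["n_layers"] + [n_outputs]

print(build_layer_sizes(hyperparams, n_inputs=10, n_outputs=1))
# prints [10, 64, 64, 64, 1]
```

Keeping the hyper-parameters in one structure like this makes it easy to vary them systematically, which is exactly what the tuning methods discussed in this chapter do.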

This chapter looks at the problem of finding the hyper-parameters that get the best results from your models. Doing this is called hyper-parameter tuning. We first describe what a black-box optimization problem is, and how this class of problems relates to hyper-parameter tuning. We then look at the three most common methods for tackling ...
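To make the black-box framing concrete, here is a hedged sketch of grid search, one common tuning method. The objective function below is a hypothetical stand-in: in practice each evaluation would train a model and return its validation error, and we could not inspect the function's gradients, only evaluate it.

```python
import itertools

def validation_error(learning_rate, n_neurons):
    # Toy black-box objective standing in for "train the model with these
    # hyper-parameters and measure the validation error". The minimum is
    # placed at learning_rate=0.01, n_neurons=64 purely for illustration.
    return (learning_rate - 0.01) ** 2 + (n_neurons - 64) ** 2 / 1e4

# Candidate values for each hyper-parameter; grid search evaluates the
# objective at every combination.
grid = {
    "learning_rate": [0.001, 0.01, 0.1],
    "n_neurons": [16, 64, 256],
}

best = min(
    itertools.product(*grid.values()),
    key=lambda combo: validation_error(*combo),
)
print(dict(zip(grid.keys(), best)))
# prints {'learning_rate': 0.01, 'n_neurons': 64}
```

Note that the cost grows multiplicatively with the number of values per hyper-parameter (here 3 × 3 = 9 evaluations), which is why alternatives to plain grid search exist.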
