© Tanay Agrawal 2021
T. Agrawal, Hyperparameter Optimization in Machine Learning, https://doi.org/10.1007/978-1-4842-6579-6_4

4. Bayesian Optimization

Tanay Agrawal, Bangalore, Karnataka, India

In Chapters 2 and 3 we explored several hyperparameter tuning methods. Grid search and random search were quite straightforward, and we discussed how to distribute them to save memory and time. We also delved into more complex algorithms, such as HyperBand. But none of the algorithms we reviewed learned from their previous history. Suppose an algorithm could keep a log of all previous observations and learn from them. For example, it could observe that our model scores well near certain hyperparameter values and exploit that region of the search space to find the optimum in fewer evaluations.
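To make that idea concrete, here is a minimal sketch of such a history-driven search, assuming a one-dimensional toy objective and using scikit-learn's GaussianProcessRegressor as the surrogate model. The objective function, candidate grid, and expected-improvement loop below are illustrative assumptions for this sketch, not the chapter's own code.

```python
# Keep a log of past (hyperparameter, score) observations, fit a
# surrogate model to that log, and evaluate next wherever the
# surrogate expects the most improvement.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def objective(x):
    # Stand-in for a real validation score as a function of one hyperparameter.
    return -np.sin(3 * x) - x**2 + 0.7 * x

# Discrete grid of candidate hyperparameter values to choose from.
candidates = np.linspace(-2.0, 2.0, 400).reshape(-1, 1)

# The "log": start with a few random evaluations.
rng = np.random.default_rng(0)
X = rng.uniform(-2.0, 2.0, size=(3, 1))
y = objective(X).ravel()

gp = GaussianProcessRegressor()
for _ in range(10):
    gp.fit(X, y)  # learn from the history gathered so far
    mu, sigma = gp.predict(candidates, return_std=True)
    sigma = np.maximum(sigma, 1e-9)  # avoid division by zero
    best = y.max()
    # Expected improvement: favors points that look promising (high mu)
    # or uncertain (high sigma), balancing exploitation and exploration.
    z = (mu - best) / sigma
    ei = (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)
    x_next = candidates[np.argmax(ei)].reshape(1, -1)
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next).ravel())

print("best x:", X[np.argmax(y)][0], "best score:", y.max())
```

Each iteration refits the surrogate to the full history before choosing the next point, which is exactly the behavior the chapter contrasts with grid search, random search, and HyperBand.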
