
Deep Learning Quick Reference

by Mike Bernico
Packt Publishing, March 2018
272 pages · 7h 53m · English
Intermediate to advanced
Content preview from Deep Learning Quick Reference

Which hyperparameters should we optimize?

Even if you were to follow my advice above and settle on a good-enough architecture, you can and should still search for the best hyperparameters within that architecture. Some of the hyperparameters we might want to search include the following (a sketch of such a search follows the list):

  • Our choice of optimizer. Thus far, I've been using Adam, but an RMSprop optimizer or a well-tuned SGD may do better.
  • Each of these optimizers has a set of hyperparameters that we might tune, such as learning rate, momentum, and decay.
  • Network weight initialization.
  • Neuron activation.
  • Regularization parameters, such as dropout probability or the penalty coefficient used in L2 regularization.
  • Batch size.
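
To make this concrete, here is a minimal sketch (not from the book) of searching a few of these hyperparameters with scikit-learn's RandomizedSearchCV and the Keras scikit-learn wrapper. The network architecture, dataset, and candidate values are all placeholder assumptions.

import numpy as np
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import RandomizedSearchCV

def build_model(optimizer="adam", init="glorot_uniform",
                activation="relu", dropout=0.0):
    # Tiny placeholder network; the input width (20) is an assumption.
    model = Sequential()
    model.add(Dense(64, input_dim=20, kernel_initializer=init,
                    activation=activation))
    model.add(Dropout(dropout))
    model.add(Dense(1, kernel_initializer=init, activation="sigmoid"))
    model.compile(loss="binary_crossentropy", optimizer=optimizer,
                  metrics=["accuracy"])
    return model

# Each key mirrors a bullet above: optimizer, weight initialization,
# activation, dropout probability, and batch size. The candidate
# values are illustrative, not recommendations.
param_distributions = {
    "optimizer": ["adam", "rmsprop", "sgd"],
    "init": ["glorot_uniform", "he_normal"],
    "activation": ["relu", "tanh"],
    "dropout": [0.0, 0.25, 0.5],
    "batch_size": [32, 64, 128],
}

model = KerasClassifier(build_fn=build_model, epochs=10, verbose=0)
search = RandomizedSearchCV(model, param_distributions, n_iter=10, cv=3)

# X and y stand in for your real training data.
X = np.random.rand(500, 20)
y = np.random.randint(2, size=500)
search.fit(X, y)
print(search.best_params_)

Because RandomizedSearchCV samples only n_iter combinations rather than exhausting the full grid, the search stays tractable even though every candidate requires a complete training run. Optimizer-level hyperparameters such as learning rate, momentum, and decay could be searched the same way by constructing the optimizer object inside build_model.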

As implied above, this is not an exhaustive ...



Publisher Resources

ISBN: 9781788837996