Reinforcement Learning with TensorFlow
book

by Sayon Dutta
April 2018
Intermediate to advanced
334 pages
10h 18m
English
Packt Publishing
Content preview from Reinforcement Learning with TensorFlow

The rectified linear unit function

The rectified linear unit, better known as ReLU, is the most widely used activation function.

The ReLU function has the advantage of being non-linear. It is defined piecewise: f(x) = 0 for x <= 0, and f(x) = x for x > 0. Because its gradient is trivially 0 or 1, backpropagation through it is simple, so multiple hidden layers activated by the ReLU function can be stacked.
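The piecewise definition above can be sketched directly. This is a minimal NumPy sketch for illustration; in TensorFlow the equivalent built-in is `tf.nn.relu`:

```python
import numpy as np

def relu(x):
    # Piecewise definition from the text:
    # f(x) = 0 for x <= 0, and f(x) = x for x > 0.
    return np.maximum(0.0, x)

# Negative inputs map to 0; positive inputs pass through unchanged.
x = np.array([-2.0, -0.5, 0.0, 1.5, 3.0])
print(relu(x))
```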

The main advantage of the ReLU function over other activation functions is that it does not activate all the neurons at the same time: any unit whose pre-activation is negative outputs exactly zero. This can be observed from the preceding graph of the ReLU ...
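The sparsity claim can be checked numerically. The pre-activation values `z` below are made up for illustration, not taken from the book; ReLU zeroes the negative half, so only a subset of units produce a non-zero output:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

# Hypothetical pre-activations for a small layer (illustrative values).
z = np.array([-1.2, 0.7, -0.3, 2.1, -0.9, 0.4])

# Units with negative pre-activation are silenced; only 3 of 6 remain active.
active = relu(z) > 0
print(int(active.sum()), "of", active.size, "units active")
```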


Publisher Resources

ISBN: 9781788835725