Training the neural network

We train the network by minimizing its loss, that is, by reducing the difference between the actual label and the network's prediction. Cross-entropy is the loss function used to measure this difference.
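
As a quick illustration of the idea (a minimal sketch with invented numbers, not the book's code), cross-entropy grows as the predicted probabilities drift away from the one-hot label:

import math

# One-hot label: the true class is the second of three classes.
label = [0.0, 1.0, 0.0]

# Two hypothetical predictions (probability distributions over the classes).
close_to_label = [0.05, 0.90, 0.05]
far_from_label = [0.70, 0.10, 0.20]

def cross_entropy(y_true, y_pred):
    # -sum(y_true * log(y_pred)) over all classes
    return -sum(t * math.log(p) for t, p in zip(y_true, y_pred))

print(cross_entropy(label, close_to_label))  # ~0.11: prediction matches the label, low loss
print(cross_entropy(label, far_from_label))  # ~2.30: prediction misses the label, high loss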

In TensorFlow, cross-entropy is provided by the following method:

tf.nn.softmax_cross_entropy_with_logits

This method applies softmax to the model's prediction. Softmax is similar to logistic regression: it produces a decimal between 0 and 1.0 for each class. For example, a logistic regression output of 0.9 from an email classifier suggests a 90% chance that an email is spam and a 10% chance that it is not. The sum of all the probabilities is 1.0, as the following example shows.
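
Here is a minimal sketch (the logits and label values are invented for illustration) that runs the method on a single two-class example:

import tensorflow as tf

# Made-up logits for one email over two classes: [spam, not spam].
logits = tf.constant([[2.2, 0.0]])

# Softmax converts the logits into probabilities that sum to 1.0.
probabilities = tf.nn.softmax(logits)
print(probabilities)                 # roughly [[0.90, 0.10]]: 90% spam, 10% not spam
print(tf.reduce_sum(probabilities))  # 1.0

# One-hot label stating that the email really is spam.
labels = tf.constant([[1.0, 0.0]])

# The method applies softmax internally, so it takes raw logits, not probabilities.
loss = tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=logits)
print(loss)  # cross-entropy for this example (~0.11 here)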

Softmax is implemented ...
