Training the neural network

Training works by minimizing a loss function: we need to reduce the difference between the actual label value and the network's prediction. The loss function we use to measure this difference is cross entropy.

In TensorFlow, cross entropy is provided by the following method:

tf.nn.softmax_cross_entropy_with_logits

This method applies softmax to the model's prediction before computing the loss. Softmax extends logistic regression to multiple classes, producing a decimal probability between 0 and 1.0 for each class. For example, a logistic regression output of 0.9 from an email classifier suggests a 90% chance of the email being spam and a 10% chance of it not being spam. In the multi-class case, softmax assigns a probability to every class, and the sum of all the probabilities is 1.0, as the sketch below illustrates:
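The following is a minimal sketch of this call, assuming TensorFlow 2.x with eager execution; the logits and labels are illustrative values, not taken from the book:

import tensorflow as tf

# Raw, unnormalized scores from the network's final layer for one example.
logits = tf.constant([[2.0, 1.0, 0.1]])
# One-hot label: the first class is the correct one.
labels = tf.constant([[1.0, 0.0, 0.0]])

# softmax turns the logits into probabilities that sum to 1.0 ...
probabilities = tf.nn.softmax(logits)  # ~[[0.659, 0.242, 0.099]]

# ... and softmax_cross_entropy_with_logits applies softmax internally,
# then computes the cross entropy against the labels in one numerically
# stable step.
loss = tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=logits)
print(probabilities.numpy().sum(), loss.numpy())  # 1.0, ~[0.417]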

Softmax is implemented through ...
