Cross-entropy loss (log loss)

The simplest form of image classification is binary classification. Here, the classifier has just one class of object to detect, so it must decide between two outcomes, for example, dog/no dog. In this case, the loss function we are likely to use is the binary cross-entropy loss.

The cross-entropy function between the true labels $p$ and the model predictions $q$ is defined as:

$$H(p, q) = -\sum_{i} p_i \log(q_i)$$

where $i$ indexes each possible element of our labels and predictions.
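This formula translates directly into code. Here is a minimal NumPy sketch of it; the values are made up for a three-class example:

```python
import numpy as np

# p: true label distribution (one-hot here), q: predicted probabilities
p = np.array([0.0, 1.0, 0.0])  # true class is index 1
q = np.array([0.1, 0.7, 0.2])  # model's predicted probabilities

# H(p, q) = -sum_i p_i * log(q_i)
cross_entropy = -np.sum(p * np.log(q))
print(cross_entropy)  # -log(0.7) ≈ 0.357
```

Because $p$ is one-hot, only the term for the true class survives the sum, so the loss reduces to the negative log-probability the model assigned to the correct class.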

However, as we are dealing with the binary case, where there are only two possible outcomes, $y = 1$ and $y = 0$, the distributions simplify to $p \in \{y, 1 - y\}$ and $q \in \{\hat{y}, 1 - \hat{y}\}$, and we get:

$$L(y, \hat{y}) = -\big(y \log(\hat{y}) + (1 - y) \log(1 - \hat{y})\big)$$
This is equivalent to the negative log-likelihood of a Bernoulli distribution: when $y = 1$ only the first term contributes, and when $y = 0$ only the second term does.
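To see how the two terms switch on and off, consider this small sketch; the binary_cross_entropy helper is ours, written just to evaluate the formula above:

```python
import numpy as np

def binary_cross_entropy(y, y_hat):
    """Per-example binary cross-entropy from the formula above."""
    return -(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))

print(binary_cross_entropy(1.0, 0.9))  # confident and correct -> ≈ 0.105
print(binary_cross_entropy(1.0, 0.1))  # confident but wrong   -> ≈ 2.303
```

Note how the loss grows sharply as the predicted probability moves away from the true label; this is what pushes the model toward confident, correct predictions during training.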

Iterating over $m$ training examples, the cost function $L$ that we want to minimize becomes the average of the per-example losses:

$$L = -\frac{1}{m} \sum_{i=1}^{m} \Big( y^{(i)} \log(\hat{y}^{(i)}) + (1 - y^{(i)}) \log(1 - \hat{y}^{(i)}) \Big)$$
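Since this book works in TensorFlow, here is a minimal sketch of this averaged loss, computed both by hand and with the built-in tf.keras.losses.BinaryCrossentropy; the batch values are made up for illustration:

```python
import tensorflow as tf

# An illustrative batch of m = 4 examples: labels and predicted probabilities
y_true = tf.constant([1.0, 0.0, 1.0, 0.0])
y_pred = tf.constant([0.9, 0.2, 0.6, 0.4])

# Manual implementation of the averaged binary cross-entropy above
manual = -tf.reduce_mean(
    y_true * tf.math.log(y_pred) + (1.0 - y_true) * tf.math.log(1.0 - y_pred)
)

# Built-in equivalent (expects probabilities, since from_logits=False by default)
builtin = tf.keras.losses.BinaryCrossentropy()(y_true, y_pred)

print(manual.numpy(), builtin.numpy())  # the two values should match closely
```

The built-in version also clips predictions away from exactly 0 and 1 for numerical stability, which is why in practice it is preferable to hand-rolling the formula.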
