Evaluating performance

We have explored a few deep learning models in earlier chapters. In Chapter 5, Image Classification Using Convolutional Neural Networks, we achieved an accuracy rate of 98.36% on the image classification task on the MNIST dataset. For the binary classification task (predicting which customers will return in the next 14 days) in Chapter 4, Training Deep Prediction Models, we got an accuracy rate of 77.88%. But what do these numbers actually mean, and how do we evaluate the performance of a deep learning model?
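Before going further, it is worth being precise about what an accuracy rate such as 77.88% is: the fraction of predictions that match the true labels. As a minimal sketch (the label vectors below are made up for illustration, not taken from the chapters mentioned above), accuracy can be computed in R as follows:

```r
# Hypothetical true labels and model predictions for a binary task
actual    <- c(1, 0, 1, 1, 0, 0, 1, 0)
predicted <- c(1, 0, 0, 1, 0, 1, 1, 0)

# Accuracy = proportion of predictions that equal the true label
accuracy <- mean(actual == predicted)
print(accuracy)  # 0.75 for this toy example
```

Note that a single accuracy figure can be misleading on imbalanced data: if only 5% of customers return, a model that always predicts "will not return" scores 95% accuracy while being useless, which is why the comparisons discussed next matter.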

The obvious starting point in evaluating whether your deep learning model has good predictive capability is by comparing it to other models. The MNIST dataset is used in a lot of benchmarks for deep learning research, so we ...
