Python Deep Learning - Second Edition

by Ivan Vasilev, Daniel Slater, Gianmario Spacagna, Peter Roelants, Valentino Zocca
January 2019
Intermediate to advanced
386 pages
11h 13m
English
Packt Publishing
Content preview from Python Deep Learning - Second Edition

Training

Now that we have defined the input, targets, and the network architecture, let's implement the training:

  1. The first step is to define a loss function, which measures the cost of outputting a wrong sequence of characters, given the input and targets. Because we are predicting the next character based on the preceding characters, this is a classification problem, and we'll use cross-entropy loss. We can compute it with the sparse_softmax_cross_entropy_with_logits TF function, which takes as input the network's logits (the output before the softmax) and the targets as class labels. To reduce the loss over the full sequence and all the batches, we'll use their mean value. First, we have to flatten the targets into one-dimensional vectors to make ...
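As a hedged sketch of what this step computes, here is the same calculation written out in plain NumPy rather than TensorFlow, so the shapes and the math are explicit (the batch size, sequence length, and vocabulary size are made-up toy values): flatten the targets and logits, take the cross-entropy of each character position against its class label, and average over the sequence and the batch.

```python
import numpy as np

def sparse_softmax_cross_entropy(logits, labels):
    # logits: (N, num_classes) raw scores, labels: (N,) integer class indices.
    # Numerically stable log-softmax: shift each row by its max first.
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_softmax = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    # The per-position loss is minus the log-probability of the correct class.
    return -log_softmax[np.arange(len(labels)), labels]

# Toy example: 2 sequences of 3 characters over a 5-character vocabulary
batch_size, seq_len, num_classes = 2, 3, 5
rng = np.random.default_rng(0)
logits = rng.normal(size=(batch_size, seq_len, num_classes))
targets = rng.integers(0, num_classes, size=(batch_size, seq_len))

# Flatten the targets into a one-dimensional vector and the logits into
# (N, num_classes), then reduce with the mean over all positions and batches
flat_logits = logits.reshape(-1, num_classes)
flat_targets = targets.reshape(-1)
loss = sparse_softmax_cross_entropy(flat_logits, flat_targets).mean()
```

With uniform logits the loss equals log(vocabulary size), which is a handy sanity check that the cross-entropy is computed correctly.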


ISBN: 9781789348460