Custom training logic

As mentioned earlier, TF 2.0 brings eager execution by default, which means that legacy TF 1.x custom training logic built on a graph-based code flow is now obsolete. To implement such custom training logic in TF 2.0 under eager execution, tf.GradientTape can be used. The purpose of tf.GradientTape is to record operations for automatic differentiation, that is, for computing the gradient of an operation or computation with respect to its input variables. It is used as a context manager: TensorFlow records all operations executed within the tf.GradientTape context onto a "tape," and that tape, together with the gradients associated with each recorded operation, is then used to compute the gradient of the overall computation via reverse-mode differentiation.
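As a minimal sketch of this pattern, the following single training step records a forward pass on the tape, asks the tape for gradients of the loss with respect to the model's trainable variables, and applies them with an optimizer. The model, loss function, and data here are illustrative placeholders, not taken from the book:

```python
import tensorflow as tf

# Illustrative model, optimizer, and loss -- placeholders for this sketch.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(3,))])
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)
loss_fn = tf.keras.losses.MeanSquaredError()

# Toy inputs and targets.
x = tf.random.normal((8, 3))
y = tf.random.normal((8, 1))

with tf.GradientTape() as tape:
    # Operations executed inside this context are recorded on the tape.
    predictions = model(x, training=True)
    loss = loss_fn(y, predictions)

# The tape computes gradients of the loss w.r.t. the trainable variables...
grads = tape.gradient(loss, model.trainable_variables)
# ...which the optimizer then applies to update the model.
optimizer.apply_gradients(zip(grads, model.trainable_variables))
```

In a full training loop, this step would simply be repeated over batches of data; by default the tape is released after a single `gradient` call, so a fresh `tf.GradientTape` context is entered on each iteration.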
