Chapter 7. TensorFlow Abstractions and Simplifications
The aim of this chapter is to familiarize you with important practical extensions to TensorFlow. We start by describing what abstractions are and why they are useful to us, followed by a brief review of some popular TensorFlow abstraction libraries. We then go into two of these libraries in more depth, demonstrating some of their core functionality along with examples.
Chapter Overview
As most readers probably know, the term abstraction in the context of programming refers to a layer of code "on top of" existing code that generalizes it for a particular purpose. Abstractions are formed by grouping and wrapping pieces of code related to some higher-order functionality so that they can conveniently be used together. The result is simplified code that is easier to write, read, and debug, and generally easier and faster to work with. In many cases TensorFlow abstractions not only make the code cleaner, but can also drastically reduce code length and as a result significantly cut development time.
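Before turning to TensorFlow, the general idea can be illustrated with a tiny plain-Python sketch (a hypothetical example, not taken from the chapter): repeated low-level steps are wrapped into a single function with a purpose-driven name.

```python
# Hypothetical toy example of abstraction in plain Python: the same
# low-level cleanup steps, duplicated at every call site, are wrapped
# into one higher-level, purpose-named function.

record = {"name": "  Ada ", "email": " ADA@EXAMPLE.COM"}

# Without abstraction -- the logic is repeated for every field:
name = record["name"].strip().lower()
email = record["email"].strip().lower()

# With abstraction -- one function captures the pattern once:
def normalize(field):
    """Strip surrounding whitespace and lowercase a text field."""
    return field.strip().lower()

print(normalize(record["name"]))   # -> ada
print(normalize(record["email"]))  # -> ada@example.com
```

The call sites become shorter and self-describing, and any fix to the cleanup logic now happens in exactly one place.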
To get us going, let's illustrate this basic notion in the context of TensorFlow by taking another look at some code for building a CNN, as we did in Chapter 4:
def weight_variable(shape):
    initial = tf.truncated_normal(shape, stddev=0.1)
    return tf.Variable(initial)

def bias_variable(shape):
    initial = tf.constant(0.1, shape=shape)
    return tf.Variable(initial)

def conv2d ...
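To make the payoff of abstraction concrete before introducing actual libraries, here is a hypothetical plain-Python sketch (no TensorFlow required; the names `dense_layer` and `forward` are our own, not from any library) of the pattern abstraction libraries apply: weight creation, bias creation, and the layer computation are bundled behind a single call instead of separate helper functions.

```python
import random

# Hypothetical sketch of what abstraction libraries do for us: instead of
# hand-writing weight_variable, bias_variable, and the layer operation
# separately, a single factory bundles all three behind one call.

def dense_layer(in_dim, out_dim, stddev=0.1):
    """Create weights and biases, and return the layer's forward function."""
    weights = [[random.gauss(0.0, stddev) for _ in range(out_dim)]
               for _ in range(in_dim)]
    biases = [0.1] * out_dim

    def forward(x):
        # y_j = sum_i x_i * W[i][j] + b_j
        return [sum(x[i] * weights[i][j] for i in range(in_dim)) + biases[j]
                for j in range(out_dim)]

    return forward

layer = dense_layer(4, 3)   # one line replaces several helper calls
output = layer([1.0, 2.0, 3.0, 4.0])
print(len(output))  # -> 3
```

The abstraction libraries covered later in this chapter take this idea much further, but the principle is the same: a layer becomes a single, named building block rather than a handful of boilerplate definitions.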