Checking in on TensorFlow 2.0: Keras, API cleanup, and more
Paige Bailey, TensorFlow product manager at Google, highlights notable features of TensorFlow 2.0 and looks ahead to near-term updates.
With the recent release of TensorFlow 2.0 and TensorFlow World coming soon, we talked to Paige Bailey, TensorFlow product manager at Google, to learn how TensorFlow has evolved and where it and machine learning (ML) are heading. She also gave us a rundown on notable updates in TensorFlow 2.0.
TensorFlow was open sourced by Google in 2015 to improve speed, scalability, and usability for machine learning researchers interested in prototyping algorithms in Python rather than C++.
Since then, TensorFlow has been adopted across platforms and industries as an end-to-end ML platform for production workloads at scale. “As an AI-first company, this is incredibly important to Google,” Bailey says. “We use TensorFlow for almost all of our production machine learning. If you’ve used autocomplete in Gmail, or have seen suggested restaurants when you search for food near you, you’ve used TensorFlow.”
Key features of TensorFlow 2.0
We asked Bailey to call out some of the most notable features of TensorFlow 2.0.
tf.keras is the recommended high-level API: “Keras is a beloved machine-learning API, and how most of us first learned TensorFlow,” Bailey says. “tf.keras maps machine learning concepts into just a few lines of code, so you don’t need a graduate degree in computer science in order to get started with deep learning. A good example: one of our youngest TensorFlow contributors is only 10 years old!”
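To give a sense of what “just a few lines of code” looks like, here is a minimal sketch of a tf.keras classifier; the layer sizes and the 28x28 input shape are illustrative choices, not something from the interview:

```python
import tensorflow as tf

# A small fully connected classifier for 28x28 grayscale inputs
# (MNIST-style); the layer sizes here are arbitrary.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# One call wires up the optimizer, loss, and metrics; model.fit(...)
# would then train it on data.
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

From here, training is a single `model.fit(x_train, y_train)` call, which is the point Bailey is making about the low barrier to entry.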
API improvements: “TensorFlow 2.0 has done a great deal of API cleanup,” Bailey says, “including removing redundant symbols and providing consistent naming conventions.”
Eager execution: “Eager execution makes the experience of building machine learning models more Pythonic, and it allows developers to immediately inspect their variables and other model components,” she says.
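A quick sketch of what “immediately inspect” means in practice: in TensorFlow 2.0, eager execution is on by default, so operations return concrete values rather than nodes in a deferred graph:

```python
import tensorflow as tf

# Operations execute immediately and return concrete tensors.
x = tf.constant([[1.0, 2.0],
                 [3.0, 4.0]])
y = tf.matmul(x, x)

# The result can be inspected right away, NumPy-style, with no
# session or graph-building step.
print(y.numpy())  # [[ 7. 10.]
                  #  [15. 22.]]
```

In TensorFlow 1.x, the same computation would have required building a graph and running it inside a `tf.Session` before any value could be seen.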
Standardizing on SavedModel: “Using SavedModel gives developers the ability to create a single model that can then be deployed to browsers, mobile devices, and on servers,” Bailey notes.
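As a minimal sketch of the export/restore round trip (the `Doubler` module and the doubling function are invented here purely for illustration):

```python
import tempfile

import tensorflow as tf


class Doubler(tf.Module):
    """A trivial exportable module: multiplies its input by two."""

    @tf.function(input_signature=[tf.TensorSpec(shape=None, dtype=tf.float32)])
    def __call__(self, x):
        return 2.0 * x


# Export to the SavedModel format. The same on-disk artifact can be
# served, converted for mobile (TensorFlow Lite), or for the browser
# (TensorFlow.js).
export_dir = tempfile.mkdtemp()
tf.saved_model.save(Doubler(), export_dir)

# Loading restores a callable object, independent of the Python
# class that created it.
restored = tf.saved_model.load(export_dir)
print(restored(tf.constant(3.0)).numpy())  # 6.0
```

The same `tf.saved_model.save` call works on full tf.keras models, which is how one trained model reaches browsers, mobile devices, and servers.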
The near-term future of TensorFlow and ML
Bailey says to stay tuned for additional TensorFlow updates, including:
- Integration with compiler technology like Multi-Level Intermediate Representation (MLIR).
- More documentation, tutorials, and code samples on the TensorFlow website.
- Tensor Processing Unit (TPU) pod support for tf.keras, allowing models to be distributed with just a two-line code change.
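The “two-line change” in the last bullet refers to the `tf.distribute` strategy pattern: pick a strategy, then build the model inside its scope. Since a TPU pod isn't available in a typical local session, this sketch uses `MirroredStrategy` (which also runs on a single CPU/GPU machine); on a TPU you would construct a TPU strategy instead, and the model code itself would not change. The layer sizes are arbitrary:

```python
import tensorflow as tf

# Line 1 of the "two-line change": choose a distribution strategy.
# On a TPU pod this would be a TPU strategy; MirroredStrategy lets
# the same pattern run locally.
strategy = tf.distribute.MirroredStrategy()

# Line 2: open the strategy's scope. Everything inside is unchanged
# model-building code; variables are created distribution-aware.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(8,)),
        tf.keras.layers.Dense(16, activation="relu"),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
```

A subsequent `model.fit(...)` then distributes training across whatever replicas the strategy manages.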
Looking ahead to the broader development of ML, Bailey sees solutions emerging that will address the complexity of ML deployment.
“We are seeing an explosion of domain-specific accelerated hardware, and an ever-growing list of desired deployment targets for machine learning models,” Bailey says. “Today, it is very difficult to understand which models can be deployed where—and how that deployment target will impact model performance. The MLIR project aims to reduce this complexity, giving developers the ability to create one model—say, in tf.keras or in scikit-learn—deploy it anywhere, and have its ops accelerated on any hardware.”
What to watch for at TensorFlow World
“I’m excited for many big announcements,” Bailey says. “But I’m particularly interested in seeing how each cloud platform—AWS, Azure, Google Cloud—is using TensorFlow. We’re also holding our first-ever TensorFlow Contributor Summit. I’m looking forward to meeting all of the special interest group leaders in person.”