
High-Performance TensorFlow in Production

Develop hands-on experience optimizing and deploying TensorFlow models

This event has ended.

What you’ll learn and how you can apply it

You’ll understand:

  • The structure of a TensorFlow model
  • Key components of TensorFlow Serving
  • How to optimize a TensorFlow model for serving
  • How to tune TensorFlow Serving for performance
  • How to deploy TensorFlow models with TensorFlow Serving
  • How to version and roll back models with TensorFlow Serving

And you'll be able to:

  • Optimize trained TensorFlow models to reduce prediction latency
  • Deploy trained TensorFlow models to TensorFlow Serving in production
  • Tune the TensorFlow Serving runtime to increase prediction throughput
  • Version and roll back models with TensorFlow Serving

This course is for you because…

  • You are a software engineer or data engineer with intermediate production-deployment experience and need to learn to deploy TensorFlow models to production.

  • You are a data scientist or business analyst with intermediate ML or AI experience and need to learn to optimize TensorFlow models for production deployment.

Prerequisites

  • Intermediate software engineering or data science skills.

Setup required prior to the first course meeting:

  1. The only requirement is a modern browser (e.g., Chrome or Firefox).
  2. Every attendee will get their own cloud instance, accessible through the browser. The instructor will provide each attendee with the IP address of their instance at the beginning of the course. All work will be done in Jupyter notebooks running on each attendee’s assigned cloud instance.
  3. All work can be saved locally. The instructor will provide download instructions at the end of the course.

Schedule

The timeframes are estimates and may vary depending on how the class progresses.

Day 1: TensorFlow Model Training

  • TensorFlow and GPUs
  • Inspect and Debug Models
  • Distributed Training Across a Cluster
  • Optimize Training with Queues, the Dataset API, and the JIT XLA Compiler (see the sketch after this list)
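
For orientation, here is a minimal sketch of the kind of tf.data input pipeline the Day 1 material covers. It assumes a recent TensorFlow 2.x environment; the toy arrays, batch size, and buffer size are illustrative assumptions, not course material.

    import numpy as np
    import tensorflow as tf

    # Toy arrays standing in for a real training set (illustrative only).
    features = np.random.rand(1000, 784).astype("float32")
    labels = np.random.randint(0, 10, size=(1000,)).astype("int64")

    # Build a tf.data pipeline: shuffle, batch, and prefetch so input
    # preparation overlaps with model computation on the GPU.
    dataset = (
        tf.data.Dataset.from_tensor_slices((features, labels))
        .shuffle(buffer_size=1000)
        .batch(64)
        .prefetch(tf.data.AUTOTUNE)
    )

    for batch_features, batch_labels in dataset.take(1):
        print(batch_features.shape, batch_labels.shape)  # (64, 784) (64,)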

Day 2: TensorFlow Model Deployment and Serving Predictions

  • Optimize Prediction with AOT XLA and the Graph Transform Tool (GTT)
  • Key Components of TensorFlow Serving (see the sketch after this list)
  • Optimize the TensorFlow Serving Runtime
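
As a rough illustration of the Day 2 serving workflow, the sketch below exports a small Keras model as a SavedModel and queries it through TensorFlow Serving's REST API. The model, paths, port, and model name are assumptions chosen for illustration, not course code; it also assumes TensorFlow 2.x with the bundled Keras and the requests package installed.

    import json

    import numpy as np
    import requests
    import tensorflow as tf

    # Export a tiny untrained model as a SavedModel under a numeric version
    # directory ("1"), which is the layout TensorFlow Serving expects.
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(784,)),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.save("/tmp/my_model/1")  # with Keras 3, use model.export(...) instead

    # Start TensorFlow Serving separately, for example with Docker:
    #   docker run -p 8501:8501 \
    #     --mount type=bind,source=/tmp/my_model,target=/models/my_model \
    #     -e MODEL_NAME=my_model tensorflow/serving

    # Query the REST prediction endpoint with a single random example.
    payload = {"instances": np.random.rand(1, 784).tolist()}
    response = requests.post(
        "http://localhost:8501/v1/models/my_model:predict",
        data=json.dumps(payload),
    )
    print(response.json())  # e.g. {"predictions": [[...]]}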

Your Instructor

  • Chris Fregly

    Chris Fregly is a San Francisco, California-based developer advocate for AI and machine learning at Amazon Web Services (AWS). He’s worked with Kubeflow and MLflow since 2017 and founded the global Advanced Kubeflow Meetup. Chris regularly speaks at ML/AI conferences across the world, including the O’Reilly AI and Strata Data Conferences. Previously, Chris founded PipelineAI, helping startups and enterprises continuously deploy AI and machine learning pipelines using Kubeflow and MLflow, and was an ML-focused engineer at both Netflix and Databricks.
