Chapter 8. Time-Windowed Aggregate Features

In Chapter 7, we built a machine learning model but ran into problems when trying to scale it out and make it production-ready. Briefly, here are the problems we encountered:

  1. One-hot encoding categorical columns caused an explosion in the size of the dataset.

  2. Embeddings would have involved separate training and bookkeeping.

  3. Putting the model into production would have required the machine learning library to be available in environments to which it is not easily portable.

  4. Augmenting the training dataset with streaming measures would have required the same code to process both batch data and streaming data.

In the remaining three chapters of this book, we implement a real-time, streaming machine learning pipeline that avoids these issues by using Cloud Dataflow and Cloud AI Platform (hosted versions of Apache Beam and TensorFlow, respectively).

The Need for Time Averages

In this chapter, we solve the issue of augmenting the dataset with time-windowed aggregate features. To do that, we will use Apache Beam to compute aggregate features on historical data, and then (in Chapter 10) we will use that same code at prediction time to compute the same aggregates in real time.
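To make the idea concrete, here is a minimal sketch of what such a pipeline could look like in Beam's Python SDK. The input path, the CSV layout (timestamp, airport, taxi-out minutes), and the window sizes are illustrative assumptions rather than the pipeline developed later in the chapter; the sketch only shows the pattern of assigning event timestamps, windowing a keyed PCollection, and computing a per-key average.

```python
import apache_beam as beam
from apache_beam.transforms import window
from apache_beam.options.pipeline_options import PipelineOptions

def parse_event(line):
    # Hypothetical CSV layout: unix_timestamp,airport,taxi_out_minutes
    timestamp, airport, taxi_out = line.split(',')
    # Attach the event time so windowing is based on when the flight taxied out,
    # not on when the record is processed.
    return window.TimestampedValue((airport, float(taxi_out)), float(timestamp))

with beam.Pipeline(options=PipelineOptions()) as p:
    (p
     | 'ReadEvents' >> beam.io.ReadFromText('gs://my-bucket/flights.csv')  # hypothetical path
     | 'ParseAndTimestamp' >> beam.Map(parse_event)
     # One-hour sliding windows emitted every five minutes, so each window holds
     # the most recent hour of taxi-out observations at each airport.
     | 'SlidingWindow' >> beam.WindowInto(window.SlidingWindows(size=3600, period=300))
     | 'AverageByAirport' >> beam.combiners.Mean.PerKey()
     | 'Format' >> beam.Map(lambda kv: f'{kv[0]},{kv[1]:.1f}')
     | 'WriteAverages' >> beam.io.WriteToText('gs://my-bucket/avg_taxiout'))  # hypothetical path
```

The reason Beam is attractive here is its unified batch/streaming model: the same windowing and combining transforms apply whether the source is a bounded text file of historical flights or an unbounded stream (for example, a Pub/Sub topic), which is exactly what lets us reuse this code at prediction time.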

What time-windowed aggregate features did we want to use, but couldn’t? Flight arrival times are scheduled based on the average taxi-out time experienced at the departure airport at that specific hour. For example, at peak hours at New York’s JFK airport, taxi-out times on the order ...
