How to train and deploy deep learning at scale

The O’Reilly Data Show Podcast: Ameet Talwalkar on large-scale machine learning.

By Ben Lorica
March 15, 2018
Neural network (source: Kevin Rheese on Flickr)

In this episode of the Data Show, I spoke with Ameet Talwalkar, assistant professor of machine learning at CMU and co-founder of Determined AI. He was an early and key contributor to Spark MLlib and a member of AMPLab. Most recently, he helped conceive and organize the first edition of SysML, a new academic conference at the intersection of systems and machine learning (ML).

We discussed using and deploying deep learning at scale. This is an empirical era for machine learning, and, as I noted in an earlier article, as successful as deep learning has been, our level of understanding of why it works so well is still lacking. In practice, machine learning engineers need to explore and experiment using different architectures and hyperparameters before they settle on a model that works for their specific use case. Training a single model usually involves big (labeled) data and big models; as such, exploring the space of possible model architectures and parameters can take days, weeks, or even months. Talwalkar has spent the last few years grappling with this problem as an academic researcher and as an entrepreneur. In this episode, he describes some of his related work on hyperparameter tuning, systems, and more.


Here are some highlights from our conversation:

Deep learning

I would say that you hear a lot about the modeling problems associated with deep learning. How do I frame my problem as a machine learning problem? How do I pick my architecture? How do I debug things when things go wrong? … What we’ve seen in practice is that, maybe somewhat surprisingly, the biggest challenges ML engineers face are actually due to the lack of tools and software for deep learning. These are hybrid systems/ML problems, very similar to the sorts of research that came out of the AMPLab.

… Things like TensorFlow and Keras, and a lot of the other platforms you mentioned, are a great step forward. They’re really good at abstracting away the low-level details of a particular learning architecture. In five lines, you can describe what your architecture looks like and also specify which algorithms you want to use for training.
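As a rough illustration of that brevity, here is a minimal sketch in Keras; the layer sizes, optimizer, and loss are illustrative assumptions, not details from the conversation:

```python
# A minimal sketch, assuming TensorFlow 2.x / Keras: a small classifier is
# defined and its training algorithm (optimizer and loss) specified in a
# handful of lines. The layer sizes and hyperparameters are placeholders.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(x_train, y_train, epochs=5)  # training data assumed to exist
```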

There are a lot of other systems challenges associated with actually going end to end, from data to a deployed model. The existing software solutions don’t really tackle a big set of these challenges. For example, regardless of the software you’re using, it takes days to weeks to train a deep learning model. There are real open challenges in how to best use parallel and distributed computing, both to train a particular model and to tune the hyperparameters of different models.
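To make the hyperparameter-tuning side of that challenge concrete, here is a hedged sketch of a parallel random search. The configuration space and the train_and_score stub are hypothetical stand-ins for a real training run; none of this reflects Talwalkar’s or Determined AI’s actual tooling.

```python
# Toy parallel random search: sample hyperparameter configurations and
# evaluate them concurrently in worker processes. train_and_score is a
# placeholder for a real (hours-long) training job.
import random
from concurrent.futures import ProcessPoolExecutor

def sample_config():
    return {
        "learning_rate": 10 ** random.uniform(-5, -1),
        "batch_size": random.choice([32, 64, 128, 256]),
    }

def train_and_score(config):
    # Placeholder: in practice this would train a model under some budget
    # and return its validation accuracy.
    return random.random(), config

if __name__ == "__main__":
    configs = [sample_config() for _ in range(16)]
    with ProcessPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(train_and_score, configs))
    best_score, best_config = max(results, key=lambda r: r[0])
    print(best_score, best_config)
```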

We’ve also found that the vast majority of organizations we’ve spoken to in the last year or so that are using deep learning for what I’d call mission-critical problems are actually doing it with on-premises hardware. Managing this hardware is a huge challenge, and it’s something that machine learning engineers, folks like me if I were working at one of these companies, have to figure out for themselves. It’s kind of a mismatch between their interests and their skills, but it’s something they have to take care of.

Understanding distributed training

To give a little bit more background, the idea behind this work started about four years ago. There was no deep learning in Spark MLlib at the time, and we were trying to figure out how to perform distributed training of deep learning models in Spark. Before getting our hands dirty and trying to implement anything, we wanted to do some back-of-the-envelope calculations to see what speed-ups you could hope to get.

… The two main ingredients here are just computation and communication. … We wanted to understand this landscape of distributed training, and, using Paleo, we’ve been able to get a good sense of it without actually running experiments. The intuition is simple: if we’re very careful in our bookkeeping, we can write down the full set of computational operations required to train a particular neural network architecture.
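That bookkeeping intuition lends itself to a back-of-the-envelope calculation. The sketch below is not Paleo; it is a toy estimate, under assumed hardware numbers, of how compute and communication trade off when a single dense layer’s training step is split across workers with a naive gradient exchange.

```python
# Rough illustration of compute-vs-communication bookkeeping for distributed
# training of one dense layer. Hardware numbers and the single-layer "network"
# are assumptions for illustration; Paleo models full architectures in far
# more detail.

FLOPS_PER_GPU = 10e12         # assumed peak throughput: 10 TFLOP/s per device
NETWORK_BANDWIDTH = 10e9 / 8  # assumed 10 Gb/s link, in bytes per second
BYTES_PER_PARAM = 4           # float32 gradients

def dense_layer_cost(batch_size, n_in, n_out):
    """FLOPs and parameter count for one dense layer's forward+backward pass."""
    params = n_in * n_out
    # ~2 FLOPs per multiply-add; backward pass costs roughly 2x the forward pass.
    flops = 2 * batch_size * params * 3
    return flops, params

def estimated_speedup(batch_size, n_in, n_out, n_workers):
    flops, params = dense_layer_cost(batch_size, n_in, n_out)
    compute_time = flops / (n_workers * FLOPS_PER_GPU)
    # Naive all-to-one gradient exchange; real systems use smarter schemes.
    comm_time = (n_workers * params * BYTES_PER_PARAM) / NETWORK_BANDWIDTH
    serial_time = flops / FLOPS_PER_GPU
    return serial_time / (compute_time + comm_time)

for workers in (1, 2, 4, 8, 16):
    print(workers, round(estimated_speedup(256, 4096, 4096, workers), 3))
```

Even this crude version makes the qualitative point: for parameter-heavy layers and modest batch sizes, communication can swamp computation, so adding workers naively may yield little or no speed-up.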

[Full disclosure: I’m an advisor to Determined AI.]

