Evaluating Machine Learning Models

A Beginner's Guide to Key Concepts and Pitfalls

Publisher: O'Reilly
Released: September 2015
Formats: ePub, Mobi, PDF

Data science today is a lot like the Wild West: there’s endless opportunity and excitement, but also a lot of chaos and confusion. If you’re new to data science and applied machine learning, evaluating a machine-learning model can seem pretty overwhelming. Now you have help. With this O’Reilly report, machine-learning expert Alice Zheng takes you through the model evaluation basics.

In this overview, Zheng first introduces the machine-learning workflow, and then dives into evaluation metrics and model selection. The latter half of the report focuses on hyperparameter tuning and A/B testing, which may benefit more seasoned machine-learning practitioners.

With this report, you will:

  • Learn the stages involved in developing a machine-learning model for use in a software application
  • Understand the metrics used for supervised learning models, including classification, regression, and ranking
  • Walk through evaluation mechanisms, such as hold-out validation, cross-validation, and bootstrapping (see the sketch after this list)
  • Explore hyperparameter tuning in detail, and discover why it’s so difficult
  • Learn the pitfalls of A/B testing, and examine a promising alternative: multi-armed bandits
  • Get suggestions for further reading, as well as useful software packages
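
The report itself is tool-agnostic, but the mechanics are easy to try. Below is a minimal sketch, assuming Python with scikit-learn, of two of the evaluation mechanisms listed above: hold-out validation and k-fold cross-validation. The dataset, model, and parameter choices here are illustrative, not drawn from the report.

```python
# Illustrative sketch of hold-out validation vs. k-fold cross-validation
# using scikit-learn; dataset and model choices are arbitrary examples.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split, cross_val_score

X, y = load_iris(return_X_y=True)

# Hold-out validation: reserve a fixed test split and score the model once.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("hold-out accuracy:", model.score(X_test, y_test))

# k-fold cross-validation: average the score over k rotating splits.
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print("5-fold CV accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))
```

Hold-out validation scores the model once on a fixed split, while cross-validation averages over k rotating splits, trading extra computation for a lower-variance estimate.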

Alice Zheng is the Director of Data Science at Dato, a Seattle-based startup that offers powerful large-scale machine learning and graph analytics tools. A tool builder and an expert in machine-learning algorithms, she has done research spanning software diagnosis, computer network security, and social network analysis.
