Setting benchmarks in machine learning

Dave Patterson and other industry leaders discuss how MLPerf will define an entire suite of benchmarks to measure performance of software, hardware, and cloud systems.

By Roger Chen
May 16, 2018
Gauge (source: Reto Scheiwiller on Pexels)

Machine learning has rapidly become one of the most impactful fields in industry. For that reason, it has also become one of the most competitive. However, everyone stands to benefit from setting common goals and standards that drive the entire field forward in a non-zero-sum fashion. With MLPerf, a cooperative of leading academic and industry institutions has come together to do just that.

Because machine learning is such a diverse field, MLPerf will define an entire suite of benchmarks to measure the performance of software, hardware, and cloud systems. The machine learning problems covered range from image classification to translation to reinforcement learning. MLPerf will feature two divisions for benchmarking. The Closed Division fixes both models and hyperparameters in order to fairly compare performance across different hardware systems and software frameworks. The Open Division lifts those constraints, and the hope is that best-in-class models and hyperparameters emerging from the Open Division might then set the standards for next-generation Closed Division testing.
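To make the Closed versus Open distinction concrete, here is a minimal, purely illustrative sketch in Python. The class names, fields, and example values below are hypothetical and are not taken from the MLPerf rules or reference implementations; the sketch simply mirrors the idea that a closed-division entry must match a fixed reference model and hyperparameters, while an open-division entry may change both.

```python
# Illustrative sketch only: these class and field names are hypothetical and do
# not come from the MLPerf reference code; they mirror the Closed vs. Open
# division rules described above.
from dataclasses import dataclass, field
from typing import Dict


@dataclass
class BenchmarkTask:
    """One entry in the suite, e.g., image classification or translation."""
    name: str
    dataset: str
    target_quality: float  # quality threshold a submission must reach


@dataclass
class Submission:
    task: BenchmarkTask
    division: str  # "closed" or "open"
    model: str
    hyperparameters: Dict[str, float] = field(default_factory=dict)

    def is_valid(self, reference_model: str,
                 reference_hparams: Dict[str, float]) -> bool:
        """Closed-division entries must match the reference model and
        hyperparameters; open-division entries are free to change both."""
        if self.division == "closed":
            return (self.model == reference_model
                    and self.hyperparameters == reference_hparams)
        return True


if __name__ == "__main__":
    # Hypothetical task and reference settings, for illustration only.
    task = BenchmarkTask("image_classification", "example-dataset", 0.75)
    reference_hparams = {"learning_rate": 0.1, "batch_size": 256}

    closed = Submission(task, "closed", "reference-model", dict(reference_hparams))
    open_entry = Submission(task, "open", "custom-model", {"learning_rate": 0.05})

    print(closed.is_valid("reference-model", reference_hparams))      # True
    print(open_entry.is_valid("reference-model", reference_hparams))  # True (open division)
```

In this sketch, the only rule enforced is the one the article describes: fixed models and hyperparameters in the Closed Division, freedom to innovate in the Open Division.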


Perhaps the most potent ingredient of this ambitious new effort is the roster of industry-leading players that have already come together. In this interview, you will have a chance to hear from some of them, including Turing Award winner Dave Patterson.
