Apache MXNet—the fruit of cross-institutional collaboration

MXNet’s origins show through in its power and flexibility.

By Timothy McGovern
September 25, 2017
King's Cross Station (source: Pixabay)

The Apache MXNet framework (incubating at the Apache Software Foundation) was developed to support multiple approaches to deep learning. One route to faster training is to define the model declaratively, separating it from the training algorithm. While this symbolic approach speeds training, it can add constraints and complexity, because a model defined this way is hard to update as understanding of the problem improves. Other neural network libraries address this with more flexible, imperative interfaces, but at the cost of training speed.

Perhaps appropriately, the idea of taking more than one route to a problem is paralleled in other technical aspects of the framework, and in the Apache MXNet community itself. MXNet gives developers the best of both worlds: a concise, easy-to-understand, dynamic programming interface for defining both the model and the algorithm, without sacrificing training speed.
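The trade-off described above can be made concrete with a small sketch. This is plain Python, not MXNet's actual API (the names `Sym`, `mul`, `add`, and `run` are illustrative only): an imperative style executes each operation immediately, while a symbolic style first builds a graph of the whole computation, which a framework can then inspect and optimize before running it.

```python
# Imperative style: each operation runs as soon as it is written.
def imperative(a, b):
    c = a * b          # executes immediately
    d = c + 1          # executes immediately
    return d

# Symbolic style: first describe the computation as a graph of nodes...
class Sym:
    def __init__(self, op, args):
        self.op, self.args = op, args

def mul(x, y): return Sym("mul", (x, y))
def add(x, y): return Sym("add", (x, y))

# ...then execute (or optimize) the whole graph later, given named inputs.
def run(node, env):
    if isinstance(node, str):                 # a named input placeholder
        return env[node]
    if isinstance(node, Sym):                 # an operation node
        vals = [run(a, env) for a in node.args]
        return vals[0] * vals[1] if node.op == "mul" else vals[0] + vals[1]
    return node                               # a constant

graph = add(mul("a", "b"), 1)                 # nothing is computed yet
print(imperative(3, 4))                       # -> 13
print(run(graph, {"a": 3, "b": 4}))           # -> 13
```

Because the symbolic graph exists before any computation happens, a framework can fuse operations, plan memory, or schedule work across devices; the imperative version is easier to write and debug but leaves less room for such whole-program optimization. MXNet's design aims to offer both without forcing the user to choose one up front.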


The MXNet community has deep roots in cross-institutional collaboration: the original MXNet publication had authors from 10 institutions. The project has made the most of this variety of backgrounds, with each team developing the project to meet its own needs, institutional traditions, and commitments. It should not be surprising that MXNet was "born" with APIs for C++, Python, R, Scala, MATLAB, JavaScript, Go, and Julia.

While the academic origins of MXNet are recent, the breadth of its contributor base means MXNet supports not only building products in a variety of languages, but also running on a variety of computational hardware. For training models, this means GPUs, naturally; for inference, it can mean running models on anything from mobile devices to lightweight general-purpose computers (e.g., Raspberry Pi) to purpose-specific FPGAs or other IoT device architectures. MXNet's design and community are both well-positioned to stay on top of all these developments.

The initial variety of contributors and approaches to neural networks has continued to evolve. As with other deep learning frameworks, the main contributors and users of Apache MXNet are either providing analytics as a service (keeping MXNet "under the hood") or building customized pipelines for a vertical-specific AI application. Among the latter, we see TuSimple building an autonomous driving platform, TwoSense with a behavioral biometric and identification tool, and some teams within Amazon (fulfillment center management and robotics, for example, or Sockeye, the sequence-to-sequence machine translation framework).

Apache MXNet is also used as part of larger analytics suites, like the Wolfram Language, whose latest release includes a high-level front end for MXNet (Wolfram Research is also a significant contributor of code to the project). Microsoft is taking the lead on integrating MXNet into the R language, among other efforts. Several of Amazon's products use MXNet, including Amazon Rekognition for image analysis; the Amazon Echo line, including the Echo Look fashion assistant; Amazon Lex; and the Amazon.com recommendation engine. And with the release of Core ML, Apple is contributing to Apache MXNet to bring deep learning models to Apple devices.

Deep learning is among the most disruptive technologies of 2017. No longer the exclusive domain of academic researchers, it is now expected to be on the roadmap of any data-driven organization. The power and flexibility of MXNet make it possible to build prototypes and implement them in a variety of production environments.

Note: Apache MXNet is an effort undergoing incubation at the Apache Software Foundation (ASF). For more information, visit the project website.
