5 AI trends to watch in 2018

From methods to tools to ethics, Ben Lorica looks at what's in store for artificial intelligence.

By Ben Lorica
January 9, 2018
Telescope (source: Hans via Pixabay)

What will 2018 bring in AI? Here’s what’s on our radar.

Expect substantial progress in machine learning methods, understanding, and pedagogy

As in recent years, new deep learning architectures and (distributed) training algorithms will lead to impressive results and applications in a range of domains, including computer vision, speech, and text. Expect to see companies make progress on efficient algorithms for training, inference, and data processing on edge devices. At the same time, collaboration between machine learning experts from different subfields will produce interesting breakthroughs; examples include work that combines Bayesian methods with deep learning, and work that pairs neuroevolution with gradient-based deep learning.


However, as successful as deep learning has been, our understanding of why it works so well is still limited. Both researchers and practitioners are already hard at work addressing this challenge. We anticipate that in 2018 we’ll see even more people working to improve theoretical understanding and pedagogy.

New developments and lowered costs in hardware will enable better data collection and faster deep learning

Deep learning is computationally intensive. As a result, much of the innovation in hardware pertains to deep learning training and inference (on both the edge and the server). Expect established hardware companies, cloud providers, and startups in the West and in China to release new processors, accompanying software frameworks and interconnects, and optimized systems assembled specifically to help companies speed up their deep learning experiments.

But the data behind deep learning has to be collected somehow. Many industrial AI systems rely on specialized sensors, such as LIDAR. Costs will continue to decline as startups produce alternative sensors, and new methods for gathering and using data will emerge, such as sensor fusion and high-volume, low-resolution data collection on edge devices.

Developer tools for AI and deep learning will continue to evolve

TensorFlow remains by far the most popular deep learning library, but other frameworks like Caffe, PyTorch, and BigDL will continue to garner users and use cases. We also anticipate new deep learning tools that simplify architecture and hyperparameter tuning, distributed training, and model deployment and management. Other areas in which we expect progress include:

  • Simulators, such as digital twins, which allow developers to speed up the development of AI systems, along with the reinforcement learning libraries that integrate with them (the RL library that’s part of RISELab’s Ray is a great example; a sketch of the simulator interface such libraries expect appears after this list)
  • Developer tools for building AI applications that can process multimodal inputs
  • Tools that target developers who aren’t data engineers or data scientists
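
To make the simulator bullet concrete, here is a minimal sketch of the reset/step interface, popularized by OpenAI Gym, that most reinforcement learning libraries (Ray’s included) expect a simulator or digital twin to expose. The ThermostatSim environment, its dynamics, and its reward function are hypothetical stand-ins, not a real digital twin or any particular library’s API.

    import random

    class ThermostatSim:
        """Toy stand-in for a digital twin: a room whose temperature an agent must hold near a setpoint."""

        SETPOINT = 21.0  # target temperature (degrees Celsius)

        def reset(self):
            # Start each episode at a random temperature and return the first observation.
            self.temp = random.uniform(15.0, 27.0)
            self.steps = 0
            return self.temp

        def step(self, action):
            # action: 0 = heater off, 1 = heater on
            self.temp += 0.5 if action == 1 else -0.3
            self.temp += random.gauss(0.0, 0.1)  # unmodeled disturbances
            self.steps += 1
            reward = -abs(self.temp - self.SETPOINT)  # closer to the setpoint is better
            done = self.steps >= 200                  # fixed-length episodes
            return self.temp, reward, done, {}

    # A random policy standing in for whatever policy an RL library would learn.
    env = ThermostatSim()
    obs, total_reward, done = env.reset(), 0.0, False
    while not done:
        obs, reward, done, _ = env.step(random.choice([0, 1]))
        total_reward += reward
    print(f"episode return under a random policy: {total_reward:.1f}")

The point of the simulator in this loop is that an agent can take millions of cheap, safe steps before ever touching a physical system, which is exactly why RL libraries are built around this kind of contract.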

We’ll see many more use cases for automation, particularly in the enterprise

As more companies enter the AI space, they’ll continue to find tasks that can be (semi) automated using existing tools and methods. A natural starting point will be low-skilled tasks that consume the time of high-skilled workers. Other applications include automation products using speech and natural language technologies, industrial automation and robotics, and use cases in health and medicine, such as drug discovery, medical assistants, and genomics.

People will also increasingly use automation for creative pursuits like AI-generated music, images, and visual arts, which will start appearing in commercial products. And this innovation isn’t limited to the US. With government support for AI technologies, access to large datasets and users, and a highly competitive market that rewards early movers, Chinese companies and startups are poised to stake a claim in automation.

The AI community will continue to address concerns about privacy, ethics, and responsible AI

Last year, we predicted increased attention to issues of ethics and privacy, and in 2018 we expect this to continue. Fairness, transparency, and explainability are essential for most commercial AI systems. There will be continued discussion about how to create tools to ensure responsible AI, such as tools for combating “fake news.”

We’ll also see developments aimed at safety. Expect advances in reproducible algorithms from collaborations between researchers across a number of disciplines, particularly for mission-critical AI systems, which require error estimates. And with the General Data Protection Regulation (GDPR) set to take effect on May 25, 2018, we’ll see companies (like Apple) develop or improve privacy-preserving machine learning products.
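
As one concrete illustration of what “privacy-preserving” can mean (a sketch of a general technique, not a description of Apple’s or anyone else’s product): differential privacy adds calibrated noise to aggregate statistics so that no individual’s data can be reliably inferred from the released numbers. The private_count helper and the toy user list below are hypothetical.

    import numpy as np

    def private_count(records, predicate, epsilon=0.5):
        # A counting query changes by at most 1 when one record is added or removed
        # (sensitivity = 1), so Laplace noise with scale 1/epsilon yields
        # epsilon-differential privacy; smaller epsilon means more noise and more privacy.
        true_count = sum(1 for r in records if predicate(r))
        noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
        return true_count + noise

    # Hypothetical usage: count how many users enabled a feature without exposing any single user.
    users = [{"id": i, "feature_on": i % 3 == 0} for i in range(1000)]
    print(private_count(users, lambda u: u["feature_on"], epsilon=0.5))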
