How to build privacy and security into deep learning models

Video description

In recent years, we’ve seen tremendous improvements in artificial intelligence, driven by advances in neural network–based models. However, the more popular these algorithms and techniques become, the more serious the implications for data and user privacy. These issues will drastically impact the future of AI research—specifically how neural-based models are developed, deployed, and evaluated.

Yishay Carmiel (IntelligentWire) shares techniques and explains how data privacy will shape machine learning development and how future training and inference will be affected. Yishay first dives into why training on private data must be addressed, covering federated learning and differential privacy. He then turns to why inference on private data must be addressed, covering homomorphic encryption for neural networks, polynomial approximation of neural networks, protecting data inside neural networks, data reconstruction from neural networks, and methods and techniques to defend against such reconstruction.
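By way of illustration, the polynomial-approximation idea mentioned above replaces non-polynomial activations (such as the sigmoid) with low-degree polynomials, since homomorphic encryption schemes typically evaluate only additions and multiplications. The following minimal Python/NumPy sketch, not taken from the session itself, fits a degree-3 polynomial to a sigmoid over an assumed interval of [-6, 6]:

```python
# Illustrative sketch only: approximate a sigmoid activation with a low-degree
# polynomial, the kind of substitution that makes a layer compatible with
# homomorphic encryption (which supports only additions and multiplications).
# The fitting interval [-6, 6] and degree 3 are arbitrary assumptions.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Fit a degree-3 polynomial to the sigmoid on [-6, 6].
xs = np.linspace(-6.0, 6.0, 1000)
coeffs = np.polyfit(xs, sigmoid(xs), deg=3)
poly_sigmoid = np.poly1d(coeffs)

# Compare the polynomial approximation against the true activation.
test = np.array([-4.0, -1.0, 0.0, 1.0, 4.0])
print("sigmoid:     ", np.round(sigmoid(test), 4))
print("poly approx: ", np.round(poly_sigmoid(test), 4))
print("max abs error on [-6, 6]:",
      np.max(np.abs(poly_sigmoid(xs) - sigmoid(xs))))
```

In practice, the polynomial degree and fitting interval trade approximation accuracy against the multiplicative depth, and hence cost, of the encrypted computation.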

This session was recorded at the 2019 O'Reilly Artificial Intelligence Conference in New York.

Product information

  • Title: How to build privacy and security into deep learning models
  • Author(s): Yishay Carmiel
  • Release date: October 2019
  • Publisher(s): O'Reilly Media, Inc.
  • ISBN: 0636920339342