Video description
In recent years, we’ve seen tremendous improvements in artificial intelligence, driven by advances in neural-based models. However, the more popular these algorithms and techniques become, the more serious the consequences for data and user privacy. These issues will drastically impact the future of AI research—specifically, how neural-based models are developed, deployed, and evaluated.
Yishay Carmiel (IntelligentWire) explains how data privacy will shape machine learning development and how future training and inference will be affected. Yishay first dives into why training on private data must be addressed, covering federated learning and differential privacy. He then turns to why inference on private data must be addressed, covering homomorphic encryption for neural networks, polynomial approximation of neural networks, protecting data inside neural networks, data reconstruction attacks on neural networks, and methods and techniques to defend against such reconstruction.
This session was recorded at the 2019 O'Reilly Artificial Intelligence Conference in New York.
Product information
- Title: How to build privacy and security into deep learning models
- Author(s): Yishay Carmiel
- Release date: October 2019
- Publisher(s): O'Reilly Media, Inc.
- ISBN: 0636920339342