Chapter 10. ML Models and Security

Security is not altogether separate from privacy, but in the context of this discussion I approach security as a specific problem: when we release a model, an adversary may be able to make it behave in ways we did not anticipate. I speak broadly of adversarial attacks, in which, for example, an adversary engineers an incorrect classification with a specific incorrect target in mind.
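To make the idea concrete, here is a minimal sketch of a targeted attack in the fast gradient sign style, assuming a pretrained PyTorch image classifier. The model choice, epsilon value, and target class are illustrative assumptions, not specifics from any attack discussed in this chapter:

```python
# A minimal sketch of a targeted adversarial perturbation (targeted FGSM).
# The model, epsilon, and target class are assumptions for illustration.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

def targeted_fgsm(image, target_class, epsilon=0.01):
    """Nudge `image` so the model is more likely to predict `target_class`.

    image: a (1, 3, H, W) float tensor, already normalized for the model.
    """
    image = image.clone().detach().requires_grad_(True)
    # Loss relative to the *attacker's* desired label, not the true label.
    loss = F.cross_entropy(model(image), torch.tensor([target_class]))
    loss.backward()
    # Step *against* the gradient to make the target class more likely.
    adversarial = image - epsilon * image.grad.sign()
    return adversarial.detach()
```

The perturbation is bounded by epsilon per pixel, so the modified image can remain visually indistinguishable from the original even as the predicted class flips to the attacker's chosen target.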

In one example, merely rotating photographs of potentially cancerous lesions changed the result of a machine learning classifier's judgment as to whether an image showed a cancerous lesion. As Beat Buesser pointed out in a 2018 PyCon UK presentation, rotating an image is hardly illegal, so this shows just how easily some machine learning models can be gamed with legal and seemingly legitimate manipulations.
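A rough sketch of checking this kind of rotation sensitivity might look like the following, assuming a torchvision classifier and a hypothetical input file `lesion.jpg` (both are assumptions for illustration, not the model or data from the study mentioned above):

```python
# Check whether a small, entirely legitimate rotation changes a
# classifier's prediction. Model, preprocessing, and the input file
# "lesion.jpg" are hypothetical stand-ins.
import torch
from torchvision import models, transforms
import torchvision.transforms.functional as TF
from PIL import Image

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = Image.open("lesion.jpg").convert("RGB")  # hypothetical input
with torch.no_grad():
    for angle in (0, 5, 10, 15):
        rotated = TF.rotate(image, angle)  # a perfectly legal manipulation
        pred = model(preprocess(rotated).unsqueeze(0)).argmax(dim=1)
        print(f"rotation {angle:>2} degrees: predicted class {pred.item()}")
```

If the predicted class changes across these angles, the model is brittle to a transformation no regulator would consider tampering, which is exactly the point of the example.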

Importantly, this brief discussion of the security aspects of machine learning in no way provides a checklist for your own security considerations, for a number of reasons. First, security related to machine learning goes far beyond machine learning itself; for example, good data protection and cybersecurity practices are obviously fundamental to the security of any machine learning product, but I do not discuss them here. Second, these topics are quite complex and could easily fill several books, even within the limited range of topics I discuss.

Security is also a more complex topic than ...
