Chapter 3. Techniques to Mitigate Security Risks
In the previous chapter, we looked at a number of different security risks to ML systems. We can do a lot to mitigate these risks, using techniques we will explore in this chapter.
General Security Practices
While machine learning systems have their own specific risks, many of these can be mitigated by existing general security practices. Foremost among these is threat modeling: considering the potential harms that could result from your model’s predictions, and taking action to prevent these harms from occurring. This can be a formal, structured process, or a simple meeting with stakeholders and experts. The important point is to gather opinions from a wide range of people with differing expertise, and implement their suggestions where possible.
Borrowing again from “traditional” cybersecurity, ML systems should also follow best practices such as least privilege, multifactor authentication, and monitoring. A high-profile example of the potential consequences of neglecting these practices comes from Clearview AI: in April 2020, a misconfigured server exposed the company’s source code and training data. As we’ve seen, access to training data can enable an attacker to craft adversarial examples tailored to that data. Many of the risks in Chapter 2 stem from an attacker having access to a model’s API, so particular care should be taken to secure it correctly, in consultation with your company’s security team.
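Two of the simplest defenses for a model API are authentication and rate limiting: the former restricts who can query the model at all, and the latter caps how many queries any one caller can make, slowing down extraction or adversarial probing attacks. The following is a minimal Python sketch of that idea; the names (`TokenBucket`, `handle_prediction_request`, `VALID_KEYS`) are hypothetical, and a real deployment would use your API gateway or framework’s built-in mechanisms rather than hand-rolled code.

```python
import time


class TokenBucket:
    """Token-bucket rate limiter: each request spends one token;
    tokens refill continuously up to a fixed capacity."""

    def __init__(self, capacity: int, refill_per_sec: float):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill_per_sec = refill_per_sec
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(
            self.capacity,
            self.tokens + (now - self.last) * self.refill_per_sec,
        )
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False


# In practice, keys live in a secrets store, never hard-coded.
VALID_KEYS = {"example-api-key"}


def handle_prediction_request(api_key: str, limiter: TokenBucket) -> str:
    # Authenticate first, then rate-limit, then serve the prediction.
    if api_key not in VALID_KEYS:
        return "401 Unauthorized"
    if not limiter.allow():
        return "429 Too Many Requests"
    return "200 OK"  # model inference would run here
```

With a per-key bucket, a legitimate client is unaffected while a scraper attempting thousands of queries is throttled; logging the 429 responses also feeds the monitoring mentioned above.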
Data Checks ...