Evading intrusion detection systems with adversarial network systems

By now, you should have acquired a fair understanding of adversarial machine learning and of how to attack machine learning models. It's time to dive deeper into the technical details and learn how to bypass machine learning-based intrusion detection systems with Python. You will also learn how to defend against these attacks.

In this demonstration, you are going to learn how to attack a model with a poisoning attack. As discussed previously, we are going to inject malicious data into the training set so that we can influence the model's learning outcome. The following diagram illustrates how the poisoning attack occurs:
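To make the idea concrete, here is a minimal sketch of a label-flipping poisoning attack against a simple traffic classifier, assuming scikit-learn; the synthetic features, the 30% flip ratio, and the logistic regression model are illustrative choices, not the book's actual IDS dataset or pipeline:

```python
# Illustrative label-flipping poisoning sketch (synthetic data, not the book's dataset).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)

# Synthetic "traffic features": class 0 = benign, class 1 = malicious.
X = np.vstack([rng.normal(0.0, 1.0, size=(500, 10)),
               rng.normal(2.0, 1.0, size=(500, 10))])
y = np.array([0] * 500 + [1] * 500)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)

# Baseline detector trained on clean data.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Poisoning step: flip the labels of a fraction of malicious training samples
# so that the learner is taught to treat them as benign.
poisoned_y = y_train.copy()
malicious_idx = np.where(poisoned_y == 1)[0]
flip_idx = rng.choice(malicious_idx,
                      size=int(0.3 * len(malicious_idx)),
                      replace=False)
poisoned_y[flip_idx] = 0

poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, poisoned_y)

print("Clean accuracy:   ", accuracy_score(y_test, clean_model.predict(X_test)))
print("Poisoned accuracy:", accuracy_score(y_test, poisoned_model.predict(X_test)))
```

Even this crude flip of a minority of training labels is typically enough to measurably degrade the detector's performance on held-out malicious traffic, which is exactly the leverage an attacker who can influence retraining data is looking for.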

In this attack, we are going to use a Jacobian-Based Saliency Map ...
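As a rough illustration of how such an attack can be mounted against a trained detector, the sketch below crafts adversarial traffic records with a Jacobian-based Saliency Map Attack via the Adversarial Robustness Toolbox (ART); the small PyTorch model, the synthetic flow features, and the attack parameters theta and gamma are assumptions made for demonstration, not the book's exact model or data:

```python
# Illustrative JSMA evasion sketch using ART (toy model and synthetic data).
import numpy as np
import torch
import torch.nn as nn
from art.estimators.classification import PyTorchClassifier
from art.attacks.evasion import SaliencyMapMethod

rng = np.random.default_rng(0)

# Synthetic two-class "flow features": 0 = benign, 1 = malicious.
X = np.vstack([rng.normal(0.0, 1.0, size=(500, 20)),
               rng.normal(1.5, 1.0, size=(500, 20))]).astype(np.float32)
y = np.array([0] * 500 + [1] * 500)

# Small feed-forward detector standing in for the IDS model.
model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
classifier = PyTorchClassifier(
    model=model,
    loss=nn.CrossEntropyLoss(),
    optimizer=optimizer,
    input_shape=(20,),
    nb_classes=2,
    clip_values=(float(X.min()), float(X.max())),
)
classifier.fit(X, y, batch_size=64, nb_epochs=10)

# JSMA perturbs the few input features with the highest saliency (largest
# influence on the target class) until the sample is misclassified.
attack = SaliencyMapMethod(classifier=classifier, theta=0.2, gamma=0.1)
malicious = X[y == 1][:50]
adversarial = attack.generate(x=malicious)

preds = np.argmax(classifier.predict(adversarial), axis=1)
print("Fraction of malicious samples now classified as benign:",
      np.mean(preds == 0))
```

JSMA is attractive in this setting because it changes only the few most salient features of each record, so the perturbed traffic stays close to the original while flipping the detector's decision.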
