Building ADML from scratch
In the last section, we saw how ADML works: we train our model with both clean and adversarial samples to obtain a robust model parameter θ that generalizes across tasks. Now we will deepen that understanding by coding ADML from scratch. To keep things simple, we will consider a binary classification task: we randomly generate our input data, train a single-layer neural network on it, and try to find the optimal parameter θ. Let's walk through, step by step, exactly how ADML works.
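Before going through the full walkthrough, the overall loop can be sketched in plain NumPy. This is a minimal, illustrative sketch, not the book's exact code: it assumes randomly labeled tasks, a single-layer sigmoid network with logistic loss, FGSM-style perturbations for the adversarial samples, and hand-picked learning rates (`alpha`, `beta`) and an `epsilon` of 0.1, all of which are our own choices here.

```python
import numpy as np

np.random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sample_task(num_samples=10, dim=50):
    # Hypothetical task generator: random inputs with random binary labels.
    x = np.random.rand(num_samples, dim)
    y = np.random.choice([0, 1], size=(num_samples, 1)).astype(float)
    return x, y

def fgsm(x, y, theta, epsilon=0.1):
    # FGSM-style perturbation (our assumed way of crafting adversarial
    # samples): move inputs along the sign of the input gradient of the
    # logistic loss. For y_hat = sigmoid(x @ theta), dL/dx = (y_hat - y) @ theta.T.
    y_hat = sigmoid(x @ theta)
    grad_x = (y_hat - y) @ theta.T
    return x + epsilon * np.sign(grad_x)

dim = 50
theta = np.random.normal(size=(dim, 1))  # meta parameter to be learned
alpha, beta = 0.01, 0.001                # inner and outer (meta) learning rates
num_tasks = 5

for epoch in range(100):
    meta_grad = np.zeros_like(theta)
    for _ in range(num_tasks):
        x, y = sample_task(dim=dim)
        x_adv = fgsm(x, y, theta)  # adversarial counterpart of the clean batch
        for x_batch in (x, x_adv):
            # Inner update: one gradient step from theta on this batch.
            y_hat = sigmoid(x_batch @ theta)
            grad = x_batch.T @ (y_hat - y) / len(y)
            theta_prime = theta - alpha * grad
            # Accumulate the meta gradient at the adapted parameters
            # (first-order approximation, as in first-order MAML).
            y_hat = sigmoid(x_batch @ theta_prime)
            meta_grad += x_batch.T @ (y_hat - y) / len(y)
    # Meta update: average over both clean and adversarial batches of all tasks.
    theta = theta - beta * meta_grad / (2 * num_tasks)
```

The key point the sketch tries to show is that every task contributes two inner updates, one from the clean batch and one from its adversarial counterpart, and both feed into the single meta update of θ.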
You can also check the code available as a Jupyter Notebook with an explanation here: https://github.com/sudharsan13296/Hands-On-Meta-Learning-With-Python/blob/master/06.%20MAML%20and%20it's%20Variants/6.7%20Building%20ADML%20From%20Scratch.ipynb