Building MAML from scratch
In the last section, we saw how MAML works: it finds a robust model parameter θ that generalizes across tasks, so that a new task can be learned from just a few gradient steps. Now we will understand MAML better by coding it from scratch. For ease of understanding, we will consider a simple binary classification task: we randomly generate the input data, train a simple single-layer neural network on it, and try to find the optimal parameter θ. Now we will see, step by step, exactly how to do this:
You can also check the code, available as a Jupyter Notebook with an explanation, here: https://github.com/sudharsan13296/Hands-On-Meta-Learning-With-Python/blob/master/06.%20MAML%20and%20it's%20Variants/6.5%20Building%20MAML%20From%20Scratch.ipynb
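Before going through the steps one at a time, the following is a minimal end-to-end sketch of what we are about to build. It assumes a NumPy implementation with a sigmoid single-layer network and the first-order approximation of the meta-gradient; the names used here (sample_task, MAML, alpha, beta) are illustrative and are not necessarily the ones used in the notebook:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sample_task(num_samples=10, num_features=50):
    # Randomly generate inputs and binary labels for one task
    x = np.random.normal(size=(num_samples, num_features))
    y = np.random.choice([0, 1], size=(num_samples, 1))
    return x, y

class MAML:
    def __init__(self, num_tasks=10, num_features=50,
                 alpha=1e-4, beta=1e-4):
        self.num_tasks = num_tasks  # tasks sampled per meta-iteration
        self.alpha = alpha          # inner-loop (task) learning rate
        self.beta = beta            # outer-loop (meta) learning rate
        # The meta-parameter θ we are trying to learn
        self.theta = np.random.normal(size=(num_features, 1))

    def train(self, num_iterations=1000):
        for iteration in range(num_iterations):
            theta_prime = []
            # Inner loop: adapt θ to each task with one gradient step
            for _ in range(self.num_tasks):
                x, y = sample_task()
                y_hat = sigmoid(x.dot(self.theta))
                # Gradient of the cross-entropy loss with respect to θ
                grad = x.T.dot(y_hat - y) / x.shape[0]
                theta_prime.append(self.theta - self.alpha * grad)

            # Outer loop: evaluate each adapted θ'_i on new samples and
            # accumulate the meta-gradient (first-order approximation,
            # which treats θ'_i as if it did not depend on θ)
            meta_grad = np.zeros_like(self.theta)
            for i in range(self.num_tasks):
                x, y = sample_task()
                y_hat = sigmoid(x.dot(theta_prime[i]))
                meta_grad += x.T.dot(y_hat - y) / x.shape[0]

            # Meta-update: move θ against the averaged meta-gradient
            self.theta -= self.beta * meta_grad / self.num_tasks

model = MAML()
model.train()

Notice the two nested loops: the inner loop adapts θ separately for each sampled task to produce θ'_i, while the outer loop updates θ itself using the loss of each θ'_i on fresh samples. This is what drives θ toward a point from which every task can be learned in only a few gradient steps.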