CAML
We have seen how MAML finds an optimal initial parameter for a model so that it can adapt to a new task in only a few gradient steps. Now we will look at an interesting variant of MAML called CAML. The idea behind CAML is simple and much the same as MAML's: it also tries to find a better initial parameter. Recall that MAML uses two loops: in the inner loop, it learns task-specific parameters by minimizing the loss with gradient descent, and in the outer loop, it updates the model parameters to reduce the expected loss across several tasks, so that the updated parameters serve as a better initialization for related tasks.
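To make the two-loop structure concrete, here is a minimal sketch of a MAML-style meta-update on a toy one-parameter regression problem. Everything in it is illustrative: the tasks (linear functions with different slopes), the function names, and the learning rates `alpha` and `beta` are assumptions, and for simplicity it uses the first-order approximation (the outer gradient is taken at the adapted parameter rather than differentiated through the inner step):

```python
import numpy as np

def loss(theta, x, y):
    # mean squared error of a linear model y_hat = theta * x
    return np.mean((theta * x - y) ** 2)

def grad(theta, x, y):
    # gradient of the loss above with respect to theta
    return np.mean(2 * (theta * x - y) * x)

def maml_step(theta, tasks, alpha=0.01, beta=0.01):
    """One meta-update: the inner loop adapts theta per task,
    the outer loop updates the shared initialization
    (first-order approximation for simplicity)."""
    meta_grad = 0.0
    for x, y in tasks:
        # inner loop: one task-specific gradient step
        theta_prime = theta - alpha * grad(theta, x, y)
        # outer loop: accumulate the gradient of the post-adaptation loss
        meta_grad += grad(theta_prime, x, y)
    return theta - beta * meta_grad / len(tasks)

# toy tasks: each task is (inputs, targets) drawn from a different slope
rng = np.random.default_rng(0)
tasks = []
for slope in (1.0, 2.0, 3.0):
    x = rng.normal(size=20)
    tasks.append((x, slope * x))

theta = 0.0
for _ in range(1000):
    theta = maml_step(theta, tasks)
```

After meta-training, `theta` sits near the middle of the task distribution, so a single inner-loop step on any one task lowers that task's loss — which is exactly the property the outer loop optimizes for.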
In CAML, we perform a very small tweak to the MAML algorithm. Here, instead of using a ...