Relation networks in few-shot learning

We have seen how, in the one-shot learning setting of our relation network, we take a single image from each class in the support set and compare its relation to the query image. But in a few-shot learning setting, we have more than one data point per class. How do we learn the feature representation here using our embedding function?

Say we have a support set containing more than one image for each of the classes, as shown in the following diagram:

In this case, we will learn the embedding of each point in the support set and perform element-wise addition of the embeddings of all the data points belonging to a class, giving us a single combined feature map per class. This combined class embedding is then concatenated with the embedding of the query image and fed to the relation function to produce the relation score, just as in the one-shot setting.
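To make the pooling step concrete, here is a minimal sketch of how the support embeddings can be summed per class and paired with the query embedding before being passed to the relation function. It uses PyTorch for illustration, and the names `embedding_net` and `relation_net` are hypothetical placeholders for the embedding and relation modules; tensor shapes are noted in the comments.

```python
import torch

def relation_scores(support_images, query_images, embedding_net, relation_net):
    """Compute relation scores between query images and each support class.

    support_images: tensor of shape (num_classes, k_shot, C, H, W)
    query_images:   tensor of shape (num_queries, C, H, W)
    Returns a (num_queries, num_classes) tensor of relation scores.
    """
    num_classes, k_shot = support_images.shape[:2]

    # Embed every support image, then element-wise sum the k embeddings
    # of each class to get a single combined feature map per class.
    support_emb = embedding_net(support_images.flatten(0, 1))       # (N*k, D, h, w)
    support_emb = support_emb.view(num_classes, k_shot, *support_emb.shape[1:])
    class_emb = support_emb.sum(dim=1)                              # (N, D, h, w)

    # Embed the query images.
    query_emb = embedding_net(query_images)                         # (Q, D, h, w)

    # Pair every query with every class embedding and concatenate along
    # the channel dimension, exactly as in the one-shot case.
    q = query_emb.unsqueeze(1).expand(-1, num_classes, *query_emb.shape[1:])
    s = class_emb.unsqueeze(0).expand(query_emb.size(0), -1, *class_emb.shape[1:])
    pairs = torch.cat([s, q], dim=2).flatten(0, 1)                  # (Q*N, 2D, h, w)

    # The relation module maps each concatenated pair to a score in [0, 1].
    scores = relation_net(pairs).view(query_emb.size(0), num_classes)
    return scores
```

Note that summing rather than averaging the support embeddings follows the relation network formulation; the rest of the pipeline is unchanged from the one-shot case.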