The transformer attention

Before focusing on the entire model, let's take a look at how the transformer attention is implemented:

Figure: left, scaled dot-product (multiplicative) attention; right, multi-head attention. Source: https://arxiv.org/abs/1706.03762

The transformer uses scaled dot-product attention (shown on the left-hand side of the preceding diagram), which follows the general attention procedure we introduced in the Seq2seq with attention section (as we have already mentioned, it is not restricted to RNN models). We can define it with the following formula:

Attention(Q, K, V) = softmax(QK^T / √d_k)V

Here, Q, K, and V are the matrices of queries, keys, and values, and d_k is the dimensionality of the key vectors.

In practice, we'll compute the attention function over a set of queries simultaneously, packed together in a matrix Q; the keys and values are likewise packed in the matrices K and V.
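To make this concrete, here is a minimal sketch of the computation in PyTorch. The function name, the optional mask argument, and the tensor shapes are illustrative choices, not the book's implementation:

```python
import math

import torch
import torch.nn.functional as F


def scaled_dot_product_attention(q, k, v, mask=None):
    """Compute softmax(QK^T / sqrt(d_k))V over a batch of queries.

    q: (batch, query_len, d_k), k: (batch, key_len, d_k), v: (batch, key_len, d_v).
    mask: optional tensor broadcastable to (batch, query_len, key_len);
          positions where mask == 0 are excluded from attention.
    """
    d_k = q.size(-1)
    # Alignment scores between each query and all keys, scaled by sqrt(d_k)
    scores = torch.matmul(q, k.transpose(-2, -1)) / math.sqrt(d_k)
    if mask is not None:
        # Block attention to masked positions (e.g., padding or future tokens)
        scores = scores.masked_fill(mask == 0, float('-inf'))
    # Attention weights: each query's probability distribution over the keys
    weights = F.softmax(scores, dim=-1)
    # Output: weighted sum of the values for each query
    return torch.matmul(weights, v), weights


# Example usage with random inputs (batch of 2, 5 queries, 7 keys/values, d_k = d_v = 64)
q = torch.rand(2, 5, 64)
k = torch.rand(2, 7, 64)
v = torch.rand(2, 7, 64)
output, attention = scaled_dot_product_attention(q, k, v)
print(output.shape, attention.shape)  # torch.Size([2, 5, 64]) torch.Size([2, 5, 7])
```

Because the queries are packed into a single matrix, one batched matrix multiplication produces the scores for every query at once, which is what makes this formulation efficient on GPUs.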
