Before focusing on the entire model, let's take a look at how the transformer's attention mechanism is implemented:
The transformer uses dot product attention (shown on the left-hand side of the preceding diagram), which follows the general attention procedure we introduced in the Seq2seq with attention section (as we have already mentioned, this procedure is not restricted to RNN models). We can define it with the following formula:
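$$\text{Attention}(q, K, V) = \text{softmax}\left(\frac{q K^\top}{\sqrt{d_k}}\right) V$$

Here, $q$ is a single query vector, the rows of $K$ and $V$ are the key and value vectors, and $d_k$ is the dimensionality of the keys. The dot products $q K^\top$ measure the similarity between the query and each key, the division by $\sqrt{d_k}$ keeps these scores in a range where the softmax does not saturate, and the resulting softmax weights produce a weighted sum of the value vectors.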
In practice, we'll compute the attention function over a set of queries simultaneously, ...
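To make this concrete, the following is a minimal NumPy sketch of the matrix form of the computation, where the queries are stacked into a matrix and attended over all keys and values at once (the function name and tensor shapes are illustrative, not taken from the book's code):

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """Compute softmax(Q K^T / sqrt(d_k)) V for a set of queries at once.

    q: queries, shape (n_queries, d_k)
    k: keys,    shape (n_keys, d_k)
    v: values,  shape (n_keys, d_v)
    Returns: attention output, shape (n_queries, d_v)
    """
    d_k = q.shape[-1]
    # Similarity scores between every query and every key, scaled by sqrt(d_k)
    scores = q @ k.T / np.sqrt(d_k)                                # (n_queries, n_keys)
    # Softmax over the key axis turns the scores into attention weights
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output row is a weighted sum of the value vectors
    return weights @ v                                             # (n_queries, d_v)

# Toy example: 2 queries attending over 4 key/value pairs
q = np.random.randn(2, 8)
k = np.random.randn(4, 8)
v = np.random.randn(4, 16)
print(scaled_dot_product_attention(q, k, v).shape)  # (2, 16)
```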