Without further ado, let's take a look at the most basic version of an RNN, referred to as a vanilla RNN. It looks as follows:
This looks somewhat familiar, doesn't it? It should. If we were to remove the loop, this would be the same as a traditional feedforward neural network with a single hidden layer, which we've already encountered. Now, if we unroll the loop and view the full network, it looks as follows:
Here, we have the following parameters:
- x_t is the input at time step t
- h_t is the hidden state at time step t
- o_t is the output at time step t
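To make the unrolled picture concrete, here is a minimal sketch of a vanilla RNN forward pass in NumPy. The weight names (W_xh, W_hh, W_ho), the tanh nonlinearity, and the plain linear output layer are common conventions rather than anything fixed by the text above, so treat them as illustrative assumptions.

```python
import numpy as np

# Minimal vanilla RNN forward pass. Weight names and the tanh/linear
# choices are illustrative assumptions, not taken from the text above.

def rnn_forward(xs, W_xh, W_hh, W_ho, b_h, b_o):
    """Unroll the loop over time steps.

    xs : list of input vectors x_t, one per time step
    Returns the hidden states h_t and outputs o_t for every step.
    """
    h = np.zeros(W_hh.shape[0])          # initial hidden state h_0
    hs, outs = [], []
    for x in xs:
        # The new hidden state depends on the current input x_t
        # and the previous hidden state h_{t-1}.
        h = np.tanh(W_xh @ x + W_hh @ h + b_h)
        o = W_ho @ h + b_o               # output o_t at this step
        hs.append(h)
        outs.append(o)
    return hs, outs

# Tiny usage example with random weights: 3-dim input, 5-dim hidden
# state, 2-dim output, 4 time steps.
rng = np.random.default_rng(0)
W_xh = rng.normal(size=(5, 3))
W_hh = rng.normal(size=(5, 5))
W_ho = rng.normal(size=(2, 5))
b_h, b_o = np.zeros(5), np.zeros(2)
xs = [rng.normal(size=3) for _ in range(4)]
hs, outs = rnn_forward(xs, W_xh, W_hh, W_ho, b_h, b_o)
print(len(hs), outs[-1].shape)  # 4 time steps, final output is 2-dim
```

Note that the same weight matrices are reused at every time step; unrolling the loop does not create new parameters, it only lays the repeated computation out over time.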