December 2019
Let's implement the DecoderRNN class—a basic decoder without attention. Again, we'll start with the constructor:
class DecoderRNN(torch.nn.Module):
    def __init__(self, hidden_size, output_size):
        super(DecoderRNN, self).__init__()
        self.hidden_size = hidden_size
        self.output_size = output_size

        # Embedding for the current input word
        self.embedding = torch.nn.Embedding(output_size, hidden_size)

        # Decoder cell
        self.gru = torch.nn.GRU(hidden_size, hidden_size)

        # Current output word
        self.out = torch.nn.Linear(hidden_size, output_size)
        self.log_softmax = torch.nn.LogSoftmax(dim=1)
It's similar to the encoder: we have the self.embedding word embedding and the self.gru GRU cell. We also have the fully connected self.out ...
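To see how these pieces fit together, here is a hedged sketch of the full class with a forward method for a single decoding step. The forward method, the dimensions, and the usage below are assumptions for illustration, not the book's exact code; they follow the common pattern for a GRU decoder processing one word at a time:

```python
import torch

class DecoderRNN(torch.nn.Module):
    def __init__(self, hidden_size, output_size):
        super(DecoderRNN, self).__init__()
        self.hidden_size = hidden_size
        self.output_size = output_size
        self.embedding = torch.nn.Embedding(output_size, hidden_size)
        self.gru = torch.nn.GRU(hidden_size, hidden_size)
        self.out = torch.nn.Linear(hidden_size, output_size)
        self.log_softmax = torch.nn.LogSoftmax(dim=1)

    # Hypothetical forward pass for one decoding step (not from the source)
    def forward(self, input, hidden):
        # input: (1, 1) tensor holding the index of the current word
        embedded = self.embedding(input).view(1, 1, -1)
        # One GRU step: the previous hidden state carries the decoded context
        output, hidden = self.gru(embedded, hidden)
        # Project the GRU output to vocabulary-sized log-probabilities
        output = self.log_softmax(self.out(output[0]))
        return output, hidden

# Example usage with illustrative sizes: vocabulary of 10, hidden size 16
decoder = DecoderRNN(hidden_size=16, output_size=10)
hidden = torch.zeros(1, 1, 16)          # initial decoder hidden state
word = torch.tensor([[3]])              # index of the current input word
log_probs, hidden = decoder(word, hidden)
print(log_probs.shape)                  # torch.Size([1, 10])
```

At inference time, the word fed into each step would typically be the argmax of the previous step's log_probs, while during training the ground-truth word is often used instead (teacher forcing).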