Bidirectional RNNs

Bidirectional RNNs are based on the idea that the output at time t may depend on both previous and future elements in the sequence. To realize this, the outputs of two RNNs must be combined: one processes the sequence in the forward direction, and the other processes it in the reverse direction.

The network splits the neurons of a regular RNN into two directions: one for the positive time direction (forward states), and another for the negative time direction (backward states). With this structure, the output layer can draw on information from both past and future states.
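The mechanics can be sketched with plain NumPy: run one simple RNN forward over the sequence, run a second one over the reversed sequence, re-reverse its states so they align in time, and concatenate the two. The weight shapes, the tanh cell, and the helper names below are illustrative assumptions, not the book's code.

```python
import numpy as np

def rnn_pass(x, W_xh, W_hh, b_h):
    """Run a simple tanh RNN over sequence x; return all hidden states (T, d_h)."""
    h = np.zeros(W_hh.shape[0])
    states = []
    for x_t in x:
        h = np.tanh(x_t @ W_xh + h @ W_hh + b_h)
        states.append(h)
    return np.stack(states)

def bidirectional_rnn(x, fwd_params, bwd_params):
    """Concatenate forward states with time-aligned backward states."""
    h_fwd = rnn_pass(x, *fwd_params)
    # Run over the reversed sequence, then reverse the states back
    # so index t of h_bwd summarizes the future x[t:], not the past.
    h_bwd = rnn_pass(x[::-1], *bwd_params)[::-1]
    return np.concatenate([h_fwd, h_bwd], axis=-1)  # shape (T, 2 * d_h)

rng = np.random.default_rng(0)
T, d_in, d_h = 5, 3, 4
x = rng.standard_normal((T, d_in))

def make_params():
    # Small random weights; a hypothetical initialization for the sketch.
    return (rng.standard_normal((d_in, d_h)) * 0.1,
            rng.standard_normal((d_h, d_h)) * 0.1,
            np.zeros(d_h))

out = bidirectional_rnn(x, make_params(), make_params())
print(out.shape)  # (5, 8)
```

Note that, unlike a unidirectional RNN, the output at t = 0 already reflects the entire sequence through the backward half of the state vector; this is exactly why bidirectional models cannot be used for strictly causal, online prediction.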

The unrolled architecture of a bidirectional RNN is depicted in the following figure:

Figure: Unrolled bidirectional ...