The following script defines the policy gradient model, which combines reinforcement-learning rewards with the cross-entropy loss. Its dependencies are NumPy and TensorFlow. The policy gradient is built on an LSTM encoder-decoder, and we use a stochastic policy: a probability distribution over actions given the current state. The script defines all of these components and specifies the policy-gradient loss to be minimized.
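The core of the objective described above is a reward-weighted cross-entropy. As a minimal sketch (NumPy only, so it runs without TensorFlow; the function name and shapes are illustrative assumptions, not the script's actual API):

```python
import numpy as np

def policy_gradient_loss(logits, actions, rewards):
    """Reward-weighted cross-entropy: -sum_t R_t * log pi(a_t | s_t).

    logits:  (T, V) unnormalized scores over the action (token) vocabulary
    actions: (T,)   sampled action indices
    rewards: (T,)   per-step rewards (or returns/advantages)
    """
    # Softmax over the vocabulary axis, numerically stabilized.
    z = logits - logits.max(axis=1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    # Log-probability of each sampled action under the stochastic policy.
    log_p = np.log(probs[np.arange(len(actions)), actions])
    # Minimizing this raises the log-probability of rewarded actions.
    return -np.sum(rewards * log_p)
```

With all rewards equal to 1 this reduces to the ordinary cross-entropy loss on the sampled sequence, which is why the two objectives combine so naturally.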
Run the output of the first cell through the second cell; the input is concatenated with zeros. The final state of the responses consists mainly of two components: the latent representation of the input produced by the encoder, and the state of the decoder, ...
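The zero-concatenation step can be sketched as right-padding the input sequence with zero vectors before it is fed to the second cell (a hypothetical helper for illustration; the actual cells may pad differently):

```python
import numpy as np

def pad_with_zeros(x, target_len):
    """Right-pad a (T, d) sequence with zero vectors up to target_len steps.

    x:          (T, d) array of input vectors from the first cell
    target_len: desired number of time steps for the second cell
    """
    T, d = x.shape
    if T >= target_len:
        # Already long enough; truncate to the target length.
        return x[:target_len]
    # Concatenate zero vectors along the time axis.
    return np.concatenate([x, np.zeros((target_len - T, d))], axis=0)
```

The zero steps carry no information, so the second cell's final state is driven by the real input prefix, which is then combined with the decoder's own state as described above.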