This script implements a Seq2seq dialogue generator that serves as the reverse (backward) model for the backward entropy loss. That model supplies the semantic coherence reward for the policy-gradient dialogue agent, so in effect it represents part of our future reward function. The script achieves this through the following actions:
- Encoding
- Decoding
- Generating responses
All of the preceding actions are based on long short-term memory (LSTM) units.
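Since every step above is built on LSTM units, it is worth recalling what a single LSTM update computes. The following is a minimal NumPy sketch of one LSTM step, not the TensorFlow implementation the script itself uses; the names `lstm_step`, `W`, and `b` are illustrative, with `W` mapping the concatenated `[h_prev; x]` vector to all four gates at once:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM update. W has shape (hidden + input_dim, 4 * hidden)."""
    hidden = h_prev.shape[0]
    z = np.concatenate([h_prev, x]) @ W + b  # pre-activations for all gates
    i = sigmoid(z[:hidden])                  # input gate
    f = sigmoid(z[hidden:2 * hidden])        # forget gate
    o = sigmoid(z[2 * hidden:3 * hidden])    # output gate
    g = np.tanh(z[3 * hidden:])              # candidate cell state
    c = f * c_prev + i * g                   # new cell state
    h = o * np.tanh(c)                       # new hidden state
    return h, c
```

The encoder applies this update across the input tokens, and the decoder applies it again while generating the response, which is why the same unit underlies all three actions listed above.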
The feature extractor script extracts features and characteristics from the data so that we can train the model more effectively. Let us start by importing the required modules.
```python
import tensorflow as tf
import numpy as np
import re
```
Next, define the model inputs. ...
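The model inputs are defined in the script itself; as a minimal sketch of what the encoder consumes, assuming the inputs are batches of padded token-ID sequences (the helper name `batch_inputs` and the `pad_id` parameter are illustrative, not from the script):

```python
import numpy as np

def batch_inputs(token_id_seqs, pad_id=0):
    """Pad variable-length token-ID sequences into one matrix and
    keep the true lengths, as a dynamic RNN would expect."""
    lengths = np.array([len(s) for s in token_id_seqs])
    batch = np.full((len(token_id_seqs), lengths.max()), pad_id, dtype=np.int64)
    for row, seq in zip(batch, token_id_seqs):
        row[:len(seq)] = seq
    return batch, lengths
```

Keeping the lengths alongside the padded matrix lets the recurrent layers stop at each sequence's real end instead of processing the padding tokens.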