In this second experiment to improve the performance of the classification model, we will add an extra LSTM layer. Note that when LSTM layers are stacked, the first layer must return its full sequence of outputs (return_sequences = TRUE) so that the second LSTM layer receives a sequence rather than a single vector. Let's have a look at the following code:
# Model architecture
model <- keras_model_sequential() %>%
  layer_embedding(input_dim = 500, output_dim = 32) %>%
  layer_lstm(units = 32, return_sequences = TRUE) %>%
  layer_lstm(units = 32) %>%
  layer_dense(units = 1, activation = "sigmoid")

# Compiling model
model %>% compile(
  optimizer = "adam",
  loss = "binary_crossentropy",
  metrics = c("acc")
)

# Fitting model
model_three <- model %>% fit(
  train_x, train_y,
  epochs = 10,
  batch_size = 128,
  validation_split = 0.2
)

# Loss and accuracy plot
plot(model_three)
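Before comparing the training curves, it can be helpful to confirm that the two LSTM layers are indeed stacked and to score the fitted model on held-out data. The following is a minimal sketch, assuming that test_x and test_y have been prepared in the same way as train_x and train_y earlier in the chapter; adjust the names to match your own objects:

# Inspect the stacked architecture (two LSTM layers should appear)
summary(model)

# Score the fitted model on held-out data (assumes test_x and test_y exist)
model %>% evaluate(test_x, test_y)

The evaluate() call returns the loss and accuracy on the test set, which gives a more reliable measure of any improvement than the validation curves alone.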
By adding an extra LSTM layer ...