Experimenting with the Adam optimizer

We will use the adam (Adaptive Moment Estimation) optimizer instead of the rmsprop (Root Mean Square Propagation) optimizer that we used earlier when compiling the model. To make the comparison of model performance straightforward, we keep everything else the same as before, as shown in the following code:

# Model architecture
model <- keras_model_sequential() %>%
        layer_embedding(input_dim = 500, output_dim = 32) %>%
        layer_lstm(units = 32) %>%
        layer_dense(units = 1, activation = "sigmoid")

# Compile
model %>% compile(optimizer = "adam",
        loss = "binary_crossentropy",
        metrics = c("acc"))

# Fit model
model_two <- model %>% fit(train_x, train_y,
        epochs = 10,
        batch_size = 128,
        validation_split = 0.2)

plot(model_two)
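Passing the string "adam" uses Keras's default Adam hyperparameters. To experiment further, you can pass an optimizer object to compile() instead, which lets you tune the learning rate and the moment-decay terms explicitly. The following is a minimal sketch using the keras R package's optimizer_adam() function with its default values; note that, depending on your package version, the learning-rate argument may be named learning_rate rather than lr:

# Compile with an explicit Adam optimizer object instead of the "adam" string
# (values shown are the Keras defaults; adjust lr to experiment)
model %>% compile(optimizer = optimizer_adam(lr = 0.001,      # step size
                                             beta_1 = 0.9,    # decay for first-moment (mean) estimate
                                             beta_2 = 0.999), # decay for second-moment (variance) estimate
        loss = "binary_crossentropy",
        metrics = c("acc"))

Everything else (the architecture, fitting, and plotting) stays the same as above, so any change in validation performance can be attributed to the optimizer settings.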
