Experimenting with batch size, kernel size, and filters in CNNs

The code used for this experiment is as follows:

library(keras)

# Model architecture
model <- keras_model_sequential() %>%
  layer_embedding(input_dim = 1500,
                  output_dim = 32,
                  input_length = 400) %>%
  layer_conv_1d(filters = 64,
                kernel_size = 4,
                padding = "valid",
                activation = "relu",
                strides = 1) %>%
  layer_max_pooling_1d(pool_size = 4) %>%
  layer_dropout(0.25) %>%
  layer_lstm(units = 32) %>%
  layer_dense(units = 50, activation = "softmax")

# Compiling the model
model %>% compile(optimizer = "adam",
                  loss = "categorical_crossentropy",
                  metrics = c("acc"))

# Fitting the model
model_three <- model %>% fit(trainx, trainy,
                             epochs = 30,
                             batch_size = 8,
                             validation_data = list(validx, validy))
...
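To run the experiment the section title describes, the same pipeline can be wrapped in a helper function and called over a grid of candidate values for the three hyperparameters. The sketch below is not from the book: the helper name build_and_fit() and the candidate grids for filters, kernel_size, and batch_size are illustrative assumptions; trainx, trainy, validx, and validy are the data objects used in the code above.

# A minimal sketch (assumptions noted above) of a grid search over the
# three hyperparameters varied in this experiment
build_and_fit <- function(filters, kernel_size, batch_size) {
  model <- keras_model_sequential() %>%
    layer_embedding(input_dim = 1500, output_dim = 32, input_length = 400) %>%
    layer_conv_1d(filters = filters,
                  kernel_size = kernel_size,
                  padding = "valid",
                  activation = "relu",
                  strides = 1) %>%
    layer_max_pooling_1d(pool_size = 4) %>%
    layer_dropout(0.25) %>%
    layer_lstm(units = 32) %>%
    layer_dense(units = 50, activation = "softmax")
  model %>% compile(optimizer = "adam",
                    loss = "categorical_crossentropy",
                    metrics = c("acc"))
  model %>% fit(trainx, trainy,
                epochs = 30,
                batch_size = batch_size,
                validation_data = list(validx, validy))
}

# Illustrative grid; each run trains a fresh model, so this is slow
for (f in c(32, 64, 128)) {
  for (k in c(3, 4, 5)) {
    for (b in c(8, 16, 32)) {
      history <- build_and_fit(filters = f, kernel_size = k, batch_size = b)
      # Note: newer Keras versions may report this metric as "val_accuracy"
      cat(sprintf("filters=%d kernel=%d batch=%d val_acc=%.3f\n",
                  f, k, b, tail(history$metrics$val_acc, 1)))
    }
  }
}

Recording the final validation accuracy for each combination makes the settings directly comparable, so the best-performing combination can then be refit and evaluated on the test data.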
