Let's start by declaring a few variables that will be required for the model's configuration:
- First, we define the image size in terms of height, width, and number of channels. Since we are working with color images, we set the number of channels to 3 (RGB). We also define the dimensionality of the latent-space vectors:
```r
latent_dim <- 32
height <- 32
width <- 32
channels <- 3
```
- Next, we create the generator network. The generator maps random vectors of shape latent_dim to images of the target size, which in our case is (32, 32, 3):
```r
input_generator <- layer_input(shape = c(latent_dim))
output_generator <- input_generator %>%
  # We transform the input data into a 16x16, 128-channel feature map initially
  ...
```
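The remaining generator layers are elided above. As one possible completion, the sketch below reshapes the latent vector into a 16x16, 128-channel feature map and then upsamples it to a 32x32, 3-channel image; the specific filter counts, kernel sizes, and the use of `layer_conv_2d_transpose` with leaky ReLU activations are assumptions for illustration, not necessarily the exact architecture used here:

```r
library(keras)

# Configuration values as defined above
latent_dim <- 32
height <- 32
width <- 32
channels <- 3

input_generator <- layer_input(shape = c(latent_dim))
output_generator <- input_generator %>%
  # Project the latent vector and reshape it into a 16x16, 128-channel feature map
  layer_dense(units = 128 * 16 * 16) %>%
  layer_activation_leaky_relu() %>%
  layer_reshape(target_shape = c(16, 16, 128)) %>%
  # Upsample from 16x16 to 32x32 with a strided transposed convolution
  layer_conv_2d_transpose(filters = 256, kernel_size = 4,
                          strides = 2, padding = "same") %>%
  layer_activation_leaky_relu() %>%
  layer_conv_2d(filters = 256, kernel_size = 5, padding = "same") %>%
  layer_activation_leaky_relu() %>%
  # Produce a (32, 32, 3) image with pixel values in [-1, 1]
  layer_conv_2d(filters = channels, kernel_size = 7,
                activation = "tanh", padding = "same")

generator <- keras_model(input_generator, output_generator)
summary(generator)
```

With this design, `generator` takes a batch of latent vectors of length 32 and outputs a batch of 32x32x3 images; the tanh output activation is a common choice when the training images are rescaled to the [-1, 1] range.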