One of the coolest features of VGG is that, because of the small kernel size in its conv layers, the number of parameters used is low. If we remember from Chapter 2, Deep Learning and Convolutional Neural Networks, the number of parameters in a convolution layer (excluding the bias) can be calculated as follows:
$$\text{parameters} = K_h \times K_w \times C_{in} \times C_{out}$$

Here, $K_h \times K_w$ is the kernel size, $C_{in}$ is the number of input channels, and $C_{out}$ is the number of output filters.
So, for example, the first layer would have the following parameters:
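Assuming the standard VGG16 configuration, where the first conv layer applies 3 x 3 kernels to a 3-channel RGB input and produces 64 feature maps, the count works out to:

$$3 \times 3 \times 3 \times 64 = 1{,}728$$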
Beware, though, that this low parameter count does not hold for the fully connected (dense) layers at the end of the ...
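To make the contrast concrete, here is a minimal Python sketch; the layer shapes are assumptions based on the standard VGG16 architecture rather than values given above:

```python
# Parameter counts (weights only, bias excluded), assuming VGG16-style shapes.

def conv_params(kernel_h, kernel_w, in_channels, out_channels):
    """Weights in a convolution layer: kernel area x input channels x filters."""
    return kernel_h * kernel_w * in_channels * out_channels

def dense_params(in_features, out_features):
    """Weights in a fully connected (dense) layer."""
    return in_features * out_features

# First conv layer: 3x3 kernels, RGB input (3 channels), 64 filters.
print(conv_params(3, 3, 3, 64))          # 1,728 weights

# First dense layer: 7x7x512 flattened feature map into 4,096 units.
print(dense_params(7 * 7 * 512, 4096))   # 102,760,448 weights
```

Under these assumed shapes, a single dense layer holds tens of millions of weights, dwarfing the conv layers, which is exactly the caveat raised above.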