July 2017 · Intermediate to advanced · 360 pages · 8h 26m · English
A fully connected (sometimes called dense) layer is made up of n neurons, each of which receives all the output values coming from the previous layer (like the hidden layer in an MLP). It can be characterized by a weight matrix W, a bias vector b, and an activation function f:

y = f(Wx + b)
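The forward pass of such a layer can be sketched in a few lines of NumPy. This is a minimal illustration, assuming a hypothetical layer with 4 inputs, 3 neurons, and a ReLU activation; the layer sizes and random initialization are arbitrary choices, not values from the text.

```python
import numpy as np

# Minimal sketch of a fully connected (dense) layer forward pass.
# Sizes and initialization are illustrative assumptions.
rng = np.random.default_rng(0)

n_inputs, n_neurons = 4, 3
W = rng.standard_normal((n_neurons, n_inputs))  # weight matrix
b = np.zeros(n_neurons)                         # bias vector
relu = lambda z: np.maximum(z, 0.0)             # activation function

x = rng.standard_normal(n_inputs)  # output values of the previous layer
y = relu(W @ x + b)                # every neuron sees every input value
print(y.shape)  # (3,)
```

Note that each of the 3 neurons combines all 4 incoming values through its own row of W, which is exactly what "fully connected" means.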
They are normally used as intermediate or output layers, in particular when it's necessary to represent a probability distribution. For example, a deep architecture could be employed for image classification with m output classes. In this case, the softmax activation function yields an output vector where each element is the probability of the corresponding class.
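The softmax output described above can be sketched as follows; the logit values are hypothetical, standing in for the raw scores produced by the last dense layer of such a classifier.

```python
import numpy as np

# Minimal sketch of a softmax output layer for m classes.
def softmax(z):
    # Subtracting the max before exponentiating improves numerical stability
    e = np.exp(z - np.max(z))
    return e / e.sum()

logits = np.array([2.0, 1.0, 0.1])  # hypothetical raw scores, m = 3 classes
p = softmax(logits)
print(p)        # one probability per class
print(p.sum())  # ≈ 1.0
```

Because the exponentials are normalized by their sum, the output vector is non-negative and sums to one, i.e. a valid probability distribution over the m classes.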