Most of the neural networks we design are feedforward and fully connected: every neuron connects to every neuron in the next layer, the first layer receives the inputs, and the last layer produces the outputs. The network's structure, that is, the number of neurons and how they are connected, is fixed ahead of time and does not change, at least not during training. Every input must also contain the same number of values, so images, for example, may need to be resized to match the number of input neurons. The number of neurons in each layer is that layer's shape:
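As a minimal sketch of what this looks like in practice, the NumPy snippet below pushes a single input through a small fully connected feedforward network. The layer sizes (784 inputs, as from a flattened 28x28 image, a hidden layer of 128 neurons, and 10 outputs) are illustrative assumptions, not values taken from the text.

```python
import numpy as np

# Each layer's shape is just its neuron count; these sizes are made up for illustration.
layer_sizes = [784, 128, 10]

rng = np.random.default_rng(0)
# One weight matrix and bias vector per pair of consecutive layers:
# every neuron in one layer connects to every neuron in the next.
weights = [rng.standard_normal((m, n)) * 0.01
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

def forward(x):
    """Push one input vector through the network, layer by layer."""
    assert x.shape == (layer_sizes[0],), "every input must have the same number of values"
    a = x
    for w, b in zip(weights, biases):
        a = np.tanh(a @ w + b)   # weighted sum followed by a nonlinearity
    return a

output = forward(rng.standard_normal(784))  # e.g. a flattened 28x28 image
print(output.shape)                          # (10,)
```

Because the weight matrices are sized from `layer_sizes` when the network is built, the structure is fixed: any input that is not exactly 784 values long simply cannot be fed through.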
Each individual ...