Summary
In this chapter, we've become acquainted with artificial NNs and their main components. NNs are built from neurons, which are usually organized in layers. A typical neuron computes a weighted sum of its inputs and then applies a non-linear activation function to it to produce its output. There are many different activation functions, but the most popular these days is ReLU and its modifications, thanks to their favorable computational properties.
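The single-neuron computation described above (weighted sum plus bias, followed by ReLU) can be sketched in a few lines of NumPy. This is a minimal illustration, not code from the chapter; the function and variable names are hypothetical.

```python
import numpy as np

def relu(x):
    # ReLU activation: max(0, x), applied element-wise
    return np.maximum(0.0, x)

def neuron_output(inputs, weights, bias):
    # Weighted sum of the inputs plus a bias term,
    # passed through the non-linear activation
    return relu(np.dot(weights, inputs) + bias)

# Example: the pre-activation is 0.4*1.0 + 0.3*(-2.0) + (-0.2)*0.5 + 0.1 = -0.2,
# so ReLU clips the output to 0.0
y = neuron_output(np.array([1.0, -2.0, 0.5]),
                  np.array([0.4, 0.3, -0.2]),
                  bias=0.1)
```

Stacking many such neurons side by side gives a layer, and chaining layers gives the feed-forward networks discussed in the chapter.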
NNs are usually trained with the backpropagation algorithm, built on top of stochastic gradient descent. Feed-forward NNs with several layers are also known as multilayer perceptrons (MLPs), which can be used for classification tasks.
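To make the training recap concrete, here is one stochastic gradient descent step for a single linear neuron with a squared-error loss, where the gradient is obtained via the chain rule as in backpropagation. This is an illustrative sketch under assumed names (`w`, `x`, `t`, `lr`), not the chapter's implementation.

```python
import numpy as np

def sgd_step(w, x, t, lr=0.1):
    # One SGD update for a linear neuron with loss L = 0.5 * (w.x - t)^2
    y = np.dot(w, x)       # forward pass: the neuron's prediction
    grad = (y - t) * x     # dL/dw by the chain rule (backpropagation)
    return w - lr * grad   # gradient descent: step against the gradient

w = np.array([0.5, -0.5])
x = np.array([1.0, 2.0])
# prediction is -0.5, target is 1.0, so the weights move to reduce the error
w_new = sgd_step(w, x, t=1.0)
```

A full training loop repeats this update over many samples and, for deeper networks, propagates the gradient backward through each layer in turn.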
In the next chapter, we'll continue to discuss NNs, but this time we'll focus ...