December 2019
Intermediate to advanced
468 pages
14h 28m
English
One of the main reasons for recent advances in Deep Learning (DL) is the ability to run Neural Networks (NNs) very fast. This is in large part because of the good match between the nature of NN algorithms and the architecture of Graphics Processing Units (GPUs). In Chapter 1, The Nuts and Bolts of Neural Networks, we underscored the importance of matrix multiplication in NNs. As a testament to this, it is possible to transform the convolution into a matrix multiplication as well. Matrix multiplication is embarrassingly parallel (trust me, this is a real term; you can Google it!): the computation of each output cell is independent of the computation of any other output cell. Therefore, we can compute all of the ...
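To make the convolution-as-matrix-multiplication claim concrete, here is a minimal NumPy sketch of the standard im2col trick: each kernel-sized patch of the input is unrolled into a row of a matrix, after which the whole convolution collapses into a single matrix-vector product. The function name `im2col` and the toy input values are illustrative choices, not code from the book.

```python
import numpy as np

def im2col(x, kh, kw):
    """Unroll every kh x kw patch of a 2-D input into one row of a matrix."""
    h, w = x.shape
    out_h, out_w = h - kh + 1, w - kw + 1
    cols = np.empty((out_h * out_w, kh * kw))
    for i in range(out_h):
        for j in range(out_w):
            # Each patch becomes an independent row -> embarrassingly parallel.
            cols[i * out_w + j] = x[i:i + kh, j:j + kw].ravel()
    return cols

# A 2-D convolution (strictly, cross-correlation, as in most DL frameworks)
# expressed as one matrix multiplication.
x = np.arange(16, dtype=float).reshape(4, 4)   # toy 4x4 input
k = np.array([[1., 0.],
              [0., -1.]])                       # toy 2x2 kernel
out = (im2col(x, 2, 2) @ k.ravel()).reshape(3, 3)
# Every output cell here is x[i, j] - x[i+1, j+1], computed independently.
```

Once the patches are laid out as matrix rows, a GPU can compute every output cell in parallel with the same highly optimized matrix-multiplication kernels it uses for fully connected layers.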