One of the criticisms of NNs is that their results aren't interpretable. It's common to think of a NN as a black box whose internal logic is hidden from us, and this can be a serious problem. On the one hand, we're less likely to trust an algorithm that works in a way we don't understand; on the other hand, it's hard to improve the accuracy of a CNN if we don't know how it works. For this reason, in the upcoming sections we'll discuss two methods of visualizing the internal layers of a CNN, both of which will help us gain insight into how these networks learn.
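To make the idea concrete before we get there, here is a minimal sketch of one common form such visualization can take: capturing the feature maps an intermediate convolutional layer produces for an input and plotting them as images. The PyTorch model, layer choice, and random input below are illustrative assumptions, not necessarily the methods the upcoming sections will use.

```python
import torch
import torchvision.models as models
import matplotlib.pyplot as plt

# An untrained ResNet-18, purely for demonstration; any CNN would do.
model = models.resnet18(weights=None)
model.eval()

activations = {}

def capture(name):
    # Forward hook: stores the layer's output so we can inspect it later.
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

# Attach the hook to the first convolutional layer.
model.conv1.register_forward_hook(capture("conv1"))

# A random "image"; in practice you'd feed a real, normalized photo.
x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    model(x)

# Plot the first 16 of conv1's feature maps as grayscale images.
maps = activations["conv1"][0]
fig, axes = plt.subplots(4, 4, figsize=(8, 8))
for i, ax in enumerate(axes.flat):
    ax.imshow(maps[i].numpy(), cmap="gray")
    ax.axis("off")
plt.show()
```

Even this simple view turns the "black box" into something we can look at: each feature map shows which patterns in the input a given filter responds to.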