Appendix A. Interactive Neural Networks
This appendix explores fundamental notions of neural networks with basic Python code, based on both simple and shallow neural networks. The goal is to provide a good grasp of, and intuition for, important concepts that often disappear behind the high-level, abstract APIs of standard machine learning and deep learning packages.
The appendix has the following sections:
- "Tensors and Tensor Operations" covers the basics of tensors and the operations implemented on them.
- "Simple Neural Networks" discusses simple neural networks, or neural networks that only have an input and an output layer.
- "Shallow Neural Networks" focuses on shallow neural networks, or neural networks with one hidden layer.
Tensors and Tensor Operations
In addition to several imports and configuration settings, the following Python code shows the four types of tensors relevant for the purposes of this appendix: scalar, vector, matrix, and cube tensors. Tensors are generally represented as potentially multidimensional ndarray objects in Python. For more details and examples, see Chollet (2017, ch. 2):
In [1]: import math
        import numpy as np
        import pandas as pd
        from pylab import plt, mpl
        np.random.seed(1)
        plt.style.use('seaborn')
        mpl.rcParams['savefig.dpi'] = 300
        mpl.rcParams['font.family'] = 'serif'
        np.set_printoptions(suppress=True)

In [2]: t0 = np.array(10)
        t0

Out[2]: array(10)

In [3]: t1 = np.array((2, 1))
        t1

Out[3]: array([2, 1])

In [4
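The listing above breaks off at In [4]. As a minimal sketch of the remaining two tensor types named in the text, the following lines construct a matrix (two-dimensional) tensor and a cube (three-dimensional) tensor as ndarray objects; the variable names t2 and t3 and the concrete values are illustrative assumptions, not the original listing:

import numpy as np

# Matrix tensor (rank 2): a two-dimensional ndarray object.
# The concrete values chosen here are for illustration only.
t2 = np.array(((2, 1), (-1, 3)))
print(t2.ndim)   # 2
print(t2.shape)  # (2, 2)

# Cube tensor (rank 3): a three-dimensional ndarray object,
# here filled with standard normally distributed random numbers.
t3 = np.random.standard_normal((2, 3, 4))
print(t3.ndim)   # 3
print(t3.shape)  # (2, 3, 4)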