In the previous section, we learned how to perform the following steps on the input data to arrive at the error value in forward propagation (the code file is available as Neural_network_working_details.ipynb on GitHub):
- Initialize weights randomly
- Calculate the hidden layer unit values by multiplying the input values by the weights and adding the bias
- Apply a sigmoid activation on the hidden layer values
- Connect the hidden layer values to the output layer
- Calculate the squared error loss
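In equation form, these steps amount to the following (a minimal summary of the steps above; the symbols $x$, $W$, $b$, $\hat{y}$, and $y$ are notation introduced here for clarity, not from the original):

$$
h = \sigma(x W_0 + b_0), \qquad \sigma(z) = \frac{1}{1 + e^{-z}}
$$

$$
\hat{y} = h W_1 + b_1, \qquad \text{loss} = \frac{1}{n}\sum_{i=1}^{n} (\hat{y}_i - y_i)^2
$$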
A function to calculate the squared error loss values across all data points is as follows:
```python
import numpy as np

def feed_forward(inputs, outputs, weights):
    # Hidden layer values: inputs times weights, plus the bias term
    pre_hidden = np.dot(inputs, weights[0]) + weights[1]
    # Sigmoid activation on the hidden layer values
    hidden = 1 / (1 + np.exp(-pre_hidden))
    # Connect the hidden layer to the output layer (the original snippet is
    # truncated from here on; the remaining lines are reconstructed to follow
    # the steps listed above)
    out = np.dot(hidden, weights[2]) + weights[3]
    # Squared error loss averaged across all data points
    mean_squared_error = np.mean(np.square(out - outputs))
    return mean_squared_error
```
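As a quick sanity check, the function can be called with randomly initialized weights (the first step in the list above). The toy inputs, outputs, and layer sizes below are illustrative assumptions, not values from the original:

```python
# Hypothetical toy data: 4 samples with 2 features each, one output per sample
inputs = np.array([[1., 2.], [3., 4.], [5., 6.], [7., 8.]])
outputs = np.array([[0.], [1.], [1.], [0.]])

# Initialize weights randomly (assumed shapes: 2 inputs -> 3 hidden units -> 1 output)
np.random.seed(0)
weights = [
    np.random.randn(2, 3),  # hidden layer weights
    np.random.randn(3),     # hidden layer bias
    np.random.randn(3, 1),  # output layer weights
    np.random.randn(1),     # output layer bias
]

# Squared error loss for this random initialization
print(feed_forward(inputs, outputs, weights))
```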