Multi-layer perceptron

Like pancakes, neural network layers are made to be stacked on top of each other. We can feed the output of one layer as the input of the next; such an intermediate layer is called a hidden layer. A hidden layer consists of a linear combination of its inputs, to which an activation function is applied. The result is a new hidden vector, which we can take as the input of the following hidden layer, at each step recombining the outputs of the previous layer with a set of weights and applying an activation function.
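To make the stacking concrete, here is a minimal base R sketch of two stacked layers; the weights, dimensions, and the placeholder activation are made up for illustration and are not the book's code:

act <- function(x) pmax(0, x)            # placeholder nonlinearity (ReLU); any activation works
x  <- c(1.0, 2.0)                        # input vector
W1 <- matrix(rnorm(3 * 2), nrow = 3)     # weights of the first hidden layer
b1 <- rnorm(3)
h  <- act(W1 %*% x + b1)                 # hidden vector: linear combination + activation
W2 <- matrix(rnorm(1 * 3), nrow = 1)     # weights of the next layer
b2 <- rnorm(1)
y  <- act(W2 %*% h + b2)                 # the hidden vector becomes the next layer's input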

Let's start by introducing the sigmoid function, which will be useful later:

library(R6)

sigmoid <- function(x){
  1/(1+exp(-x))
}
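As a quick sanity check of the definition (the outputs shown as comments are approximate):

sigmoid(0)           # 0.5
sigmoid(c(-5, 5))    # ~0.0067 and ~0.9933: large inputs saturate towards 0 and 1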

The skeleton of the class is now:

MLP <- R6Class("MLP",
               public = list(
                 dim = NULL,
                 n_iter = NULL,
                 learning_rate = NULL,
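                 # NOTE: the excerpt stops here. What follows is a hypothetical
                 # completion, not the book's code: a constructor that simply
                 # stores the hyperparameters so the skeleton parses and runs.
                 initialize = function(dim, n_iter = 100, learning_rate = 0.01) {
                   self$dim <- dim
                   self$n_iter <- n_iter
                   self$learning_rate <- learning_rate
                 }
               ))

With this completion, MLP$new(dim = c(2, 3, 1), n_iter = 500, learning_rate = 0.1) would construct an instance; the argument values here are illustrative.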
