6 Input/Output, Hidden Layer and Bias

The probability of A conditional on B is defined as P(A|B) = P(A ∩ B)/P(B), with P(B) > 0.

Definition of conditional probability, from which Bayes' Theorem follows
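The definition above can be checked numerically. The sketch below uses illustrative probabilities (the values 0.12, 0.3 and 0.2 are assumptions for this example, not taken from the text) to compute P(A|B) directly from the definition and then to recover the same value via Bayes' theorem:

```python
# Illustrative numbers (assumed for this sketch):
p_a_and_b = 0.12   # P(A ∩ B)
p_b = 0.3          # P(B), must be > 0
p_a = 0.2          # P(A), must be > 0

# Definition of conditional probability: P(A|B) = P(A ∩ B) / P(B)
p_a_given_b = p_a_and_b / p_b          # ≈ 0.4

# Bayes' theorem recovers P(A|B) from the reverse conditional:
# P(A|B) = P(B|A) · P(A) / P(B), where P(B|A) = P(A ∩ B) / P(A)
p_b_given_a = p_a_and_b / p_a          # ≈ 0.6
p_a_given_b_bayes = p_b_given_a * p_a / p_b

print(p_a_given_b, p_a_given_b_bayes)  # the two values agree
```

Both routes give the same conditional probability, since Bayes' theorem is derived directly from the definition applied twice.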

Almost all probabilistic approaches, such as Bayes', rely on explicit reasoning: the events and their probabilities of occurrence are laid out in a possibility tree or a flow chart diagram. A neural network, by contrast, can produce results without any flow chart being defined; the net performs its algorithms well provided that its inputs, the connections between its nodes, and the external reinforcements have been correctly represented. In this chapter, therefore, the neural network will be introduced as a paradigm of connections. For those of you who made it this far, a note of respect for your intellectual curiosity. We are at the core of our overview of neural networks, and we will now look at their internal mechanisms. This will help us understand what makes them such useful computational tools for solving problems of classification, prediction, optimisation and more.

Some of our readers might find that the next chapters refer to long-lost concepts learnt during their university years. The blog associated with this book will come to your rescue by featuring many articles explaining concepts such as sum-product matrices, bias and curve fitting.

6.1 Input/Output

To build a neural network, we need details on the number of input and output data sets and information on their ...
