Chapter 5. The Probabilistic Machine Learning Framework
Probability theory is nothing but common sense reduced to calculation.
—Pierre-Simon Laplace, chief contributor to epistemic statistics and probabilistic inference
Recall the inverse probability rule from Chapter 2, which states that given a hypothesis H about a model parameter and some observed dataset D:
P(H|D) = P(D|H) × P(H) / P(D)
It is simply amazing that this trivial reformulation of the product rule is the foundation on which the complex structures of epistemic inference in general, and probabilistic machine learning (PML) in particular, are built. It is the fundamental reason why both these structures are mathematically sound and logically cohesive. On closer examination, we will see that the inverse probability rule combines conditional and unconditional probabilities in profound ways.
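To make each term concrete before we unpack it, here is a minimal numeric sketch of the rule in action, written in Python. The scenario (a hypothesis H that the market is in a high-volatility regime) and every number in it are illustrative assumptions, not figures from this chapter:

# A minimal numeric sketch of the inverse probability rule.
# The regime-detection scenario and all numbers are illustrative assumptions.

prior = 0.30           # P(H): prior probability of a high-volatility regime
likelihood = 0.80      # P(D|H): probability of a large daily move if H is true
likelihood_not = 0.10  # P(D|not H): probability of the same move if H is false

# P(D) by the law of total probability: P(D|H)P(H) + P(D|not H)P(not H)
evidence = likelihood * prior + likelihood_not * (1 - prior)

# Inverse probability rule: P(H|D) = P(D|H) * P(H) / P(D)
posterior = likelihood * prior / evidence

print(f"P(D)   = {evidence:.3f}")    # 0.310
print(f"P(H|D) = {posterior:.3f}")   # 0.774

Observing a single large move raises the probability of the high-volatility hypothesis from 30% to about 77%; the prior, the likelihood, and the normalizing constant each play a distinct role in that update.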
In this chapter, we will analyze and reflect on each term in the rule to gain a better understanding of it. We will also explore how these terms satisfy each of the requirements for a next-generation ML framework for finance and investing that we outlined in Chapter 1.
Applying the inverse probability rule to real-world problems is nontrivial for two reasons, one logical and one computational. As was explained in Chapter 4, our minds are not very good at processing probabilities, especially conditional ones. Also mentioned was the fact that P(D), the denominator in the inverse probability rule, is a normalizing constant that is analytically intractable for all but the simplest models, since computing it requires summing or integrating P(D|H) × P(H) over every possible hypothesis.
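To see why P(D) is the computational bottleneck, note that for a continuous model parameter it becomes an integral of P(D|H) × P(H) over the whole parameter space, which rarely has a closed form. The sketch below, whose data and flat prior are illustrative assumptions, approximates that integral on a grid for a single parameter: the probability that a stock closes up on a given day.

import numpy as np

ups, downs = 34, 26                       # hypothetical record: 34 up days out of 60
theta = np.linspace(0.001, 0.999, 1_000)  # grid of candidate parameter values

prior = np.ones_like(theta)               # flat prior P(H) over the grid
prior /= prior.sum()

likelihood = theta**ups * (1 - theta)**downs  # Bernoulli likelihood P(D|H)

# P(D): the normalizing constant, approximated by summing over the grid
evidence = np.sum(likelihood * prior)

posterior = likelihood * prior / evidence     # P(H|D) at each grid point

print(f"Posterior mean: {np.sum(theta * posterior):.3f}")  # about 0.565

One parameter needs only 1,000 grid points, but d parameters need 1,000 to the power d, which is why practical PML systems approximate P(D) with numerical methods such as Markov chain Monte Carlo rather than computing it exactly.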