Chapter 3

Probability Learning and Memory

Abstract

This chapter introduces many of the key elements that determine RELR's probability estimates, which are consistent with both Bayesian and frequentist views of probability. RELR's Bayesian online learning and memory abilities arise as a special case of the constrained minimum KL divergence formulation, which allows prior probabilities to influence new learning. This minimum KL divergence method generalizes RELR's maximum entropy and maximum likelihood methods to situations where prior probabilities are not everywhere equal. RELR's minimum KL divergence online learning retains a memory of previous observation episodes, which is stored in the prior distribution q or equivalent ...
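As a minimal sketch of the formulation the abstract summarizes, the constrained minimum KL divergence problem can be written as below; the generic constraint functions f_j and target values F_j here are stand-ins for RELR's specific feature constraints, which are not reproduced in this excerpt:

\[
\min_{p}\; D_{\mathrm{KL}}(p \,\|\, q) \;=\; \sum_{i} p_i \ln \frac{p_i}{q_i}
\quad \text{subject to} \quad
\sum_{i} p_i = 1, \qquad \sum_{i} p_i \, f_j(x_i) = F_j ,
\]

with the standard exponential-family solution

\[
p_i \;\propto\; q_i \exp\!\Big( \sum_{j} \lambda_j f_j(x_i) \Big).
\]

When the prior q is uniform, \(D_{\mathrm{KL}}(p \,\|\, q)\) differs from the negative entropy \(-\sum_i p_i \ln p_i\) only by a constant, so minimizing it reduces to maximum entropy estimation; a non-uniform q is what carries the memory of earlier observation episodes into the new estimate.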
