Chapter 7. Learning All Possible Policies with Entropy Methods
Deep reinforcement learning (RL) has become a standard tool thanks to its ability to process and approximate complex observations, which results in elaborate behaviors. However, many deep RL methods optimize for a deterministic policy, because under full observability there is a single best action in every state. In practice, though, it is often desirable to learn a stochastic policy, that is, probabilistic behaviors, to improve robustness and to cope with stochastic environments.
What Is Entropy?
Shannon entropy (abbreviated to entropy from now on) is a measure of the amount of information contained within a stochastic variable, where information is calculated as the number of bits required to encode all possible states. Equation 7-1 shows this as an equation, where X is a stochastic variable, H(X) is the entropy, I(X) is the information content, and b is the base of the logarithm used (commonly bits for b = 2, bans for b = 10, and nats for b = e). Bits are the most common base.
Equation 7-1. The information content of a random variable

$$H(X) = \mathbb{E}\left[I(X)\right] = -\sum_{x} p(x) \log_b p(x)$$
For example, a fair coin has two states, assuming it doesn’t land on its edge. These two states can be encoded by a zero and a one; therefore the amount of information contained within a coin flip, measured by entropy in bits, is one. A fair die has six possible states, so you would need three bits (the smallest whole number of bits covering log2(6) ≈ 2.58) to encode all of those states ...
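The coin and die examples above can be reproduced with a small helper that evaluates Equation 7-1 directly (the function name and structure here are my own sketch, not code from the book):

```python
import math

def entropy(probs, base=2):
    """Shannon entropy of a discrete distribution.

    The unit is set by `base`: 2 = bits, 10 = bans, math.e = nats.
    Terms with zero probability contribute nothing, so they are skipped.
    """
    return -sum(p * math.log(p, base) for p in probs if p > 0)

# A fair coin: two equally likely states -> exactly 1 bit of entropy.
coin_bits = entropy([0.5, 0.5])

# A fair die: six equally likely states -> log2(6) ≈ 2.585 bits,
# so a fixed-length binary code needs 3 bits per roll.
die_bits = entropy([1 / 6] * 6)
```

Changing `base` to `math.e` would report the same quantities in nats instead of bits.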