Chapter 5. The Softmax Algorithm
Introducing the Softmax Algorithm
If you’ve completed the exercises for Chapter 2, you should have discovered that there’s an obvious problem with the epsilon-Greedy algorithm: it explores options completely at random, without any concern for their merits. For example, in one scenario (call it Scenario A), you might have two arms, one of which rewards you 10% of the time and the other 13% of the time. In Scenario B, the two arms might reward you 10% of the time and 99% of the time. In both scenarios, the probability that the epsilon-Greedy algorithm explores the worse arm is exactly the same (it’s epsilon / 2), even though the inferior arm in Scenario B is, in relative terms, much worse than the inferior arm in Scenario A.
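To make this concrete, here is a minimal sketch of epsilon-Greedy arm selection (not the book’s implementation; the function name and `estimated_values` list are illustrative). Notice that the exploration branch never consults the reward estimates, so the inferior arm is explored at the same rate in both scenarios:

```python
import random

def select_arm_epsilon_greedy(estimated_values, epsilon):
    """Pick the best-looking arm with probability 1 - epsilon;
    otherwise pick an arm uniformly at random."""
    if random.random() > epsilon:
        return estimated_values.index(max(estimated_values))
    return random.randrange(len(estimated_values))

# With two arms, the worse arm is chosen on roughly epsilon / 2 of all
# trials, whether the estimates are [0.10, 0.13] or [0.10, 0.99].
for values in ([0.10, 0.13], [0.10, 0.99]):
    picks = [select_arm_epsilon_greedy(values, 0.1) for _ in range(100000)]
    print(values, picks.count(0) / len(picks))  # ~0.05 in both scenarios
```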
This is a problem for several reasons:
- If the difference in reward rates between two arms is small, you’ll need to explore a lot more often than 10% of the time to correctly determine which of the two options is actually better.
- In contrast, if the difference is large, you need to explore a lot less than 10% of the time to correctly estimate the better of the two options. For that reason, you’ll end up losing a lot of reward by exploring an unambiguously inferior option in this case. When we first described the epsilon-Greedy algorithm, we said that we wouldn’t set epsilon = 1.0 precisely so that we wouldn’t waste time on inferior options; but, if the difference between two arms is large enough, exploring at a fixed rate means we end up wasting time on inferior options anyway.
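This is the behavior the softmax algorithm is designed to fix: it makes the rate of exploration depend on how close the arms’ estimated values are. As a preview, here is a minimal sketch of the standard softmax weighting (the `temperature` value and function name are illustrative; the chapter develops the full algorithm):

```python
import math

def softmax_probabilities(estimated_values, temperature):
    """Standard softmax (Boltzmann) weighting: arms with higher
    estimated rewards get exponentially more of the probability mass."""
    weights = [math.exp(v / temperature) for v in estimated_values]
    total = sum(weights)
    return [w / total for w in weights]

# Scenario A: a small gap keeps the inferior arm in play...
print(softmax_probabilities([0.10, 0.13], temperature=0.1))
# -> roughly [0.43, 0.57]

# Scenario B: a large gap all but eliminates it.
print(softmax_probabilities([0.10, 0.99], temperature=0.1))
# -> roughly [0.0001, 0.9999]
```

Unlike epsilon-Greedy, the inferior arm in Scenario B is almost never selected, while the two closely matched arms in Scenario A continue to be compared.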