Noisy networks
The next improvement that we're going to check addresses another RL problem: exploration of the environment. The paper is called Noisy Networks for Exploration ([4] Fortunato and others, 2017), and it proposes a very simple idea: learn the exploration characteristics during training, instead of relying on a separate, hand-tuned exploration schedule.
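To make the idea concrete, below is a minimal sketch of a noisy linear layer in PyTorch, following the independent Gaussian noise variant described in the paper. The class name NoisyLinear and the initial noise scale sigma_init are illustrative choices for this sketch, not a prescribed implementation.

```python
import torch
import torch.nn as nn

class NoisyLinear(nn.Linear):
    """Linear layer with learnable Gaussian noise on weights and biases.

    Exploration comes from the noise added to the layer's parameters;
    the sigma parameters are trained together with the rest of the network,
    so the amount of exploration adapts during training.
    """
    def __init__(self, in_features, out_features, sigma_init=0.017):
        super().__init__(in_features, out_features)
        # Learnable noise scale for every weight and bias
        self.sigma_weight = nn.Parameter(
            torch.full((out_features, in_features), sigma_init))
        self.sigma_bias = nn.Parameter(
            torch.full((out_features,), sigma_init))
        # Noise samples are not trained, so keep them as buffers
        self.register_buffer("epsilon_weight",
                             torch.zeros(out_features, in_features))
        self.register_buffer("epsilon_bias", torch.zeros(out_features))

    def forward(self, x):
        # Resample the noise on every forward pass
        self.epsilon_weight.normal_()
        self.epsilon_bias.normal_()
        weight = self.weight + self.sigma_weight * self.epsilon_weight
        bias = self.bias + self.sigma_bias * self.epsilon_bias
        return nn.functional.linear(x, weight, bias)
```

In a DQN, such layers would replace the ordinary nn.Linear layers of the value head, and actions are then chosen greedily; no epsilon-greedy machinery is needed.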
Classical DQN achieves exploration by choosing random actions with a probability given by a specially defined hyperparameter, epsilon, which is slowly decreased over time from 1.0 (fully random actions) to some small value such as 0.1 or 0.02. This process works well for simple environments with short episodes, without much non-stationarity during the game, but even in such simple cases, it requires tuning to make training ...
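As a point of comparison, here is a minimal sketch of the epsilon-greedy scheme with a linear decay schedule. The constants EPS_START, EPS_FINAL, and EPS_DECAY_FRAMES are illustrative; suitable values depend on the environment and are exactly the kind of tuning that noisy networks try to avoid.

```python
import numpy as np

# Illustrative schedule parameters; real values require per-environment tuning
EPS_START, EPS_FINAL, EPS_DECAY_FRAMES = 1.0, 0.02, 100_000

def epsilon_by_frame(frame_idx: int) -> float:
    """Linearly anneal epsilon from EPS_START down to EPS_FINAL."""
    fraction = min(frame_idx / EPS_DECAY_FRAMES, 1.0)
    return EPS_START + fraction * (EPS_FINAL - EPS_START)

def select_action(q_values: np.ndarray, epsilon: float) -> int:
    """Epsilon-greedy: random action with probability epsilon, greedy otherwise."""
    if np.random.random() < epsilon:
        return np.random.randint(len(q_values))
    return int(np.argmax(q_values))
```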