Combining everything
We've now seen all of the DQN improvements mentioned in the paper [1], Rainbow: Combining Improvements in Deep Reinforcement Learning, so let's combine them into one hybrid method. First of all, we need to define our network architecture and the three methods that contribute to it:
- Categorical DQN: Our network will predict the probability distribution of values for every action.
- Dueling DQN: Our network will have two separate paths: one for the state-value distribution and one for the advantage distribution. At the output, both paths will be added together, providing the final value probability distributions for the actions. To force the advantage distribution to have zero mean, we'll subtract the mean advantage in every atom.
- NoisyNet: Our linear layers in the value and advantage paths will be noisy linear layers, so exploration comes from learned noise parameters rather than an epsilon-greedy policy (a minimal sketch of the combined network follows this list).
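To make the architecture concrete, here is a minimal PyTorch sketch of such a combined network. The names used here (RainbowDQN, NoisyLinear, the layer sizes, and n_atoms=51) are illustrative assumptions rather than the book's exact code, and the noisy layer is shown in a simplified independent-Gaussian form.

```python
import torch
import torch.nn as nn


class NoisyLinear(nn.Linear):
    """Linear layer with learnable Gaussian noise on weights and bias
    (independent-noise variant), replacing epsilon-greedy exploration."""
    def __init__(self, in_features, out_features, sigma_init=0.017):
        super().__init__(in_features, out_features)
        self.sigma_weight = nn.Parameter(
            torch.full((out_features, in_features), sigma_init))
        self.sigma_bias = nn.Parameter(torch.full((out_features,), sigma_init))
        self.register_buffer("eps_weight", torch.zeros(out_features, in_features))
        self.register_buffer("eps_bias", torch.zeros(out_features))

    def forward(self, x):
        # Sample fresh noise on every forward pass
        self.eps_weight.normal_()
        self.eps_bias.normal_()
        weight = self.weight + self.sigma_weight * self.eps_weight
        bias = self.bias + self.sigma_bias * self.eps_bias
        return nn.functional.linear(x, weight, bias)


class RainbowDQN(nn.Module):
    """Dueling + categorical + noisy network: outputs a probability
    distribution over n_atoms return values for every action."""
    def __init__(self, obs_size, n_actions, n_atoms=51):
        super().__init__()
        self.n_actions = n_actions
        self.n_atoms = n_atoms
        self.feature = nn.Sequential(nn.Linear(obs_size, 256), nn.ReLU())
        # Two separate paths: state-value distribution and advantage distribution
        self.value = nn.Sequential(
            NoisyLinear(256, 256), nn.ReLU(),
            NoisyLinear(256, n_atoms))
        self.advantage = nn.Sequential(
            NoisyLinear(256, 256), nn.ReLU(),
            NoisyLinear(256, n_actions * n_atoms))

    def forward(self, x):
        batch = x.size(0)
        f = self.feature(x)
        val = self.value(f).view(batch, 1, self.n_atoms)
        adv = self.advantage(f).view(batch, self.n_actions, self.n_atoms)
        # Subtract the mean advantage in every atom (zero-mean advantage),
        # then add the two streams together
        logits = val + (adv - adv.mean(dim=1, keepdim=True))
        # Softmax over atoms gives the per-action value distribution
        return nn.functional.softmax(logits, dim=2)
```

As a quick usage check, `RainbowDQN(obs_size=4, n_actions=2)(torch.randn(8, 4))` would return a tensor of shape `(8, 2, 51)`: one 51-atom probability distribution per action for each of the 8 observations in the batch.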