We've now seen all of the DQN improvements described in the paper Rainbow: Combining Improvements in Deep Reinforcement Learning, so let's combine them into a single hybrid method. First of all, we need to define our network architecture and the three methods that contribute to it:
- Categorical DQN: Our network will predict the probability distribution of values for every action.
- Dueling DQN: Our network will have two separate paths: one for the state value distribution and one for the advantage distribution. On the output, both paths will be summed together, providing the final value probability distribution for every action. To force the advantage distribution to have a zero mean, we'll subtract the mean advantage from the advantage distribution in every atom.
- NoisyNet: Our linear ...
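The dueling aggregation over distribution atoms described above can be sketched as follows. This is a minimal illustration, not the book's actual network class; the function name and tensor shapes are assumptions chosen for clarity:

```python
import torch


def dueling_distribution(value, advantage):
    """Combine the two dueling paths into per-action value distributions.

    value:     tensor of shape (batch, 1, n_atoms), the state value
               distribution logits.
    advantage: tensor of shape (batch, n_actions, n_atoms), the advantage
               distribution logits.
    """
    # Subtract the mean advantage in every atom, forcing the advantage
    # distribution to have a zero mean across actions
    adv_mean = advantage.mean(dim=1, keepdim=True)
    logits = value + (advantage - adv_mean)
    # Softmax over the atoms dimension turns the logits into a valid
    # probability distribution for every action
    return torch.softmax(logits, dim=2)
```

Broadcasting handles the summation: the value path's singleton action dimension is expanded across all actions, so every action shares the same state value distribution and differs only in its zero-mean advantage.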