Addressing the limitations

For one, the issues raised in the preceding section are recognized and acknowledged by the research community, and several efforts are underway to address them. Pattanaik et al. not only demonstrate that current deep reinforcement learning algorithms are susceptible to adversarial attacks, but also propose techniques that make those same algorithms more robust to such attacks. In particular, training deep RL algorithms on adversarially perturbed examples improves their robustness against similar attacks. This technique is commonly referred to as adversarial training.
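To make the idea concrete, here is a minimal sketch of adversarial training for a linear Q-function, using NumPy only. It assumes an FGSM-style perturbation (one common choice of attack; the function names and the linear Q-model are illustrative, not from the paper): the gradient of the chosen action's Q-value with respect to the state is used to nudge the observation in the direction that hurts the agent, and the perturbed states are mixed back into the training batch.

```python
import numpy as np

def fgsm_perturb_state(state, q_weights, action, epsilon=0.1):
    """FGSM-style adversarial perturbation of a state.

    For a linear Q-function Q(s, a) = (q_weights @ s)[a], the gradient of
    Q(s, action) with respect to s is simply q_weights[action]. Stepping
    against the sign of that gradient lowers the Q-value of the chosen
    action, mimicking an attack on the agent's observation.
    """
    grad = q_weights[action]            # dQ(s, action)/ds for a linear Q
    return state - epsilon * np.sign(grad)

def adversarial_training_batch(states, q_weights, epsilon=0.1):
    """Augment a batch of states with adversarial counterparts.

    The agent is then trained on both clean and perturbed observations,
    which is the core of adversarial training.
    """
    # Greedy action for each state under the current Q-function
    greedy = np.argmax(states @ q_weights.T, axis=1)
    adv = np.array([fgsm_perturb_state(s, q_weights, a, epsilon)
                    for s, a in zip(states, greedy)])
    return np.concatenate([states, adv], axis=0)
```

In a real deep RL setting, the gradient would come from backpropagation through the Q-network rather than this closed-form linear case, but the training loop is the same: craft perturbed observations against the current policy, then include them in the next update.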

Moreover, the research community is actively working to address the reproducibility ...
