Yunzhe Tian, Yike Li, Yingxiao Xiang, Wenjia Niu, Endong Tong, and Jiqiang Liu (Beijing Jiaotong University)

Robust reinforcement learning remains a challenging problem because the differences between the real and training environments are never fully known in advance. Existing efforts approach the problem by applying random environmental perturbations during the learning process. However, there is no guarantee that a random perturbation is beneficial; harmful ones may cause reinforcement learning to fail. Therefore, in this paper, we propose to use a GAN to dynamically generate progressively stronger perturbations at each epoch, realizing curricular policy learning. A demo implemented on the unmanned CarRacing game validates the effectiveness of the approach.
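The following is a minimal sketch of the idea described above: a learned generator proposes environment perturbations whose magnitude grows with the training epoch (the curriculum), while the policy is trained on the perturbed observations. It is an illustrative assumption, not the authors' implementation: the paper's GAN is reduced to a single adversarial generator trained against the policy, the CarRacing environment is replaced by a toy control task for self-containment, and all class names (`PerturbationGenerator`, `Policy`) and the budget schedule are hypothetical.

```python
import torch
import torch.nn as nn
import torch.optim as optim

# Hypothetical generator: maps a state plus random noise to a bounded perturbation.
class PerturbationGenerator(nn.Module):
    def __init__(self, state_dim=4, noise_dim=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + noise_dim, 32), nn.ReLU(),
            nn.Linear(32, state_dim), nn.Tanh())

    def forward(self, state, noise):
        return self.net(torch.cat([state, noise], dim=-1))

# Simple deterministic policy; the ideal action cancels the (true) state.
class Policy(nn.Module):
    def __init__(self, state_dim=4, action_dim=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 32), nn.ReLU(),
            nn.Linear(32, action_dim), nn.Tanh())

    def forward(self, state):
        return self.net(state)

policy, gen = Policy(), PerturbationGenerator()
opt_pi = optim.Adam(policy.parameters(), lr=1e-3)
opt_g = optim.Adam(gen.parameters(), lr=1e-3)

for epoch in range(200):
    # Curriculum: the perturbation budget grows gradually over epochs (assumed schedule).
    budget = 0.5 * min(1.0, (epoch + 1) / 100.0)
    states = torch.randn(64, 4)
    noise = torch.randn(64, 4)

    # Policy step: learn to act well under the current perturbations.
    delta = budget * gen(states, noise)
    actions = policy(states + delta.detach())
    pi_loss = (states + actions).pow(2).sum(dim=1).mean()  # negative reward proxy
    opt_pi.zero_grad(); pi_loss.backward(); opt_pi.step()

    # Generator step: produce perturbations that hurt the current policy.
    delta = budget * gen(states, noise)
    actions = policy(states + delta)
    g_loss = -(states + actions).pow(2).sum(dim=1).mean()  # maximize policy loss
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

In this sketch the gradually increasing budget plays the role of the "progressive" perturbations: early epochs expose the policy to nearly clean observations, while later epochs present harder, adversarially chosen ones.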
