Yunzhe Tian, Yike Li, Yingxiao Xiang, Wenjia Niu, Endong Tong, and Jiqiang Liu (Beijing Jiaotong University)

Robust reinforcement learning has been a challenging problem due to the ever-present, unknown differences between the real and training environments. Existing efforts approach the problem by applying random environmental perturbations during the learning process. However, there is no guarantee that a random perturbation is beneficial; harmful ones may cause reinforcement learning to fail. Therefore, in this paper, we propose to use a GAN to dynamically generate progressive perturbations at each epoch, realizing curricular policy learning. The demo we implemented in the unmanned CarRacing game validates the effectiveness of this approach.
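The abstract does not include code, so the following is a minimal, hypothetical sketch of the general idea it describes: a GAN-style generator produces observation perturbations whose magnitude grows over training epochs, yielding a curriculum for the policy. It assumes PyTorch, and all names (PerturbationGenerator, curriculum_scale, train) are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of GAN-driven curricular perturbation
# for robust RL. All class/function names are hypothetical.
import torch
import torch.nn as nn


class PerturbationGenerator(nn.Module):
    """Hypothetical GAN generator: maps noise to a bounded observation perturbation."""
    def __init__(self, noise_dim=8, obs_dim=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim, 32), nn.ReLU(),
            nn.Linear(32, obs_dim), nn.Tanh(),  # Tanh keeps the perturbation in [-1, 1]
        )

    def forward(self, z):
        return self.net(z)


def curriculum_scale(epoch, num_epochs, max_eps=0.3):
    """Progressively increase the perturbation magnitude over epochs (the curriculum)."""
    return max_eps * (epoch + 1) / num_epochs


def train(num_epochs=10, obs_dim=4, noise_dim=8):
    gen = PerturbationGenerator(noise_dim, obs_dim)
    for epoch in range(num_epochs):
        eps = curriculum_scale(epoch, num_epochs)
        # Sample a perturbation for this epoch's rollouts.
        z = torch.randn(1, noise_dim)
        delta = eps * gen(z).detach()
        # A full implementation would roll out the policy in the perturbed
        # environment (e.g., CarRacing observations plus delta), update the
        # policy on the perturbed trajectories, and update the generator
        # adversarially against the policy's return.
        clean_obs = torch.zeros(1, obs_dim)   # placeholder observation
        perturbed_obs = clean_obs + delta     # perturbed input fed to the policy
        print(f"epoch {epoch}: eps={eps:.2f}, |delta|={delta.norm().item():.3f}")


if __name__ == "__main__":
    train()
```

The key design choice sketched here is that the perturbation budget grows with the epoch index, so early policies face mild perturbations and later ones face progressively harder environments, rather than arbitrary random noise throughout training.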
