Yunzhe Tian, Yike Li, Yingxiao Xiang, Wenjia Niu, Endong Tong, and Jiqiang Liu (Beijing Jiaotong University)

Robust reinforcement learning has been a challenging problem due to the inherently unknown differences between real and training environments. Existing efforts approach the problem by applying random environmental perturbations during the learning process. However, one cannot guarantee that a perturbation is beneficial; harmful perturbations may cause reinforcement learning to fail. Therefore, in this paper, we propose using a GAN to dynamically generate progressive perturbations at each epoch, realizing curricular policy learning. A demo we implemented in the unmanned CarRacing game validates the effectiveness of this approach.
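The core idea — a generator proposing perturbations whose difficulty grows over epochs, so the policy is trained on a curriculum rather than arbitrary random noise — can be sketched as below. This is a minimal illustrative toy, not the paper's implementation: the linear policy, the curriculum schedule, and the stand-in `generator` function are all hypothetical assumptions, and a real GAN generator and CarRacing environment would replace them.

```python
import numpy as np

rng = np.random.default_rng(0)

def generator(obs, epoch, num_epochs):
    """Stand-in for a GAN generator (hypothetical): returns an observation
    perturbation whose allowed magnitude grows with the training epoch,
    giving a simple curriculum from mild to strong perturbations."""
    budget = 0.1 * (epoch + 1) / num_epochs   # curriculum schedule (assumed linear)
    raw = rng.standard_normal(obs.shape)
    return budget * raw / (np.linalg.norm(raw) + 1e-8)

def train_policy(obs_dim=4, num_epochs=5, steps_per_epoch=10):
    """Toy training loop: at every step the policy acts on a perturbed
    observation, so robustness to the current perturbation level is
    baked into each epoch's updates."""
    policy = np.zeros(obs_dim)                # toy linear policy weights
    for epoch in range(num_epochs):
        for _ in range(steps_per_epoch):
            obs = rng.standard_normal(obs_dim)
            perturbed = obs + generator(obs, epoch, num_epochs)
            action = policy @ perturbed       # act on perturbed observation
            # toy objective: keep the action near zero; gradient step on it
            policy -= 0.01 * 2 * action * perturbed
    return policy

weights = train_policy()
```

In the paper's setting, `generator` would be trained adversarially against the current policy, so the perturbations it emits are not random but targeted, while the growing budget keeps early training stable.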
