Backdoor Attacks against Machine Learning Systems and Countermeasures
Abstract: The data-hungry nature of modern machine learning models forces practitioners to increasingly outsource the creation and collection of training data, which opens the door for malicious outsiders to control the behavior of learned models by manipulating that data. In this talk, I will discuss an important class of attacks on machine learning systems: backdoor attacks, in which an attacker manipulates a dataset so that the learned model classifies any test input containing a trigger as an attacker-chosen target label. I will present a series of our works on understanding vulnerabilities to backdoor attacks and developing countermeasures.
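As a rough illustration of the attack described in the abstract, the sketch below shows how a small fraction of a training set might be poisoned by stamping a pixel-patch trigger onto selected images and relabeling them to a target class. The function names, patch shape, and poisoning rate are illustrative assumptions, not details from the talk.

```python
# Minimal sketch of backdoor data poisoning on image-like arrays.
# All names and parameters here are illustrative assumptions.
import numpy as np

def add_trigger(x, patch_value=1.0, patch_size=3):
    """Stamp a small square trigger patch into the bottom-right corner."""
    x = x.copy()
    x[-patch_size:, -patch_size:] = patch_value
    return x

def poison_dataset(images, labels, target_label, poison_rate=0.05, seed=0):
    """Stamp the trigger onto a random fraction of training images and
    relabel them with the attacker-chosen target label."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = add_trigger(images[i])
        labels[i] = target_label
    return images, labels

if __name__ == "__main__":
    # Toy data: 100 grayscale 28x28 "images" with 10 classes.
    X = np.random.rand(100, 28, 28)
    y = np.random.randint(0, 10, size=100)
    X_poisoned, y_poisoned = poison_dataset(X, y, target_label=7)
    # A model trained on (X_poisoned, y_poisoned) may learn to associate
    # the trigger patch with class 7 while behaving normally on clean inputs.
    print((y_poisoned == 7).sum(), "examples now carry the target label")
```

Countermeasures discussed in this line of work typically aim to detect such poisoned examples or to remove the trigger-to-label association from the trained model.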
About the Speaker: Ruoxi Jia is an assistant professor in the Bradley Department of Electrical and Computer Engineering at Virginia Tech. She earned her Ph.D. from the EECS Department at UC Berkeley in 2018 and a B.S. from Peking University in 2013. Her research interests lie broadly at the intersection of machine learning, security, privacy, and cyber-physical systems, with recent work focusing on data-centric and trustworthy machine learning. She is the recipient of the Chiang Fellowship for Graduate Scholars in Manufacturing and Engineering, the 8108 Alumni Fellowship, the Okamatsu Fellowship, Virginia's Commonwealth Cyber Initiative award, Cisco Research Awards, and Amazon Research Awards. She was selected for the Rising Stars in EECS program in 2017, and her work has been featured in multiple media outlets.