Title: Adversarial Attacks and Defenses in Deep Learning: An Optimization Perspective
Deep Neural Networks (DNNs) have driven breakthroughs across many areas of artificial intelligence. However, recent studies show that DNNs are vulnerable to adversarial examples: a tiny perturbation of an image, almost invisible to human eyes, can mislead a well-trained image classifier into misclassification. Although optimization-based methods have achieved many recent successes in adversarial attacks and defenses, several challenges remain unsolved. First, in both the white-box and black-box settings, optimization-based attack algorithms usually suffer from poor time and query complexity, limiting their practical usefulness. Second, although adversarial training is currently one of the most effective defense strategies, the robustness it achieves is still far from satisfactory, and it typically requires a large amount of training time. The proposed research focuses on solving these problems in adversarial attacks and defenses. By combining the proposed research tasks, we provide efficient and effective optimization-based algorithms that enable faster attacks and more robust defenses. This will greatly improve the practicality of current adversarial attack and defense algorithms in real-world scenarios including image classification, speech recognition, visual question answering, image captioning, and autonomous driving.
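The "tiny perturbation" phenomenon above can be illustrated with a minimal sketch in the style of the Fast Gradient Sign Method (FGSM). The toy linear classifier, the constructed input, and parameters such as `eps` below are illustrative assumptions for exposition only, not models or methods from the proposal itself:

```python
import numpy as np

# Hedged sketch: a toy binary linear classifier and an FGSM-style attack.
rng = np.random.default_rng(0)
d = 100                      # input dimension (e.g. a flattened image)

# "Trained" classifier: predict class 1 iff the score w @ x is positive.
w = rng.normal(size=d)

def predict(x):
    return int(w @ x > 0)

# Construct an input that is classified as 1 with a small positive margin.
x = rng.normal(size=d)
x -= w * (w @ x) / (w @ w)   # remove the component of x along w
x += w / (w @ w)             # set the score w @ x to exactly 1.0
assert predict(x) == 1

# FGSM-style step toward class 0: move every coordinate by eps against the
# gradient of the score, i.e. x_adv = x - eps * sign(w).
eps = 0.05
x_adv = x - eps * np.sign(w)

# Each coordinate changes by at most eps (an almost invisible perturbation
# for image data), yet the score drops by eps * sum(|w_i|), a quantity that
# grows with the input dimension -- enough here to flip the prediction.
assert np.max(np.abs(x_adv - x)) <= eps + 1e-12
print(predict(x_adv))
```

The design point is that the perturbation is small in the L-infinity sense (at most `eps` per coordinate) while its effect on the score accumulates across all coordinates, which is one intuition for why high-dimensional inputs such as images are so susceptible.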
Advisor: Quanquan Gu
Chair: David Evans
Other members: Hongning Wang, Yangfeng Ji
Minor representative: Farzad Farnoud