What does the future hold for AI and privacy?
Abstract:
Modern advances in science and technology have given rise to AI. As people place greater trust in their devices and machines, they have also granted them access to more of their information and data. Not everyone is comfortable with this, and, as it turns out, not every machine is completely trustworthy either. Even if much of this distrust or skepticism toward AI proves unfounded, significant research has been devoted to addressing such concerns, and ignoring these vulnerabilities when developing a master plan for the future would be negligent.
The concern about privacy attacks on machine learning models boils down to a simple question: "What are the risks of private information being exposed during AI training or inference?" This signals the need for those who build AI algorithms to account for privacy from the earliest stages of development.
This talk highlights challenges and opportunities for trustworthy AI with a focus on privacy attacks and countermeasures. Moreover, we will explore inference attacks against machine learning models and frameworks (e.g., federated learning), and set out the requirements for privacy-preserving AI systems.
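To give a concrete sense of the kind of inference attack the talk will cover, below is a minimal, purely illustrative sketch (not taken from the talk) of one of the simplest membership inference attacks: guessing whether a record was in the training set by thresholding the target model's prediction confidence. The dataset, model, and threshold here are assumptions chosen only for illustration.

```python
# Illustrative sketch of a confidence-thresholding membership inference attack.
# All names, the synthetic dataset, and the threshold are assumptions for
# demonstration; they are not from the talk or from any specific paper.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Build a synthetic "private" dataset and split it into members (used for
# training the target model) and non-members (never seen during training).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_member, X_nonmember, y_member, y_nonmember = train_test_split(
    X, y, test_size=0.5, random_state=0)

# Train the target model only on the member data.
target = RandomForestClassifier(n_estimators=100, random_state=0)
target.fit(X_member, y_member)

def infer_membership(model, X, threshold=0.9):
    """Guess 'member' when the model is highly confident in its top class.

    This exploits the tendency of overfitted models to be more confident
    on records they were trained on than on unseen records.
    """
    top_confidence = model.predict_proba(X).max(axis=1)
    return top_confidence >= threshold

# If the attack works, a larger fraction of true members is flagged than
# of non-members, revealing information about the private training set.
member_rate = infer_membership(target, X_member).mean()
nonmember_rate = infer_membership(target, X_nonmember).mean()
print(f"Flagged as members: training data={member_rate:.2f}, held-out data={nonmember_rate:.2f}")
```

The gap between the two flagged fractions is a rough measure of how much membership information the model leaks; countermeasures such as differential privacy aim to shrink that gap.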
About the Speaker:
Giuseppe Ateniese is a Professor, Eminent Scholar in Cybersecurity, and CCI Faculty Fellow in the Department of Computer Science and the Department of Cyber Security Engineering at George Mason University. He was the Farber Endowed Chair in Computer Science and Department Chair at Stevens Institute of Technology. In addition, he was with Sapienza University of Rome (Italy), was an Assistant/Associate Professor at Johns Hopkins University (USA), and was one of the founders of the JHU Information Security Institute. He was a researcher at the IBM Zurich Research Lab (Switzerland) and a scientist at the Information Sciences Institute of the University of Southern California (USA). He also briefly worked as a visiting professor at Microsoft in Redmond (USA). He received the NSF CAREER Award for his research in privacy and security, as well as the Google Faculty Research Award, the IBM Faculty Award, and the IEEE CISTC Technical Recognition Award for his research on cloud security. He has contributed to areas such as proxy re-cryptography, anonymous communication, two-party computation, secure storage, and provable data possession. He is currently working on privacy-preserving machine learning and decentralized secure computing based on blockchain technology.