Beyond Accuracy: Building Safe Machine Learning Systems with Dual-Correct Predictions
Abstract:
In high-stakes applications, it is not enough for Machine Learning (ML) models to make accurate predictions; they must also provide valid and transparent explanations that align with true causal factors. Without this, trust in ML decisions is compromised, especially in critical domains like healthcare, autonomous driving, and scientific research, where the reasoning behind predictions is as important as the predictions themselves. My research advocates for "dual-correct predictions," emphasizing ML models that not only predict accurately but also do so for the right reasons. This approach is key to building Safe Machine Learning Systems (SMLS) that inspire trust, promote accountability, and empower stakeholders to make informed decisions. In this talk, I will present our recent efforts in addressing two key challenges towards SMLS: 1) how to safely generalize ML models beyond their training data, and 2) how to safeguard ML predictions with trustworthy rationales.
About the Speaker:
Dr. Xi Peng is an Assistant Professor in the Department of Computer & Information Sciences and a resident faculty member at the Data Science Institute, University of Delaware. He leads the Deep Robust & Explainable AI Lab (DeepREAL), which focuses on the safety and reliability of machine learning systems. His research develops foundational models, algorithms, and theories to build safe learning-enabled systems for critical domains such as science, healthcare, and autonomous systems. Dr. Peng's work has been recognized with prestigious awards, including the NSF CAREER Award, DOD DEPSCoR Award, NIH R21 Award, Google Faculty Research Award, General University Research Award, and University of Delaware Research Foundation Award. He earned his Ph.D. in computer science from Rutgers University in 2018.