Modern software contributes to important societal decisions, yet we know very little about its fairness properties. Can software discriminate? Evidence of software discrimination has been found in systems that recommend criminal sentences, grant access to loans and other financial products, transcribe YouTube videos, translate text, and perform facial recognition. Nonetheless, even defining what it means for software to discriminate is a complex task. I will present recent research that defines software fairness and discrimination; develops a testing-based, causality-capturing method for measuring whether and to what extent software discriminates; and provides provable formal guarantees on software fairness.
I will also describe open problems in software fairness and how recent advances in machine learning and natural language modeling can help address them. Overall, I will argue that enabling and ensuring software fairness requires solving research challenges across computer science, including in machine learning, software and systems engineering, human-computer interaction, and theoretical computer science.
About the Speaker:
Yuriy Brun is an associate professor in the College of Information and Computer Sciences at the University of Massachusetts Amherst. His research interests include software engineering, software fairness and bias, self-adaptive systems, and distributed systems. He received his PhD from the University of Southern California in 2008 and was a Computing Innovation postdoctoral fellow at the University of Washington until 2012. Prof. Brun is a recipient of the NSF CAREER Award (2015), the IEEE TCSC Young Achiever in Scalable Computing Award (2013), a Best Paper Award (2017), two ACM SIGSOFT Distinguished Paper Awards (2011, 2017), a Microsoft Research Software Engineering Innovation Foundation Award (2014), a Google Faculty Research Award (2015), a Lilly Fellowship for Teaching Excellence (2017), a College Outstanding Teacher Award (2017), and an ICSE 2015 Distinguished Reviewer Award.