Improving Robustness of DNN-Dependent ADS: Transforming Input Distributions to Account for Inference and System State

Abstract: 

Autonomous Driving Systems (ADSs) are becoming more advanced and ubiquitous, enabled by increasingly sophisticated deep neural networks (DNNs). As ADSs' autonomy levels rise, so do the cost and complexity of their failures. Often, these failures arise when the underlying DNNs are less robust than expected. In studying these systems, I found that such failures can occur due to unexplored inputs from the long tail of driving scenarios or when the evolution of the system shifts the input distribution. These circumstances are common but challenging to accommodate because their effects are difficult to anticipate and solutions may not generalize, leaving the system brittle. I postulate that the impact of input distribution shifts on the robustness of a DNN-dependent system can be manipulated through the careful design and encoding of transformations that account for their effects on DNN predictions, and through analysis of their compounding effects on system state. To overcome these threats to robustness, I have developed two types of techniques. First, I have engineered techniques to mitigate robustness-related failures when the cause is known but its effect on the DNN prediction is not, specifically for the common scenario in which a sensor used to collect the training dataset for a DNN onboard an ADS is swapped out. Second, I have extended adversarial test generation techniques, which aim to produce input perturbations that cause a DNN to compute incorrect outputs and thereby estimate DNN robustness, to consider how the effect of perturbations is attenuated by other ADS subsystems and weakens as ADS state evolves. However, these perturbations are often out of distribution and appear unnatural, making them easy to spot and dismantle, or less likely to resemble deployment conditions. I will increase naturalness while retaining perturbation strength by constraining perturbation generation techniques to reproduce only features seen during DNN training, and by using diffusion models to make unconstrained perturbations appear naturalistic to humans. Overall, the proposed family of techniques takes a significant step toward improving ADS robustness.
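
As background, adversarial test generation typically perturbs an input in the direction that increases the DNN's loss so that the prediction flips. The minimal sketch below illustrates that basic idea with the classic fast gradient sign method (FGSM); the PyTorch model, tensor shapes, and epsilon budget are placeholders for illustration only, and the sketch does not reflect the system-aware extensions proposed in this work.

    import torch
    import torch.nn.functional as F

    def fgsm_perturbation(model, image, label, epsilon=0.01):
        # Illustrative FGSM sketch: model is any differentiable classifier,
        # image is a (1, C, H, W) tensor in [0, 1], label is a class index tensor.
        image = image.clone().detach().requires_grad_(True)
        # Forward pass and loss with respect to the true label.
        loss = F.cross_entropy(model(image), label)
        loss.backward()
        # Step in the sign of the input gradient to increase the loss,
        # then clamp back to the valid pixel range.
        adversarial = image + epsilon * image.grad.sign()
        return adversarial.clamp(0.0, 1.0).detach()

A perturbation like this is judged successful when the model's prediction on the returned image no longer matches the label; the proposal's concern is that such pixel-level perturbations may be attenuated downstream and may look unnatural under deployment conditions.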

Committee:  

  • Matt Dwyer, Committee Chair, CS/SEAS/UVA 
  • Sebastian Elbaum, Advisor, CS/SEAS/UVA 
  • Claire Le Goues, Software and Societal Systems Department/CS/Carnegie Mellon
  • Yen-Ling Kuo, CS/SEAS/UVA
  • Matthew Bolton, ESE/SEAS/UVA