Increasing the Applicability of Verification Tools for Neural Networks
Machine learning is continually being applied to increasingly complex domains. In some cases, these involve safety-critical systems (e.g., self-driving cars) or influence important financial decisions (e.g., investment recommendations). Mistakes or unexpected behaviors in these domains can have high costs, including large monetary losses or loss of life. As the use of machine learning continues to expand, it is important that we are prepared to ensure these systems behave as intended. Many verification and falsification tools have been introduced in recent years that enable us to prove properties about the behavior of neural networks; however, these tools are often limited in the types of networks and properties that they support. Rather than developing new verifiers or falsifiers, we develop techniques to increase the applicability of existing verification tools, building on the insight that unsupported verification problems can often be rewritten into supported ones. We develop rewriting rules for property specifications, neural networks, and environment models that enable existing off-the-shelf verification tools to be applied to previously unsupported verification problems. In general, our rewritings transform a problem into a set of subproblems that are supported by a given verifier and whose results can be mapped back to the original verification problem. We evaluate our rewriting rules across a wide selection of benchmarks and show that our techniques significantly increase the applicability of both verifiers and falsifiers.
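The rewriting idea above can be illustrated with a minimal, hypothetical sketch: suppose a verifier only accepts a single box-shaped input region, but the property of interest quantifies over a union of boxes. The unsupported problem can be rewritten into one supported subproblem per box, and results mapped back soundly (the original property holds iff every subproblem holds; any subproblem counterexample falsifies the original). All names here (`verify_union`, `toy_check`, `Result`) are illustrative, not the dissertation's actual tooling.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional, Tuple

Box = List[Tuple[float, float]]  # per-dimension (lo, hi) bounds

@dataclass
class Result:
    holds: bool
    counterexample: Optional[List[float]] = None

def verify_union(boxes: List[Box],
                 check_box: Callable[[Box], Result]) -> Result:
    """Rewrite 'property holds on a union of boxes' (unsupported by a
    verifier that accepts only one box) into one subproblem per box.
    Mapping results back: the original property holds iff every
    subproblem holds, and a counterexample to any subproblem is a
    counterexample to the original problem."""
    for box in boxes:
        sub = check_box(box)
        if not sub.holds:
            return Result(False, sub.counterexample)
    return Result(True)

# Toy stand-in 'verifier': checks that y = 2*x stays below 10 on a 1-D
# box via interval reasoning (sound here because 2*x is monotone).
def toy_check(box: Box) -> Result:
    lo, hi = box[0]
    if 2 * hi < 10:
        return Result(True)
    return Result(False, [hi])

print(verify_union([[(0, 1)], [(2, 3)]], toy_check).holds)  # True
print(verify_union([[(0, 1)], [(4, 6)]], toy_check).holds)  # False
```

The same pattern, generalized, is what allows an off-the-shelf tool to answer a question it was never designed to accept directly.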
- Lu Feng, Committee Chair (CS, ESE/SEAS/UVA)
- Matthew Dwyer, Co-Advisor (CS/SEAS/UVA)
- Sebastian Elbaum, Co-Advisor (CS/SEAS/UVA)
- Homa Alemzadeh (ECE, CS/SEAS/UVA)
- Tom Fletcher (ECE, CS/SEAS/UVA)
- Aaron Dutle (Langley Research Center, NASA)