Increasing the Applicability of Verification Tools for Neural Networks
Machine learning is being applied to increasingly complex domains, including safety-critical systems (e.g., self-driving cars) and high-stakes financial decisions (e.g., investment recommendations). Mistakes or unexpected behaviors in these domains can be extremely costly, leading to large monetary losses or even loss of life. As the use of machine learning continues to expand, we must be prepared to ensure that these systems behave as intended. In recent years, many verification tools have been introduced that can prove properties about the behavior of neural networks; unfortunately, these tools are often limited in the types of networks and properties they support. This research focuses on increasing the applicability of verification tools for neural network properties. Building on the insight that unsupported verification problems can be transformed into supported subproblems, we develop rewriting rules for environment models, property specifications, and neural network models that enable off-the-shelf verification tools to be applied to previously unsupported verification problems. In particular, our rewriting rules will enable verifiers of stateless neural network properties to check stateful property specifications, such as properties stating that a system with a neural network controller will eventually reach a goal state. We will evaluate our approach on several neural networks with stateful properties to demonstrate its effectiveness.
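To illustrate the kind of transformation the abstract describes, the following is a minimal, hypothetical sketch: a stateful "eventually reach the goal within k steps" property is reduced to a stateless input-output property by unrolling the closed-loop system into a single composed function. All names here (controller, step, unroll, the goal region) are illustrative assumptions for exposition, not the actual rewriting rules developed in this research.

```python
def controller(x):
    # Stand-in for a neural network controller: push the state toward 0.
    return -0.5 * x

def step(x, u):
    # Simple deterministic environment model: x' = x + u.
    return x + u

def unroll(k):
    # Compose k controller/environment steps into ONE stateless function
    # from initial state to final state. A stateless verifier that can
    # analyze this composed function can now check the bounded-horizon
    # reachability property as an ordinary input-output property.
    def composed(x0):
        x = x0
        for _ in range(k):
            x = step(x, controller(x))
        return x
    return composed

def in_goal(x, eps=0.1):
    # Goal region: states within eps of the origin.
    return abs(x) <= eps

# Bounded check over a few sampled initial states; an actual verification
# tool would prove the property over the entire input region symbolically.
f = unroll(10)
assert all(in_goal(f(x0)) for x0 in [-1.0, -0.5, 0.0, 0.5, 1.0])
```

The key idea is that the loop over time steps is eliminated before verification: the verifier only ever sees a single feed-forward computation, so a tool limited to stateless properties still applies.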
- Lu Feng, Committee Chair (CS/SEAS/UVA)
- Matt Dwyer, Co-Advisor (CS/SEAS/UVA)
- Sebastian Elbaum, Co-Advisor (CS/SEAS/UVA)
- Tom Fletcher (CS/SEAS/UVA)
- Homa Alemzadeh (ECE/SEAS/UVA)
- Aaron Dutle (NASA Langley Research Center)