Alternating Conditional Analysis
We rely on safety-critical software, so judging its correctness is important. If a pacemaker exhibits buggy behavior, just how buggy is it? Will a patch for one bug also fix other program paths that lead to the same bug? How can we guarantee this? In this dissertation we develop a novel meta-analysis framework that generates more informative program correctness proofs by combining results from an algorithmically diverse set of program analyzers. To safely combine information from overapproximate and underapproximate program analyzers, we define the concept of a program interval, which encodes both kinds of information in a form that can be shared among analyzers. To compute a program interval, we employ multiple program analyzers as black boxes that exchange analysis results, so that the results of one analyzer can condition another and avoid reanalyzing parts of the program. We alternate between the guarantees of different analyzers to construct a program interval, and we define a generalization mechanism to ensure convergence. We evaluate this framework on a set of C benchmarks and a case study and find that program intervals can be computed efficiently, effectively, and safely. We use program intervals to improve on the state of the art in quantitative program analysis by providing probabilistic guarantees for safety-critical software standards. Finally, we explore how a diversity of analyzers can be used to construct program intervals and employ the framework to perform modular analyses.
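The intuition behind a program interval can be sketched in a few lines of code. This is an illustrative model only, not the dissertation's actual formalism: it treats an analysis result as a pair of path sets, where an underapproximate analyzer contributes paths proven to violate the property and an overapproximate analyzer contributes paths not yet proven safe. All names here are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ProgramInterval:
    # Illustrative sketch: `lower` holds paths an underapproximate analyzer
    # has confirmed as violations; `upper` holds paths an overapproximate
    # analyzer has not yet ruled out. The true result lies between the two.
    lower: frozenset  # confirmed violations (underapproximation)
    upper: frozenset  # possible violations (overapproximation)

    def refine(self, other: "ProgramInterval") -> "ProgramInterval":
        # Combining two sound results narrows the interval: confirmed
        # violations accumulate, while the paths still in doubt shrink
        # to the intersection of what each analyzer could not rule out.
        return ProgramInterval(self.lower | other.lower,
                               self.upper & other.upper)

    @property
    def resolved(self) -> bool:
        # The interval has converged when every remaining suspect path
        # is a confirmed violation.
        return self.lower == self.upper

# Alternating two (hypothetical) analyzers over paths p1..p4: the
# overapproximate pass rules out p4, the underapproximate pass confirms p1.
over = ProgramInterval(frozenset(), frozenset({"p1", "p2", "p3"}))
under = ProgramInterval(frozenset({"p1"}), frozenset({"p1", "p2", "p3", "p4"}))
interval = over.refine(under)
print(sorted(interval.lower))   # ['p1']
print(sorted(interval.upper))   # ['p1', 'p2', 'p3']
print(interval.resolved)        # False: p2 and p3 remain undecided
```

In this toy model, each round of alternation can only grow `lower` or shrink `upper`, which is the sense in which conditioning one analyzer on another's results avoids redundant reanalysis and drives the interval toward convergence.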
- Mary Lou Soffa, Committee Chair (CS/SEAS/UVA)
- Matthew Dwyer, Advisor (CS/SEAS/UVA)
- Sebastian Elbaum (CS/SEAS/UVA)
- Homa Alemzadeh (ECE/SEAS/UVA)
- ThanhVu Nguyen (CS, University of Nebraska-Lincoln)