Methods have limits …
Matthew Dwyer, Steve Goddard, Sebastian Elbaum
All validation, verification, synthesis, … techniques are limited in some way
– expressiveness, soundness, scalability, usability, …
We don't talk much about these limits - we should
– Industry: you want to know the limits of the techniques you consume
– Government: you want to know the limits of the techniques you invest in
– Research: limits are an opportunity
What happens at the limit?
Can't capture the problem with a technique
– abstract, decompose, use multiple techniques
Can't precisely analyze the system
– false error reports, missed errors
Can't scale the analysis to the system
– no answer, partial answer, incorrect answer
…
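The imprecision above can be made concrete with a small sketch (the function and the interval domain here are invented for illustration, not taken from the slides): an interval abstraction over-approximates a variable's range, so the analysis must warn about a division by zero that never happens concretely.

```python
# Hypothetical sketch of imprecision from abstraction: an interval
# analysis over-approximates a denominator's range and reports a
# possible error that cannot occur concretely (a false error report).

def interval_div(num, denom_interval):
    lo, hi = denom_interval
    # The abstract domain only sees the interval; if it straddles
    # zero, the analysis must warn, even when the concrete program
    # never divides by zero.
    if lo <= 0 <= hi:
        return "warning: possible division by zero"
    return "safe"

# Concretely, denom is either -2 or 3 (never 0), but the interval
# abstraction [-2, 3] contains 0, so the analysis raises a false alarm.
print(interval_div(10, (-2, 3)))  # warning: possible division by zero
print(interval_div(10, (1, 3)))   # safe
```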
Multiple techniques
How do their limits line up?
Can we combine techniques to reduce the gaps?
Even partial solutions carry a lot of information
– Model check runs out of memory: partial state space
– Type state imprecision: usually confirms large parts of the program as error free
[Figure: coverage of techniques (model check, type state) over properties (race freedom, atomicity)]
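One way to read "combine techniques to reduce the gaps": treat each technique's partial result as the set of program parts it confirmed before hitting its limit, and intersect the residual gaps. A minimal sketch, with invented component names and sets:

```python
# Hypothetical sketch: combining partial analysis results.
# Each technique reports the set of components it verified before
# hitting its limit (all names and sets are invented for illustration).

model_check_verified = {"queue", "scheduler"}      # stopped early: out of memory
type_state_verified = {"queue", "logger", "io"}    # imprecise on "scheduler"

all_components = {"queue", "scheduler", "logger", "io", "net"}

# A component is covered if at least one technique confirmed it.
covered = model_check_verified | type_state_verified
residual_gap = all_components - covered

print(sorted(covered))       # parts some technique confirmed error free
print(sorted(residual_gap))  # ['net'] -- still needs another technique
```

The design point is simply that even an out-of-memory model check or an imprecise type state pass shrinks `residual_gap`, which is what the next technique (or run-time checking) must cover.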
Eventually we have to deploy
When the last technique is finished
– there may be residual unmet validation goals
How will we detect and respond to errors at run time?
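One common shape for run-time detection of a residual validation goal is a small typestate monitor: it tracks the protocol state alongside the program and flags transitions the property forbids. A minimal sketch, assuming a made-up open/read/close protocol (the class, states, and event names are invented for illustration):

```python
# Hypothetical sketch of run-time error detection: a tiny typestate
# monitor for an open/read/close protocol left unverified statically.

class ProtocolMonitor:
    """Records protocol violations (e.g. read-after-close) at run time."""

    TRANSITIONS = {
        ("closed", "open"): "opened",
        ("opened", "read"): "opened",
        ("opened", "close"): "closed",
    }

    def __init__(self):
        self.state = "closed"
        self.violations = []

    def event(self, action):
        nxt = self.TRANSITIONS.get((self.state, action))
        if nxt is None:
            # Detect: record the violation; respond: stay in a safe state.
            self.violations.append((self.state, action))
        else:
            self.state = nxt

m = ProtocolMonitor()
for action in ["open", "read", "close", "read"]:  # final read violates
    m.event(action)
print(m.violations)  # [('closed', 'read')]
```

Here the "respond" half of the question is answered conservatively (log and stay put); a deployed monitor might instead raise, restart, or fall back to a degraded mode.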