March 25, 2012

Organizing committee:
Hana Chockler, IBM
Daniel Kroening, Oxford
Natasha Sharygina, USI
Leonardo Mariani, UniMiB
Giovanni Denaro, UniMiB
Program
09:15 – 09:45  Software Upgrade Checking Using Interpolation-based Function Summaries, Ondrej Sery
09:45 – 10:30  Finding Races in Evolving Concurrent Programs Through Check-in Driven Analysis, Alastair Donaldson
coffee break
11:00 – 11:45  Sendoff: Leveraging and Extending Program Verification Techniques for Comparing Programs, Shuvendu K. Lahiri
11:45 – 12:30  Regression Verification for Multi-Threaded Programs, Ofer Strichman
lunch
14:00 – 14:45  Empirical Analysis of Evolution of Vulnerabilities, Fabio Massacci
14:45 – 15:30  Testing Evolving Software, Alex Orso
coffee break
16:00 – 16:45  Automated Continuous Evolutionary Testing, Peter M. Kruse
Motivation: Challenges of validating evolving software
- Large software systems are usually built incrementally:
  - Maintenance (fixing errors and flaws, hardware changes, etc.)
  - Enhancements (new functionality, improved efficiency, extensions, new regulations, etc.)
- Changes are made frequently during the lifetime of most systems, and can introduce new software errors or expose old ones
- Upgrades are rolled out gradually, so the old and new versions have to co-exist in the same system
- Changes often require re-certification of the system, especially for mission-critical systems

"Upgrading a networked system is similar to upgrading the software of a car while the car's engine is running and the car is moving on a highway. Unfortunately, in networked systems we don't have the option of shutting the whole system down while we upgrade and verify a part of it." (source: ABB)
What does it mean to validate a change in a software system?
- Equivalence checking: the new version should be equivalent to the previous version in terms of functionality (a minimal sketch follows this list)
  - Changes in the underlying hardware
  - Optimizations
- No crashes: when several versions need to co-exist in the same system, we need to ensure that the update will not crash the system
  - When there is no correctness specification, this is often the only thing we can check
- Checking that a specific bug was fixed
  - A counterexample trace can be viewed as a specification of a behavior that must be eliminated in the new version
- Validation of the new functionality
  - If a correctness specification for the change exists, we can check whether the new (or changed) behaviors satisfy it
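As a concrete illustration of the equivalence-checking case, here is a minimal sketch (not part of PINCETTE): two hypothetical versions of the same function, price_v1 and price_v2, compared by an assertion. A bounded model checker such as CBMC would explore the inputs symbolically; the loop below simply samples a range so the harness also runs as an ordinary C program.

#include <assert.h>
#include <stdio.h>

/* Hypothetical old version of the function under upgrade. */
static int price_v1(int quantity) { return 10 * quantity; }

/* Hypothetical new version: an "optimized" but intended-equivalent rewrite. */
static int price_v2(int quantity) { return (quantity << 3) + (quantity << 1); }

int main(void) {
    for (int q = 0; q < 1000; q++) {
        /* Equivalence check: the two versions must agree on every input. */
        assert(price_v1(q) == price_v2(q));
    }
    puts("the two versions agree on all sampled inputs");
    return 0;
}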
Why is validation of evolving software different from standard software validation?
- Software systems are too large to be formally verified or exhaustively tested all at once
- Even when validating the whole system is feasible, the process is often too long and expensive to fit the schedule of small, frequent changes
- When validating the whole system, there is a danger of overlooking the change

How can we use the fact that we are validating evolving software?
- If the previous version was validated in some way, we can assume it is correct and avoid re-validating the parts that were not changed
- If results of previous validation exist, we can use them as a basis for the current validation; this is especially useful when there are many versions that differ from each other only slightly
- The previous version can be used as a specification
PINCETTE Project – Validating Changes and Upgrades in Networked Software
Front end and methodology book
Static analysis component: checking for crashes, using function summaries, verifying only the change
Dynamic analysis component: black-box testing, white-box testing
PINCETTE: exchange of information between static analysis and dynamic analysis techniques
- A static slicer is used as a preprocessing step for the dynamic analysis tools:
  - The slicer reduces the size of the program so that only the parts relevant to the change remain
  - The resulting slice is then extended to an executable program
- Specification mining: candidate assertions obtained from dynamic analysis are used in static analysis (a small sketch follows)
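As a small sketch of the specification-mining direction (illustrative only; the function and the invariant are invented, not taken from the PINCETTE tools): a candidate assertion observed during dynamic analysis is inserted into the code so that the static analyser can try to prove or refute it.

#include <assert.h>
#include <string.h>

#define BUF_SIZE 64

void copy_message(char *dst, const char *src, int len) {
    /* Candidate assertion mined from observed executions:
     * len was always non-negative and below the buffer size. */
    assert(0 <= len && len < BUF_SIZE);
    memcpy(dst, src, (size_t)len);
}

int main(void) {
    char in[BUF_SIZE] = "hello";
    char out[BUF_SIZE];
    copy_message(out, in, (int)strlen(in) + 1);
    return 0;
}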
Slicing procedure
Program → Control Flow Graph (CFG) → Program Dependence Graph (PDG)
[Figure: example PDG with numbered statement nodes]
©Ajitha Rajan, Oxford
Forward slicing from changes:
- Compute the nodes corresponding to the changed statements in the PDG
- Compute the transitive closure over all forward dependencies (control + data) from these nodes

Backward slicing from assertions:
- Identify the assertions to be rechecked after the changes
- Compute the transitive closure of backward dependencies (control + data) from these assertions

(A minimal sketch of the forward traversal appears after this slide.)
©Ajitha Rajan, Oxford
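The following is a minimal sketch, not the PINCETTE slicer itself: a forward slice computed as depth-first reachability over the dependence edges of a small PDG modeled on the example on the next slide. The node numbering, the dependence matrix, and the choice of changed node are assumptions made for illustration.

#include <stdio.h>
#include <stdbool.h>

#define NODES 6

/* dep[i][j] == true  <=>  node j is control- or data-dependent on node i
 * (edges are made up to mirror the example program on the next slide). */
static const bool dep[NODES][NODES] = {
    /* 0: if (a >= 0) */ {0, 1, 1, 0, 0, 0},   /* controls nodes 1 and 2 */
    /* 1: b = a       */ {0, 0, 0, 1, 0, 0},   /* the assert uses b      */
    /* 2: b = -a      */ {0, 0, 0, 1, 0, 0},   /* the assert uses b      */
    /* 3: assert      */ {0, 0, 0, 0, 0, 0},
    /* 4: int a, b    */ {1, 1, 1, 0, 0, 0},
    /* 5: return 0    */ {0, 0, 0, 0, 0, 0},
};

/* Depth-first traversal marking everything forward-reachable from `node`. */
static void forward_slice(int node, bool in_slice[NODES]) {
    if (in_slice[node]) return;
    in_slice[node] = true;
    for (int j = 0; j < NODES; j++)
        if (dep[node][j])
            forward_slice(j, in_slice);
}

int main(void) {
    bool in_slice[NODES] = {false};
    forward_slice(2, in_slice);            /* change at node 2: b = -a */
    for (int i = 0; i < NODES; i++)
        if (in_slice[i])
            printf("node %d is in the forward slice\n", i);
    return 0;
}

On this toy PDG the traversal from the changed node b = -a marks only that node and the assertion, matching the forward slice shown in the example.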
Example
Depth-first traversal from the changed node b = -a;

#include <assert.h>

int main() {
  int a, b;
  if (a >= 0)
    b = a;
  else
    b = -a;
  assert(b >= 0);
  return 0;
}

Forward slice from b = -a: { b = -a; assert(b >= 0); }
Backward slice from assert(b >= 0): { if (a >= 0); b = a; b = -a; assert(b >= 0); }
[Figure: PDG of the example, with control and data dependence edges]
©Ajitha Rajan, Oxford
Slicing procedure (complete pipeline)
Program → goto-cc → GOTO program → Control Flow Graph (CFG) → Program Dependence Graph (PDG) → Forward slice + Backward slice → Merged slice → Residual nodes and edges → Executable program slice
©Ajitha Rajan, Oxford
Static Pre-Pruning
The static slicer is used to constrain the inputs to the dynamic analyser, so that dynamic analysis focuses on the parts of the program relevant to the change.
[Figure: static slicer feeding constrained inputs to the dynamic analyser]
©Ajitha Rajan, Oxford
Dynamically Discovering Assertions to Support Formal Verification
Motivation:
- Gray-box components (such as off-the-shelf components) come with poor specifications and only a partial view of their internal details
- Lack of a specification complicates validation and debugging
- Lack of a description of the correct behavior complicates integration

Idea: analyze gray-box components with dynamic analysis techniques:
- Monitor system executions by observing interactions at the component interface level and inside components
- Derive models of the expected behavior from the observed events
- Mark model violations as symptoms of faults (a small sketch follows this slide)
©Leonardo Mariani, UniMiB
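A minimal sketch of the idea (not the actual BCT monitoring code; the interface, the values, and the range model are all illustrative): values observed at a component interface during passing runs are summarized into a simple model, and later executions that violate the model are reported as symptoms of faults.

#include <stdio.h>
#include <limits.h>
#include <stdbool.h>

static int observed_min = INT_MAX;   /* learned model: [observed_min, observed_max] */
static int observed_max = INT_MIN;

/* Training phase: record values seen at the interface during passing runs. */
static void observe(int value) {
    if (value < observed_min) observed_min = value;
    if (value > observed_max) observed_max = value;
}

/* Checking phase: a violation of the derived model is reported as a symptom. */
static bool check(int value) {
    if (value < observed_min || value > observed_max) {
        fprintf(stderr, "anomaly: %d outside learned range [%d, %d]\n",
                value, observed_min, observed_max);
        return false;
    }
    return true;
}

int main(void) {
    int training_runs[] = {3, 7, 5, 9};   /* values seen in old, passing executions */
    for (int i = 0; i < 4; i++) observe(training_runs[i]);

    check(6);     /* within the model: no report              */
    check(42);    /* outside the model: reported as a symptom */
    return 0;
}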
Dynamically Discovering Assertions with BCT
- Combines dynamic analysis and model-based monitoring
- Combines classic dynamic analysis techniques (Daikon) with incremental finite-state generation techniques (kBehavior) to produce I/O models and interaction models
- The FSAs are produced and refined based on subsequent executions (a sketch of checking a trace against such an FSA follows this slide)
- Information about likely causes of failures is extracted by automatically relating the detected anomalies
- False positives are filtered in two steps:
  - Identify and eliminate false positives by comparing failing and successful executions, using heuristics already applied in other contexts
  - Rank the remaining anomalies according to their mutual correlation and use this information to push likely false positives away from the top-ranked anomalies
©Leonardo Mariani, UniMiB
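A minimal sketch of checking an execution against a kBehavior-style interaction model (the automaton, states, and events here are invented for illustration and are not BCT output): the observed event sequence is replayed on a small FSA, and any transition the model does not allow is flagged as an anomaly.

#include <stdio.h>
#include <stdbool.h>

enum event { EV_OPEN, EV_READ, EV_CLOSE, NUM_EVENTS };
#define NUM_STATES 3            /* 0: closed, 1: open, 2: error (trap) */

/* transition[state][event] -> next state; state 2 is the rejecting trap */
static const int transition[NUM_STATES][NUM_EVENTS] = {
    /* closed */ { 1, 2, 2 },   /* only OPEN is expected               */
    /* open   */ { 2, 1, 0 },   /* READ loops, CLOSE returns to closed */
    /* error  */ { 2, 2, 2 },
};

/* Replay an observed event sequence; report an anomaly if the FSA rejects. */
static bool accepts(const enum event *trace, int len) {
    int state = 0;
    for (int i = 0; i < len; i++) {
        state = transition[state][trace[i]];
        if (state == 2) {
            fprintf(stderr, "anomaly at event %d: not allowed by the model\n", i);
            return false;
        }
    }
    return state == 0;          /* accepted only if the component is closed again */
}

int main(void) {
    enum event good[] = { EV_OPEN, EV_READ, EV_READ, EV_CLOSE };
    enum event bad[]  = { EV_READ, EV_CLOSE };          /* read before open */
    printf("good trace accepted: %d\n", accepts(good, 4));
    printf("bad trace accepted:  %d\n", accepts(bad, 2));
    return 0;
}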
User-in-the-Middle Strategy
The dynamic analyser observes executions of the system under test and produces candidate assertions.
The user reviews the candidates; approved assertions are passed to static analysis as true assertions.
After an upgrade, dynamic analysis re-confirms the assertions and feeds the true assertions to static analysis with no user intervention.
©Leonardo Mariani, UniMiB
PINCETTE Project – Validating Changes and Upgrades in Networked Software
Front end and methodology book
Static analysis component: checking for crashes, using function summaries, verifying only the change
Dynamic analysis component: black-box testing, white-box testing, concolic testing (next talk)