Pruning Dynamic Slices With Confidence Original by: Xiangyu Zhang Neelam Gupta Rajiv Gupta The University of Arizona Presented by: David Carrillo.


Dynamic Slicing

A dynamic slice is the set of statements that did affect the value of a variable at a program point for a specific program execution. [Korel and Laski, 1988]

    10. A = ...
     …  B = ...
    30. P = ...
    31. if (P < 0) {
    35.     A = A …
        }
    37. B = B + 1
    40. Error(A)

Dynamic Slice = {10, 30, 31, 35, 40}
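A dynamic slice can be computed by walking the dynamic dependence graph backward from the slicing criterion. The sketch below hand-encodes the dependences of the example run above (a simplification; a real slicer records one node per statement *instance*):

```python
def dynamic_slice(dep_graph, criterion):
    """Return all statements reachable backward from `criterion`
    through data and control dependence edges."""
    slice_set, worklist = set(), [criterion]
    while worklist:
        node = worklist.pop()
        if node in slice_set:
            continue
        slice_set.add(node)
        worklist.extend(dep_graph.get(node, ()))
    return slice_set

# Dependences for the buggy run: Error(A) at 40 uses A defined at 35,
# which uses the old A (10) and is control-dependent on the branch at 31,
# which uses P defined at 30.
deps = {40: [35], 35: [10, 31], 31: [30], 30: [], 10: []}
print(sorted(dynamic_slice(deps, 40)))  # [10, 30, 31, 35, 40]
```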

Effectiveness of Dynamic Slicing

Dynamic slicing is very effective at containing the faulty statement; however, it usually produces over-sized slices [AADEBUG'05]. Problem: how to automatically prune dynamic slices? Among many possible approaches, this paper presents fine-grained pruning of a backward slice using confidence analysis.

Types of Evidence Used in Pruning

- Classical dynamic slicing algorithms investigate bugs through the evidence of the wrong output.
- The literature uses many different types of evidence (inputs, outputs, predicates); this paper studies partially correct output.
- Benefits of more evidence: it narrows the search for the faulty statement and broadens the applicability of the tool.

Fine-grained Pruning by Exploiting Correct Outputs

    10. A = 1        (correct: A = 3)
    20. B = A % 2
    30. C = A + 2
    40. Print(B)
    41. Print(C)

- Correct outputs are produced in addition to the wrong output.
- DS(O_wrong) − DS(O_correct) contains the statements that affect the wrong output but not the correct output: {10, 30, 41} − {10, 20, 40} = {30, 41}.
- What happens when a statement affects both correct and incorrect output?
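This naive set difference can be sketched directly. Note that it drops statement 10, the actual fault, precisely because 10 also reaches the correct output; this is the problem the slide's closing question raises and that confidence analysis addresses:

```python
def prune_by_correct_outputs(slice_wrong, slices_correct):
    """Keep only statements in the wrong output's slice that appear
    in no correct output's slice."""
    pruned = set(slice_wrong)
    for s in slices_correct:
        pruned -= set(s)
    return pruned

ds_wrong = {10, 30, 41}      # DS of C, printed (wrongly) at 41
ds_correct = [{10, 20, 40}]  # DS of B, printed (correctly) at 40
print(sorted(prune_by_correct_outputs(ds_wrong, ds_correct)))  # [30, 41]
```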

Confidence Analysis  Value produced at node n can reach only wrong output nodesn There is no evidence that n is correct, so it should be in the pruned slice. Should we include n in the slice?? Confidence(n)=0 Confidence(n)=?; 0 ≤ ? ≤ 1  Value produced at node n can reach both the correct and wrong output nodes.nnn Confidence(n)=1  Value produced at n can reach only correct outputs There is no evidence of incorrectness of n. Therefore it cannot be in the slice.

Confidence Analysis

Range(n) is the set of all values taken by n during the buggy run, e.g. Range(n) = {a, b, c, d, e, f, g}.

Alt(n) is the set of possible values of the variable defined by n that, when propagated through the dynamic dependence graph, produce the same values for the correct outputs, e.g. Alt(n) = {a}.

|Range(n)| ≥ |Alt(n)| ≥ 1. When |Alt(n)| == 1, we have the highest confidence (= 1) in the correctness of n; when |Alt(n)| == |Range(n)|, we have the lowest confidence (= 0). In between, the confidence is given by an experimentally determined function.
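The two boundary conditions above can be interpolated with a logarithmic function. The sketch below is an assumption for illustration only; the slide says the function in between was determined experimentally, so the exact interpolation may differ:

```python
import math

def confidence(n_range, n_alt):
    """Map |Alt(n)| and |Range(n)| onto [0, 1].

    Boundary cases from the slide: |Alt| == 1 gives confidence 1,
    |Alt| == |Range| gives confidence 0. The log-based interpolation
    between them is one natural choice (an assumption, not necessarily
    the paper's exact function)."""
    r, a = len(n_range), len(n_alt)
    assert 1 <= a <= r
    if r == 1:
        return 1.0
    return 1.0 - math.log(a) / math.log(r)

rng = {'a', 'b', 'c', 'd', 'e', 'f', 'g'}
print(confidence(rng, {'a'}))  # 1.0 -> no alternative could be correct: prune n
print(confidence(rng, rng))    # 0.0 -> no evidence of correctness: keep n
```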

Confidence Analysis: Example

    10. A = ...
    20. B = A % 2
    30. C = A + 2
    40. Print(B)
    41. Print(C)

A + 2 is a one-to-one mapping. A % 2 is a one-to-many mapping.

Confidence Analysis: Two Problems

- How to decide the Range of values for a node n? Options: based on the variable type (e.g., integer); static range analysis; or, our choice, dynamic analysis based on value profiles (the range of values for a statement is the set of values defined by all of the execution instances of the statement during the program run).
- How to compute Alt(n)? Treat the set of correct output values as constraints, and compute Alt(n) by backward propagation of those constraints through the dynamic dependence subgraph corresponding to the slice.

Computing Alt(n) Along Data Dependence

    S1: T = ...       value: 9
    S2: X = T + 1     value: 10
    S3: Y = T % 3     value: 0

Value profiles from the run:
    (X, T) = (6, 5), (9, 8), (10, 9)
    (Y, T) = (0, 3), (0, 9), (1, 1), (2, 5), (2, 8)
    (T, ...) = (1, ...), (3, ...), (5, ...), (8, ...), (9, ...)

Given alt(S2) = {10} and alt(S3) = {0, 1}, alt(S1) is the intersection of the T values consistent with each use: from (X, T), X = 10 requires T ∈ {9}; from (Y, T), Y ∈ {0, 1} requires T ∈ {1, 3, 9}. So alt(S1) = {9} ∩ {1, 3, 9} = {9}.
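The backward constraint propagation above can be sketched with plain set operations over the observed value profiles (the helper name `alt_from_use` is hypothetical):

```python
def alt_from_use(profile, alt_out):
    """Operand values that, according to the observed value profile
    of a use, yield an output value in alt_out."""
    return {t for out, t in profile if out in alt_out}

# Value profiles from the slide: (output, T) pairs observed per use.
profile_s2 = {(6, 5), (9, 8), (10, 9)}                 # X = T + 1
profile_s3 = {(0, 3), (0, 9), (1, 1), (2, 5), (2, 8)}  # Y = T % 3
alt_s2, alt_s3 = {10}, {0, 1}

# alt(S1) = intersection of the constraints from all uses of T
alt_s1 = alt_from_use(profile_s2, alt_s2) & alt_from_use(profile_s3, alt_s3)
print(alt_s1)  # {9}
```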

Computing Alt(n) Along Control Dependence

    S1: if (P) ...    value: True
    S2: X = T + 1     value: 10
    S3: Y = T % 3     value: 0

Value profiles from the run:
    (X, T) = (6, 5), (9, 8), (10, 9)
    (Y, T) = (0, 3), (0, 9), (1, 1), (2, 5), (2, 8)

With alt(S2) = {10} and alt(S3) = {0, 1}, the only constraint propagated to the predicate S1 is that it took the branch that executed S2 and S3, so alt(S1) = {True}.

Characteristics of Siemens Suite Programs

    Program        Description           LOC   Versions   Tests
    print_tokens   Lexical analyzer
    print_tokens2  Lexical analyzer
    replace        Pattern replacement
    schedule       Priority scheduler
    schedule2      Priority scheduler
    gzip           Unix utility
    flex           Unix utility

Each faulty version has a single manually injected error. Not all versions are included; a version is excluded when no output is produced or when the faulty statement is not contained in the backward slice. For each version, three tests were selected.

Results of Pruning

(Table: per-program DS size, PDS_max, PDS_max / DS, PDS_min %, and % missed by PDS_min for print_tokens, print_tokens2, replace, schedule, schedule2, gzip, and flex.)

On average, PDS_max = 41.1% of DS.

Confidence Based Prioritization

(Plot: percentage of executed statement instances examined, comparing DD = dependence distance against CV = confidence values.)

Prior work has shown that dependence distance is an effective way to prioritize statements in order to locate faulty code. Experimentation in this paper shows that prioritizing by confidence values outperforms dependence distance.
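One way to combine the two signals is to inspect low-confidence statements first, breaking ties by dependence distance from the wrong output. This is a sketch with hypothetical data, not the paper's exact ranking procedure:

```python
def prioritize(instances):
    """Order executed statement instances for inspection: lowest
    confidence first (least evidence of correctness), then by
    dependence distance from the wrong output."""
    return sorted(instances, key=lambda s: (s['confidence'], s['dep_dist']))

# Hypothetical statement instances from a buggy run.
stmts = [
    {'id': 30, 'confidence': 0.0, 'dep_dist': 2},
    {'id': 41, 'confidence': 0.0, 'dep_dist': 1},
    {'id': 10, 'confidence': 0.6, 'dep_dist': 3},
]
print([s['id'] for s in prioritize(stmts)])  # [41, 30, 10]
```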

The Potential of Confidence Analysis (1)

Interactive pruning: incorporate user input into pruning. (Diagram: the input and buggy code feed a dynamic slicer with confidence, which produces pruned slices; the user verifies statements as correct, and those verdicts feed back into the pruning.)

The Potential of Confidence Analysis (2)

Relevant slicing (gzip v3, run r1; the example involves a potential dependence alongside a data dependence). Dynamic slices do not capture bugs where a data dependence is incorrect due to incorrect control flow. Relevant slicing does, but it generates slices that are too large; it may become practical when combined with effective pruning.

Conclusions

- Confidence analysis exploits the correct output values produced in an execution to prune the dynamic slice of an incorrect output.
- This novel dynamic-analysis-based implementation of confidence analysis effectively pruned backward dynamic slices in our experiments: pruned slices averaged 41.1% of the dynamic slices and still contained the faulty statement.
- Our study shows that confidence analysis has additional applications beyond pruning: prioritization, interactive pruning, and relevant slicing.

Discussion

- Creating alternatives relies on a known mapping for each type of statement, e.g., X = Y + 1 is one-to-one, while X = X % 3 is one-to-many. How extensible is this approach (floats, objects, etc.)?
- The approach assumes that there is only one error (which is detected), and that the error is detected before it propagates into its dependencies. How realistic are these assumptions in real scenarios with incomplete test coverage?

The End