ICSE Doctoral Symposium | May 21, 2007
Adaptive Ranking Model for Ranking Code-Based Static Analysis Alerts
Sarah Smith Heckman, advised by Laurie Williams

Presentation transcript:

ICSE Doctoral Symposium | May 21, 2007
Adaptive Ranking Model for Ranking Code-Based Static Analysis Alerts
Sarah Smith Heckman
Advised by Laurie Williams
Department of Computer Science, North Carolina State University

ICSE Doctoral Symposium | May 21, 2007
Contents
Motivation
Research Objective
Related Work
Adaptive Ranking Model
– Key Concepts
– Research Theories
– Alert Ranking Factors
– Limitations
Research Methodology
Current Research Results
Future Work

ICSE Doctoral Symposium | May 21, 2007
Motivation
Developers tend to make the same mistakes, and these recurring mistakes lead to software faults.
Automated static analysis (ASA) is useful for finding indications (alerts) of these recurring mistakes.
However, ASA generates a high number of false positives (FP).

ICSE Doctoral Symposium | May 21, 2007
Research Objective
To create and validate an adaptive ranking model that ranks alerts generated by automated static analysis by the likelihood each alert indicates a fault in the system.
Goal: increase the number of fault-indicating alerts at the top of the alert listing.

ICSE Doctoral Symposium | May 21, 2007
Related Work
Kremenek et al. [1]
– Adaptive feedback on ASA alerts
– Use the location of the alert in the code
Kim and Ernst [2]
– Investigate the lifetime of an ASA alert
Brun and Ernst [3]
– Use machine learning to rank program properties by the likelihood they will reveal faults
– Learning from fixes
Boogerd and Moonen [4]
– Prioritize alerts by execution likelihood

ICSE Doctoral Symposium | May 21, 2007
Adaptive Ranking Model (ARM)
[Diagram: static analysis alerts, developer feedback, and historical data flow into the ARM. Developer feedback is encoded as true positive (1), false positive (-1), or undeterminable (0).]

ICSE Doctoral Symposium | May 21, 2007
Key ARM Concepts
Alert Suppression: an explicit action on the part of the developer to remove an alert from the listing
– The alert does not indicate a fault that exists in the source code – false positive (FP)
– The developer chooses not to fix the alert
Alert Closure: an alert is no longer identified by ASA
– A fault was fixed – true positive (TP)
– Configuration change
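A tiny sketch of how this developer feedback might be recorded, assuming the numeric encoding shown in the ARM diagram above (the enum and its name are illustrative, not from the paper):

```python
from enum import IntEnum

class Feedback(IntEnum):
    """Numeric feedback encoding from the ARM diagram."""
    TRUE_POSITIVE = 1    # alert closed because a fault was fixed
    FALSE_POSITIVE = -1  # alert suppressed by the developer
    UNDETERMINABLE = 0   # e.g., closure caused by a configuration change
```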

ICSE Doctoral Symposium | May 21, 2007
Research Theories
The ARM improves the fault detection rate of automated static analysis when compared to other ranking or ordering techniques.
Each of the ranking factors in the ARM contributes to predicting the likelihood that an alert is an indication of a fault.

ICSE Doctoral Symposium | May 21, 2007
Current Alert Ranking Factors
Alert Type Accuracy (ATA): the likelihood an alert is an indication of a fault, based on developer feedback and historical data about the alert's type
Code Locality (CL): the likelihood an alert is an indication of a fault, based on developer feedback about the alert's location in the source code

ICSE Doctoral Symposium | May 21, 2007
Alert Ranking Calculation
Ranking(α) = β_ATA · ATA(α) + β_CL · CL(α), where β_ATA + β_CL = 1
The ranking is the weighted average of the two ranking factors, ATA and CL.
Currently, β_ATA = β_CL = 0.5, implying each factor contributes equally to the ranking.
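A minimal sketch of this weighted average in Python, assuming the factor functions ATA(α) and CL(α) are supplied from the formulas on the following slides (the function names and signatures are illustrative, not from the paper):

```python
from typing import Callable

# Default coefficients from the slide: each factor contributes equally.
BETA_ATA = 0.5
BETA_CL = 0.5
assert BETA_ATA + BETA_CL == 1.0

def ranking(alert: str,
            ata: Callable[[str], float],
            cl: Callable[[str], float]) -> float:
    """Rank an alert by the weighted average of its two factors."""
    return BETA_ATA * ata(alert) + BETA_CL * cl(alert)
```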

ICSE Doctoral Symposium | May 21, 2007
Adjustment Factor
The adjustment factor (AF) represents the homogeneity of an alert population, based on developer feedback.
An alert population (p) is a subset of reported alerts that share some characteristic.
AF_p(α) = (#closed_p - #suppressed_p) / (#closed_p + #suppressed_p)
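The AF is thus a value in [-1, 1]: +1 when every alert in the population was closed, -1 when every alert was suppressed. A small sketch follows; the handling of a population with no feedback yet is an assumption, not specified on the slide:

```python
def adjustment_factor(num_closed: int, num_suppressed: int) -> float:
    """Homogeneity of an alert population, in [-1, 1].

    +1 means all feedback on the population was closures (looks like TPs);
    -1 means all feedback was suppressions (looks like FPs).
    """
    total = num_closed + num_suppressed
    if total == 0:
        return 0.0  # assumption: neutral when there is no feedback yet
    return (num_closed - num_suppressed) / total
```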

ICSE Doctoral Symposium | May 21, 2007
Alert Type Accuracy
ATA is the weighted average of:
– Historical data from observed TP rates (the initial ATA value, τ_type)
– Actions a developer has taken to suppress and close alerts (AF_type)
ATA(α) = x · τ_type + y · AF_type(α), where x + y = 1
Currently, x = y = 0.5, implying each component contributes equally.
Recall: Ranking(α) = β_ATA · ATA(α) + β_CL · CL(α)
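A sketch of the ATA factor under the same illustrative conventions, where tau_type is the literature-derived TP rate for the alert's type and af_type is the adjustment factor computed over the population of alerts of that type:

```python
X = 0.5  # weight on the historical TP rate (tau_type)
Y = 0.5  # weight on developer feedback; x + y = 1

def alert_type_accuracy(tau_type: float, af_type: float) -> float:
    """ATA: weighted average of the historical TP rate for an alert's
    type and the adjustment factor from feedback on that type."""
    return X * tau_type + Y * af_type
```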

ICSE Doctoral Symposium | May 21, 2007
Code Locality
CL is the weighted average of the actions a developer has taken to suppress and close alerts at the method (m), class (c), and source folder (s) locations:
CL(α) = β_m · AF_m(α) + β_c · AF_c(α) + β_s · AF_s(α), where β_m + β_c + β_s = 1
Currently, β_m = 0.575, β_c = 0.34, and β_s = 0.085 [1].
Recall: Ranking(α) = β_ATA · ATA(α) + β_CL · CL(α)
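And the CL factor, using the coefficients above (β_s is derived here from the stated constraint that the three weights sum to 1, since its value did not survive transcription):

```python
# Coefficients reported on the slide, attributed to Kremenek et al. [1];
# BETA_S is derived from the constraint BETA_M + BETA_C + BETA_S = 1.
BETA_M, BETA_C, BETA_S = 0.575, 0.34, 0.085

def code_locality(af_method: float, af_class: float,
                  af_source_folder: float) -> float:
    """CL: weighted average of adjustment factors for the method,
    class, and source folder populations containing the alert."""
    return (BETA_M * af_method
            + BETA_C * af_class
            + BETA_S * af_source_folder)
```

Plugging these two factors into the ranking function above reproduces the full calculation: for example, with τ_type = 0.4 and no feedback anywhere (all AFs zero), ATA = 0.2, CL = 0, and Ranking = 0.1 under the default weights.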

ICSE Doctoral Symposium | May 21, 2007
Limitations
The initial ranking values for the current version of the ARM are provided through a literature search:
– Some alert types have no initial value
– Alert types were summarized to a parent alert type
All coefficients are set to default values.
The model relies on the assumption that alert populations are likely to be homogeneous.

ICSE Doctoral Symposium | May 21, 2007
Research Methodology
Developing and evolving the ARM via a literature search, intellectual work, and formative case studies:
– Compare the fault detection rate [5] of the ARM with other static analysis alert ordering techniques
– Investigate the contribution of the current factors to the ranking
– Consider other factors to improve the ranking

ICSE Doctoral Symposium | May 21, 2007
Current Research Results
Case Study: iTrust
– A web-based, role-based medical records application
– Developed as part of an undergraduate software engineering class and a graduate testing and reliability class
– ARM v0.4
– FindBugs static analyzer
– 1964 LOC source, 3903 LOC test
– 163 alerts, 27 true positives

ICSE Doctoral Symposium | May 21, 2007
Research Questions
Q1: Can the ranking of static analysis alerts by the likelihood an alert is a fault improve the rate of fault detection by automated static analysis?
Q2: What are the contributions of each of the alert ranking factors (in the form of homogeneity of ranking populations) in predicting the likelihood an alert is a fault?

ICSE Doctoral Symposium | May 21, 2007
Fault Detection Rates
[Chart comparing the fault detection rate of the ARM ranking with random and Eclipse orderings on iTrust; the summary numbers appear on the Research Results slide.]

ICSE Doctoral Symposium | May 21, 2007
Ranking Factor Contributions

Population     | # Pops | # TP | # FP | # Mixed
Alert Type     |   –    |  –   |  –   |   –
Method         |   –    |  –   |  –   |   –
Class          |   –    |  –   |  –   |   –
Source Folder  |   5    |  0   |  3   |   2

ICSE Doctoral Symposium | May 21, 2007
Research Results
The ARM was able to find 81.5% (22) of the true positive alerts after investigating 20% (33) of the generated alerts.
– Random found 22.2% (6) true positive alerts
– Eclipse found 30% (8) true positive alerts
However, the remaining alerts were more difficult to find due to non-homogeneous populations.

ICSE Doctoral Symposium | May 21, 2007
Future Work
Research Theories
– The ARM improves the fault detection rate of automated static analysis when compared to other ranking or ordering techniques.
– Each of the ranking factors in the ARM contributes to predicting the likelihood an alert is an indication of a fault.
Future Work
– Continue evaluating the ARM on open source and industrial applications
– Incorporate additional factors into the ARM
– Investigate the contribution of each factor and the effect of the coefficients on the overall ranking of ASA alerts
– Improve the ranking of non-homogeneous populations

ICSE Doctoral Symposium | May 21, 2007
References
[1] T. Kremenek, K. Ashcraft, J. Yang, and D. Engler, "Correlation Exploitation in Error Ranking," in 12th ACM SIGSOFT International Symposium on Foundations of Software Engineering, Newport Beach, CA, USA, 2004.
[2] S. Kim and M. D. Ernst, "Prioritizing Warning Categories by Analyzing Software History," in International Workshop on Mining Software Repositories, to appear, Minneapolis, MN, USA, 2007.
[3] Y. Brun and M. D. Ernst, "Finding Latent Code Errors via Machine Learning Over Program Executions," in 26th International Conference on Software Engineering, Edinburgh, Scotland, 2004.
[4] C. Boogerd and L. Moonen, "Prioritizing Software Inspection Results using Static Profiling," in 6th IEEE Workshop on Source Code Analysis and Manipulation, Philadelphia, PA, USA, 2006.
[5] G. Rothermel, R. H. Untch, C. Chu, and M. J. Harrold, "Prioritizing Test Cases For Regression Testing," IEEE Transactions on Software Engineering, vol. 27, no. 10, pp. 929-948, October 2001.

ICSE Doctoral Symposium | May 21, 2007
Questions?
Sarah Heckman: