Software Fault Prediction using Language Processing Dave Binkley Henry Field Dawn Lawrie Maurizio Pighin Loyola College in Maryland Universita’ degli Studi di Udine


Software Fault Prediction using Language Processing Dave Binkley Henry Field Dawn Lawrie Maurizio Pighin Loyola College in Maryland Universita’ degli Studi di Udine

What is a Fault?
Problems identified in bug reports (e.g., in Bugzilla) that led to a code change

And Fault Prediction?
[Diagram: a fault predictor takes metrics computed from the source code and flags each module, from “ignore” to “consider” to “Ohh, look at!”]

“Old” Metrics
Dozens of structure-based metrics:
– Lines of code
– Number of attributes in a class
– Cyclomatic complexity
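These structural metrics can be approximated directly from source text. The Python sketch below is only an illustration under simplifying assumptions (it is not the tooling used in the study): LoC as raw line count, SLoC as non-blank lines that are not whole-line comments, and cyclomatic complexity as one plus the number of branch keywords found in C-like code.

```python
import re

# Branch points counted for the cyclomatic-complexity approximation.
BRANCH_RE = re.compile(r'\b(?:if|for|while|case|catch)\b|&&|\|\||\?')

def structural_metrics(source: str) -> dict:
    """Approximate LoC, SLoC, and cyclomatic complexity for C-like source."""
    lines = source.splitlines()
    loc = len(lines)
    # SLoC: drop blank lines and whole-line comments (a simplification that
    # ignores block comments spanning several lines).
    sloc = sum(1 for ln in lines
               if ln.strip() and not ln.strip().startswith(('//', '/*', '*')))
    # McCabe-style complexity ~ 1 + number of decision points.
    complexity = 1 + len(BRANCH_RE.findall(source))
    return {'LoC': loc, 'SLoC': sloc, 'cyclomatic': complexity}
```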

Why YAM? (Yet Another Metric)
1. Many structural metrics capture essentially the same information
Recent example: Gyimothy et al., “Empirical validation of OO metrics …”, TSE 2007

Why YAM?
2. Menzies et al., “Data mining static code attributes to learn defect predictors”, TSE 2007

Why YAM? – Diversity
“…[the] measures used … [are] less important than having a sufficient pool to choose from. Diversity in this pool is important.” – Menzies et al.

New Diverse Metrics for SE
[Diagram: software engineering (SE) reaches toward “nirvana” via new, diverse metrics from information retrieval (IR)]
Use natural language semantics (linguistic structure)

QALP – An IR Metric
[Diagram: the QALP metric as SE’s step toward that “nirvana”]

What is a QALP score?
Use IR to ‘rate’ modules:
– Separate code and comments
– Stop list – ‘an’, ‘NULL’
– Stemming – printable -> print
– Identifier splitting – go_spongebob -> go sponge bob
– tf-idf term weighting
– Cosine similarity
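The slide does not give an implementation, but the pipeline can be sketched end to end. The Python sketch below is an assumption-laden illustration: the stop list is a tiny stand-in, the identifier splitter handles underscores and camelCase only (it will not split “spongebob” into “sponge bob”, which needs a dictionary-based splitter), and scikit-learn plus NLTK’s Porter stemmer stand in for whatever tooling the authors used.

```python
import re
from nltk.stem import PorterStemmer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

STOP = {'an', 'the', 'null', 'int', 'void'}   # illustrative stop list only
stemmer = PorterStemmer()

def split_identifier(token: str) -> list[str]:
    """go_spongebob -> ['go', 'spongebob']; printQueue -> ['print', 'queue']."""
    words = []
    for part in token.split('_'):
        words += re.findall(r'[A-Z]?[a-z]+|[A-Z]+(?![a-z])|\d+', part)
    return [w.lower() for w in words if w]

def preprocess(text: str) -> str:
    terms = []
    for token in re.findall(r'\w+', text):
        for word in split_identifier(token):
            if word not in STOP:
                terms.append(stemmer.stem(word))   # printable -> print
    return ' '.join(terms)

def qalp_score(code: str, comments: str) -> float:
    """Cosine similarity of the tf-idf vectors for a module's code vs. its
    comments. In the real metric the idf statistics would come from the
    whole collection of modules, not just this one pair of documents."""
    docs = [preprocess(code), preprocess(comments)]
    tfidf = TfidfVectorizer().fit_transform(docs)
    return float(cosine_similarity(tfidf[0], tfidf[1])[0, 0])
```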

tf-idf Term Weighting
Accounts for:
– Term frequency – how important the term is within a document
– Inverse document frequency – how common the term is in the entire collection
High weight – frequent in the document but rare in the collection
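For reference, the textbook form of the weight (the slide does not say which tf-idf variant the QALP tool uses):

\[
w_{t,d} \;=\; \mathrm{tf}_{t,d} \times \log\frac{N}{\mathrm{df}_t}
\]

where \(\mathrm{tf}_{t,d}\) is the number of occurrences of term \(t\) in document \(d\), \(N\) is the number of documents in the collection, and \(\mathrm{df}_t\) is the number of documents containing \(t\).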

Cosine Similarity
[Diagram: two documents plotted as vectors on “Football” and “Cricket” axes; similarity = COS of the angle between them]
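With the two documents represented as term-weight vectors \(d_1\) and \(d_2\), the picture above corresponds to the standard cosine measure:

\[
\cos(d_1, d_2) \;=\; \frac{d_1 \cdot d_2}{\lVert d_1 \rVert \,\lVert d_2 \rVert}
\]

which is 1 when the vectors point in the same direction and 0 when the documents share no terms.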

Why the QALP Score in Fault Prediction?
High QALP score → High Quality → Low Faults (Done)

Fault Prediction Experiment
[Diagram: the fault predictor now takes the QALP score together with LoC / SLoC from the source code, and flags each module from “ignore” to “consider” to “Ohh, look at!”]

Linear Mixed-Effects Regression Models
Response variable = f(explanatory variables)
In the experiment: Faults = f(QALP, LoC, SLoC)
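The slide only names the model family, so the following Python sketch is an assumption for illustration: it uses statsmodels (not necessarily the authors’ tooling), a hypothetical modules.csv with columns faults, QALP, LoC, SLoC, and an assumed package column as the grouping variable for the random effect.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical per-module table: fault counts plus the three explanatory
# metrics, grouped by an assumed 'package' column for the random effect.
data = pd.read_csv('modules.csv')  # columns: faults, QALP, LoC, SLoC, package

# Fixed effects for QALP, LoC and SLoC (with an LoC:SLoC interaction, as in
# the final models shown later); random intercept per package.
model = smf.mixedlm('faults ~ QALP + LoC * SLoC', data, groups=data['package'])
result = model.fit()
print(result.summary())
```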

Two Test Subjects
– Mozilla – open source: 3M LoC, 2.4M SLoC
– MP – proprietary source: 454K LoC, 282K SLoC

QALP score Envelope

Mozilla Final Model
defects = f(LoC, SLoC, LoC × SLoC)
– includes an interaction term
R² = 0.16
Omits the QALP score

MP Final Model
defects = QALP(… LoC … SLoC) … LoC … SLoC
R² = … (p < …)

MP Final Model
defects = QALP(… LoC … SLoC) … LoC … SLoC
Using LoC = 1.67 SLoC (the paper includes quartile approximations):
defects = … SLoC → more (real) code … more defects
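To make the substitution step explicit (the numeric coefficients did not survive in this transcript, so generic symbols stand in; this reconstructs the algebra, not the paper’s fitted values): once LoC is replaced by 1.67 SLoC, any mix of the two size terms collapses into a single SLoC term,

\[
c_1\,\mathrm{LoC} + c_2\,\mathrm{SLoC} \;\approx\; (1.67\,c_1 + c_2)\,\mathrm{SLoC},
\]

and when that combined coefficient is positive, predicted defects grow with SLoC: more (real) code, more defects.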

MP Final Model
defects = QALP(… LoC … SLoC) … LoC … SLoC
“Good” when the coefficient of QALP < 0
Interactions exist

Consider the QALP Score Coefficient: QALP(… LoC … SLoC)
Again using LoC = 1.67 SLoC: QALP(… SLoC)
Coefficient of QALP < 0
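Spelling out the same substitution for the QALP multiplier (again with symbolic coefficients in place of the lost values, and assuming the multiplier contains a constant term, as the “interesting range” below suggests):

\[
\beta_0 + \beta_1\,\mathrm{LoC} + \beta_2\,\mathrm{SLoC} \;\approx\; \beta_0 + (1.67\,\beta_1 + \beta_2)\,\mathrm{SLoC},
\]

a linear function of SLoC, so whether the effective coefficient of QALP is negative (the “good” case) depends on where SLoC falls; the next slide examines that range graphically.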

Consider the QALP Score Coefficient, Graphically
[Plot: the coefficient of QALP as a function of SLoC]

Good News!
Over the interesting range, the coefficient of QALP < 0

Ok, I Buy it … Now What do I do?
High LoC → more faults → refactor longer functions
Obviously improves the metric value
(not a sales pitch)

Ok, I Buy it … Now What do I do?
But, …
High LoC → more faults → join all lines
Obviously improves the metric value – but the faults?
(not a sales pitch)

Ok, I Buy it … Now What do I do?
But, …
High QALP score → fewer faults → add all the code back in as comments
– Improves the score

Ok, I Buy it … Now What do I do?
High QALP score → fewer faults → consider variable names in low-scoring functions
Informal examples seen

Future Refactoring Advice
Outward-Looking Comments
– Comparison with external documentation
Incorporating Concept Capture
– Higher-quality identifiers are worth more

Summary
Diversity – an IR-based metric
The initial study provided mixed results

Questions?

Ok, I Buy it … Now What do I do?
The Neatness metric: pretty-print the code
Lower edit distance → higher score
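The slide gives no formal definition, so the Python sketch below is purely an assumption about what such a metric could look like: pretty-print the code, measure how far the original layout is from that canonical form, and map a smaller edit distance to a higher score.

```python
import difflib

def pretty_print(source: str) -> str:
    """Toy 'pretty printer': strip trailing whitespace and snap indentation
    to multiples of four spaces. A real formatter would of course do more."""
    out = []
    for line in source.splitlines():
        line = line.rstrip()
        body = line.lstrip()
        indent = len(line) - len(body)
        out.append(' ' * (4 * round(indent / 4)) + body)
    return '\n'.join(out)

def neatness(source: str) -> float:
    """1.0 means the code already matches its pretty-printed form;
    lower values correspond to a larger edit distance from it."""
    return difflib.SequenceMatcher(None, source, pretty_print(source)).ratio()
```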

Ok, I Buy it … Now What do I do?
Neatness → fewer faults (unproven)
→ True for most students