MetriCon 2.0
Correlating Automated Static Analysis Alert Density to Reported Vulnerabilities in Sendmail
Michael Gegick, Laurie Williams
North Carolina State University
7 August 2007

2 Introducing Security Parallels

Reliability context (well-established):
- Fault-prone component: likely to contain faults
- Failure-prone component: likely to have failures in the field

Security context (new):
- Vulnerability-prone component: likely to contain vulnerabilities
- Attack-prone component: likely to be exploited in the field

Component: any logical part of the software system [1]

Goal: make informed risk management decisions and prioritize redesign, inspection, and testing efforts on components.

[1] IEEE, "ANSI/IEEE Standard Glossary of Software Engineering Terminology (IEEE Std 610.12-1990)," Los Alamitos, CA: IEEE Computer Society Press, 1990.

3 Early Reliability Metrics

Static analysis:
- N. Nagappan and T. Ball, "Static Analysis Tools as Early Indicators of Pre-release Defect Density," International Conference on Software Engineering, St. Louis, MO, 2005.
- J. Zheng, L. Williams, W. Snipes, N. Nagappan, J. Hudepohl, and M. Vouk, "On the Value of Static Analysis Tools for Fault Detection," IEEE Transactions on Software Engineering, vol. 32, 2006.

Complexity metrics:
- J. Munson and T. Khoshgoftaar, "The Detection of Fault-Prone Programs," IEEE Transactions on Software Engineering, vol. 18, 1992.
- T. Khoshgoftaar and J. Munson, "Predicting Software Development Errors Using Software Complexity Metrics," IEEE Journal on Selected Areas in Communications, vol. 8, 1990.

Historical (failure):
- N. Nagappan, T. Ball, and A. Zeller, "Mining Metrics to Predict Component Failures," International Conference on Software Engineering, Shanghai, China, 2006.
- T. J. Ostrand, E. J. Weyuker, and R. M. Bell, "Where the Bugs Are," International Symposium on Software Testing and Analysis, Boston, Massachusetts, 2004.

Object-oriented metrics:
- V. Basili, L. Briand, and W. Melo, "A Validation of Object-Oriented Design Metrics as Quality Indicators," IEEE Transactions on Software Engineering, vol. 22, 1996.
- Y. Zhou and H. Leung, "Empirical Analysis of Object-Oriented Design Metrics for Predicting High and Low Severity Faults," IEEE Transactions on Software Engineering, vol. 32, no. 10, 2006.

4 Research Objective

Build and validate models for predicting vulnerability- and attack-prone components based upon security-based automated static analyzer (ASA) alerts.
- Metric: ASA alert density and severity, available early in the development phase
- ASA cannot find all types of security vulnerabilities (implementation bugs, design flaws, operational vulnerabilities), so: are ASA alerts a good predictor?
- Software engineers plug the number of security alerts into the predictive models to determine which components are vulnerability- and attack-prone (see the prediction sketch after slide 8).

5 Building the Initial Predictive Model

Generalized linear model (the data are not normally distributed). Poisson distribution?

\log(\mu_i) = \beta_0 + \beta_1 x_i

where
- \mu_i: mean number of vulnerabilities in component i
- \beta_0: estimated intercept
- \beta_1: estimated slope
- x_i: value of the random variable, the alert density of component i
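To make the model concrete, here is a minimal sketch of fitting this Poisson regression with Python's statsmodels. The per-file alert densities and vulnerability counts below are hypothetical, not the talk's Sendmail measurements, and the log link is assumed only because it is the standard default for a Poisson GLM (the slide does not state the link function).

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical per-file data: ASA "Hot" alert density (x) and the
# number of reported vulnerabilities (y) for each file.
alert_density = np.array([0.0, 0.4, 1.1, 2.3, 3.0, 4.2, 5.5])
vulns = np.array([0, 0, 1, 1, 2, 3, 4])

# GLM with a Poisson family and the default log link:
#   log(mu_i) = beta0 + beta1 * x_i
X = sm.add_constant(alert_density)
result = sm.GLM(vulns, X, family=sm.families.Poisson()).fit()

print(result.params)   # [beta0, beta1]; a positive beta1 indicates a
                       # positive association between alerts and vulnerabilities
print(result.pvalues)  # significance of the estimated association
```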

6 Feasibility Study

- Fortify Software's Source Code Analyzer (SCA)
- Scanned ten releases of Sendmail
  - 996 total files scanned
- 21 potential vulnerabilities
  - Vulnerabilities reported in RELEASE_NOTES
- Nine vulnerabilities with known exploits
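As an aside on the metric itself: the transcript does not spell out how alert density is computed, but a common convention is alerts per thousand lines of code (KLOC). A hypothetical helper under that assumption:

```python
# Assumes "alert density" means security ASA alerts per KLOC in a file;
# this definition is an assumption, not stated in the transcript.
def alert_density(num_alerts: int, lines_of_code: int) -> float:
    """Security ASA alerts per thousand lines of code for one file."""
    return num_alerts / (lines_of_code / 1000.0)

# e.g., 4 Hot alerts in a hypothetical 2,000-line source file:
print(alert_density(4, 2000))  # 2.0 alerts per KLOC
```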

7 Feasibility Study – vulnerability-prone

- Poisson distribution models the response data: reported vulnerabilities
- Association between Hot alert density and the number of vulnerabilities reported per file:
  - Positive slope → positive association between alerts and reported vulnerabilities
  - p-value → high significance of the association
  - Standard error → substantial overdispersion (few data points)
- Reported statistics: slope, p-value, Chi-square/df (goodness-of-fit measure), standard error (numeric values were not preserved in this transcript)
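The slide's goodness-of-fit and overdispersion observations can be checked directly from a fitted GLM: for a well-fitting Poisson model, the Pearson chi-square statistic divided by the residual degrees of freedom is close to 1, and values well above 1 signal the overdispersion noted on the slide. A sketch, continuing from the hypothetical fit after slide 5:

```python
# Continues from the hypothetical `result` fitted earlier.
# Pearson chi-square / residual df is the Chi-square/df goodness-of-fit
# measure named on the slide.
dispersion = result.pearson_chi2 / result.df_resid
print(dispersion)  # >> 1 suggests overdispersion, e.g. from too few data
                   # points; a negative binomial model may then fit better
```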

8 Feasibility Study – attack-prone

- Poisson distribution models the response data: the number of known exploits (nine) for a Sendmail file
- Association between Hot alert density and the number of known exploits:
  - Slope → positive association between alerts and exploits
  - p-value → low significance
  - Standard error → substantial overdispersion (few data points)
- Reported statistics: slope, p-value, Chi-square/df (goodness-of-fit measure), standard error (numeric values were not preserved in this transcript)
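Finally, a sketch of the usage described in the research objective: plug each component's alert density into the fitted model and rank components by their predicted counts. It continues from the hypothetical fit after slide 5; the file names and densities are hypothetical, chosen only for illustration.

```python
# Continues from the hypothetical `result` fitted earlier.
# Hypothetical files and their Hot alert densities (alerts per KLOC).
new_files = {"srvrsmtp.c": 3.8, "headers.c": 1.2, "parseaddr.c": 2.5}

X_new = sm.add_constant(np.array(list(new_files.values())))
predicted = result.predict(X_new)  # expected counts: exp(beta0 + beta1 * x)

# Rank files from most to least vulnerability-prone for inspection/testing.
for (name, _), mu in sorted(zip(new_files.items(), predicted),
                            key=lambda t: -t[1]):
    print(f"{name}: {mu:.2f} predicted vulnerabilities")
```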

9 Questions

Thank you!