Ryan Lekivetz & Joseph Morgan, JMP


Validating systems with covering arrays and what to do when you get a failure

ABSTRACT
JMP 12 introduced the Covering Array platform, the first JMP Pro platform for Design of Experiments (DOE). Covering arrays can be used to validate complex deterministic systems, as well as configurations of such systems, by revealing whether any combination of system inputs, or any combination of configuration settings, induces a failure. However, the discovery of a failure does not necessarily indicate which level combination precipitated it. In this talk, we give an overview of covering arrays and discuss how the covering array analysis tool, also introduced in JMP 12, can help identify the underlying cause of any discovered failures.

Sections: Covering Arrays | Analysis | Examples | Final Thoughts | References
Template provided by ePosterBoards

COVERING ARRAYS
- Deterministic system: the same input gives the same output every time.
- Failures are more likely to be induced by a small number of factors.
- Definition: a covering array CA(N; t, (v1, v2, ..., vk)) is an N x k array in which the i-th column contains vi distinct symbols. If every projection onto t columns contains all combinations of symbols, it is a t-covering array. If N is minimal for fixed t, k, and (v1, ..., vk), the array is said to be optimal.
- For a set of factors, a t-covering array has the property that for any subset of t factors, every possible level combination occurs at least once.
- Covering arrays ensure that all t-way combinations are covered (and usually more).
- Very economical in the number of tests required compared with exhaustive testing.
- Test economy also means that when a failure occurs, we cannot immediately tell the cause.
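The coverage property in the definition can be checked mechanically. The sketch below (a minimal Python illustration, not the JMP implementation; the 5-run array for four 2-level factors is our own example) verifies that every pair of columns covers all level combinations:

```python
from itertools import combinations, product

def is_covering_array(rows, levels, t=2):
    """Check that every t-subset of columns covers all level combinations.

    rows:   list of test rows (tuples of level indices)
    levels: number of levels per factor, e.g. [2, 2, 2, 2]
    """
    k = len(levels)
    for cols in combinations(range(k), t):
        needed = set(product(*(range(levels[c]) for c in cols)))
        seen = {tuple(row[c] for c in cols) for row in rows}
        if needed - seen:          # some level combination never occurs
            return False
    return True

# A strength-2 covering array for four 2-level factors in only 5 runs
# (exhaustive testing would need 2^4 = 16 runs):
ca = [(0, 0, 0, 0), (0, 1, 1, 1), (1, 0, 1, 1), (1, 1, 0, 1), (1, 1, 1, 0)]
print(is_covering_array(ca, [2, 2, 2, 2], t=2))  # True
```

Dropping any single run from this array breaks the coverage property, which is what "optimal" means for fixed t, k, and levels.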

ANALYSIS
- If we have one or more failures, it is natural to want to determine the cause (and fix it!).
- After recording success/failure for each test, create a list of potential causes; JMP can generate this list automatically.
- Potential causes: a test row that fails contains many possible combinations of factor levels.
- We can eliminate any combination that also appears in a test that passed.
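The elimination step above can be sketched as follows. This is an illustrative Python reconstruction of the idea, not the JMP analysis tool; the function name and the tiny example outcome (runs fail whenever factors 0 and 1 are both at level 1) are hypothetical:

```python
from itertools import combinations

def potential_causes(tests, outcomes, t=2):
    """Return t-way level combinations that occur only in failing runs.

    tests:    list of test rows (tuples of factor levels)
    outcomes: parallel list of booleans, True = pass, False = fail
    """
    k = len(tests[0])

    def combos(row):
        # Every (column set, level combination) of size t present in this row.
        return {(cols, tuple(row[c] for c in cols))
                for cols in combinations(range(k), t)}

    passed = set().union(*(combos(r) for r, ok in zip(tests, outcomes) if ok))
    failed = set().union(*(combos(r) for r, ok in zip(tests, outcomes) if not ok))
    # Any combination covered by a passing run cannot be the cause.
    return failed - passed

tests = [(0, 0, 0, 0), (0, 1, 1, 1), (1, 0, 1, 1), (1, 1, 0, 1), (1, 1, 1, 0)]
outcomes = [True, True, True, False, False]
causes = potential_causes(tests, outcomes, t=2)
print(((0, 1), (1, 1)) in causes)  # True: the true cause survives elimination
```

Note that elimination narrows the list but rarely pins down a single cause; here seven candidate pairs remain, and follow-up runs would be needed to isolate the culprit.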

EXAMPLES
Example 1: JMP installation (OS / License / Installer), where 2 failures were found.
Example 2: the Categorical platform, where a variety of preferences yields many possible tests; 2-coverage can be achieved in 16 runs.
[Poster figures: DOE setup and data table, preference settings for the platform, and analysis results. See M. Drake, Discovery Summit Europe 2016.]

FINAL THOUGHTS
- Covering arrays are an effective means of validating complex systems.
- If all tests pass, we can assert that there is no fault due to t or fewer factors.
- If we do find failures, the analysis can guide us toward the cause.
- The analysis can also accommodate designs with disallowed combinations.

REFERENCES
D. Kuhn, D. R. Wallace, and A. M. Gallo, "Software Fault Interactions and Implications for Software Testing," IEEE Transactions on Software Engineering, 30(6), 2004, 418-421.
M. J. Drake, "Effective Test Design Using Covering Arrays," poster presented at Discovery Summit Europe 2016.

TESTING PREFERENCES
- The Categorical platform preference set has many checkboxes (2-level factors), as well as a few options with 3 or more choices.
- Exhaustive testing would require 712,483,534,798,848 combinations.
- The Covering Array platform starts with a 20-run design of strength 2.
- In a few seconds, Optimize finds a 16-run design, which is the minimum run size.
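As a check on the scale of exhaustive testing: the quoted total factors as 2^43 x 3^4, so a hypothetical breakdown of 43 two-level checkboxes plus four 3-level options is consistent with the number (the actual preference structure may differ):

```python
from math import prod

# Hypothetical breakdown consistent with the quoted total:
# 43 two-level checkboxes plus four 3-level options.
levels = [2] * 43 + [3] * 4
print(prod(levels))  # 712483534798848 combinations: exhaustive testing is infeasible
```

Against that total, a 16-run strength-2 covering array is an enormous economy.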

Of the 946 (44 choose 2) two-factor level combinations present in the one failing run, elimination reduces the list to 12 potential causes.
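The pair count is easy to verify: with 44 factors, a single run contains one level combination for each of the C(44, 2) pairs of factors:

```python
from math import comb

n_factors = 44
print(comb(n_factors, 2))  # 946 two-factor combinations in one run
```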

EFFECTIVENESS OF COVERING ARRAYS

Cumulative percentage of faults triggered by interactions involving the number of factors in the leftmost column (Kuhn et al., 2004):

  # factors | Medical devices | Browser | Server | NASA GSFC
      1     |       66        |   29    |   42   |    68
      2     |       97        |   76    |   70   |    93
      3     |       99        |   95    |   89   |    98
      4     |      100        |         |   96   |
      5     |                 |         |        |
      6     |                 |         |        |