Swami Natarajan, June 17, 2015, RIT Software Engineering: Reliability Engineering

Presentation transcript:

Reliability Engineering

Reliability Engineering Practices
- Define reliability objectives
- Use operational profiles to guide test execution
  - Preferably, create automated test infrastructure
- Track failures during system test
- Use reliability growth curves to track the quality of the product
- Release when the quality of the product meets the reliability objectives
- The paper "Software Reliability Engineered Testing" is fairly easy to read and gives a good overview

Defining reliability objectives
- Quantitative targets for the level of reliability ("failure intensity": failures/hour) that makes business sense
  - Remember, for defects it was "not discussable"?

| Impact of a failure            | FI objective (failures/h) | MTBF         |
|--------------------------------|---------------------------|--------------|
| 100's of deaths, >$10^9 cost   | 10^-9                     | ~114,000 yrs |
| 1-2 deaths, around $10^6 cost  | 10^-6                     | ~114 yrs     |
| $1,000 cost                    | 10^-3                     | ~6 weeks     |
| $100 cost                      | 10^-2                     | 100 h        |
| $10 cost                       | 10^-1                     | 10 h         |
| $1 cost                        | 1                         | 1 h          |

From John D. Musa
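The table rows are related by simple arithmetic: MTBF is the reciprocal of the failure intensity objective. A minimal sketch of that conversion (the function names are mine, not from the lecture):

```python
def mtbf_hours(failure_intensity_per_hour: float) -> float:
    """MTBF is the reciprocal of failure intensity (failures/hour)."""
    return 1.0 / failure_intensity_per_hour

def mtbf_years(failure_intensity_per_hour: float) -> float:
    """Convert the MTBF to years (~8,766 hours per 365.25-day year)."""
    return mtbf_hours(failure_intensity_per_hour) / 8766.0

# An FI objective of 10^-2 failures/hour corresponds to an MTBF of 100 h;
# 10^-6 failures/hour corresponds to roughly 114 years, as in the table.
```

This is how the MTBF column follows directly from the FI objective column.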

Testing based on operational profiles
- Done during black-box system testing
- Preferably a mix of test cases that matches the operational profile
- If possible, create an automated test harness to execute the test cases
  - Need to run large numbers of test cases with randomized parameters for statistical validity
- Execute test cases in randomized order, with selection patterns matching the frequencies in the operational profile
  - Simulates the actual pattern of usage
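The selection step above can be sketched as weighted random sampling. The operations and frequencies below are a hypothetical operational profile, not from the lecture:

```python
import random

# Hypothetical operational profile: operation -> fraction of field usage.
profile = {
    "browse_catalog": 0.60,
    "add_to_cart":    0.25,
    "checkout":       0.10,
    "refund":         0.05,
}

def next_test_operation(rng: random.Random) -> str:
    """Pick the next operation to test, weighted by its usage frequency."""
    ops = list(profile)
    weights = [profile[op] for op in ops]
    return rng.choices(ops, weights=weights, k=1)[0]

rng = random.Random(42)  # fixed seed so the run is reproducible
run = [next_test_operation(rng) for _ in range(10_000)]
# Over many selections, the test mix tracks the profile: the fraction of
# "browse_catalog" selections comes out close to 0.60.
```

Randomizing the parameters of each selected test case would be layered on top of this, per the statistical-validity point above.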

Builds and code integration
- Most large projects have periodic builds
  - The development team integrates a new chunk of code into the product and delivers it to the test team
- The test team does black-box system testing
  - Identifies bugs and reports them to the dev team
- Track the pattern of defects found during system testing to see how reliability varies as development progresses
  - Defects found should decrease over time as bugs are removed, but each new chunk of code adds more bugs
  - The pattern of the reliability growth curve tells us about the code being added, and whether the product code is becoming more stable
- The pattern can also be used to statistically predict how much more testing will be needed before the desired reliability target is reached
  - Predictions are useful only after most of the code is integrated and failure rates trend downward

Tracking failures during testing
- Enter data about when failures occurred during system testing into a reliability tool, e.g. CASRE
  - The tool plots a graph of failure intensity vs. time
[Figure: conceptual plot of failure intensity (and reliability R) against time]
From netserver.cerc.wvu.edu/numsse/Fall2003/691D/lec3.ppt
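The quantity being plotted can be estimated directly from failure timestamps. A minimal sketch, using a fixed-window count rather than the statistical estimators a tool like CASRE applies; the failure times are invented for illustration:

```python
def failure_intensity(failure_times, window, t_end):
    """Estimate failure intensity (failures/hour) in consecutive windows.

    failure_times: hours into system test at which failures were observed.
    window: width of each estimation window, in hours.
    """
    estimates = []
    t = 0.0
    while t < t_end:
        count = sum(1 for ft in failure_times if t <= ft < t + window)
        estimates.append(count / window)
        t += window
    return estimates

# Hypothetical failure times: failures thin out as defects are fixed,
# so the estimated intensity trends downward.
failures = [1, 2, 4, 5, 7, 12, 15, 21, 30, 47]
print(failure_intensity(failures, window=10.0, t_end=50.0))
# → [0.5, 0.2, 0.1, 0.1, 0.1]
```

A real build sequence would show the spikes discussed later, since each new chunk of integrated code adds failures.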

A more realistic curve

Reliability Models
- The tool assumes a particular reliability model
- Different reliability models have been proposed
  - They differ in their statistical model of how reliability varies with time
  - In effect, they fit slightly different curves to the data
  - The models are built into the tool
- Another statistical model, the Rayleigh model, can be used to predict remaining defects
- My take on this:
  - Trying to get too mathematically precise about the exact level of reliability is not valid!
  - These are just models; use them to get a reasonable estimate of reliability
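To make "fitting slightly different curves to the data" concrete, here is a sketch of fitting one such model, the basic exponential (Musa/Goel-Okumoto-style) form mu(t) = a(1 - e^(-bt)), where a is the total expected failures and b a decay rate. The grid-search fit is a stand-in for the estimators a tool like CASRE uses, and the data points are invented:

```python
import math

def mu(t, a, b):
    """Expected cumulative failures by time t under an exponential model."""
    return a * (1.0 - math.exp(-b * t))

def fit(times, cum_failures, a_grid, b_grid):
    """Least-squares fit of (a, b) over a small parameter grid."""
    best = None
    for a in a_grid:
        for b in b_grid:
            err = sum((mu(t, a, b) - y) ** 2
                      for t, y in zip(times, cum_failures))
            if best is None or err < best[0]:
                best = (err, a, b)
    return best[1], best[2]

# Hypothetical system-test data: (hours of testing, cumulative failures).
times = [10, 20, 40, 80, 160]
cum   = [24, 38, 52, 59, 60]
a_grid = [50 + i for i in range(21)]          # candidate totals: 50..70
b_grid = [0.005 * i for i in range(1, 41)]    # candidate rates: 0.005..0.2
a, b = fit(times, cum, a_grid, b_grid)
remaining = a - cum[-1]  # model's estimate of failures still to be found
```

The fitted a illustrates the "predict remaining defects" idea; as the slide warns, treat such numbers as rough estimates, not precise truths.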

Reliability Metric
- Estimated failure intensity
  - (MTBF = 1 / failure intensity)
  - The tool shows statistical estimates of how failure intensity varies over time
- The curve is referred to as the "reliability growth curve"
  - Note that the product being tested varies over time, with fixes and new code
  - Provides in-process feedback on how quality is changing over time

Interpreting reliability growth curves
- Spikes are normally associated with new code being added
- Larger volumes of code, or more unreliable code, cause bigger spikes
  - The curve itself tells us about the stability of the code base over time
  - If small code changes or additions cause a big spike, the code is really poor quality or heavily impacts many other modules
- The code base is stabilizing when the curve trends significantly downward
  - Ideally, release only when the curve drops below the target failure intensity objective; this indicates the right time to stop testing
  - Can statistically predict how much more test effort is needed before the target failure intensity objective is reached
- The curve shows up "adding a big chunk of code just before release"
  - A common occurrence! Getting code done just in time
- Note that there is definitely random variation
  - Hence "confidence intervals"
  - Avoid reading too much into the data

Limitations of reliability curves
- Operational profiles are often "best guesses", especially for new software
- The reliability models are empirical and only approximations
- Failure intensity objectives should really differ for different criticality levels
  - But separating them results in loss of statistical validity!
- Automating test execution is challenging (particularly building the verifier) and costly
  - But it does save a lot over the long run
  - More worthwhile when reliability needs are high
- Hard to read much from the curves until the later stages of system testing, which is very late in the development cycle

Reliability Certification
- Another use for reliability engineering is to determine the reliability of a software product
  - E.g., you are evaluating web servers for your company website, and reliability is a major criterion
- Build a test suite representative of your likely usage
  - Put up some pages, maybe including forms
  - Create a test suite that generates traffic
  - Log failures, e.g. a page not loading or wrong data received
  - Track failure patterns over time
- Evaluate multiple products or new releases using the test suite to determine reliability
  - Avoids major problems and delays with poor vendor software
- Note that this applies the analysis to a fixed code base
  - Fewer problems with statistical validity
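The acceptance decision at the end of such a certification run can be sketched as a point-estimate comparison against the objective. This is a deliberately naive version; real certification procedures use sequential sampling charts with accept/reject/continue regions rather than a single estimate, and the numbers below are invented:

```python
def certify(failure_times, test_hours, fi_objective):
    """Accept the product if its observed failure intensity meets the objective.

    failure_times: hours into the certification run at which failures occurred.
    test_hours: total hours of representative traffic replayed.
    fi_objective: maximum acceptable failures per hour.
    """
    observed_fi = len(failure_times) / test_hours
    return observed_fi <= fi_objective, observed_fi

# Hypothetical run: 3 logged failures in 500 hours of representative traffic,
# against an objective of at most 0.01 failures/hour.
ok, fi = certify(failure_times=[3.5, 41.0, 77.2],
                 test_hours=500.0,
                 fi_objective=0.01)
# observed_fi = 0.006 failures/hour, so this product would be accepted
```

Running the same suite against each candidate web server (or each new vendor release) gives the comparable reliability numbers the slide describes.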

Example certification curve

Summary
- Software reliability engineering is a scientific (statistical) approach to reliability
- A vast improvement over common current practice
  - "Keep testing until all our test cases run and we feel reasonably confident"
- Avoids under-engineering as well as over-engineering ("zero defects")
- When done well, SRE adds about 1% to project cost
  - That is Musa's number; in my experience it is closer to 10% for medium-sized projects if you include the cost of automated testing
  - Note that as the number of builds and releases increases, automated testing more than pays for itself