Overview: Software Quality Assurance, Reliability and Availability


ECE 453 – CS 447 – SE 465 Software Testing & Quality Assurance Instructor Paulo Alencar

Overview
- Software Quality Assurance
- Reliability and Availability
- Software Reliability Models
- Calendar Time, Execution Time
- Operational Phase
- Concurrent Components

Software Quality Assurance
Basic definitions:
Software metrics: measures used to determine the degree of each quality characteristic attained by the product.
Advantages:
- They can support software quality in multiple ways (e.g., measuring software complexity, testing coverage, reliability, etc.)
- They can find problematic areas and bottlenecks in the software process (e.g., performance problems)
- They can help software engineers assess the quality of their work

Software Reliability
Basic definitions:
S/W reliability: the probability that the software will not cause a failure for some specified time.
Failure: a divergence from expected external behavior.
Fault: the cause/representation of an error in the code, i.e., a bug.
Error: a programmer mistake (e.g., a misinterpretation of the specification).

Software Reliability
Basic question: how can we estimate the growth in software reliability as its errors are being removed?
Major issues:
- testing (how much? when to stop?)
- field use (how many trained personnel? how much support staff?)
S/W reliability growth models observe the past failure history and give an estimate of the future failure behavior; about 40 such models have been proposed.

Reliability and Availability
A simple measure of reliability can be given as: MTBF = MTTF + MTTR, where
- MTBF is the mean time between failures
- MTTF is the mean time to failure
- MTTR is the mean time to repair

Reliability and Availability
Availability can be defined as the probability that the system is still operating within requirements at a given point in time, and can be given as:
Availability = MTTF / (MTTF + MTTR)
Because MTTR appears in the denominator, availability is more sensitive to MTTR, which is an indirect measure of the maintainability of the software.
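The slide's availability equation did not survive extraction; the standard definition is Availability = MTTF / (MTTF + MTTR). A minimal sketch, with illustrative figures (not from the slides):

```python
def availability(mttf_hours, mttr_hours):
    """Steady-state availability: the fraction of time the system is up,
    Availability = MTTF / (MTTF + MTTR)."""
    return mttf_hours / (mttf_hours + mttr_hours)

# A system that fails every 1000 h on average and takes 2 h to repair:
a = availability(1000.0, 2.0)
print(round(a, 4))  # → 0.998
```

Note how shrinking MTTR (better maintainability) raises availability even when MTTF is unchanged, which is the point the slide makes.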

Reliability and Availability
Since different errors in a program do not have the same failure rate, the total error count does not provide a good indication of the reliability of the system. MTBF is perhaps more meaningful than defects/KLOC, since the user is concerned with failures, not with the total error count.

Software Reliability Models Software reliability models can be classified into many different groups; some of the more prominent (better known) groups include: error seeding - estimates the number of errors in a program. Errors are divided into indigenous errors and induced (seeded) errors. The unknown number of indigenous errors is estimated from the number of induced errors and the ratio of the two types of errors obtained from the testing data.

Software Reliability Models
Reliability growth models measure and predict the improvement of reliability through the testing process, using a growth function to represent the process. Independent variables of the growth function may be time, the number of test cases, or testing stages; dependent variables may be reliability, failure rate, or the cumulative number of errors detected.

Software Reliability Models
Nonhomogeneous Poisson process (NHPP) models provide an analytical framework for describing the software failure phenomenon during testing. The main issue is to estimate the mean value function of the cumulative number of failures experienced up to a certain point in time. A key example of this approach is the series of Musa models.

Software Reliability Models
A typical measure (failures per unit time) is the failure intensity (rate):
λ(τ) = dμ(τ)/dτ
where τ is program execution time, measured as CPU time (in a time-shared computer) or wall-clock time (in an embedded system), and μ(τ) is the expected cumulative number of failures by time τ.

Software Reliability Models
SR growth models are generally "black box": there is no easy way to account for a change in the operational profile. (Operational profile: a description of the input events expected to occur in actual software operation, i.e., how the software will be used in practice.) A consequence is that results from test cannot easily be carried over to the field.

Software Reliability Models
Many models have been proposed; perhaps the most prominent are:
- the Musa Basic model
- the Musa/Okumoto Logarithmic model
Some models work better than others depending on the application area and operating characteristics: interactive? data intensive? control intensive? real-time?

Failure Intensity Reduction Concept
[Figure: failure intensity λ versus mean failures experienced μ, for the Basic and Logarithmic models]
- λ: failure intensity
- λ0: initial failure intensity at the start of execution
- μ: average total number of failures experienced at a given point in time
- ν0: total number of failures over infinite time

Basic Assumptions of the Musa Models
- Errors in the program are independent and distributed with a constant average occurrence rate.
- Execution time between failures is large with respect to instruction execution time.
- The potential 'test space' covers the 'use space'.
- The set of inputs per run (test or operational) is randomly selected.
- All failures are observed.
- The error causing a failure is fixed immediately, or else its recurrence is not counted again.

Musa Basic Model
Failure intensity (FI) is the number of failures per unit time. Assume that the decrement in the failure intensity function (its derivative with respect to the number of expected failures) is constant. This implies that the FI is a function of the average number of failures experienced at any given point in time.
Reference: Musa, Iannino, Okumoto, "Software Reliability: Measurement, Prediction, Application", McGraw-Hill, 1987.

Musa Basic Model
λ(μ) = λ0 (1 − μ/ν0)
where:
- λ0 is the initial failure intensity at the start of execution
- μ is the average (expected) number of failures at any point in time
- ν0 is the total number of failures over infinite time
The average number of failures at any point in execution time τ is given by:
μ(τ) = ν0 [1 − exp(−λ0 τ/ν0)]

Example: Assume a program will experience 100 failures in infinite time and has now experienced 50 failures; the initial failure intensity was 10 failures/CPU-hr. The current failure intensity is:
λ = 10 (1 − 50/100) = 5 failures/CPU-hr
The number of failures experienced after 10 CPU-hr is:
μ = 100 [1 − exp(−10 × 10/100)] ≈ 63.2
For 100 CPU-hr: μ = 100 [1 − exp(−10)] ≈ 100
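The basic-model arithmetic in this example can be checked with a short script. The numbers (ν0 = 100, λ0 = 10, 50 failures observed) are the slide's own; the function names are mine:

```python
import math

nu0, lam0 = 100.0, 10.0  # total failures over infinite time; initial intensity

def intensity_basic(mu):
    """Current failure intensity after mu failures: lam0 * (1 - mu/nu0)."""
    return lam0 * (1.0 - mu / nu0)

def failures_basic(tau):
    """Expected failures after tau CPU-hr: nu0 * (1 - exp(-lam0*tau/nu0))."""
    return nu0 * (1.0 - math.exp(-lam0 * tau / nu0))

print(intensity_basic(50))            # → 5.0 failures/CPU-hr
print(round(failures_basic(10), 1))   # → 63.2 failures
print(round(failures_basic(100), 1))  # → 100.0 (essentially all failures)
```

The saturation at ν0 = 100 is characteristic of the basic model: it assumes a finite total number of failures.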

Logarithmic Model
The decrement of FI per failure becomes smaller as failures are experienced (an exponential decrease); this makes intuitive sense and is usually observed in practice:
λ(μ) = λ0 exp(−θμ)
where θ is the failure intensity decay parameter.

Example: assume the 0 = 10 failures/cpu hour,  = 0 Example: assume the 0 = 10 failures/cpu hour,  = 0.02/failure, and that 50 failures have been experienced. The current failure intensity is: At 10 cpu hours: Smaller than the Basic model At 100 cpu hours, we have: (more failures than Basic model)

Reliability Models
Basic model: μ(τ) = ν0 [1 − exp(−λ0 τ/ν0)],  λ(τ) = λ0 exp(−λ0 τ/ν0)
Logarithmic model: μ(τ) = (1/θ) ln(λ0 θ τ + 1),  λ(τ) = λ0 / (λ0 θ τ + 1)
[Figure: λ and μ versus time τ; the Basic model's μ saturates at ν0, while the Logarithmic model's grows without bound]

Reliability Models Examples
Example: Assume that a program will experience 100 failures in infinite time. The initial failure intensity was 10 failures/CPU-hr, the present failure intensity is 3.68 failures/CPU-hr, and our objective is 0.000454 failures/CPU-hr. Predict the additional testing time required to achieve the stated objective.
Answer: We know that λ(τ) = λ0 exp(−λ0 τ/ν0).
At time τ1: λ(τ1) = λ0 exp(−λ0 τ1/ν0) = λp
At time τ2: λ(τ2) = λ0 exp(−λ0 τ2/ν0) = λf
τ2 − τ1 = (ν0/λ0) ln(λp/λf)
With ν0 = 100 faults, λ0 = 10 failures/CPU-hr, λp = 3.68 failures/CPU-hr, λf = 0.000454 failures/CPU-hr:
Testing time = τ2 − τ1 = 90 CPU-hr
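The additional-testing-time formula from this example is a one-liner to verify, using the slide's own numbers:

```python
import math

# Basic model: additional testing time to move from a present to an
# objective failure intensity, t2 - t1 = (nu0/lam0) * ln(lam_p / lam_f).
nu0, lam0 = 100.0, 10.0
lam_present, lam_objective = 3.68, 0.000454

extra_time = (nu0 / lam0) * math.log(lam_present / lam_objective)
print(round(extra_time))  # → 90 CPU-hr of further testing
```

Note how strongly the answer depends on the ratio λp/λf: each extra order of magnitude of intensity reduction costs a fixed increment of (ν0/λ0)·ln 10 ≈ 23 CPU-hr here.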

Choice of Model
The Basic model is preferred when:
- making studies or predictions before execution and failure data are available;
- using the study of faults to determine the effects of a new software engineering technology;
- the program size is changing continually or substantially (e.g., during integration).

Logarithmic Model
The Logarithmic model is preferred when:
- the system is subjected to highly non-uniform operational profiles;
- high predictive validity is needed early in the execution period: the rapidly changing slope of the failure intensity during early stages can be better fitted by the logarithmic Poisson model than by the basic model.
Basic idea: use the Basic model for pretest studies, estimates, and periods of evolution, switching to the Logarithmic model once integration is complete.

Calendar Time Component
The calendar time component attempts to relate execution time and calendar time by determining, at any given time, the ratio dt/dτ of calendar time t to execution time τ. Calendar time is obtained by integrating this ratio with respect to execution time. It is of most concern during the test and repair phases, as well as for predicting the dates at which given failure intensities will be achieved.

The calendar time component is based on a debugging process model and takes into account the following:
- the resources used in running the program for a given execution time and processing a specified quantity of failures;
- the resource quantities available;
- the degree to which a resource can be utilized (due to possible bottlenecks) during the period in which it is limiting.

At the start of testing there are many failures in short time intervals, and testing must stop to allow fixing of the faults. As testing progresses, the intervals between failures grow longer and the failure-correction personnel are no longer fully loaded; the test team becomes the bottleneck, and eventually the computing resources are the limiting factor.

Resource Usage
Musa has shown that resource usage is linearly proportional to execution time and to mean failures experienced. Let χr represent the usage of resource r; then
χr = θr τ + μr μ
where θr is the resource usage per CPU-hr and μr is the resource usage per failure.

The following table from Musa summarizes the typical parameters for each resource (failure identification personnel, failure correction personnel, computer time): usage per CPU-hr (θr), usage per failure (μr), quantity available, and planned utilization. [Table values not reproduced here.]

Example: A test team runs test cases for 10 CPU-hr and identifies 34 failures. The effort per hour of execution time is 5 person-hr, and each failure requires on average 2 person-hr to identify and verify. The total failure identification effort required (using the previous formula) is:
χ = 5(10) + 2(34) = 118 person-hr
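The effort calculation is a direct instance of χr = θr τ + μr μ, with the slide's numbers:

```python
# Resource usage per Musa: chi_r = theta_r * tau + mu_r * m.
theta_r = 5.0        # person-hr per CPU-hr of execution (slide's figure)
mu_r = 2.0           # person-hr per failure identified (slide's figure)
tau, failures = 10.0, 34

chi = theta_r * tau + mu_r * failures
print(chi)  # → 118.0 person-hr of failure-identification effort
```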

The change in resource usage per unit of execution time is:
dχr/dτ = θr + μr λ
Since the failure intensity λ decreases with execution time, the effort used per hour of execution time also tends to decrease with testing, as expected. In a similar manner, the other calendar time components can be obtained from the base equation.
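Differentiating Musa's usage equation with respect to execution time gives dχr/dτ = θr + μr·λ. A sketch using the earlier effort figures and an illustrative current failure intensity (the λ value is my assumption, not the slide's):

```python
theta_r, mu_r = 5.0, 2.0  # person-hr per CPU-hr; person-hr per failure (slide)
lam = 3.4                 # failures/CPU-hr: illustrative current intensity

# Effort consumed per CPU-hr of testing at the current failure intensity.
rate = theta_r + mu_r * lam
print(round(rate, 1))  # → 11.8 person-hr per CPU-hr
```

As λ falls during testing, this rate decays toward the floor θr, matching the text's observation.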

Operational Phase
Once the software has been released and is operational, with no features added or repairs made between releases, the failure intensity becomes constant. Both models then reduce to a homogeneous Poisson process with the failure intensity as the parameter: the failures in a given time period follow a Poisson distribution, while the failure intervals follow an exponential distribution.

Operational Phase
The reliability R and failure intensity λ are related by:
R(τ) = exp(−λτ)
As expected, the probability of no failures over a given execution time (the reliability) is lower for longer time intervals.
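The exponential relationship R(τ) = exp(−λτ) is easy to sketch; the λ value below is illustrative, not from the slides:

```python
import math

lam = 0.01  # failures/CPU-hr: illustrative constant operational intensity

def reliability(tau):
    """Probability of no failures during tau CPU-hr of operation."""
    return math.exp(-lam * tau)

print(round(reliability(10), 3))   # → 0.905
print(round(reliability(100), 3))  # → 0.368 (longer interval, lower reliability)
```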

Operational Phase
In many cases the operational phase consists of a series of releases, which causes the reliability and failure intensity to form a series of step functions. If the releases are frequent and the failure intensity decreases, the step functions can be approximated by one of the previous reliability models.

Operational Phase
We can also apply the reliability models directly to reported failures (not counting repetitions), but the model then reflects the case in which failures have been corrected. If failures are corrected in the field, the model is similar to that of the system test phase.

System Reliability (Concurrent Components)
Assume that we have N components with constant failure intensities, whose reliabilities are measured over a common calendar time interval, and that all must function correctly for system success. The system failure intensity is then the sum of the component failure intensities:
λ = Σk λk
where λk refers to the individual component failure intensities.

Software reliabilities are usually given in terms of execution time, so before software and hardware reliabilities can be combined, conversion of the software reliability is required. First convert the reliability R of each software component to a failure intensity. Using λ to represent failure intensity with respect to execution time, from R(τ) = exp(−λτ) we obtain:
λ = −ln R(τ) / τ
where τ is the execution time period for which the reliability was given.

Now let C be the average utilization by the program of the machine assuming that it does not vary greatly over the failure intervals, then the failure intensity with respect to clock time t is given by: Note that the average utilization is less than or equal to 1.

Once the failure intensities with respect to the reference period of clock time are obtained for all components, the resulting system reliability over a clock time t is given by:
R(t) = exp(−(Σk λk) t) = Πk Rk(t)
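For a series system of independent components with constant failure intensities, the system intensity is the sum of the component intensities and the system reliability is the product of the component reliabilities. A sketch with illustrative per-clock-hour intensities (my values, not the slides'):

```python
import math

lams = [0.001, 0.002, 0.0005]  # failures per clock-hour: illustrative values

lam_system = sum(lams)  # series system: intensities add

def system_reliability(t):
    """All components must survive: R(t) = exp(-sum(lam_k) * t)."""
    return math.exp(-lam_system * t)

print(round(lam_system, 6))               # → 0.0035
print(round(system_reliability(100), 4))  # → 0.7047
```

Multiplying the individual exp(−λk t) terms gives the same number, which is the product form Πk Rk(t) in the formula above.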

It is also possible to consider the situation where a software component currently running on machine 1, with instruction execution rate r1, is moved to machine 2, with instruction execution rate r2. The failure intensity on machine 2 is given by:
λ2 = (r2/r1) λ1
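Since failure intensity per unit execution time scales with how fast instructions (and hence faults) are encountered, moving to a faster machine scales the intensity by the ratio of execution rates, λ2 = (r2/r1)·λ1. A sketch with illustrative rates (my values, not the slides'):

```python
lam1 = 0.02             # failures/CPU-hr on machine 1 (illustrative)
r1, r2 = 2.0e8, 6.0e8   # instructions/second on machines 1 and 2 (illustrative)

# Failure intensity scales with the instruction execution rate.
lam2 = lam1 * (r2 / r1)
print(round(lam2, 4))  # → 0.06 failures/CPU-hr on the 3x-faster machine
```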