1. Towards a Radically New Theory of Software Reliability
Aditya P. Mathur, Head and Professor, Department of Computer Science, Purdue University
ABB, Sweden. Monday, April 7, 2008

2. Reliability
Probability of failure-free operation in a given environment over a given time.
- Mean Time To Failure (MTTF)
- Mean Time To Disruption (MTTD)
- Mean Time To Restore (MTTR)
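These measures are related under the exponential failure model that classical reliability theory commonly assumes (an assumption the rest of this talk goes on to question). A minimal sketch, with hypothetical failure times:

```python
import math

# Hypothetical inter-failure times (hours) observed during operation.
inter_failure_times = [120.0, 95.0, 210.0, 150.0, 180.0]

# MTTF is estimated as the mean of the observed inter-failure times.
mttf = sum(inter_failure_times) / len(inter_failure_times)

def reliability(t: float, mttf: float) -> float:
    # Under an exponential failure model (a common simplifying assumption),
    # reliability over a mission of length t is R(t) = exp(-t / MTTF).
    return math.exp(-t / mttf)

print(f"MTTF = {mttf:.1f} hours")
print(f"R(24 hours) = {reliability(24.0, mttf):.3f}")
```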

3. Claim
Existing theories of software reliability simplify the problem to the extent that they (almost) maximize the uncertainty associated with the estimated software reliability.

4. Operational profile
The probability distribution of usage of features and/or scenarios. It captures the usage pattern with respect to a class of customers.

5. Reliability estimation
Operational profile → Random or semi-random test generation → Test execution → Failure/defect data collection → Reliability estimation [Uncertainty evaluation?] → Decision process
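A minimal sketch of this pipeline, assuming a feature-level operational profile; `run_test` is a hypothetical stand-in for generating and executing a random test against a feature:

```python
import random

# Hypothetical operational profile: probability of each feature being used.
profile = {"search": 0.5, "checkout": 0.3, "profile_edit": 0.2}

def run_test(feature: str) -> bool:
    """Stand-in for executing a randomly generated test against a feature.
    Returns True if the run is failure-free."""
    # Illustrative per-feature failure rates; a real harness would run code.
    failure_rate = {"search": 0.01, "checkout": 0.05, "profile_edit": 0.02}
    return random.random() > failure_rate[feature]

features = list(profile)
weights = [profile[f] for f in features]

n_tests = 10_000
failures = 0
for _ in range(n_tests):
    feature = random.choices(features, weights=weights)[0]  # sample per profile
    if not run_test(feature):
        failures += 1

# Point estimate of reliability: fraction of failure-free runs under the profile.
r_hat = 1 - failures / n_tests
print(f"Estimated reliability under this profile: {r_hat:.4f}")
```

Note that the estimate is tied to the chosen profile, which is exactly the difficulty the next slide raises.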

6. Issues: Operational profile
- Variable: it becomes known only after customers have access to the product. It is a stochastic process... a moving target!
- Random test generation requires an oracle, and hence is generally limited to specific outcomes, e.g., crash or hang.
- What about an operational profile with an impulse? This creates a non-differentiable probability function of the time-to-failure.

7. Issues: Failure data
Should we analyze the failures?
- If yes: once the cause of a failure is removed, the reliability estimate is invalid.
- If the cause is not removed because the failure is a "minor incident," then the reliability estimate corresponds to irrelevant incidents.

8. Issues: Failure rate
"That is, the failure rate, when unambiguously defined, does not have a physical reality; rather, it is a technical device, whose sole purpose is to convey the engineer's personal opinion about the life characteristic of software."
Nozer Singpurwalla, "The failure rate of software: does it exist?", IEEE Transactions on Reliability, vol. 44, no. 3, 1995.

9. Issues: Model selection
Rarely does a model fit the failure data, so model selection becomes a problem. 200 models to choose from? New ones keep arriving!

10. Issues: Markovian models
Markov chain models suffer from a lack of estimates of the transition probabilities. To compute these probabilities, you need to execute the application. During execution you obtain failure data. Then why proceed further with the model?
[Diagram: a Markov chain over components C1, C2, C3, with outgoing transition probabilities summing to 1.]
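For concreteness, a sketch of a Cheung-style architecture-based computation over three components (all numbers illustrative); note that it requires exactly the transition probabilities and per-component reliabilities that, per the slide, you can only obtain by executing the application:

```python
import numpy as np

# Illustrative 3-component control-flow model (Cheung-style).
# P[i][j]: probability that control moves from component i to j;
# rows sum to 1, with the last column meaning "terminate correctly".
P = np.array([
    [0.0, 0.7, 0.3, 0.0],   # C1 -> C2 or C3
    [0.0, 0.0, 0.6, 0.4],   # C2 -> C3 or exit
    [0.0, 0.0, 0.0, 1.0],   # C3 -> exit
    [0.0, 0.0, 0.0, 0.0],   # exit (absorbing)
])
R = np.array([0.99, 0.98, 0.97, 1.0])  # per-component reliabilities

# A transition succeeds only if the current component executes correctly.
Q = (R[:, None] * P)[:3, :3]   # transient-to-transient part
b = (R[:, None] * P)[:3, 3]    # transient-to-successful-exit part

# Probability of reaching the exit from C1 without any failure:
N = np.linalg.inv(np.eye(3) - Q)   # fundamental matrix
system_reliability = (N @ b)[0]
print(f"System reliability starting at C1: {system_reliability:.4f}")
```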

11. Issues: Assumptions
- Software does not degrade over time; a memory leak is not degradation and is not a random process; a new version is a different piece of software.
- The reliability estimate varies with the operational profile, so different customers see different reliability. Can we not have a reliability estimate that is independent of the operational profile?
- Can we not advertise quality based on metrics that are a true representation of reliability: not with respect to a subset of features but over the entire set of features?

12. Estimating Uncertainty
Estimates of software reliability must be accompanied by the associated uncertainty. But how to quantify uncertainty?
- Entropy-based approach [Goseva-Popstojanova et al. 2002]
- Moments-based approach [Goseva-Popstojanova et al. 2003]
- Monte Carlo approach [Goseva-Popstojanova et al. 2003]
- Bayesian approach [Dai et al. 2007]

13. Estimating Uncertainty
Basic idea: model the parameters as random variables; use statistical (e.g., moments-based) or simulation approaches to estimate the variance.
Problem: the resulting uncertainty does not correlate with the likely faulty components in the program under test.
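A minimal sketch of the Monte Carlo variant of this idea, assuming Beta-distributed component reliabilities fit from hypothetical test counts and a series composition (both are simplifying assumptions, not prescribed by the slide):

```python
import random
import statistics

# Illustrative: each component's reliability is uncertain, modeled as a
# Beta random variable fit from (hypothetical) test outcomes.
components = {
    "parser":   (990, 10),   # (successes, failures) observed in testing
    "planner":  (480, 20),
    "executor": (950, 50),
}

def sample_system_reliability() -> float:
    # Draw one reliability per component, then compose in series
    # (series composition is itself a simplifying assumption).
    r = 1.0
    for successes, failures in components.values():
        r *= random.betavariate(successes + 1, failures + 1)
    return r

draws = [sample_system_reliability() for _ in range(20_000)]
mean = statistics.fmean(draws)
std = statistics.stdev(draws)
print(f"system reliability ~ {mean:.4f} +/- {std:.4f}")
```

The variance falls out easily, but, as the slide notes, nothing in this computation points at which component is likely faulty.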

14. Sensitivity of reliability to test adequacy
[Figure: reliability plotted against coverage (low to high), with regions labeled Desirable, Suspect model, Undesirable, and Risky.]
This is the problem with existing approaches to reliability estimation.

15. Basis for an alternate approach
Why not develop a theory based on coverage of testable items and test adequacy?
Testable items: variables, statements, conditions, loops, data flows, methods, classes, etc.
Pros: errors hide in testable items.
Cons: coverage of testable items is inadequate. Is it a good predictor of reliability? Yes, but only when used carefully. Let us see what happens when coverage is not used, or not used carefully.

16. Saturation Effect
Functional, decision, dataflow, and mutation testing provide test adequacy criteria.
[Figure: estimated reliability (R') versus testing effort. Each technique (functional, decision, dataflow, mutation) has its own saturation region, beyond which the estimate stops growing at R'f, R'd, R'df, and R'm respectively; all remain below the true reliability R, leaving uncertainties u1 through u4.]

17. An experiment [TeX]
Tests generated randomly exercise less code than those generated using a mix of black-box and white-box techniques. Application: TeX. Creator: Donald Knuth. [Leath '92]

18. An experiment [sort utility]
The UNIX sort utility. [DelFrate et al. 1995]

19. An experiment [coverage-reliability correlations]
Unix utilities and the space application. [Garg, MS thesis]

20. Modeling an application
[Diagram: an application modeled as a hierarchy of components and their interactions, layered above the OS.]

21. Reliability of a component
R(f) = α · (covered/total), 0 < α < 1.
The reliability (probability of correct operation) of function f, based on a given finite set of testable items.
Issue: How to compute α?
Approach: High correlation between coverage metrics and failures has been established via empirical studies. Such studies could provide estimates of α and its variance for different sets of testable items.
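A sketch of this per-component estimate; the coverage counts and the value of α are illustrative placeholders, since α would have to come from the empirical studies mentioned above:

```python
def component_reliability(covered: int, total: int, alpha: float) -> float:
    """R(f) = alpha * (covered / total), with 0 < alpha < 1.
    alpha would be calibrated from empirical coverage-vs-failure studies."""
    if not (0 < alpha < 1):
        raise ValueError("alpha must lie strictly between 0 and 1")
    return alpha * (covered / total)

# Illustrative numbers: 85 of 100 testable items (e.g., branches) covered,
# with alpha = 0.95 taken from a hypothetical empirical study.
print(component_reliability(covered=85, total=100, alpha=0.95))  # 0.8075
```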

22. Reliability of a subsystem
R(C) = g(R(f1), R(f2), ..., R(fn), R(I)), where C = {f1, f2, ..., fn} is a collection of components that collaborate with each other to provide services.
Issue 1: How to compute R(I), the reliability of component interactions?
Issue 2: What is g?
Issue 3: The theory of systems reliability creates problems when (a) components are in a loop and (b) components are dependent on each other.
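One simple candidate for g, assuming independent components composed in series with a single interaction-reliability term; this is exactly the kind of assumption that Issue 3 says breaks down for loops and dependent components:

```python
from math import prod

def subsystem_reliability(component_rs: list[float], r_interaction: float) -> float:
    """g as a series composition: the subsystem works only if every
    component and the interaction 'glue' all work, assumed independent."""
    return prod(component_rs) * r_interaction

# Illustrative values for three collaborating components plus their interactions.
print(subsystem_reliability([0.99, 0.98, 0.97], r_interaction=0.995))
```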

23. Scalability
Is the component-based approach scalable? Powerful coverage measures lead to better reliability estimates, whereas measurement of coverage becomes increasingly difficult as more powerful criteria are used.
Solution: use a component-based, incremental approach. Estimate reliability bottom-up. There is no need to measure coverage of components whose reliability is already known.

24. Next steps
- Develop a component-based theory of reliability.
- Experiment with large systems to investigate the applicability of the theory and its effectiveness in predicting and estimating various reliability metrics.
- Base the new theory on existing work in software testing and reliability.

25. The Future
Boxed and embedded software with independently variable levels of confidence.
[Mock-up: product boxes (Apple, Mackie) advertising per-level confidence figures, e.g., "Confidence: Level 0: 1.0, Level 2: 0.98" and "Confidence: 0.99, Level 0: 1.0".]

26. Select References
- F. Del Frate, P. Garg, A. P. Mathur, and A. Pasquini. On the Correlation Between Code Coverage and Software Reliability. Proceedings of the Sixth International Symposium on Software Reliability Engineering (ISSRE'95), IEEE Press, Toulouse, France, October 24-27, 1995.
- S. Krishnamurthy and A. P. Mathur. On the Estimation of Reliability of a Software System Using Reliabilities of its Components. Proceedings of the Eighth International Symposium on Software Reliability Engineering (ISSRE'97), Albuquerque, New Mexico, November 1997.
- M. H. Chen, A. P. Mathur, and V. J. Rego. A Case Study to Investigate Sensitivity of Reliability Estimates to Errors in the Operational Profile. Proceedings of the Fifth International Symposium on Software Reliability Engineering (ISSRE'94), IEEE Computer Society Press, Monterey, California, November 6-9, 1994.
- Katerina Goseva-Popstojanova and Sunil Kamavaram. Assessing Uncertainty in Reliability of Component-Based Software. Proceedings of the 14th International Symposium on Software Reliability Engineering (ISSRE'03), 2003.
- Yuan-Shun Dai, Min Xie, Quan Long, and Szu-Hui Ng. Uncertainty Analysis in Software Reliability Modeling by Bayesian Analysis with Maximum-Entropy Principle. IEEE Transactions on Software Engineering, vol. 33, no. 11, 2007.
- P. Garg. On Code Coverage and Software Reliability. MS thesis, Department of Computer Science, Purdue University, May 1995.