Aditya P. Mathur, Professor, Department of Computer Science, and Associate Dean, Graduate Education and International Programs, Purdue University. Wednesday July 26, WA, USA. Why the existing theory of software reliability must be discarded, and what should replace it?

Reliability: probability of failure-free operation in a given environment over a given time. Related metrics:
Mean Time To Failure (MTTF)
Mean Time To Disruption (MTTD)
Mean Time To Restore (MTTR)
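The metrics above are related in a standard way; as a minimal sketch (not from the slides, and with purely illustrative numbers), steady-state availability is MTTF / (MTTF + MTTR), and under the common constant-failure-rate assumption the probability of failure-free operation over time t is exp(-t/MTTF):

```python
import math

def availability(mttf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability: fraction of time the system is up."""
    return mttf_hours / (mttf_hours + mttr_hours)

def reliability_exponential(t_hours: float, mttf_hours: float) -> float:
    """Probability of failure-free operation for t_hours, assuming
    exponentially distributed times to failure (constant failure rate)."""
    return math.exp(-t_hours / mttf_hours)

print(availability(1000.0, 2.0))              # uptime fraction
print(reliability_exponential(24.0, 1000.0))  # chance of surviving 24 h
```

The exponential model is only one choice; the slides' later argument is precisely that such time-based models fit real failure data poorly.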

Operational profile: the probability distribution of usage over features and/or scenarios. It captures the usage pattern of a class of customers.
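Concretely, an operational profile can be represented as a probability distribution over features, from which test inputs are drawn. A hypothetical sketch (the feature names and probabilities are illustrative, not from the slides):

```python
import random

# Hypothetical operational profile: P(feature is invoked) for one customer class.
profile = {"browse": 0.60, "search": 0.25, "checkout": 0.10, "admin": 0.05}
assert abs(sum(profile.values()) - 1.0) < 1e-9  # probabilities must sum to 1

def sample_usage(profile, n, rng=None):
    """Draw n feature invocations according to the operational profile."""
    rng = rng or random.Random(42)
    features = list(profile)
    weights = [profile[f] for f in features]
    return rng.choices(features, weights=weights, k=n)

print(sample_usage(profile, 5))
```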

Reliability estimation: operational profile → random or semi-random test generation → test execution → failure data collection → reliability estimation → decision process.
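The pipeline above can be sketched end to end. This is an illustrative stand-in, not the slides' method: `run_test` fakes test execution plus an oracle with assumed per-feature failure probabilities, and reliability is estimated as the observed failure-free fraction (a Nelson-style estimate):

```python
import random

def run_test(feature, rng):
    # Stand-in for test execution + oracle; failure odds are assumptions.
    failure_prob = {"browse": 0.001, "search": 0.01, "checkout": 0.05}
    return rng.random() >= failure_prob[feature]  # True = test passed

def estimate_reliability(profile, n_tests, seed=1):
    """Generate tests per the operational profile, execute them, and
    estimate reliability as the fraction of failure-free runs."""
    rng = random.Random(seed)
    features = list(profile)
    weights = [profile[f] for f in features]
    tests = rng.choices(features, weights=weights, k=n_tests)
    passes = sum(run_test(f, rng) for f in tests)
    return passes / n_tests

profile = {"browse": 0.7, "search": 0.2, "checkout": 0.1}
print(estimate_reliability(profile, 10_000))
```

Note how the estimate is tied to the chosen profile: change the weights and the number changes, which is exactly the dependence the following slides criticize.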

Issues: Operational profile. The profile is variable and becomes known only after customers have access to the product; it is a stochastic process…a moving target! Random test generation requires an oracle, and hence is generally limited to specific outcomes, e.g. crash or hang.

Issues: Failure data. Should we analyze the failures? If yes, then once the cause is removed the reliability estimate is invalid. If the cause is not removed because the failure is a "minor incident", then the reliability estimate is based on irrelevant incidents.

Issues: Model selection. Rarely does a model fit the failure data, so model selection becomes a problem. 200 models to choose from? New ones keep arriving! More research papers!

Issues: Markovian models. Markov chain models suffer from a lack of estimates of the transition probabilities. To compute these probabilities you need to execute the application, and during execution you obtain failure data. Then why proceed further with the model? [Diagram: Markov chain over components C1, C2, C3, with outgoing transition probabilities summing to 1.]
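For concreteness, here is a sketch of the kind of Markov-chain model being criticized, in the spirit of Cheung's user-oriented model: system reliability computed from component reliabilities R_i and inter-component transition probabilities P[i][j]. All numbers are illustrative, and obtaining P in practice already requires executing the system, which is the objection raised above:

```python
def system_reliability(R, P, start=0, end=2, steps=50):
    """Probability of reaching component `end` from `start` without failure,
    where component i works with probability R[i] and control transfers
    i -> j with probability P[i][j]. Computed by fixed-point iteration."""
    n = len(R)
    prob = [0.0] * n
    for _ in range(steps):
        new = []
        for i in range(n):
            if i == end:
                new.append(R[i])  # success once the final component works
            else:
                new.append(R[i] * sum(P[i][j] * prob[j] for j in range(n)))
        prob = new
    return prob[start]

R = [0.99, 0.95, 0.98]      # component reliabilities C1..C3 (illustrative)
P = [[0.0, 1.0, 0.0],       # rows for non-terminal components sum to 1
     [0.3, 0.0, 0.7],
     [0.0, 0.0, 0.0]]       # C3 is terminal
print(system_reliability(R, P))
```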

Issues: Assumptions. Software does not degrade over time: a memory leak is not degradation and is not a random process, and a new version is a different piece of software. The reliability estimate varies with the operational profile, so different customers see different reliability. Can we not have a reliability estimate that is independent of the operational profile? Can we not advertise quality based on metrics that are a true representation of reliability, not with respect to a subset of features but over the entire set of features?

Sensitivity of reliability to test adequacy. [Figure: reliability plotted against coverage (low to high); regions labeled Desirable, Suspect model, Undesirable, and Risky.] This is the problem with existing approaches to reliability estimation.

Basis for an alternate approach. Why not develop a theory based on coverage of testable items and test adequacy? Testable items: variables, statements, conditions, loops, data flows, methods, classes, etc. Pros: errors hide in testable items. Cons: coverage of testable items alone is inadequate. Is it a good predictor of reliability? Yes, but only when used carefully. Let us see what happens when coverage is not used, or not used carefully.

Saturation Effect. Functional, decision, dataflow, and mutation testing provide test adequacy criteria. [Figure: estimated reliability R′ versus testing effort. Each criterion saturates in turn, the estimate climbing to R′f (functional), R′d (decision), R′df (dataflow), and R′m (mutation), each with its own saturation region between effort ts and te, while remaining below the true reliability R.]

Modeling an application. [Diagram: the application modeled as layers of components and component interactions on top of the OS.]

Reliability of a component. R(f) = α · (covered/total), 0 < α < 1: the reliability (probability of correct operation) of function f, based on a given finite set of testable items. Issue: how to compute α? Approach: empirical studies provide estimates of α and its variance for different sets of testable items.
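The formula is straightforward to compute once α is calibrated. A hedged sketch, where the α value is purely illustrative (the slides say only that it must come from empirical studies):

```python
def component_reliability(covered: int, total: int, alpha: float = 0.98) -> float:
    """R(f) = alpha * (covered / total): reliability estimate for one
    component from coverage of its testable items (statements,
    conditions, data flows, ...). alpha = 0.98 is an assumed value."""
    if not (0.0 < alpha < 1.0):
        raise ValueError("alpha must lie in (0, 1)")
    if total <= 0 or not (0 <= covered <= total):
        raise ValueError("need 0 <= covered <= total and total > 0")
    return alpha * covered / total

print(component_reliability(180, 200))  # 0.98 * 0.9
```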

Reliability of a subsystem. R(C) = g(R(f1), R(f2), …, R(fn), R(I)), where C = {f1, f2, …, fn} is a collection of components that collaborate to provide services. Issue 1: how to compute R(I), the reliability of component interactions? Issue 2: what is g? Issue 3: classical systems-reliability theory creates problems when (a) components are in a loop and (b) components are dependent on each other.
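The slides deliberately leave g open. One simple candidate, shown only as an illustration, is series composition under an independence assumption: the subsystem works when every component and the interactions among them work, so g multiplies the reliabilities. The numbers below are invented:

```python
from math import prod

def subsystem_reliability(component_rs, interaction_r):
    """One candidate g: series composition, R(C) = R(I) * prod(R(f_i)).
    Assumes independent components, which Issue 3 notes may not hold."""
    return interaction_r * prod(component_rs)

print(subsystem_reliability([0.99, 0.98, 0.97], 0.995))
```

Loops and dependent components (Issue 3) break exactly this independence assumption, which is why g is posed as an open question rather than fixed.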

Scalability. Is the component-based approach scalable? More powerful coverage criteria lead to better reliability estimates, but measuring coverage becomes increasingly difficult as more powerful criteria are used. Solution: use a component-based, incremental approach and estimate reliability bottom-up; there is no need to measure coverage of components whose reliability is already known.

Next steps. Develop a component-based theory of reliability. Experiment with large systems to investigate the applicability of the theory and its effectiveness in predicting and estimating various reliability metrics. Base the new theory on existing work in software testing and reliability.