Presentation on theme: "Advanced Software Engineering: Software Testing COMP 3702" - Presentation transcript:

1 Advanced Software Engineering: Software Testing COMP 3702
Instructor: Anneliese Andrews
New this term: a new book; revised labs; more focus on the labs (study questions to be answered); a grade on the project (which affects the exam grade)

2 News & Project
News:
- Updated course program
- Reading instructions: the book, deadline 23/3
Project:
- IMPORTANT: read the project description thoroughly
- Schedule, deadlines, activities
- Requirements (7-10 papers), project areas
- Report, template, presentation

3 Lecture
- Chapter 4 (Lab 1): Black-box testing techniques
- Chapter 12 (Lab 2): Statistical testing, usage modelling, reliability

4 Why test techniques?
Exhaustive testing (use of all possible inputs and conditions) is impractical:
- We must use a subset of all possible test cases, one with a high probability of detecting faults
- We need processes that help us select test cases
- Different people should have an equal probability of detecting faults
Effective testing - detect more faults:
- Focus attention on specific types of fault
- Know you're testing the right thing
Efficient testing - detect faults with less effort:
- Avoid duplication
- Systematic techniques are measurable

5 Dimensions of testing
Testing combines techniques that focus on five dimensions:
- Testers: who does the testing
- Coverage: what gets tested
- Potential problems: why you're testing (risks / quality)
- Activities: how you test
- Evaluation: how to tell whether the test passed or failed
All testing should involve all five dimensions. Testing standards (e.g. IEEE) address them.

6 Black-box testing

7 Equivalence partitioning
The input data to a program usually fall into a number of different classes with common characteristics, e.g.:
- Positive numbers / negative numbers
- Strings with blanks / strings without blanks
- Valid inputs / invalid inputs
Programs normally behave in a comparable way for all members of a class. Because of this equivalent behaviour, these classes are often called equivalence partitions (or domains). Output equivalence partitions also exist, i.e. classes of related output data.
Partitioning is based on input conditions: user queries, numerical data, responses to prompts, output format requests, command key input, mouse picks on a menu.
A systematic defect-testing technique is to identify all equivalence partitions that must be handled by a program. Test cases should then be designed so that inputs or outputs fall within these partitions. This approach lets us select a reasonable number of test cases that we can expect to identify a high proportion of system defects.

8 Equivalence partitioning
Input condition:
- is a range: one valid and two invalid classes are defined
- requires a specific value: one valid and two invalid classes are defined
- is a boolean: one valid and one invalid class are defined
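As a minimal sketch (assuming integer-valued inputs; the function names and representative values below are invented for illustration, not part of the lecture material), these rules can be turned into concrete test values in Python:

    def classes_for_range(low, high):
        """Range condition: one valid and two invalid classes."""
        return {
            "below range (invalid)": low - 1,
            "in range (valid)": (low + high) // 2,
            "above range (invalid)": high + 1,
        }

    def classes_for_specific_value(v):
        """Specific-value condition: one valid and two invalid classes."""
        return {"the value (valid)": v, "below (invalid)": v - 1, "above (invalid)": v + 1}

    def classes_for_boolean():
        """Boolean condition: one valid and one invalid class."""
        return {"boolean (valid)": True, "non-boolean (invalid)": "yes"}

    print(classes_for_range(4, 10))
    # {'below range (invalid)': 3, 'in range (valid)': 7, 'above range (invalid)': 11}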

9 Test cases
Which test cases have the best chance of uncovering faults?
- Values as near to the mid-point of the partition as possible
- Values at the boundaries of the partition
The mid-point of a partition typically represents the "typical" values; boundary values represent the atypical or unusual ones. Equivalence partitions are usually identified from the specification and from experience. Developers tend to concentrate on getting the system to work for the typical values rather than the boundary values, so testing at the boundaries frequently reveals errors.

10 Equivalence partitioning example
Consider a system specification which states that a program will accept between 4 and 10 input values (inclusive), where each input value must be a 5-digit integer greater than or equal to 10000.
What are the equivalence partitions?

11 Example equivalence partitions
Partition the system inputs into equivalence sets:
- If the number of input values must be between 4 and 10 (inclusive), the partitions are: fewer than 4 values, 4 <= (number of values) <= 10, and more than 10 values. Choose test cases at the boundaries and mid-point: 3, 4, 7, 10, 11.
- If each input must be a 5-digit integer between 10,000 and 99,999, the partitions are: < 10,000, 10,000-99,999, and > 99,999. Choose test cases at the boundaries and mid-point: 09999, 10000, 50000, 99999, 100000.
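A hedged sketch of the same derivation in Python. Boundary and mid-point selection follows the slide, except that the computed mid-point of 10,000..99,999 is 54,999 where the slide rounds to 50,000:

    def boundary_and_midpoint(low, high):
        """Values just outside, on, and midway between the partition boundaries."""
        return [low - 1, low, (low + high) // 2, high, high + 1]

    print(boundary_and_midpoint(4, 10))        # [3, 4, 7, 10, 11]
    print(boundary_and_midpoint(10000, 99999)) # [9999, 10000, 54999, 99999, 100000]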

12 Boundary value analysis
For reasons not yet fully understood, errors tend to cluster near the boundaries between valid and invalid data. Boundary value analysis (BVA) complements equivalence partitioning: rather than selecting an arbitrary element of an equivalence class, we select test cases at the "edges" of the class, and rather than focusing solely on input conditions, we derive test cases from the output domain as well (user queries, numerical data, output format requests, responses to prompts, command key input, mouse picks on a menu).
BVA is a systematic way to choose test cases and data values: for input or output data with common properties, choose one test case on each side of each boundary. A greater number of errors tends to occur at the boundaries of the input domain than in the "centre".

13 Boundary value analysis
- Range a..b: choose a, b, just above a, and just below b
- Number of values: min, max, just below min, and just above max
- Output bounds should be checked
- Boundaries of externally visible data structures (e.g. arrays) should be checked
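A minimal sketch of these rules, assuming integer data so that "just above"/"just below" means a step of 1 (the step size would differ for other data types):

    def bva_range(a, b, step=1):
        """Range a..b: a, just above a, just below b, b."""
        return [a, a + step, b - step, b]

    def bva_count(min_n, max_n):
        """Number of values: just below min, min, max, just above max."""
        return [min_n - 1, min_n, max_n, max_n + 1]

    print(bva_range(10000, 99999))  # [10000, 10001, 99998, 99999]
    print(bva_count(4, 10))         # [3, 4, 10, 11]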

14 Some other black-box techniques
- Risk-based testing, random testing
- Stress testing, performance testing
- Cause-and-effect graphing
- State-transition testing

15 Error guessing
Exploratory testing, "happy testing", ...
- Always worth including
- Can detect some failures that systematic techniques miss
Consider:
- Past failures (fault models)
- Intuition and experience
- Brainstorming: "What is the craziest thing we can do?"
- Lists in the literature

16 Usability testing
Characteristics:
- Accessibility
- Responsiveness
- Efficiency
- Comprehensibility
Environments:
- Free-form tasks
- Procedure scripts
- Paper screens
- Mock-ups
- Field trials

17 Specification-based testing
A formal method: test cases are derived from a (formal) specification of the requirements or design.
Specification -> model (state chart) -> test case generation -> test execution

18 Model-based testing
[Figure: V-model - requirements specification, top-level design, detailed design and coding on the development side; unit test, integration and validation on the test side, with usage models feeding the test phase]
The validation phase is important for product development: the validation cost can exceed the development cost on a project, and a project delivered with defects generates additional costs.
The MaTeLo technique is based on:
- Statistical usage testing
- Markov model theory

19 Statistical testing / usage-based testing
Usage specification -> test case generation -> test execution -> failure logging -> reliability estimation
- Sampling from future intended usage
- Test cases representative of intended use
- Enables reliability prediction

20 Usage specification models
Algorithmic models:
- Grammar model
- State hierarchy model
Example grammar with usage probabilities:
<test_case> ::= <command> <select>;
<no_commands> ::= ( <unif_int>(0,2) [prob(0.9)] | <unif_int>(3,5) [prob(0.1)] );
<command> ::= ( <up> [prob(0.5)] | <down> [prob(0.5)] );
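A sketch of how such a grammar can drive test case generation, in Python. The interpretation below, that a test case is a number of up/down commands followed by a select, is an assumption about the grammar's intent:

    import random

    def generate_test_case(rng=random):
        # <no_commands>: 0-2 with probability 0.9, 3-5 with probability 0.1
        if rng.random() < 0.9:
            n = rng.randint(0, 2)
        else:
            n = rng.randint(3, 5)
        # <command>: 'up' or 'down', each with probability 0.5
        commands = [rng.choice(["up", "down"]) for _ in range(n)]
        return commands + ["select"]

    print(generate_test_case())  # e.g. ['down', 'up', 'select']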

21 Usage specification models
Domain-based models:
- Operational profile
- Markov model
An operational profile is a set of test data whose frequency matches the actual frequency of these inputs in "normal" usage of the system. A close match with actual usage is necessary, otherwise the measured reliability will not reflect the actual usage of the system. It can be generated from real data collected from an existing system or (more often) depends on assumptions made about the pattern of usage of a system.
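A minimal sketch of sampling test inputs from an operational profile. The operations and their frequencies below are invented; in practice they come from field data or from the usage assumptions discussed above:

    import random

    profile = {"query": 0.60, "update": 0.25, "report": 0.10, "admin": 0.05}

    def sample_operations(n, rng=random):
        """Draw n operations with frequencies matching the profile."""
        ops, weights = zip(*profile.items())
        return rng.choices(ops, weights=weights, k=n)

    print(sample_operations(5))  # e.g. ['query', 'update', 'query', 'query', 'report']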

22 Operational profiles

23 Operational profiles

24 Statistical testing / usage-based testing
[Figure: a random sample of test cases is drawn from the usage model and run against the code]

25 Usage modelling
[Figure: usage model of a dialog - states such as Invoke, Main Window, Dialog Box and Terminate, with transitions like right-click, move main window, resize, click OK with a non-valid hour, CANCEL or OK with a valid hour, close window]
- Each transition corresponds to an external event
- Probabilities are set according to the future use of the system; this enables reliability prediction
- When the real values are unknown, default values can be set: an equal distribution over the model, or minor modifications of an existing distribution
- Metrics validate the correctness of the probabilities: stationary distribution, average test case length, visiting probability
- The time to write the model is between 0.5 and 5 days per KLOC, depending on the complexity of the specification
- The time to test the system depends on the quality of the system before testing and on the kind of application
Test cases are random walks through paths of the model, beginning in the "not invoked" state and ending with the first occurrence of the "terminated" state. Each time a state or arc is traversed, the corresponding label or data dictionary entry is logged in the test case.
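A hedged sketch of such a random walk in Python. The states and probabilities are invented stand-ins, not the real clock model:

    import random

    usage_model = {
        "not invoked": [("main window", 1.0)],
        "main window": [("dialog box", 0.3), ("main window", 0.5), ("terminated", 0.2)],
        "dialog box":  [("main window", 0.9), ("dialog box", 0.1)],
    }

    def random_walk(rng=random):
        """Walk from 'not invoked' to the first 'terminated', logging each state."""
        state, trace = "not invoked", ["not invoked"]
        while state != "terminated":
            targets, weights = zip(*usage_model[state])
            state = rng.choices(targets, weights=weights)[0]
            trace.append(state)
        return trace

    print(random_walk())
    # e.g. ['not invoked', 'main window', 'dialog box', 'main window', 'terminated']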

26 Markov model
[Figure: four nodes N1..N4 with transition probabilities P12, P13, P14, P21, P24, P31, P34, P41]
- System states are seen as nodes; arcs carry the probabilities of transitions
- Conditions for a Markov model: the probabilities are constants, and there is no memory of past states
Transition matrix (Pij is the probability of moving from node Ni to node Nj; missing arcs have probability 0):
From \ To   N1    N2    N3    N4
N1          P11   P12   P13   P14
N2          P21   P22   0     P24
N3          P31   0     P33   P34
N4          P41   0     0     P44
Reliability is the probability that a program gives correct output with a typical set of input data from the user environment; it therefore depends on the user profile. Non-executed code has no influence on the output, so little-used modules may matter less for the reliability of the system.
Measuring reliability makes it possible to see for which modules an improvement would affect the reliability of the system most, and hence to use a more effective testing strategy. Critical modules that have been shown to be reliable should not be changed unnecessarily. Not all bugs are equally costly.

27 Model of a program
The program is seen as a graph:
- One entry node (invoke) and one exit node (terminate)
- Every transition from node Ni to node Nj has a probability Pij
- If there is no connection between Ni and Nj, then Pij = 0
[Figure: graph with input F, nodes N1..N4, arcs P12, P13, P14, P21, P24, P31, P34, and output]

28 Clock software
An example to illustrate the decomposition process and model construction.

29 Input domain - subpopulations
- Human users: keystrokes, mouse clicks
- System clock: time/date input
- Combination usage: time/date changes from the OS while the clock is executing
Create one Markov chain to model the input from the user.

30 Operational modes of the clock
- Window = {main window, change window, info window}
- Setting = {analog, digital}
- Display = {all, clock only}
- Cursor = {time, date, none}
An operational mode is a formal characterization (specifically a set) of the status of one or more internal data objects that affect system behavior. Two modes a and b are distinguished by:
- the Property of User Choice, if mode a should produce different input choices than mode b
- the Property of External Behavior, if the system should produce a different output in mode a than in mode b given the same user input
- the Property of Input Likelihood, if an input i has a different probability of being applied in mode a than in mode b
For the clock:
- Window: which window has the focus (User Choice)
- Setting: format of the clock (External Behavior and User Choice)
- Display: whether the full window or only the clock face is shown (External Behavior and User Choice)
- Cursor: which input field the cursor presently occupies (User Choice)

31 State of the system
A state of the system under test is an element of the set S, where S is the cross product of the operational modes, with the impossible combinations removed. States of the clock:
- {main window, analog, all, none}
- {main window, analog, clock-only, none}
- {main window, digital, all, none}
- {main window, digital, clock-only, none}
- {change window, analog, all, time}
- {change window, analog, all, date}
- {change window, digital, all, time}
- {change window, digital, all, date}
- {info window, analog, all, none}
- {info window, digital, all, none}
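A sketch of deriving this list mechanically, as the cross product of the operational modes with the impossible combinations filtered out. The constraints encoded in possible() are inferred from the list above and are an assumption:

    from itertools import product

    window  = ["main window", "change window", "info window"]
    setting = ["analog", "digital"]
    display = ["all", "clock-only"]
    cursor  = ["time", "date", "none"]

    def possible(state):
        w, _s, d, c = state
        if w == "change window":
            # the cursor occupies a field only here; full display required
            return d == "all" and c in ("time", "date")
        if c != "none":
            return False  # no cursor field in the main or info window
        if w == "info window":
            return d == "all"
        return True  # main window: any setting, any display

    states = [s for s in product(window, setting, display, cursor) if possible(s)]
    print(len(states))  # 10, matching the list above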

32 Top-level Markov chain
The Window operational mode is chosen as the primary modeling mode. Although this model describes the clock at a very high level of abstraction, it encompasses the behavior from invocation of the software to termination, and it can be used to generate a sample simply by assigning a probability distribution to the set of outgoing arcs at each state.
Rules for Markov chains:
- Each arc is assigned a probability between 0 and 1 inclusive
- The sum of the exit arc probabilities from each state is exactly 1
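These two rules are easy to check mechanically. A minimal sketch, using the same chain representation as the earlier random-walk example and skipping absorbing states with no exit arcs:

    import math

    def check_markov_rules(model):
        for state, arcs in model.items():
            if not arcs:  # absorbing state, e.g. 'terminated'
                continue
            assert all(0.0 <= p <= 1.0 for _, p in arcs), state
            assert math.isclose(sum(p for _, p in arcs), 1.0), state

    check_markov_rules({
        "invoked":       [("main window", 1.0)],
        "main window":   [("change window", 0.5), ("info window", 0.3), ("terminated", 0.2)],
        "change window": [("main window", 1.0)],
        "info window":   [("main window", 1.0)],
        "terminated":    [],
    })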

33 Top-level model - data dictionary
Arc label / input to be applied / comments for the tester:
- invoke: Invoke the clock software. The main window is displayed in full; the tester should verify window appearance and setting, and that it accepts no illegal input.
- options.change: Select the "Change Time/Date..." item from the "Options" menu (all window features must be displayed to execute this command). The change window should appear and be given the focus; the tester should verify window appearance and modality, and ensure that it accepts no illegal input.
- options.info: Select the "Info..." item from the "Options" menu (the title bar must be on to apply this input). The info window should appear and be given the focus.
- options.exit: Select the "Exit" option from the "Options" menu. The software terminates; end of test case.
- end: Choose any action and return to the main window. The change window will disappear and the main window will be given the focus.
- ok: Press the OK button on the info window. The info window will disappear and the main window will be given the focus.

34 Level 2 Markov chain
[Figure: submodel for the main window]

35 Data dictionary - level 2
- invoke: Invoke the clock software. The main window is displayed in full. Invocation may require that the software be calibrated by issuing either an options.analog or an options.digital input. The tester should verify window appearance and setting, and ensure that it accepts no illegal input.
- options.change: Select the "Change Time/Date..." item from the "Options" menu (all window features must be displayed to execute this command). The change window should appear and be given the focus; the tester should verify window appearance and modality, and ensure that it accepts no illegal input.
- options.info: Select the "Info..." item from the "Options" menu (the title bar must be on to apply this input). The info window should appear and be given the focus.
- options.exit: Select the "Exit" option from the "Options" menu. The software terminates; end of test case.
- end: Choose any action (cancel or change the time/date) and return to the main window. The change window will disappear and the main window will be given the focus. Note: this action may require that the software be calibrated by issuing either an options.analog or an options.digital input.

36 Data dictionary - level 2 (continued)
- ok: Press the OK button on the info window. The info window will disappear and the main window will be given the focus. Note: this action may require that the software be calibrated by issuing either an options.analog or an options.digital input.
- options.analog: Select the "Analog" item from the "Options" menu. The digital display should be replaced by an analog display.
- options.digital: Select the "Digital" item from the "Options" menu. The analog display should be replaced by a digital display.
- options.clock-only: Select the "Clock Only" item from the "Options" menu. The clock window should be replaced by a display containing only the face of the clock, without a title, menu or border.
- options.seconds: Select the "Seconds" item from the "Options" menu. The second hand/counter should be toggled on or off depending on its current status.
- options.date: Select the "Date" item from the "Options" menu. The date should be toggled on or off depending on its current status.
- double-click: Double-click, using the left mouse button, on the face of the clock. The clock face should be replaced by the entire clock window.

37 Level 2 Markov chain
[Figure: submodel for the change window]

38 Data dictionary
- options.change: Select the "Change Time/Date..." item from the "Options" menu (all window features must be displayed to execute this command). The change window should appear and be given the focus; the tester should verify window appearance and modality, and ensure that it accepts no illegal input.
- end: Choose either the "Ok" button or hit the cancel icon and return to the main window. The change window will disappear and the main window will be given the focus. Note: this action may require that the software be calibrated by issuing either an options.analog or an options.digital input.
- move: Hit the tab key to move the cursor to the other input field, or use the mouse to select the other field. The tester should verify cursor movement, and verify both options for moving the cursor.
- edit time: Change the time in the "new time" field, or enter an invalid time. The valid input format is shown on the screen.
- edit date: Change the date in the "new date" field, or enter an invalid date.

39 Software reliability techniques
- Markov models
- Reliability growth models
Reliability growth models are the aspect of software reliability engineering (SRE) that has received the most attention; many models have been proposed since the 1970s. The basic idea: a software reliability model describes failures as a random process, characterized either by the times of failures or by the number of failures at fixed times. A reliability growth model is a mathematical model of how the system reliability changes as the system is tested and faults are removed; it is used as a means of reliability prediction by extrapolating from current data.
Assumptions: the process is random (both fault introduction and the run selection process are random), and the failure times are independent of each other. Models exist for testing both with and without repair.
Limitations: 1) the models' assumptions may not hold; 2) for future prediction one must note the environment and use recent data.

40 Dimensions of dependability

41 Costs of increasing dependability
[Figure: cost rises steeply with dependability level - low, medium, high, very high, ultra-high]

42 Availability and reliability
Reliability: the probability of failure-free system operation over a specified time, in a given environment, for a given purpose.
Availability: the probability that a system, at a point in time, will be operational and able to deliver the requested services.
Both of these attributes can be expressed quantitatively.

43 Reliability terminology

44 Usage profiles / reliability
Removing X% of the faults in a system will not necessarily improve the reliability by X%. A study at IBM showed that removing 60% of product defects resulted in only a 3% improvement in reliability.
Program defects may be in rarely executed sections of the code and so may never be encountered by users; removing these does not affect the perceived reliability. A program with known faults may therefore still be seen as reliable by its users.

45 Reliability achievement
- Fault avoidance: development techniques are used that either minimise the possibility of mistakes or trap mistakes before they result in the introduction of system faults
- Fault detection and removal: verification and validation techniques are used that increase the probability of detecting and correcting faults before the system goes into service
- Fault tolerance: run-time techniques are used to ensure that system faults do not result in system errors and/or that system errors do not lead to system failures

46 Reliability quantities
Time can be measured in several ways:
- Execution time: the CPU time actually spent by the computer executing the software
- Calendar time: the time people normally experience, in years, months, weeks, etc.
- Clock time: the elapsed time from start to end of computer execution of the software
Measures of failure occurrences:
- Time of failure
- Time interval between failures
- Cumulative failures experienced up to a given time
- Failures experienced in a time interval
Failure functions:
- The cumulative failure function (mean value function) denotes the average cumulative failures associated with each point in time
- The failure intensity function is the rate of change of the cumulative failure function
- The failure rate function (rate of occurrence of failures) is the probability per unit time that a failure occurs in the interval [t, t + Δt], given that a failure has not occurred before t

47 Reliability metrics
- MTTR (mean time to repair): the expected time from a failure until the system can be executed again
- MTTF (mean time to failure): the expected time from when the system resumes execution after a repair until a new failure occurs
- MTBF (mean time between failures): the expected time between two consecutive failures
MTTF and MTBF depend on the number of software faults remaining in the system.
MTBF = MTTF + MTTR
Availability = MTTF / MTBF
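A small worked example of these relations, with invented numbers:

    mttf = 480.0  # mean time to failure, hours (assumed)
    mttr = 2.0    # mean time to repair, hours (assumed)

    mtbf = mttf + mttr            # MTBF = MTTF + MTTR
    availability = mttf / mtbf    # Availability = MTTF / MTBF
    print(mtbf, round(availability, 4))  # 482.0 0.9959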

48 Nonhomogeneous Poisson process (NHPP) models
Let N(t) be a random process representing the number of failures experienced by time t. The counting process {N(t), t >= 0} is modeled by an NHPP: N(t) follows a Poisson distribution, and the probability that N(t) equals a given integer n is
P{N(t) = n} = [μ(t)]^n e^(-μ(t)) / n!
where μ(t) = E[N(t)] is the mean value function; it describes the expected cumulative number of failures in [0, t).
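A minimal sketch of evaluating this distribution for a given mean value function. The μ used here anticipates the Goel-Okumoto form on the next slide, with illustrative parameter values:

    import math

    def nhpp_prob(n, t, mu):
        """P{N(t) = n} = mu(t)^n * exp(-mu(t)) / n!"""
        m = mu(t)
        return m ** n * math.exp(-m) / math.factorial(n)

    mu_go = lambda t, a=100.0, b=0.02: a * (1.0 - math.exp(-b * t))
    print(nhpp_prob(5, 10.0, mu_go))  # probability of exactly 5 failures by t = 10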

49 The Goel-Okumoto (GO) model
Assumptions:
- The cumulative number of failures detected at time t follows a Poisson distribution
- All failures are independent and have the same chance of being detected
- All detected faults are removed immediately and no new faults are introduced
The failure process is modelled by an NHPP with mean value function
μ(t) = a(1 - e^(-bt))
where a and b are parameters determined from collected failure data. Note that μ(∞) = a and μ(0) = 0: a is the total number of faults that can eventually be detected by the test process, and b is a constant of proportionality that can be interpreted as the failure occurrence rate per fault. The intensity function λ(t) is the derivative of μ(t).
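A hedged sketch of the two GO functions in Python. The parameter values are illustrative; in practice a and b are fitted to collected failure data:

    import math

    def mu(t, a, b):
        """GO mean value function: expected cumulative failures by time t."""
        return a * (1.0 - math.exp(-b * t))

    def intensity(t, a, b):
        """GO intensity function, the derivative of mu: a*b*e^(-b*t)."""
        return a * b * math.exp(-b * t)

    a, b = 100.0, 0.02              # assumed: 100 eventual faults, b = 0.02 per fault
    print(mu(50.0, a, b))           # ~63.2 expected failures by t = 50
    print(intensity(50.0, a, b))    # ~0.736 failures per time unit at t = 50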

50 Goel-Okumoto
[Figure: the mean value function μ(t) rises towards the asymptote a over t, while the intensity function λ(t), the derivative of μ(t), decays exponentially towards 0]

51 S-shaped NHPP model
μ(t) = a[1 - (1 + bt)e^(-bt)], b > 0
[Figure: μ(t) follows an S-shaped curve over t]
The rationale for the S shape:
- At the beginning of testing, some faults are "covered" by other faults; removing a detected fault early does not decrease the failure intensity very much
- Software reliability testing usually involves a learning process: skills and effectiveness improve gradually

52 The Jelinski-Moranda (JM) model
Assumptions:
- Times between failures are independent, exponentially distributed random quantities
- The number of initial faults is an unknown but fixed constant
- A detected fault is removed immediately and no new fault is introduced
- All remaining faults contribute the same amount to the software failure intensity
If Xi denotes the time between the (i-1):th and the i:th failure, then the probability density function of Xi is
f_Xi(x) = λ_i e^(-λ_i x)
where λ_i is the failure intensity after the (i-1):th failure has occurred (and before the i:th failure has occurred). In the Jelinski-Moranda model, λ_i is assumed to be a function of the remaining number of faults:
λ_i = φ [N - (i - 1)]
where N is the initial number of faults in the program and φ is a constant.
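A minimal simulation sketch of the JM process under these assumptions, with invented values for N and φ (phi in the code):

    import random

    def jm_interfailure_times(N, phi, rng=random):
        """Simulate the N inter-failure times X_1..X_N of a JM process."""
        times = []
        for i in range(1, N + 1):
            lam = phi * (N - (i - 1))   # lambda_i: intensity after i-1 removals
            times.append(rng.expovariate(lam))
        return times

    print(jm_interfailure_times(10, 0.05))  # times tend to grow as faults are removed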

53 Next weeks
This week:
- Read the project description thoroughly and decide on a subject
- Optional exercise tomorrow (project)
Next week (April 7, April 12):
- Lab 1: Black-box testing

