1 Chapter 11, Testing: Model-based Testing and U2TP
Note to Instructor: Some of the material in this slide set is not contained in the 3rd edition of the textbook; it is planned for the 4th edition.

2 Outline of the Lectures on Testing
- Terminology
- Test Model
- Model-based Testing
- Model-driven Testing
- U2TP

3 Model Driven Architecture (MDA)
Recall: MDA focuses on forward engineering, i.e. producing code from a system model.
Note: The term "architecture" in MDA does not mean the architecture of the system being modeled, but refers to the architecture of the various standards and models that serve as the technology basis for MDA:
- UML: Unified Modeling Language
- MOF: Meta-Object Facility
- XMI: XML Metadata Interchange
- EDOC: Enterprise Distributed Object Computing
- SPEM: Software Process Engineering Metamodel
- CWM: Common Warehouse Metamodel
An important objective of MDA is executable UML.

4 Executable UML
Executable UML means that, given a UML system model, the following steps can be performed automatically:
- Code generation
- Simulation
- Validation
- Test generation ("model-based testing")

5 Generating Tests from a System Model
There are many different ways to "derive" tests from a system model:
- Manual generation
- Automatic generation
Because testing is usually experimental and based on heuristics, there is no single best way to do this. It is common to consolidate all test-related decisions into a package that is often known as "test requirements" or "test package". This test package can contain, for example, information about the part of the system model that should be the focus of testing, or about the conditions under which it is correct to stop testing (test stopping criteria).

6 Test Model
The test model consolidates all test-related decisions and components into one package (sometimes also called the test package or test requirements). The test model contains the tests, the input data, the oracle, and the test harness:
- Test driver: the program executing the tests
- Input data: the data needed for the tests
- Oracle: compares the expected output with the actual output obtained from the test
- Test harness: a framework or collection of software components that runs the SUT under varying conditions and monitors its behavior and outputs; test harnesses are necessary for automated testing
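To make these roles concrete, here is a minimal, self-contained Java sketch; it is not part of U2TP, and the Moneybox stand-in and the concrete coin values are invented for illustration. It shows a driver feeding input data to the SUT and an oracle comparing expected against actual output:

```java
// Hypothetical sketch of the test-model roles: driver, input data, oracle.
import java.util.List;

public class MoneyboxTestDriver {

    // Input data for the test: coins inserted by the user (in cents).
    private static final List<Integer> INPUT_COINS = List.of(50, 100, 20);

    // Oracle: the expected output, computed independently of the SUT.
    private static final int EXPECTED_TOTAL = 170;

    public static void main(String[] args) {
        Moneybox sut = new Moneybox();           // system under test
        INPUT_COINS.forEach(sut::addMoney);      // stimuli sent to the SUT

        int observed = sut.getTemporaryAmount(); // observation from the SUT

        // The oracle comparison: expected output versus actual output.
        if (observed == EXPECTED_TOTAL) {
            System.out.println("PASS");
        } else {
            System.out.println("FAIL: expected " + EXPECTED_TOTAL + ", got " + observed);
        }
    }

    // Stand-in for the real SUT so the sketch compiles on its own.
    static class Moneybox {
        private int amount;
        void addMoney(int cents) { amount += cents; }
        int getTemporaryAmount() { return amount; }
    }
}
```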

7 Model-Based Testing
Definition model-based testing: the system model is used for the generation of the test model.
Definition system under test (SUT): the (part of the) system which is being tested.
Advantages of model-based testing:
- Increased effectiveness of testing
- Decreased costs, better maintenance
- Reuse of artifacts such as analysis and design models
- Traceability of requirements
Minor variant, Extreme Programming: construct the test model first, before the system model.
[Diagram: system model, test model, and system under test (SUT).]

8 Observations on Model Based Testing
A system model is an abstract representation of the system's desired behavior. The test cases derived from this model are functional tests on the same level of abstraction as the model; these test cases are known as the abstract test suite. Because test suites are derived from models, not from source code, model-based testing is a form of black-box testing.
The abstract test suite cannot be executed. An executable test suite that communicates with the system under test (SUT) must be derived from the abstract test suite. This is done by mapping the abstract test suite to concrete test cases suitable for execution. This mapping is not part of our lecture.
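Although the mapping itself is outside the scope of the lecture, a small illustration may help. In this hypothetical Java sketch, an abstract test case is plain model-level data (an operation name, an argument, and an expected value, all invented here), and a binding table maps the operation name onto a concrete method call against the SUT:

```java
// Hypothetical illustration: an abstract test case is platform-neutral data;
// an adapter maps it onto concrete calls against the running SUT.
import java.util.Map;

public class AbstractToConcrete {

    // Abstract test case: model-level names and values, no executable code.
    record AbstractTestCase(String operation, int argument, int expected) {}

    public static void main(String[] args) {
        AbstractTestCase abstractCase = new AbstractTestCase("addMoney", 100, 100);

        Moneybox sut = new Moneybox();

        // Concretization step: bind the abstract operation name to a real call.
        Map<String, java.util.function.IntConsumer> binding =
                Map.of("addMoney", sut::addMoney);

        binding.get(abstractCase.operation()).accept(abstractCase.argument());
        boolean pass = sut.getTemporaryAmount() == abstractCase.expected();
        System.out.println(pass ? "PASS" : "FAIL");
    }

    // Stand-in SUT so the sketch is self-contained.
    static class Moneybox {
        private int amount;
        void addMoney(int cents) { amount += cents; }
        int getTemporaryAmount() { return amount; }
    }
}
```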

9 Model-Driven Testing (MDT)
Model-Driven Architecture (MDA):
- The system model can be separated into a platform-independent system model (PIM) and a platform-specific system model (PSM).
- A PIM describes the system independently of the platform that may be used to realize it.
- A PIM can be transformed into a PSM; PSMs contain information on the underlying platform.
- In another transformation step, the system code is derived from the PSM.
- The completeness of the system code depends on the completeness of the system model.
Model-driven testing has its roots in this idea. Similar to MDA, model-driven testing distinguishes between:
- Platform-independent test models (PIT)
- Platform-specific test models (PST)
- Test code generated from these models

10 Model-Driven Testing
System models are transformed into test models:
- When the system model is defined at the PIM level, the platform-independent test model (PIT) can be derived.
- When the PSM level is defined, the platform-specific test model (PST) can be derived. The PST can also be derived by transforming the PIT model.
- Executable test code is then derived from the PST and PIT models.
- After each transformation, the test model may have to be enriched with test-specific properties. Examples: PIT and PST models must cover unexpected system behavior, and special exception-handling code must be added to the test code. Test control and deployment information is usually added at the PST level.
Model-driven testing enables the early integration of testing into the system development process.

11 Automated Testing
There are two ways to generate a test model:
- Manually: developers set up the test data, run the tests, and examine the results themselves. Success or failure is determined through observation by the developers.
- Automatically: test data and test cases are generated automatically, the tests are run automatically, and the results (success and/or failure) are also investigated automatically.
Definition automated testing: all tests are automatically executed with a test harness.
Advantages of automated testing:
- Less boring for the developer
- Better test thoroughness
- Reduces the cost of test execution
- Indispensable for regression testing
In the current state of affairs, the generation of a test model is not formalizable, but based on experiments and heuristics.

12 U2TP: A UML Profile for Model-Based Testing
In 2001, the OMG released a Request for Proposals (RFP) for a testing profile for UML to support model-based testing. Several companies and institutions (Ericsson, Fraunhofer/FOKUS, IBM, Motorola, Rational, Softeam, Telelogic, University of Lübeck) responded to this RFP and, after some discussion, decided to work together as a consortium to produce U2TP, the UML 2 based Testing Profile.

13 U2TP
UML 2 Testing Profile (U2TP): a UML extension that makes UML applicable to software testing. U2TP allows the collection of all information required for the model-based testing process, in particular for deriving a test model from the system model.
Test model: a set of test cases derived from the system model.

14 Overview of Concepts used in U2TP
- Test behavior: test objective, test case, default behavior, verdict
- Test architecture: test component, test configuration, test context, SUT (system under test), test control, arbiter, scheduler
- Test data: wildcards, data pools, data partitions, data selectors, coding rules
- Time concepts: timers, time zones

15 General Definitions used in U2TP
Test: an attempt to create a difference between the observed behavior and the specified behavior of a system in a planned way.
Test data: the structures and meaning of values to be processed in a test.
System under test (SUT): a system, subsystem, or component. The SUT is treated as a black box. This means:
- The SUT can be accessed only via a public interface.
- No information on the internal structure of the SUT is available for use in the specification of test cases.
Test case: describes a specified behavior, that is, a behavior specified in the system model (normative behavior) of the SUT.

16 Test Objective
Test objective: find any difference between the observed behavior and the specified behavior, that is, the behavior specified in the system model.
UML example: the test objective in the diagram below specifies that the objective of the ValidWithdrawal test case is to validate the WithdrawMoney use case.
[Diagram: <<testcase>> ValidWithdrawal is linked via an <<objective>> dependency to <<usecase>> WithdrawMoney.]

17 Test Architecture Definitions
Test component: an object within a test system.
Test architecture: the test components and their relationships in a test system.
Test configuration: the combination of a test system with an SUT.
Test control: an ordering of the execution of the tests performed on the SUT.
Test context: a description of
- a set of test cases,
- a test configuration, and
- a test control.

18 Mandatory Elements in a test based on U2TP
A test system following the U2TP testing profile must consist of at least the following parts:
- Test architecture: SUT, TestContext
- Test behavior: TestCase, Verdict
[Table: overview of the U2TP concepts by group; the mandatory elements are SUT, TestContext, TestCase, and Verdict.]
- Architecture: SUT, TestContext, TestConfiguration, TestComponent, Arbiter
- Behavior: Objective, TestCase, Observation, Stimulus, Verdict
- Data: wildcards, data pools, data partitions, data selectors, coding rules
- Time: Timer, StartTimerAction, StopTimerAction, TimeOut

19 Example: Test System for a Vending Machine
Drink Vending Machine (DVM) with three use cases:
- The drink vending machine accepts cash payment.
- If the inserted money is sufficient, a drink is returned.
- If not, the money is returned.

20 Model-based Testing of Moneybox
[Diagram: test system and SUT packages.]
The test system MoneyUnitTest is developed for the purpose of unit testing the class Moneybox, which is part of the system under test. Moneybox must be imported by MoneyUnitTest to be accessible.

21 UML System Model of the Moneybox

22 Testing the addMoney Operation (Unit Test)
Test that the machine correctly counts the money inserted by the user into the Moneybox.

23 Sequence Diagram for addMoney
[Sequence diagram: the customer creates money1 = Money(x) and calls addMoney(money1) on the cashRegister, then creates money2 = Money(y) and calls addMoney(money2); an expected amount money3 = Money(z) is created; getTemporaryAmount() returns money4, and money3.equals(money4) yields true.]
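As an illustration, the interaction above can be transcribed into executable Java roughly as follows. Money and Moneybox are stand-ins reconstructed from the diagram, and concrete values are substituted for the placeholders x, y, and z:

```java
// A sketch of the addMoney sequence diagram as executable Java.
public class AddMoneyTest {

    public static void main(String[] args) {
        Moneybox cashRegister = new Moneybox();

        Money money1 = new Money(100);   // :Money(x)
        cashRegister.addMoney(money1);   // stimulus
        Money money2 = new Money(50);    // :Money(y)
        cashRegister.addMoney(money2);   // stimulus

        Money money3 = new Money(150);   // :Money(z), the expected amount (oracle)

        Money money4 = cashRegister.getTemporaryAmount(); // observation

        // Verdict: observation equals oracle.
        System.out.println(money3.equals(money4) ? "pass" : "fail");
    }

    // Value object; a record supplies a value-based equals().
    record Money(int cents) {}

    // Stand-in SUT so the sketch is self-contained.
    static class Moneybox {
        private int total;
        void addMoney(Money m) { total += m.cents(); }
        Money getTemporaryAmount() { return new Money(total); }
    }
}
```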

24 Model-based Testing of Moneybox
[Diagram: test system and SUT with the test context and test case.]
MoneyTest is the test context; MoneyUnitTest is the test case.

25 Definition: Test Case
Test case: a test case implements a test objective. It describes a specified behavior, that is, a behavior specified in the system model (normative behavior) of the SUT. A test case is defined in terms of sequences, alternatives, loops, and defaults of stimuli to and observations from the SUT.
A test case consists of:
- Flow of events (use case, sequence diagram)
- Stimulus to and observation from the SUT
- Verdict: assessment of the correctness of the SUT
- Arbiter: evaluates the outcome of the test

26 Test Case
A test case specifies how a set of test components interact with the system under test to realize a test objective. A test case is owned by a test context.
Components of a test case:
- Event flow: the sequence of steps to execute the SUT; may include timing information
- Input: stimulus
- Output: observation
- Expected result: oracle
- Verdict: observation equals oracle; the comparison is done by an arbiter
A test case may invoke other test cases.

27 Sequence Diagram for addMoneyTest
[Sequence diagram: a stimulus is sent to the instantiated SUT.]

28 Stimulus and Observation
Stimulus: data sent to the SUT.
Observation: data returned from the SUT as a result of a stimulus.
Both stimuli and observations are described as test data.

29 Stimulus and Observation in addMoneyTest
[Sequence diagram: a stimulus is sent to the instantiated SUT and an observation is returned.]

30 Test Data
Test data: a generalization of stimulus and observation. There are three types of test data:
- Data pool: test data which are unspecific in their properties
- Data partition: test data which are unspecific in their values
- Wildcard: test data which are unspecific in certain elements
Data selector: an operation that defines how data values or equivalence classes are selected from a data pool or data partition.
Coding rules: describe the encoding/decoding of test data to be sent to the interfaces of the SUT.
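A small Java sketch may make the partition/selector distinction concrete. The partitions and amounts below are invented for illustration and do not come from the U2TP specification:

```java
// Hypothetical sketch of the data partition and data selector ideas.
import java.util.List;
import java.util.Random;

public class TransactionDataPool {

    // Data partitions: equivalence classes whose members are interchangeable
    // for the purpose of a test (the concrete value within a class is open).
    static final List<Integer> VALID_AMOUNTS   = List.of(50, 100, 200);
    static final List<Integer> INVALID_AMOUNTS = List.of(-1, 0);

    private final Random random = new Random();

    // Data selector: defines how a concrete value is drawn from a partition.
    int selectFrom(List<Integer> partition) {
        return partition.get(random.nextInt(partition.size()));
    }

    public static void main(String[] args) {
        TransactionDataPool pool = new TransactionDataPool();
        System.out.println("valid sample:   " + pool.selectFrom(VALID_AMOUNTS));
        System.out.println("invalid sample: " + pool.selectFrom(INVALID_AMOUNTS));
    }
}
```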

31 Data Pool Data pools are a collection of data partitions or explicit values that are used by a test context, or test components, during the evaluation of test contexts and test cases. They can be used for repeated tests. Example see figure on next slide

32 Data Pool Example
The figure above is a package illustrating the data pool, data partition, and data selector concepts. The TestData package defines for TrxnData the data pool DataPool and the data partitions EUTrxnData and USTrxnData. Each data partition has two data samples defined. The data selectors getEUTrxnData, getUSTrxnData, and getDistributionInterval are used to access the data pool and the data partitions.

33 Verdict
A test verdict describes the result of a test case execution:
- Pass indicates that the specified behavior equals the observed behavior of the SUT for that specific test case.
- Fail indicates that the specified behavior is not equal to the observed behavior of the SUT for that test case.
- Inconclusive is used if neither a pass nor a fail verdict can be given.
- Error is used to indicate a failure in the test case itself.
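Transcribed directly into code, the four verdicts could be modeled as an enumeration; this is a sketch, not a normative U2TP artifact:

```java
// The four U2TP verdicts as a Java enum (a direct transcription of the slide).
public enum Verdict {
    PASS,          // specified behavior equals observed behavior
    FAIL,          // specified behavior differs from observed behavior
    INCONCLUSIVE,  // neither pass nor fail can be decided
    ERROR          // the test case itself is faulty
}
```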

34 Verdict in addMoneyTest
[Sequence diagram: stimulus, observation, and the resulting verdict for the instantiated SUT.]

35 Scheduler
Scheduler: a predefined interface in U2TP that controls the execution of the test components that take part in a test case. The scheduler:
- keeps control over the creation and destruction of test components, and
- collaborates with the arbiter to inform it when it is time to issue a verdict.
Operations:
- Scheduler(): constructor of Scheduler; starts the SUT and the arbiter.
- startTestCase(): starts a test case by notifying all involved test components.
- finishTestCase(t: TestComponent): records that test component t has finished its execution; notifies the arbiter when all test components in the current test case are finished.
- createTestComponent(t: TestComponent): records that the test component t has been created by some other test component.
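The operations listed above could be written down as a Java interface. The signatures below paraphrase the slide and are not the normative U2TP definitions:

```java
// A sketch of the Scheduler operations from the slide as a Java interface.
public interface Scheduler {

    // Starts a test case by notifying all involved test components.
    void startTestCase();

    // Records that test component t has finished; once all components of the
    // current test case are finished, the arbiter is notified.
    void finishTestCase(TestComponent t);

    // Records that t was created by some other test component.
    void createTestComponent(TestComponent t);

    // Marker type for test components, declared so the sketch compiles.
    interface TestComponent {}
}
```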

36 Arbiter
Arbiter: called by the scheduler; evaluates the execution of the test case and assigns a verdict to it. Arbiter is an interface provided by the UML Testing Profile.
Operations:
- getVerdict: returns the current verdict
- setVerdict: sets a new verdict value
Semantics of the arbiter:
- If the verdict is pass, it can be changed to inconclusive, fail, or error.
- If the verdict is inconclusive, it can be changed to fail or error.
- If the verdict is fail, it can be changed to error only.
- If the verdict is error, it cannot be changed.
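The overwriting rule amounts to "a verdict may only get worse, never better". A minimal sketch, assuming the ordering pass < inconclusive < fail < error:

```java
// A minimal arbiter sketch implementing the verdict-overwriting rule from
// the slide. The enum order PASS < INCONCLUSIVE < FAIL < ERROR encodes the
// rule directly: a new verdict is kept only if it is worse than the old one.
public class SimpleArbiter {

    enum Verdict { PASS, INCONCLUSIVE, FAIL, ERROR }

    private Verdict current = Verdict.PASS;

    public Verdict getVerdict() {
        return current;
    }

    public void setVerdict(Verdict v) {
        // Keep the worse of the old and new verdict; ERROR is final.
        if (v.ordinal() > current.ordinal()) {
            current = v;
        }
    }

    public static void main(String[] args) {
        SimpleArbiter arbiter = new SimpleArbiter();
        arbiter.setVerdict(Verdict.FAIL);
        arbiter.setVerdict(Verdict.PASS);          // ignored: cannot improve
        System.out.println(arbiter.getVerdict());  // prints FAIL
    }
}
```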

37 Testing the DVM (Integration Test)
Test objective: check the operation of the DVM when a user buys a drink. The system is not yet complete: HWControl and CashRegister are not yet developed.
[Diagram: provided and required interfaces; shows the public parts of the packages.]
Note: Controller is the center of our tests. Error in the diagram: there must be a CashRegister that realizes ICashRegister and a Money that realizes IMoney, and in addition the MoneyBox.

38 Develop the Test Components
Test components implement the interfaces of the emulated components. The test components also interact with the SUT.
Notes: There is a contradiction between the two diagrams: is Money emulated or not? Extend the interface so that, if a test component realizes it, it can be used in place of the real component without changing the code (see MockObject). Add IController for the cash register. How does a test component work? How is the emulation done? TestPackage: groups the elements necessary to specify the test; imports Controller for xxx together with the elements to test.

39 Test Configuration
Test configuration: defines the instances of the test components (the test objects) and the connections between them.
Initial test configuration: the state of the test objects at the beginning of a test.
UML example: testing configuration for the vending machine (next slide).

40 Test Configuration for the Vending Machine
A collection of test components and the ports between these components and the SUT. The test configuration defines:
- the connections when the test case is launched, and
- the maximum number of ports and test components during the execution of the test case.
Note: Controller is the center of our tests. What is the difference from the preceding diagram?

41 Timer
A timer is a mechanism to generate a timeout event when a pre-specified time interval has expired.
- Timers are defined as properties of test components.
- A timeout indicates the expiration of a timer.
- A timer can be stopped; the expiration time of a running timer and its current status (e.g., active/inactive) can be checked.

42 Timer (2)
StartTimerAction: starts a timer. Parameter: duration. Example semantics: start timer t1 with the value 1.5 sec.
StopTimerAction: stops the timer.
TimeOut: the time specified as duration in StartTimerAction has expired.
Example: test that the hardware provides a drink ("giveDrink") within 1.5 seconds after a drink has been selected.

43 Timer Example
[Diagram: the timer t1 is started and later stopped.]

44 Wildcards
Wildcards are special symbols representing values or ranges of values. They specify whether a value is present or not, and/or whether it may take any value. Wildcard types:
- Wildcard for any value. Example: "?" (question mark)
- Wildcard for any value or no value at all (i.e., an omitted value). Example: "*" (star)
- Wildcard for an omitted value
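As a sketch of the idea (U2TP defines wildcards on test data in general; the string encoding and null-for-omitted convention here are purely illustrative), wildcard matching could look like this:

```java
// Sketch of wildcard matching for observed values: "?" matches any present
// value, "*" matches any value or an omitted (null) one.
public class Wildcards {

    static boolean matches(String expected, String observed) {
        if (expected.equals("*")) return true;              // any value or none
        if (expected.equals("?")) return observed != null;  // any present value
        return expected.equals(observed);                   // literal match
    }

    public static void main(String[] args) {
        System.out.println(matches("?", "coke"));  // true
        System.out.println(matches("?", null));    // false: value must be present
        System.out.println(matches("*", null));    // true: omitted value allowed
    }
}
```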

45 Additional Readings
- XMI
- MDA
- UML Testing Profile Tutorial, Ina Schieferdecker, Øystein Haugen, 2004

46 Summary
Review of all the definitions introduced in this part of the lecture.
Writing test cases for a use case:
- Write the use case
- Determine the SUT
- Add stimulus, observation, and verdict to the use case, turning it into a test case

47 Additional Slides

48 Properties of a Good Test Model
Scope: each test in the test model should focus on a specific aspect. For unit testing, the normal scope is a class or a subsystem (functionally closely coupled classes). For integration testing, the scope is the possible subsystem combinations. For system testing, the scope is the full system with respect to functional and nonfunctional requirements.
Repeatability: for the same setup, a test should produce the same results. Multiple developers should be able to execute tests at the same time. When a test depends on shared state, the execution of two tests can lead to a race condition. Example: if a test modifies a database and several developers run it simultaneously, the data may be corrupted and one of the tests might fail even though the code is correct.
Independence: tests should not rely on other tests being run before or after them. They must be able to run alone, so that the order of the tests is not critical (see the sketch after this list).
Note (ACID comparison): repeatability is similar to isolation. Independence is not really "all or nothing" like atomicity; it is more concerned with independence of run order, which again is closer to repeatability.
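The sketch below illustrates repeatability and independence: each test constructs its own fixture, so no state is shared between tests and run order is irrelevant. Moneybox is a hypothetical stand-in SUT:

```java
// Sketch: each test builds its own fixture instead of sharing mutable state,
// so order and concurrency do not affect the outcome.
public class IndependentTests {

    // Fresh fixture per test: no shared state between test runs.
    static Moneybox newFixture() {
        return new Moneybox();
    }

    static boolean testAddOnce() {
        Moneybox box = newFixture();
        box.addMoney(100);
        return box.getTemporaryAmount() == 100;
    }

    static boolean testAddTwice() {
        Moneybox box = newFixture();   // does not depend on testAddOnce
        box.addMoney(50);
        box.addMoney(50);
        return box.getTemporaryAmount() == 100;
    }

    public static void main(String[] args) {
        // Either execution order produces the same verdicts.
        System.out.println(testAddTwice() && testAddOnce());
    }

    static class Moneybox {
        private int amount;
        void addMoney(int cents) { amount += cents; }
        int getTemporaryAmount() { return amount; }
    }
}
```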

49 Properties of a Good Test Model (2)
Self-containment (independence from the environment): all the information required for running a test should be contained in the test itself. For example, instead of relying on an environment variable being set externally, the test should set the environment variable itself.
Performance: if running tests takes too long, developers are reluctant to run them. Tests should be as fast as possible, and it should be possible to disable individual tests.
Maintainability: tests must be maintainable. In particular, when the system model changes, it must be easy to determine which tests need to be changed as well.
Reusability: a test should be reusable across many testing scenarios, i.e. applicable to more than one testing situation.

50 3 Approaches to Organize Test Cases
- The first approach creates a separate directory structure in parallel with the SUT directory structure, i.e. with the same package alignment as the classes being tested.
- The second approach creates a subpackage for all the test classes.
- The third approach creates one test case class for a complete package.
For each of these approaches, all the tests must be part of the project build and release process. It is important that the system model, the SUT code, and the tests do not get out of sync with each other during the life of the project.

51 Sequence Diagram for addMoney
[Sequence diagram (repeated from slide 23): the customer creates Money objects, calls addMoney(money1) and addMoney(money2) on the cashRegister, and checks the result of getTemporaryAmount() against the expected amount via equals.]

52 Stimulus and Observation in addMoneyTest
[Sequence diagram (repeated from slide 29): stimulus and observation for the instantiated SUT.]

53 Defaults
A default is a behavior triggered by a test observation that is not handled by the behavior of the test case itself. There is a hierarchy of defaults:
- within the behavior of a test component object, associated to a state (i.e., in a state machine) or to a combined fragment (in an interaction diagram);
- for the complete behavior of a test component object, associated to a part (typed with a test component) in the internal structure of a test context (i.e., in a test configuration);
- for all test component objects of a test component, associated to a test component in a test architecture.
A UML specification is not necessarily complete; complete in this sense means that it specifies every possible trace of execution. In particular, if interactions are used to specify the behavior, the normal situation is that the specification is partial, specifying in detail only those scenarios that are of particular importance. In a testing context, however, complete definitions are needed so that the number of erroneous test case executions can be kept to a minimum. Default specifications are units of specification defined in the Testing Profile as a means to make partial definitions of test components complete in a compact, yet flexible way. The Testing Profile defines mechanisms for defaults on interactions as well as state machines.
The general idea of defaults is the following: a test behavior specification typically describes the normative or expected behavior of the SUT. If an unexpected behavior is observed during test execution, a default handler is applied. Default behavior definitions exist on several levels: if the innermost default fails to recognize the observed behavior, the default of the next level is tried.
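The layered lookup can be sketched in Java: each level of the hierarchy is asked in turn whether it handles the unexpected observation, and the outermost default catches everything else. All observation names and verdict strings here are invented for illustration:

```java
// Sketch of the default hierarchy: an unexpected observation is offered to
// the innermost default first; if unrecognized, the next level is tried.
import java.util.List;
import java.util.Optional;
import java.util.function.Function;

public class DefaultHierarchy {

    // A default maps an unexpected observation to a verdict, if it handles it.
    interface Default extends Function<String, Optional<String>> {}

    public static void main(String[] args) {
        Default stateLevel = obs ->
                obs.equals("displayBusy") ? Optional.of("inconclusive") : Optional.empty();
        Default componentLevel = obs ->
                obs.equals("connectionLost") ? Optional.of("error") : Optional.empty();
        Default outermost = obs -> Optional.of("fail"); // catches everything else

        // Innermost default first, outermost last.
        List<Default> hierarchy = List.of(stateLevel, componentLevel, outermost);

        String observation = "wrongDrink";   // not handled by the test case body
        String verdict = hierarchy.stream()
                .map(d -> d.apply(observation))
                .flatMap(Optional::stream)
                .findFirst()
                .orElse("error");
        System.out.println(verdict);          // prints "fail"
    }
}
```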

54 Default Examples

55 Validation Action
A validation action is an action performed by a test component to assess a test observation and/or additional characteristics/parameters. A notation to model validation actions is still to be defined.

