
1 Slide 1

2 Slide 2 Outline
Protocol Verification / Conformance Testing
TTCN
Automated validation
Formal design and specification
Traffic Theory Application
Debugging

3 Slide 3 Testing of Telecommunications Systems
Functional Tests
Formal System Tests against system-level requirements
Protocol Verification
Performance/Load Tests
Interoperability Tests (IOT) / Inter-working Tests
Automatic Test Suites

4 Slide 4 Protocol Testing
Conformance Testing
– Performed using test equipment
– Based on conformance test suites/cases specified by a standards body
– Quite detailed in terms of protocol implementation verification
– Not well suited to testing performance/load issues
Inter-working Trials
– Test inter-working of the real equipment
– Good for performance/load testing
– Hard to verify the protocol completely

5 Slide 5 Conformance Testing
Based on ISO/IEC 9646 (X.290): "Framework and Methodology of Conformance Testing of Implementations of OSI and CCITT Protocols"
Abstract Test Suites (ATS) consisting of Abstract Test Cases
Test cases are defined using a "black box" model, i.e. only by controlling and observing the implementation through its external interfaces
Provides the basis for generic test tools and methods for verifying telecommunication standards and protocols

6 Slide 6 ITU Conformance Testing Standards
X.290 – OSI conformance testing methodology and framework for protocol recommendations
X.291 – Abstract test suite specification
X.292 – The Tree and Tabular Combined Notation (TTCN)
X.293 – Test realization
X.294 – Requirements on test laboratories and clients for the conformance assessment process
X.295 – Protocol profile test specification
X.296 – Implementation conformance statements

7 Slide 7 Conformance Testing
Verification that protocol implementations conform to the requirements of the standard
PICS – Protocol Implementation Conformance Statement
– Information provided by the protocol implementer on the extent of the implementation: which optional items are implemented, whether there are any restrictions, ...
PIXIT – Protocol Implementation eXtra Information for Testing
– Provides information on the physical configuration of the unit under test, e.g. telephone numbers, socket numbers, ...

8 Slide 8 Tree and Tabular Combined Notation (TTCN)
Defined as part of ISO/IEC 9646 (X.290 series)
Used as a language for the formal definition of test cases
Each test case is an event tree describing external behaviour, for example: "If we send the message 'connect request', either 'connect confirm' or 'disconnect indication' will be received" (see the sketch below)
Messages can be defined using either TTCN-type notation or ASN.1
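As an illustration only (Python, not TTCN syntax), the quoted behaviour could be sketched as follows; the channel object and its send/receive methods are assumptions made for the sketch:

    # Hypothetical Python analogue of the behaviour quoted above: after sending
    # CONNECTrequest, either CONNECTconfirm or DISCONNECTindication may arrive.
    def check_connect(channel):
        channel.send("CONNECTrequest")
        response = channel.receive(timeout=5.0)   # wait for the peer's reaction
        if response == "CONNECTconfirm":
            return "pass"            # expected response, test purpose reached
        if response == "DISCONNECTindication":
            return "inconclusive"    # valid protocol behaviour, purpose not reached
        return "fail"                # any other event is a protocol error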

9 Slide 9 TTCN Tests Reactive Systems
[Diagram: stimulus and response exchanged with the IUT through PCOs]
PCO = Point of Control and Observation
IUT = Implementation Under Test

10 Slide 10 A Typical Test Architecture
[Diagram: IUT with Lower Tester (L) and Upper Tester (U), connected via the Underlying Service Provider]

11 Slide 11 Parallel Test Architecture
[Diagram: MTC and PTCs testing the IUT over the Underlying Service Provider(s), coordinated via CPs]
MTC = Main Test Component
PTC = Parallel Test Component
CP = Coordination Point

12 Slide 12 TTCN Formats
TTCN is provided in two forms:
– a graphical form (TTCN.GR) suitable for human readability
– a machine-processable form (TTCN.MP) suitable for transmission of TTCN descriptions between machines and possibly for other automated processing

13 Slide 13 Main Components of TTCN
Test Suite Structure
Data declarations (mainly PDUs)
– TTCN data types
– ASN.1 data types
Constraints
– i.e., actual instances of PDUs
Dynamic behaviour trees (Test Cases)
– Send, Receive, Timers, Expressions, Qualifiers
– Test Steps

14 Slide 14 Test Suite Structure
A TTCN test suite consists of four major parts:
– suite overview part
– declarations part
– constraints part
– dynamic part

15 Slide 15 Suite Overview Part The suite overview part is a documentary feature consisting of indexes and page references; this is part of TTCN's heritage as a paper-oriented language. It contains a table of contents and a description of the test suite, and its main purpose is to document the test suite and improve clarity and readability, giving a quick overview of the entire suite.

16 Slide 16 Test Suite Overview Proforma

17 Slide 17 Suite Declarations Part
The declarations part is used for declaring:
– types,
– variables,
– timers,
– points of control and observation (PCOs),
– test components.
Types can be declared using either TTCN or ASN.1 type notation. A graphical table is used for the declarations.

18 Slide 18 TTCN Data Types
There is no equivalent of pointer types and, as a consequence, types cannot be directly or indirectly recursive.
Two structured types are specific to TTCN (a rough analogue is sketched below):
– protocol data unit (PDU) – a data packet sent between peer entities in the protocol stack
– abstract service primitive (ASP) – a data type in which a PDU is embedded for sending and receiving
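A rough analogue in Python (not TTCN or ASN.1 notation), with purely illustrative type and field names, might look like this:

    from dataclasses import dataclass

    # Illustrative only: a PDU exchanged between peer protocol entities, and an
    # ASP that embeds the PDU when it crosses the service interface.
    @dataclass
    class ConnectRequestPDU:
        source_ref: int
        dest_ref: int
        options: bytes = b""

    @dataclass
    class NDataRequestASP:                  # abstract service primitive carrying a PDU
        called_address: str
        pdu: ConnectRequestPDU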

19 Slide 19 Example of ASN.1 Type Definition

20 Slide 20 Example of TTCN Type Definition

21 Slide 21 Variable Declarations

22 Slide 22 Constraints Part In this part, constraints are used to describe the values sent or received. The instances used for sending must be complete, but for receiving it is possible to define incomplete values using wild cards, ranges and lists (illustrated below).
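A minimal sketch of such matching, in Python rather than TTCN constraint notation (the field names and constraint values are made up for the example):

    # Wild card, range and list constraints applied to a received message.
    ANY = object()                                   # wild card: matches anything

    def field_matches(value, constraint):
        if constraint is ANY:
            return True
        if isinstance(constraint, range):            # value range, e.g. range(0, 128)
            return value in constraint
        if isinstance(constraint, (list, set)):      # value list
            return value in constraint
        return value == constraint                   # literal value

    received   = {"msg_type": "CC", "dest_ref": 42, "class": 2}
    constraint = {"msg_type": "CC", "dest_ref": ANY, "class": [0, 2, 4]}
    assert all(field_matches(received[k], c) for k, c in constraint.items())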

23 Slide 23

24 Slide 24 Dynamic Part
In this part the actual tests are defined. The test suite is organised into:
– test groups – a grouping of test cases; it might, for example, be convenient to collect all test cases concerning connection establishment and transport into a separate test group
– test steps – a grouping of test events, similar to a subroutine or procedure in other programming languages
– test events – the smallest, indivisible units of a test suite; typically an event corresponds to sending or receiving a message or to an operation manipulating timers

25 Slide 25 TTCN Behavior Tree
Suppose that the following sequence of events can occur during a test whose purpose is to establish a connection, exchange some data, and close the connection:
– a) CONNECTrequest, CONNECTconfirm, DATArequest, DATAindication, DISCONNECTrequest.
Possible alternatives to this "valid behaviour" are:
– b) CONNECTrequest, CONNECTconfirm, DATArequest, DISCONNECTindication.
– c) CONNECTrequest, DISCONNECTindication.
Together these sequences form one behaviour tree (one rendering is sketched below).
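One way to render that tree as data, sketched here in Python (the verdicts attached to sequences b and c are plausible assignments, not taken from the slides):

    # Alternatives appear at the same nesting level; the continuation of an
    # event is nested one level deeper, mirroring the indentation of a TTCN tree.
    behaviour_tree = {
        "!CONNECTrequest": {
            "?CONNECTconfirm": {
                "!DATArequest": {
                    "?DATAindication": {"!DISCONNECTrequest": "PASS"},   # sequence (a)
                    "?DISCONNECTindication": "INCONCLUSIVE",             # sequence (b)
                },
            },
            "?DISCONNECTindication": "INCONCLUSIVE",                     # sequence (c)
        },
    }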

26 Slide 26 TTCN Behavior Tree

27 Slide 27 TTCN Behavior Table
The behaviour tables build up the tree by defining the events in lines at different indentation levels. Rows at the same indentation are alternative events, and rows at the next indentation level are the continuation of the previous line. A line can consist of one or more of the following:
– send statement – ! – a message is being sent
– receive statement – ? – a message is being received
– assignment – ( ) – values are assigned
– timer operations – start, stop or cancel a timer
– time-out statement
– Boolean qualifier – [ ] – a condition qualifying execution
– attachment – + – acts as a procedure call

28 Slide 28 TTCN Behavior Table
All the leaves in the event tree are assigned a verdict, which can be:
– pass – the test case completed without the detection of any errors
– fail – an error was detected
– inconclusive – the behaviour was valid, but there was insufficient evidence for a conclusive verdict to be assigned
A verdict can be final or preliminary; a preliminary verdict does not terminate execution of the active test case. To describe what is happening in the test case, the dynamic behaviour can also be explained in plain language.

29 Slide 29 Dynamic Behaviour Table

30 Slide 30 TTCN MP Form

31 Slide 31 Testing and Validation
To test an implementation of a protocol against an SDL specification, the following approach can be taken (see the skeleton below):
– Get the interface into the appropriate state using a preamble.
– Send the appropriate message.
– Check all outputs.
– Check the new state the protocol is in.
– Perform a postamble to return the protocol to the null state.
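A skeleton of that procedure in Python; the iut object and its methods are assumptions made for the sketch, not part of any real tool:

    def run_test_case(iut, preamble, stimulus, expected_outputs, expected_state, postamble):
        preamble(iut)                            # drive the IUT into the required state
        iut.send(stimulus)                       # send the appropriate message
        outputs = iut.collect_outputs()          # check all outputs
        state_ok = iut.current_state() == expected_state   # check the new state
        postamble(iut)                           # return the protocol to the null state
        return "pass" if outputs == expected_outputs and state_ok else "fail"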

32 Slide 32 TTCN – What You Need to Know
– Understand the terminology
– Understand the test principles and the test environment
– You don't need to know how to write TTCN scripts, but you do have to understand them so you can read test reports and analyse problems

33 Slide 33 Various Test Equipment
A wide range of test equipment supports TTCN:
– Protocol Analysers
– Protocol Simulators
– Protocol Emulators

34 Slide 34 Protocol Analysers (Monitors)
– Monitor the 'live' communication between equipment and interpret the messages in human-readable form
– The same hardware may support monitoring of different protocol types
– May provide some statistical analysis capability
– May have advanced search/filtering capabilities
– Events can be defined on which logging starts or stops
[Diagram: a Protocol Analyser monitoring the link between two pieces of equipment]

35 Slide 35 Protocol Simulators
– Allow users to write scripts (similar to TTCN) that send messages and react to received messages
– Can be used for testing various protocols
– Writing the required number of scripts can be a large effort
– Suppliers often provide a set of test scripts targeting a particular protocol
[Diagram: a Protocol Simulator connected to the Equipment Under Test]

36 Slide 36 Protocol Emulators
– Emulate the operation of equipment (not a full implementation)
– No script writing required
– Provide some control of behaviour and configuration
– Not as flexible as a simulator
– Usually one box can contain a Protocol Analyser, Simulator and Emulator
[Diagram: a Protocol Emulator connected to the Equipment Under Test]

37 Slide 37 Interoperability Testing (IOT)
– Tests what users may expect from the product, which may include more than the standards demand: performance, robustness and reliability
– Proper functioning in the system where the product is finally installed
– Interoperability with applications that use the product
– IOT is usually performed in addition to formal protocol conformance testing

38 Slide 38 Pros & Cons of IOT
Potential IOT problems:
– Does not cover all possible protocol scenarios
– Detected problems can be difficult to reproduce
– Often little possibility to actively control the test environment so that problems can be investigated
Pros of IOT:
– Standards may be ambiguous, so different manufacturers implement them differently
– Performance/load aspects are easier to test
– IOT may be cheaper and faster
– It gives the user confidence in the system's operational capabilities

39 Slide 39 The Case for Automated Validation
If a standards body ratifies a standard that has not been validated and contains errors, the following may occur:
– Some implementations of the protocol may contain serious errors that affect service operation and service assurance
– Different implementations of the protocol that are bug-free may adopt proprietary solutions to work around the protocol errors and thus may not inter-work
– Some vendors may choose to implement only a subset of the standardised protocols, relying on proprietary protocols

40 Slide 40 The Case for Automated Validation Even with simple protocols containing a small number of functional entities, messages and states, the number of possibilities is more than most people have time to verify by hand. Communications protocol implementations are so complex that it is usually impossible to test all protocol combinations on a system that has implemented a protocol. In reality, standards are designed by committee, and although some members perform automatic validation, most do not, so protocols are modified to cope with bugs as they appear.

41 Slide 41 State Explosion Problem Even with simple protocols containing a small number of functional entities, messages and states, the number of possibilities is more than most people have time to verify by hand. Some problems have more permutations than a computer can work through, due to limits on processing speed and memory.

42 Slide 42 Travelling Salesman Problem A salesman spends his time visiting n cities (or nodes) cyclically. In one tour he visits each city just once and finishes up where he started. In what order should he visit them to minimise the distance travelled? If every city is connected to every other city, then the number of possible tours is (n-1)!/2. For only 100 cities this is about 4.67 x 10^155 (checked below). It can be shown that, for the number of towns in the US, the time required to list all combinations would exceed the estimated age of the universe, and storing them would require more memory than there are atoms in the universe, according to current physics.
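The quoted figure is easy to check, for example in Python:

    import math

    # Number of distinct tours for n fully connected cities: (n-1)!/2
    n = 100
    tours = math.factorial(n - 1) // 2
    print(f"{tours:.3e}")   # ~4.666e+155, i.e. about 4.67 x 10^155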

43 Slide 43 Automated Analysis Tools
Automatic tools that traverse all the reachable states can easily detect a number of errors (a minimal traversal sketch follows below):
– livelock and deadlock
– output with no receiver
– output with multiple receivers
– exceeding the maximum number of instances for a process
– decision value not in the expected set of answers
– unreachable states
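A minimal sketch of the kind of exhaustive traversal involved, assuming a transition function that returns the successor states of a given state (everything here is illustrative, not any specific tool):

    from collections import deque

    def explore(initial_state, transitions):
        """Breadth-first search of the reachable state space, flagging deadlocks."""
        seen, queue, deadlocks = {initial_state}, deque([initial_state]), []
        while queue:
            state = queue.popleft()
            successors = transitions(state)      # states reachable in one step
            if not successors:
                deadlocks.append(state)          # no transition possible: deadlock
            for nxt in successors:
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return seen, deadlocks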

44 Slide 44 Automated Analysis Tools
They also detect errors related to data access, for example:
– variable usage not compliant with its type
– array overflow
Automated tools cannot, however, detect undesirable behaviour caused by a poor specification.

45 Slide 45 Traditional Design
Traditional design involves the following design cycle:
High-Level Design
– Requirements are written, usually in English (or equivalent)
– The protocol is then defined using formal techniques such as SDL, MSC and ASN.1
Low-Level Design
Coding and Testing
– Most of the effort occurs here; coding and debugging thus become the focus of the protocol design

46 Slide 46 Formal Design
The later an error is detected, the more expensive it is to fix. Formal design techniques (FDTs) are used to verify the correctness of the protocol in the high-level design phase. This involves testing the protocol description as defined in a specification language such as SDL, and requires:
– an unambiguous notation
– effective validation tools

47 Slide 47 Formal Protocol Specification Languages and Protocol Validation Formal protocol definition languages can be used as input to protocol validation tools. Three common description languages used for this approach are SDL, LOTOS and Estelle.

48 Slide 48 FDTs are also intended to satisfy objectives such as:
– a basis for analysing specifications for correctness, efficiency, etc.;
– a basis for determining the completeness of specifications;
– a basis for verification of specifications against the requirements of the Recommendation;
– a basis for determining conformance of implementations to Recommendations;
– a basis for determining consistency of specifications between Recommendations;
– a basis for implementation support.

49 Slide 49 Other Formal Specification Languages
These other languages are discussed in the literature, but are beyond the scope of this subject and are not used by ITU-T:
– CCS (the Calculus of Communicating Systems)
– LOTOS (Language Of Temporal Ordering Specification): a mathematically defined language
– Z Specification Language: a language based upon set theory and first-order predicate calculus
– VDM (Vienna Development Method)

50 Slide 50 ITU Programming Languages
ITU standardises the following formal description techniques:
– Z.100 – CCITT Specification and Description Language (SDL)
– Z.105 – SDL combined with ASN.1 (SDL/ASN.1)
– Z.120 – Message Sequence Chart (MSC)
ITU also defines the following telecommunications languages:
– Z.200 – CCITT High Level Language (CHILL), common text with ISO/IEC
– Z.30x – Introduction to the CCITT Man-Machine Language

51 Slide 51 Performance Analysis When a new protocol or service is introduced into a network, there needs to be an understanding of what resources are required to implement the service. Performance analysis is the task of finding a service's resource requirements. It is used to cost the service as cheaply as possible while keeping a high probability that the targeted quality of service is delivered. The branch of statistics devoted to understanding the use of communications networks is known as traffic theory (see the example below).
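As one classic example from traffic theory, the Erlang B formula estimates the blocking probability of a group of circuits offered a given load; a small Python version (using the standard recursive form) is shown purely as an illustration:

    def erlang_b(erlangs, servers):
        """Blocking probability for `servers` circuits offered `erlangs` of traffic."""
        b = 1.0
        for k in range(1, servers + 1):          # numerically stable recursion on k
            b = (erlangs * b) / (k + erlangs * b)
        return b

    print(erlang_b(10.0, 15))   # ~0.036: roughly 3.6% of call attempts blocked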

52 Slide 52 Debugging Embedded Real-Time Applications
– Usual PC debuggers and tools are not applicable
– The hardware environment may not be stable
– Difficult to differentiate between hardware and software problems
– Problems may be difficult to reproduce and capture
– The CPUs used are frequently new and not fully debugged

53 Slide 53 Firmware Debugging Tools
– ROM emulators ("romulators")
– Logic Analysers
– FW Simulation on Unix/DOS
– FW Application Monitor
– In-Circuit Emulator (ICE)
– CPU Performance Analysers
– On-Chip Debugger (BDM)

54 Slide 54 ROM Emulators ("Romulators")
– Replace the ROM with RAM controlled by a PC (for example)
– Instead of burning a ROM every time the code is modified, the new executable is downloaded to RAM
– Very cheap
– No support for debugging:
– no control of the CPU
– no visibility of CPU activity beyond what is available through the application interface

55 Slide 55 Logic Analysers
– A device capturing all signal activity on the processor pins
– With the addition of specific software, the signal trace can be represented as C/C++ code
– The logic analyser is a very common hardware test tool
– Does not provide the ability to control the target CPU
– No monitoring of internal CPU registers is available

56 Slide 56 FW Simulators
A customised (cut-down) version of the FW running on UNIX or DOS. This allows use of UNIX/DOS tools:
– Full-blown debuggers
– Run-time error detection tools:
» detection of memory leaks
» detection of uninitialised variables
» stack overflows/underflows
» reading or writing past the boundary of an array
» using freed memory
– Performance analysers (Quantify, ...)

57 Slide 57 Built-In Debug Monitors
– A debug facility integrated with the application
– It may print out relevant debugging information
– It may also be interactive, letting the user partially control the application and request information from the FW
– Assumes that the HW is working well and that the FW is stable enough; it is not well suited to debugging applications that are crashing or resetting
– Use of the monitor may affect the speed of the application, which is sometimes not acceptable

58 Slide 58 In-Circuit Emulators (ICE)
Provide full control of the FW (CPU):
– breakpoints on different conditions (HW or SW)
– modification of registers and memory areas
– single-step execution
– source-level tracing
– Allow running the FW at full speed
– The application may run from "emulation" RAM instead of ROM (this allows fast modification)
– RTOS support is an issue
– May be expensive (20-50 K)

59 Slide 59 CPU Performance Analysers Provide the ability to determine how much time the CPU spends executing a particular area of code, which allows performance analysis and tuning (a software profiling analogue is sketched below). Analysers that are RTOS-aware can display CPU utilisation per task.
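As a desktop analogue of the idea (a software profiler rather than an embedded hardware analyser), Python's own profiler attributes CPU time to areas of code in much the same spirit:

    import cProfile
    import pstats

    def busy():
        return sum(i * i for i in range(200_000))   # stand-in for a hot code path

    cProfile.run("busy()", "profile.out")            # record where the time goes
    pstats.Stats("profile.out").sort_stats("cumulative").print_stats(5)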

60 Slide 60 Performance Analyser Example

61 Slide 61 CPU Background Debug Mode (BDM)
– Some microprocessors provide a debugger implemented in CPU micro-code (BDM or JTAG)
– Full debug support is available: registers can be viewed and modified, memory can be read or written, ...
– On-chip HW breakpoint support
– Controlled via a dedicated on-chip serial interface connected to a PC running the debugger application
– This is a relatively cheap alternative to an ICE (2-5 K)
– No tracing of application execution is available; for tracing, a combination of Logic Analyser and BDM is required

62 Slide 62 Debugging Tools Summary
Cheap but very limited tools (no control of the CPU):
– ROM emulators
– Logic Analyser
A built-in monitor allows some control of the CPU but requires a stable HW & FW platform.
FW simulation provides a good, flexible environment but may not represent the real-time aspects of the real target.
Best solutions for low-level debugging:
– In-Circuit Emulator (ICE) – can be expensive
– BDM with the addition of a Logic Analyser for tracing

