SEBASE 2007 Testing from Finite State Machines R. M. Hierons Brunel University, UK


SEBASE 2007 Testing from Finite State Machines R. M. Hierons Brunel University, UK

SEBASE 2007 A Type of Model Based Testing
At its simplest:
–We build a model
–We use this model to help in testing
A model represents what is to be tested. It need not model the entire system and can leave out many details (abstraction). Models are often much simpler than requirements.

SEBASE 2007 Why use MBT?
Benefits include:
–Automating test generation
–Automating checking of results
–Validating requirements/design through building the model
–Regression testing – change the model, not the tests
But:
–There is an initial cost of building the model
–Building a model requires particular skills/training
–The model may be wrong

SEBASE 2007 Why Finite State Machines? Original motivation: –Hardware Then: –Communications protocols More generally: –Systems with a persistent state are often specified as FSMs or extended FSMs (using e.g. statecharts, SDL).

SEBASE 2007 Finite State Machines
[Diagram: an FSM with states s1–s5 and transitions labelled a/0, a/1, b/0, b/1]
The behaviour of M in state s_i is defined by the set of input/output sequences from s_i.

SEBASE 2007 The Semantics of an FSM
The behaviour of M in state s is defined by the set of input/output sequences from s: L_M(s).
M defines a set of input/output sequences: L_M(s_1).
Two FSMs are equivalent if they define the same set of input/output sequences (the same regular language).
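A minimal Python sketch of this view (not part of the original slides): a deterministic Mealy machine encoded as a map from (state, input) to (next state, output), with a helper returning the output sequence produced from a given state. The toy machine below is illustrative, not the FSM pictured in the slides.

# A minimal sketch of a deterministic, completely specified Mealy machine.
# The machine below is an illustrative toy example, not the FSM in the slides.
from typing import Dict, List, Tuple

# (state, input) -> (next state, output)
M: Dict[Tuple[str, str], Tuple[str, str]] = {
    ("s1", "a"): ("s2", "0"), ("s1", "b"): ("s1", "1"),
    ("s2", "a"): ("s3", "0"), ("s2", "b"): ("s1", "1"),
    ("s3", "a"): ("s1", "1"), ("s3", "b"): ("s3", "0"),
}

def run(m: Dict[Tuple[str, str], Tuple[str, str]], state: str, inputs: List[str]) -> List[str]:
    """Return the output sequence produced from `state` by `inputs`."""
    outputs = []
    for x in inputs:
        state, y = m[(state, x)]
        outputs.append(y)
    return outputs

# L_M(s) is the (infinite) set of input/output sequences from s; in practice we
# observe finite prefixes by running chosen input sequences.
print(run(M, "s1", ["a", "a", "b"]))  # ['0', '0', '0'] for this toy machine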

SEBASE 2007 Test Coverage and Automated Test Generation

SEBASE 2007 Test coverage There are many popular notions of code coverage such as: Statement, Branch, MC/DC, LCSAJ, … It is natural to define measures of model coverage. For FSMs we have: –State coverage –Transition coverage
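As an illustrative sketch (not from the slides), state and transition coverage of a single input sequence can be measured by replaying it on the model; the toy machine is the same illustrative encoding used above.

# Sketch: measuring state and transition coverage of one input sequence.
M = {
    ("s1", "a"): ("s2", "0"), ("s1", "b"): ("s1", "1"),
    ("s2", "a"): ("s3", "0"), ("s2", "b"): ("s1", "1"),
    ("s3", "a"): ("s1", "1"), ("s3", "b"): ("s3", "0"),
}

def coverage(m, start, inputs):
    """Return the sets of states and transitions exercised by `inputs` from `start`."""
    state = start
    states, transitions = {start}, set()
    for x in inputs:
        nxt, y = m[(state, x)]
        transitions.add((state, x, y, nxt))
        states.add(nxt)
        state = nxt
    return states, transitions

states, transitions = coverage(M, "s1", list("aabab"))
print(len(states), "states and", len(transitions), "of", len(M), "transitions covered")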

SEBASE 2007 Example (state)
[Diagram: the example FSM with states s1–s5 and transitions labelled a/0, a/1, b/0, b/1]
We could use input sequence aaba.
Finding the shortest sequence is NP-hard.
Easy to automate test generation but …
Gives us no confidence in the transitions not covered.

SEBASE 2007 Example (transition)
[Diagram: the example FSM with states s1–s5 and transitions labelled a/0, a/1, b/0, b/1]
Here we can use babaabbbaaba.
Again, easy to automate generation but …
We may not observe an incorrect final state of a transition. Example: the last transition above.
Instead, we can check the final states of transitions.

SEBASE 2007 Distinguishing Sequences
A distinguishing sequence is an input sequence that leads to a different output sequence for every state.
Here, e.g. aba.
[Diagram: the example FSM with states s1–s5 and transitions labelled a/0, a/1, b/0, b/1]
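A small sketch (not from the slides) of what the definition asks for: an input sequence is distinguishing if it yields a different output sequence from every state. The toy machine is the same illustrative one as above; its distinguishing sequence happens to be aa rather than the slides' aba.

# Sketch: checking whether an input sequence is a distinguishing sequence.
M = {
    ("s1", "a"): ("s2", "0"), ("s1", "b"): ("s1", "1"),
    ("s2", "a"): ("s3", "0"), ("s2", "b"): ("s1", "1"),
    ("s3", "a"): ("s1", "1"), ("s3", "b"): ("s3", "0"),
}
STATES = ["s1", "s2", "s3"]

def outputs_from(m, state, inputs):
    out = []
    for x in inputs:
        state, y = m[(state, x)]
        out.append(y)
    return tuple(out)

def is_distinguishing(m, states, inputs):
    responses = [outputs_from(m, s, inputs) for s in states]
    return len(set(responses)) == len(states)

print(is_distinguishing(M, STATES, ["a", "a"]))  # True for this toy machine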

SEBASE 2007 Unique Input/Output Sequences
A UIO for state s is defined by an input sequence x such that the output from s in response to x is different from the output from any other state s’.
UIO for s_2: a/0 a/1
[Diagram: the example FSM with states s1–s5 and transitions labelled a/0, a/1, b/0, b/1]
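The same style of check works for UIOs (an illustrative sketch, not from the slides): the response of state s must differ from the response of every other state.

# Sketch: checking whether `inputs` is a UIO for state `s`. Toy machine as above.
M = {
    ("s1", "a"): ("s2", "0"), ("s1", "b"): ("s1", "1"),
    ("s2", "a"): ("s3", "0"), ("s2", "b"): ("s1", "1"),
    ("s3", "a"): ("s1", "1"), ("s3", "b"): ("s3", "0"),
}
STATES = ["s1", "s2", "s3"]

def outputs_from(m, state, inputs):
    out = []
    for x in inputs:
        state, y = m[(state, x)]
        out.append(y)
    return tuple(out)

def is_uio(m, states, s, inputs):
    target = outputs_from(m, s, inputs)
    return all(outputs_from(m, t, inputs) != target for t in states if t != s)

print(is_uio(M, STATES, "s3", ["a"]))  # True: only s3 replies to 'a' with output 1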

SEBASE 2007 Characterizing sets A set W of input sequences such that: –for every pair s, s’ of distinct states there is an input sequence in W that leads to different output sequences from s and s’. Note: –we can easily extend this to non-deterministic models.
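A hedged sketch (not from the slides) of the defining property: W is a characterization set if every pair of distinct states is separated by at least one sequence in W.

# Sketch: checking that W separates every pair of distinct states. Toy machine as above.
from itertools import combinations

M = {
    ("s1", "a"): ("s2", "0"), ("s1", "b"): ("s1", "1"),
    ("s2", "a"): ("s3", "0"), ("s2", "b"): ("s1", "1"),
    ("s3", "a"): ("s1", "1"), ("s3", "b"): ("s3", "0"),
}
STATES = ["s1", "s2", "s3"]

def outputs_from(m, state, inputs):
    out = []
    for x in inputs:
        state, y = m[(state, x)]
        out.append(y)
    return tuple(out)

def is_characterization_set(m, states, W):
    return all(
        any(outputs_from(m, s, w) != outputs_from(m, t, w) for w in W)
        for s, t in combinations(states, 2)
    )

# 'a' separates s3 from s1 and s2; 'aa' separates s1 from s2.
print(is_characterization_set(M, STATES, [["a"], ["a", "a"]]))  # True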

SEBASE 2007 Relative merits
If we have a distinguishing sequence then we can use this for every state.
Every (minimal) FSM has a characterization set, but we may need to run multiple tests to check a transition.
Practitioners report that many real FSMs have (short) UIOs.

SEBASE 2007 Complexity results
We can generate a characterization set in O(n^2) time.
The problem of deciding whether an FSM has a UIO/DS is PSPACE-complete.
The length of a shortest UIO/DS can be exponential in terms of the number of states of the FSM.
But, instead of a DS we can use an adaptive DS:
–There is an O(n^2) algorithm that decides whether an FSM has an adaptive DS and, if it does, returns such an adaptive DS.

SEBASE 2007 Work using Genetic Algorithms These have been used to search for UIOs. Two types of fitness function have been used: –A ‘precise’ approach in which the fitness function rewards ‘state splitting’ (partitioning of state space by input sequence) –An ‘efficient’ approach that rewards the inclusion of transitions that have ‘less common’ input/output pairs

SEBASE 2007 Results
Both outperform random.
The more ‘precise’ approach has been evaluated only on small FSMs but was extended to enhance the diversity of results (so more likely to find multiple UIOs for each state).
The more ‘efficient’ approach has been evaluated on larger FSMs but with no consideration of diversity.

SEBASE 2007 Test generation based on coverage
In order to test a transition t it is sufficient to:
–Use a preamble to reach the start state of t
–Apply the input of t
–Check the final state of t (if required)
–Return to the initial state using a postamble/reset
We can do this for every transition and automate the process.
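A sketch of this recipe (not from the slides), under the assumptions that the start state is reached by a shortest preamble computed from the model and that the final state is checked with a given state-identification sequence (a DS or UIO); the reset/postamble is left implicit. The toy machine and helper names are illustrative.

# Sketch of one transition test: preamble + the transition's input + state check.
from collections import deque

M = {
    ("s1", "a"): ("s2", "0"), ("s1", "b"): ("s1", "1"),
    ("s2", "a"): ("s3", "0"), ("s2", "b"): ("s1", "1"),
    ("s3", "a"): ("s1", "1"), ("s3", "b"): ("s3", "0"),
}
INPUTS = ["a", "b"]

def preamble(m, initial, target):
    """Shortest input sequence from `initial` to `target`, by BFS over the model."""
    queue, seen = deque([(initial, [])]), {initial}
    while queue:
        state, path = queue.popleft()
        if state == target:
            return path
        for x in INPUTS:
            nxt, _ = m[(state, x)]
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [x]))
    return None

def transition_test(m, initial, start, inp, check_seq):
    """Input sequence testing one transition and checking its final state."""
    return preamble(m, initial, start) + [inp] + check_seq

# Test the transition (s2, a/0, s3), checking the final state with the DS 'aa'.
print(transition_test(M, "s1", "s2", "a", ["a", "a"]))  # ['a', 'a', 'a', 'a']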

SEBASE 2007 Example
To test transition (s_2, s_3, a/0) we could:
–Apply a to reach s_2
–Apply input a from the transition
–Apply the distinguishing sequence aba
–Then reset
[Diagram: the example FSM with states s1–s5 and transitions labelled a/0, a/1, b/0, b/1]

SEBASE 2007 Efficient test generation We could follow a transition test by another transition test. We might produce one sequence to test all of the transitions, benefits including: –Fewer test inputs –Longer test sequence so more likely to find faults due to extra states.

SEBASE 2007 A simple approach
The following produces a single sequence:
–Start with the preamble and the test for a transition t_1.
–Now choose another transition t_2, move to its start state and then add a test for t_2.
–Repeat until we have included tests for every transition.
How do we choose a best order in which to do this?
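A rough sketch of this chaining (not the algorithm the slides refer to): after each transition test we transfer, via a shortest path in the model, to the start state of some still-untested transition, greedily preferring the nearest one. It assumes the toy machine is strongly connected and uses a distinguishing sequence for the state check.

# Sketch: greedily chaining transition tests into a single input sequence.
from collections import deque

M = {
    ("s1", "a"): ("s2", "0"), ("s1", "b"): ("s1", "1"),
    ("s2", "a"): ("s3", "0"), ("s2", "b"): ("s1", "1"),
    ("s3", "a"): ("s1", "1"), ("s3", "b"): ("s3", "0"),
}
INPUTS = ["a", "b"]
DS = ["a", "a"]  # a distinguishing sequence of the toy machine

def apply_inputs(m, state, inputs):
    for x in inputs:
        state, _ = m[(state, x)]
    return state

def transfer(m, src, dst):
    """Shortest input sequence from src to dst (assumes the machine is strongly connected)."""
    queue, seen = deque([(src, [])]), {src}
    while queue:
        s, path = queue.popleft()
        if s == dst:
            return path
        for x in INPUTS:
            nxt, _ = m[(s, x)]
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [x]))

def chained_tests(m, initial):
    untested = set(m)                 # pending transitions, as (state, input) pairs
    sequence, state = [], initial
    while untested:
        # greedily pick an untested transition whose start state is closest
        start, inp = min(untested, key=lambda t: len(transfer(m, state, t[0])))
        piece = transfer(m, state, start) + [inp] + DS
        sequence += piece
        state = apply_inputs(m, state, piece)
        untested.remove((start, inp))
    return sequence

print(chained_tests(M, "s1"))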

SEBASE 2007 Representing a transition test
For transition (s_5, s_3, b/0), using distinguishing sequence aba we can add an extra edge:
–From s_5
–Input baba
–To s_1
[Diagram: the example FSM with states s1–s5 and transitions labelled a/0, a/1, b/0, b/1]

SEBASE 2007 Solving the optimisation problem
Our problem can be seen as: find a shortest sequence that contains every ‘extra’ edge.
This is an instance of the (NP-hard) Rural Postman Problem.
There is an algorithm that is optimal if:
–There is a reset to be tested; or
–Every state has a self-loop.
This approach has been implemented in tools.

SEBASE 2007 Overlap The Rural Postman approach produces separate tests for the transitions and connects these. However, the transition tests might overlap. There are algorithms/heuristics that utilize this.

SEBASE 2007 Resets We may have to include resets in a test sequence. It has been found that resets: –Can be difficult to implement, possibly requiring human involvement and reconfiguration of a system. –Can make it less likely that faults due to additional states will be found. However, we can find a test sequence that has fewest resets – and can do so in polynomial time.

SEBASE 2007 A problem with coverage No guarantees: –Even if we have checked the final state of every transition we may fail to detect faulty implementations. This is because: –The methods to check states work in the model but might not work in the implementation. The (limited) empirical evidence suggests that: –These approaches are more effective than transition coverage –They often do not provide full fault coverage even if there are no additional states.

SEBASE 2007 Papers
These include:
–A. V. Aho, A. T. Dahbura, D. Lee, and M. U. Uyar, 1991, An Optimization Technique for Protocol Conformance Test Generation Based on UIO Sequences and Rural Chinese Postman Tours, IEEE Trans. on Communications, 39(11).
–R. M. Hierons, 2004, Using a minimal number of resets when testing from a finite state machine, Information Processing Letters, 90(6).
–M. Kapus-Kolar, 2007, Test as Collection of Evidence: An Integrated Approach to Test Generation for Finite State Machines, The Computer Journal, 50(3).
–K. Derderian, R. M. Hierons, M. Harman, and Q. Guo, 2006, Automated Unique Input Output sequence generation for conformance testing, The Computer Journal, 49(3).
–Q. Guo, R. M. Hierons, M. Harman, and K. Derderian, 2005, Constructing Multiple Unique Input/Output Sequences Using Metaheuristic Optimisation Techniques, IEE Proceedings – Software, 152(3).

SEBASE 2007 Fault Models for FSMs

SEBASE 2007 Fault Models A fault model is a set F of models such that: –The tester believes that the implementation behaves like some (unknown) element of F. Fault models allow us to reason about test effectiveness: –If the system under test passes a test suite T then it must be equivalent to one of the members of F that passes T. Similar to Test Hypotheses and mutation testing.

SEBASE 2007 Test generation using fault models The aim is: –Produce a test suite T such that no faulty member of F passes T. If our assumption is correct then: –If the implementation passes T then it must be correct So, testing can show the absence of bugs (relative to a fault model).

SEBASE 2007 Fault models for FSMs
The standard fault model is:
–The set F_m of FSMs with the same input and output alphabets as the specification/model M and no more than m states, some predefined m.
A test suite is a checking experiment if it determines correctness relative to F_m.
A checking experiment is a checking sequence if it contains only one sequence.

SEBASE 2007 Generating a checking experiment
There are algorithms for producing a checking experiment using a characterization set or UIOs:
–Given fault model F_m and FSM M with n states, these are exponential (or worse) in m-n.
There are polynomial time algorithms for producing a checking sequence if:
–our FSM M has a known distinguishing sequence and m=n.
However:
–There is no known efficient algorithm for producing a shortest checking sequence, and deciding whether there is one is PSPACE-complete.
–There is a polynomial algorithm for minimizing the number of resets when using a distinguishing sequence.

SEBASE 2007 Papers
These include:
–T. S. Chow, 1978, Testing Software Design Modeled by Finite-State Machines, IEEE Trans. on Software Engineering, 4(3).
–R. M. Hierons and H. Ural, 2006, Optimizing the Length of Checking Sequences, IEEE Trans. on Computers, 55(5).
–J. Chen, R. M. Hierons, H. Ural, and H. Yenigun, 2005, Eliminating Redundant Tests in a Checking Sequence, 17th IFIP International Conference on Testing Communicating Systems (TestCom 2005), Montreal, Canada, LNCS volume 3502.

SEBASE 2007 Another FSM problem
A homing sequence is an input sequence with the property that, if we apply it in an unknown state of our FSM, the output tells us the state reached at the end.
The problem of finding a shortest homing sequence is NP-hard.
However, we can construct homing sequences in O(n^2) time.
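A small sketch (not from the slides) of the defining check: an input sequence is homing if the observed output sequence always determines the final state, whatever the unknown starting state was. Toy machine as in the earlier sketches.

# Sketch: checking whether an input sequence is a homing sequence.
M = {
    ("s1", "a"): ("s2", "0"), ("s1", "b"): ("s1", "1"),
    ("s2", "a"): ("s3", "0"), ("s2", "b"): ("s1", "1"),
    ("s3", "a"): ("s1", "1"), ("s3", "b"): ("s3", "0"),
}
STATES = ["s1", "s2", "s3"]

def trace(m, state, inputs):
    out = []
    for x in inputs:
        state, y = m[(state, x)]
        out.append(y)
    return tuple(out), state

def is_homing(m, states, inputs):
    final_for_output = {}
    for s in states:
        out, final = trace(m, s, inputs)
        if final_for_output.setdefault(out, final) != final:
            return False  # same output from two starting states, different final states
    return True

print(is_homing(M, STATES, ["a", "a"]))  # True: 'aa' is a DS here, so certainly homing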

SEBASE 2007 Minimization/learning The following are NP-hard: –Minimize incompletely specified FSM –Learn minimal FSM from examples.

SEBASE 2007 Testing from Extended Finite State Machines

SEBASE 2007 Extended finite state machines FSMs with: –Memory (variables) –Inputs with parameters –Outputs with parameters –Guards on transitions Languages such as SDL and Statecharts have more features.
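One possible way (an illustrative sketch, not a construct from the slides) to encode an EFSM transition is as a source/target pair plus a guard over the memory and the input parameter, and an action that updates the memory and produces the output. The counter example and all names are hypothetical.

# Sketch: a guarded EFSM transition with memory and a parameterised input.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Transition:
    source: str
    target: str
    input: str
    guard: Callable[[Dict[str, int], int], bool]   # (memory, input parameter) -> enabled?
    action: Callable[[Dict[str, int], int], int]   # updates memory, returns the output

def inc_action(mem, p):
    mem["count"] += p
    return mem["count"]

# The transition is enabled only while the counter stays within a bound.
t = Transition("idle", "idle", "inc",
               guard=lambda mem, p: mem["count"] + p <= 10,
               action=inc_action)

mem = {"count": 0}
if t.guard(mem, 3):
    print(t.action(mem, 3))  # 3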

SEBASE 2007 Just like FSMs We can apply FSM based test techniques by doing one of the following: –expanding the data to give an FSM –abstracting to an FSM, possibly after partitioning.

SEBASE 2007 Testing from EFSMs One approach is: –Choose a test criterion –Find a set of paths through EFSM that satisfy the criterion –Generate an input sequence for each path. Note: –FSM techniques produce sequences that test control structure, we can add sequences for dataflow. There is a problem: we might choose infeasible paths.

SEBASE 2007 Testability transformations
We could rewrite the EFSM so that:
–all paths are feasible; or
–there is a known sufficient set of feasible paths.
In general, this problem is uncomputable but can be solved when:
–All assignments and guards are linear.
The approach has been applied to real protocols (Uyar and co-authors).

SEBASE 2007 General case We can split states on the basis of: –Transition guards (preconditions) –Transition postconditions However: –Analysis requires us to reason about predicates –May lead to exponential increase in number of states. Framework has been described but little more.

SEBASE 2007 Estimating feasibility A transition can make a sequence infeasible through its guard. We might estimate how ‘difficult’ it is to satisfy a guard. Use the score for each transition to estimate the ‘feasibility’ of a sequence. This can direct a search towards ‘better’ sequences.
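One plausible shape of such an estimate (a sketch only; not the fitness function used in the work the slides describe) is to give each guard a penalty reflecting how hard it typically is to satisfy, and to score a candidate sequence by the sum of the penalties of its transitions.

# Sketch of a guard-difficulty score used to estimate the feasibility of a path.
# The penalty values and guard categories are illustrative assumptions.
GUARD_PENALTY = {"true": 0.0, "inequality": 1.0, "equality": 3.0}

def path_feasibility_estimate(path_guards):
    """Lower scores suggest a path that is more likely to be feasible."""
    return sum(GUARD_PENALTY[g] for g in path_guards)

# A path whose transition guards are: true, x > 0, x == y
print(path_feasibility_estimate(["true", "inequality", "equality"]))  # 4.0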

SEBASE 2007 Initial results
In experiments with:
–a simple function that estimates ‘feasibility’
–two EFSMs
we get a correlation between the estimate of feasibility and actual feasibility.

SEBASE 2007 Papers
These include:
–M. A. Fecko, M. U. Uyar, A. Y. Duale, and P. D. Amer, 2003, A Technique to Generate Feasible Tests for Communications Systems with Multiple Timers, IEEE/ACM Trans. on Networking, 11(5).
–A. Y. Duale and M. U. Uyar, 2004, A Method Enabling Feasible Conformance Test Sequence Generation for EFSM Models, IEEE Trans. on Computers, 53(5).
–R. M. Hierons, T.-H. Kim, and H. Ural, 2004, On the Testability of SDL Specifications, Computer Networks, 44(5).

SEBASE 2007 Conclusions
FSM models can assist test automation.
They can help us to reason about test effectiveness.
However, there are many computationally hard problems.

SEBASE 2007 Questions?