Doctoral course ’Advanced topics in Embedded Systems’, Lyngby '08
Synthesis of Test Purpose Directed Reactive Planning Tester for Nondeterministic Systems
Jüri Vain, Dept. of Computer Science, Tallinn University of Technology

Lecture plan
- Preliminaries
  - Model-Based Testing
  - Online testing
- Reactive Planning Tester (RPT)
- Constructing the RPT
- Performance of the approach
- Demo

Context: Model-Based Testing
- Given a specification model and an Implementation Under Test (IUT),
- find whether the IUT conforms to the specification.

Model-Based Testing
- The specification needs to be formalised. We assume models are given as Extended Finite State Machines (XTA...).

Online testing
- Denotes test generation and execution algorithms that compute successive stimuli at runtime, directed by
  - the test purpose and
  - the observed outputs of the IUT.

Online testing
- Advantages: the state-space explosion problem is reduced, because only a limited part of the state space needs to be kept track of at any point in time.
- Drawbacks: exhaustive planning is difficult due to the limitations of the computational resources available at the time of test execution.

Online testing: spectrum of methods
- Random walk (RW): select test stimuli at random
  - inefficient: based on random exploration of the state space
  - leads to test cases that are unreasonably long
  - may leave the test purpose unachieved
- RW with reinforcement learning ("anti-ant")
  - the exploration is guided by some reward function
- Planning with limited horizon!
- Exploration with exhaustive planning
  - model checking (MC) can provide an optimal witness trace
  - the size of the model is critical in explicit-state MC
  - state explosion in "combination lock" or deep-loop models

Reactive Planning
- Instead of a complete plan with branches, a set of decision rules is derived.
- The rules direct the system towards the planning goal.
- Just one subsequent input is computed at every step, based on the current context.
- The planning horizon can be adjusted.

Reactive Planning [Brian C. Williams and P. Pandurang Nayak, '96 and '97]
- A reactive planner works in 3 phases:
  - Mode identification (MI)
  - Mode reconfiguration (MR)
  - Model-based reactive planning (MRP)
- MI and MR set up the planning problem by identifying the initial and target states
- MRP generates the plan

Reactive Planning Tester
- MI – Where are we? Observe the output of the IUT to determine the current mode (state of the model).
- MR – Where do we want to go? Determined by the still unsatisfied subgoals.
- MRP – How do we get there? Gain guards choose the next transition with the shortest path to the next subgoal.

Reactive Planning Tester
- Key assumptions:
  - Testing is guided by the (EFSM) model of the tester and the test purpose.
  - Stimuli to the IUT are tester outputs generated by model execution.
  - Responses from the IUT are inputs to the tester model.
  - Decision rules of reactive planning are encoded in the guards of the transitions of the tester model.
  - The rules are constructed by offline analysis based on the given IUT model and the test purpose.

The Model
- The IUT model is presented as an output-observable nondeterministic EFSM in which all paths are feasible.
- An algorithm for making an EFSM feasible is given in [Duale, 2004].

Example: Nondeterministic EFSM
[Figure: an EFSM with states s1, s2, s3 and transitions e0: i0/o0, e1: i0/o1, e2: i2/o2, e3: i3/o3, e4: i3/o4, e5: i5/o5, e6: i6/o6, e7: i7/o7]
- i0 and i3 are output-observable nondeterministic inputs.

Encoding the Test Purpose in the IUT Model
- Trap: a boolean variable assignment attached to a transition of the IUT model.
- A trap variable is initially set to false.
- The trap update functions are executed (the traps are set to true) when the transition is visited.

Add Test Purpose
[Figure: the same EFSM with each transition ek decorated with a trap update tk = true, e.g. e0: i0/o0, t0 = true, ..., e7: i7/o7, t7 = true; declarations bool t0 = false; ... bool t7 = false;]
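
To make the trap encoding concrete, here is a minimal Python sketch of an EFSM decorated with trap variables. The dictionary representation and the helper names (fire, test_purpose_reached) are my own, and the state topology below is illustrative only, since the original diagram is not fully recoverable from the transcript.

```python
# Illustrative sketch of an EFSM decorated with trap variables
# (assumed representation; the concrete topology is made up).

transitions = {
    # name: (source, target, input, output)
    "e0": ("s1", "s2", "i0", "o0"),
    "e1": ("s1", "s3", "i0", "o1"),   # i0 is output-observable nondeterministic
    "e2": ("s1", "s1", "i2", "o2"),
    "e3": ("s2", "s3", "i3", "o3"),
    "e4": ("s2", "s2", "i3", "o4"),   # i3 is output-observable nondeterministic
    "e5": ("s2", "s1", "i5", "o5"),
    "e6": ("s3", "s3", "i6", "o6"),
    "e7": ("s3", "s1", "i7", "o7"),
}

# Test purpose: bool t0 = false; ... bool t7 = false; (one trap per transition)
traps = {name: False for name in transitions}

def fire(name, state):
    """Fire transition `name` from `state`: execute the trap update t_k = true
    and return the successor state."""
    src, dst, _inp, _out = transitions[name]
    assert state == src, "transition is not enabled in the current state"
    traps[name] = True
    return dst

def test_purpose_reached():
    """The test purpose is satisfied when every trap has been set to true."""
    return all(traps.values())
```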

Model of the tester
- Generated from the IUT model decorated with the test purpose.
- Transition guards encode the rules of online planning.
- 2 types of tester states:
  - active – the tester controls the next move
  - passive – the IUT controls the next move
- 2 types of transitions:
  - observable – the source state is a passive state (guard ≡ true),
  - controllable – the source state is an active state (guard ≡ p_S ∧ p_T, where p_S is the guard of the IUT transition and p_T is the gain guard).
- The gain guard (defined on trap variables) must ensure that only the outgoing edges with maximum gain are enabled in the given state.
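
The active/passive split can be pictured with a small execution-loop sketch. This is an assumed structure in Python with hypothetical helper names (observe, match_observable, controllable, stimulate), not the actual tester implementation.

```python
# Sketch of online execution of the tester model: passive states wait for the
# IUT, active states fire a controllable transition whose guard p_S /\ p_T holds.

def run_tester(model, iut, state, traps):
    """`model` and `iut` are placeholders for the tester model and an IUT adapter."""
    while not all(traps.values()):                 # test purpose not yet reached
        if model.is_passive(state):
            output = iut.observe()                 # IUT controls the next move
            edge = model.match_observable(state, output)   # guard == true
        else:
            # Controllable edges: keep those whose IUT guard p_S and
            # gain guard p_T (defined on the trap variables) both hold.
            enabled = [e for e in model.controllable(state)
                       if e.p_S() and e.p_T(traps)]
            if not enabled:
                break                              # remaining traps unreachable
            edge = enabled[0]
            iut.stimulate(edge.input)              # tester output = stimulus to IUT
        edge.update_traps(traps)                   # execute the trap updates
        state = edge.target
    return state, traps
```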

Construction of the Tester
[Figure: the IUT EFSM with trap updates (left) and the skeleton of the tester automaton with states s1, sa, s2, sb, sc, s3, sd, se, sf (right)]

Add IO and Gain Guards
[Figure: the tester automaton with observable transitions e^o_k: o_k / t_k = true (e.g. e^o_0, e^o_1, e^o_2, e^o_3, e^o_5, e^o_6, e^o_7) and controllable transitions e^c_k: [p^c_k(T)] - / i_k (e.g. e^c_01, e^c_2, e^c_34, e^c_5, e^c_6, e^c_7)]

Constructing the gain guards (GG): intuition
- The GG must guarantee that each transition enabled by the GG is a prefix of some locally optimal (w.r.t. the test purpose) path;
- the tester should terminate after the test goal is reached or all unvisited traps are unreachable from the current state;
- to have a quantitative measure of the gain of executing any transition e, we define a gain function g_e that returns a distance-weighted sum of the unsatisfied traps that are reachable along e.

Examples of swarm intelligence: Collective Hunting Strategies
- Benefits of collective hunting:
  - maximizing prey localization
  - minimizing prey-catching effort

Constructing the gain guards: the gain function
- g_e = 0, if it is useless to fire the transition e from the current state with the current variable bindings;
- g_e > 0, if firing the transition e from the current state with the current variable bindings visits, or leads closer to, at least one unvisited trap;
- g_ei > g_ej for transitions e_i and e_j with the same source state, if taking the transition e_i leads to unvisited traps with a smaller distance than taking the transition e_j;
- having a gain function g_e with these properties, define the GG: p_T ≡ (g_e = max_k g_ek) ∧ g_e > 0.
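
Combining the "distance-weighted sum of unsatisfied traps" description with the guard definition above, a hedged Python sketch of the gain function and gain guard might look as follows. The 1/(1 + distance) weighting, the scaling constant C and the tree representation are assumptions, not the exact formulas from the paper.

```python
# Hedged sketch: g_e as a distance-weighted sum of unsatisfied traps reachable
# along e, and p_T enabling only the maximum-gain outgoing transitions.

C = 1.0   # scaling constant 'c' (value is an assumption)

def gain(tree, traps):
    """tree: list of (trap_name, distance) pairs taken from the reduced
    shortest-paths tree TR_e rooted at transition e."""
    return sum(C / (1 + dist) for trap, dist in tree if not traps[trap])

def gain_guard(e, outgoing, trees, traps):
    """p_T for transition e: enabled iff g_e is maximal among the outgoing
    controllable transitions of the current state and g_e > 0."""
    g = {ek: gain(trees[ek], traps) for ek in outgoing}
    return g[e] == max(g.values()) and g[e] > 0
```

With this reading, the enabled transitions are exactly the locally optimal prefixes described above, and the guard becomes false everywhere once no reachable trap remains unsatisfied, which terminates the test.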

Constructing the Gain Functions: shortest path trees
- The reachability problem for trap-labelled transitions can be reduced to a single-source shortest-path problem.
- The arguments of the gain function g_e are:
  - the shortest-path tree TR_e with root node e,
  - V_T – the vector of trap variables.
- To construct TR_e we create a dual graph G = (V_D, E_D) of the tester, where
  - the vertices V_D of G correspond to the transitions of M_T,
  - the edges E_D of G represent the pairs of subsequent transitions sharing a state in M_T (2-switches).
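
As a sketch of this reduction (assuming a simple edge-list representation in Python), the dual graph can be built directly from the transition relation and the shortest-path tree obtained by breadth-first search, matching the BFS cost quoted on the complexity slide:

```python
from collections import deque

# Build the dual graph G = (V_D, E_D): vertices are the transitions of M_T,
# and there is a dual edge (e1, e2) whenever e2 can directly follow e1
# (they share a state: target(e1) == source(e2)), i.e. a 2-switch.

def dual_graph(transitions):
    """transitions: dict name -> (source_state, target_state)."""
    dual = {name: [] for name in transitions}
    for e1, (_, tgt1) in transitions.items():
        for e2, (src2, _) in transitions.items():
            if tgt1 == src2:
                dual[e1].append(e2)        # e2 can be taken right after e1
    return dual

def shortest_path_tree(dual, root):
    """Single-source BFS from transition `root`; returns the distance and the
    BFS-tree parent of every reachable transition. Runs in O(|V_D| + |E_D|);
    unit hyper-edge weights are assumed in this sketch."""
    dist, parent = {root: 0}, {root: None}
    queue = deque([root])
    while queue:
        e = queue.popleft()
        for nxt in dual[e]:
            if nxt not in dist:
                dist[nxt] = dist[e] + 1
                parent[nxt] = e
                queue.append(nxt)
    return dist, parent
```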

Constructing the Gain Guards: shortest path tree (example)
[Figures: the dual graph of the tester model; the shortest-paths tree (left) and the reduced shortest-paths tree (right) from the transition e^c_01]

Constructing the gain guards: the gain function
- Represent the reduced tree TR(e_i, G) as a set of elementary sub-trees, each specified by a production α_i → ∧_{j ∈ {1,..,n}} α_j.
- Rewrite the right-hand sides of the productions as arithmetic terms (3), where
  - t̄_i – the trap variable t_i lifted to type N,
  - c – a constant for scaling the numerical value of the gain function,
  - d(α_0, α_i) – the distance between vertices α_0 and α_i, where l is the number of hyper-edges on the path between α_0 and α_i and w_j is the weight of the j-th hyperedge.

Constructing the gain guards: the gain function (continued)
- For each symbol α_i denoting a leaf vertex in TR(e,G), define a production rule (4).
- Apply the production rules (3) and (4), starting from the root symbol α_0 of TR(e,G), until all non-terminal symbols α_i are substituted with terms that include only the terminal symbols t̄_i and d(α_0, α_i).
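
The arithmetic terms themselves appeared only as images and cannot be recovered verbatim from this transcript. One plausible shape for the fully rewritten gain function, consistent with the earlier description of g_e as a distance-weighted sum of unsatisfied traps (reading t̄_i as 1 when trap t_i is still unsatisfied and 0 otherwise), is:

```latex
% Hedged reconstruction, not the literal equations (3)-(4) from the slides.
g_e \;=\; c \sum_{\alpha_i \in TR(e,G)\setminus\{\alpha_0\}}
          \frac{\bar{t}_i}{d(\alpha_0,\alpha_i)},
\qquad
d(\alpha_0,\alpha_i) \;=\; \sum_{j=1}^{l} w_j .
```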

Example: Gain Functions

Example: Gain Guards

Complexity of constructing and running the tester
- The complexity of the synthesis of the reactive planning tester is determined by the complexity of constructing the gain functions.
- For each gain function, the cost of finding TR_e by breadth-first search is O(|V_D| + |E_D|) [Cormen], where
  - |V_D| = |E_T| – the number of transitions of M_T,
  - |E_D| – the number of transition pairs of M_T (bounded by |E_S|^2).
- For all controllable transitions of M_T, the upper bound on the complexity of computing the gain functions is O(|E_S|^3).
- At runtime, each choice by the tester takes O(|E_S|^2) arithmetic operations to evaluate the gain functions.

Experimental results: All Transitions Test Purpose

Experimental Results: One Transition Test Purpose

Demo: "combination lock"
- Comparison of methods

Summary
- RP always drives the execution towards the still unsatisfied subgoals.
- Efficiency of planning:
  - The number of rules that need to be evaluated at each step is relatively small (equal to the number of outgoing transitions of the current state).
  - The execution of decision rules is significantly faster than looking through all potential alternatives at runtime.
  - Leads to a test sequence whose length is close to optimal.

Questions?