Memory-Efficient Algorithms for the Verification of Temporal Properties. C. Courcoubetis (Inst. of Comp. Sci., FORTH, Crete, Greece), M. Vardi (IBM Almaden), P. Wolper (Un. de Liège), M. Yannakakis (AT&T Bell Labs).

Presentation transcript:

Memory-Efficient Algorithms for the Verification of Temporal Properties C. Courcoubetis Inst. of Comp. Sci. FORTH, Crete, Greece M. Vardi IBM Almaden P. Wolper Un. de Liège M. Yannakakis AT&T Bell Labs Presented By: Prateem Mandal

Outline of the talk Introduction and previous work Formal Problem Definition Analysis and critique of previous approaches The Algorithms Questions and Answers

Introduction and previous work The paper addresses the problem of designing memory-efficient algorithms for verifying temporal properties of finite-state programs modeled as Büchi automata; the problem thus reduces to checking emptiness of the automaton. By allowing the algorithms to err with small probability, the paper obtains algorithms whose randomly accessed memory is O(n) bits.

Previous work Reachability Analysis. Theorem Proving. Model Checking. The problem of state-space explosion was addressed by the use of hashing (Holzmann's technique). This paper extends that technique to find bad cycles, as opposed to the bad states found by Holzmann's technique.

Formal Problem Definition

Formal Problem Definition contd..

Characterization of memory requirements Memory requirements are characterized by the data structures used by the algorithm, which are of two types: randomly accessed and sequentially accessed. A hash table needs randomly accessed memory, while a stack or a queue needs only sequentially accessed memory.

Characterization of memory requirements contd.. The bottleneck in the performance of verification algorithms is the amount of randomly accessed memory used. "Holzmann observed that there is a tremendous speed-up for an algorithm implemented so that its randomly accessed memory requirements do not exceed the main memory available in the system (since sequentially accessed memory can be implemented in secondary storage)."

The Basic Method Holzmann considered how to perform reachability analysis using the least amount of randomly accessed memory. The method is basically a DFS that marks visited states in an m-bit array indexed by a hash of the state. Since collisions are not detected, there is a possibility that a state will be missed.

The Basic Method contd.. The key assumption is that one can choose a large enough value of m that collisions become arbitrarily unlikely. Holzmann claims that a table size m = O(n) suffices, where n is the number of reachable states.
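The basic method can be sketched in a few lines of Python. This is an illustrative reconstruction, not code from the paper: the successor map `succ`, the hash `h`, the table size `m`, and the example line graph are all assumptions made for the demo, and modulo-`m` indexing into a byte array stands in for the m-bit array.

```python
def bitstate_dfs(s0, succ, h, m):
    """DFS over the states reachable from s0, marking visits only via an
    m-slot bit array. Because just h(state) is stored, two states that
    collide under h are indistinguishable: the second one (and its
    unexplored successors) may be silently skipped. Returns the number
    of states actually visited."""
    M = bytearray(m)               # the m-bit mark array (one byte per bit, for simplicity)
    M[h(s0) % m] = 1
    stack = [s0]
    visited = 0
    while stack:
        v = stack.pop()
        visited += 1
        for w in succ[v]:
            if M[h(w) % m] == 0:   # "unvisited" as far as the hash can tell
                M[h(w) % m] = 1
                stack.append(w)
    return visited

# Example: a line graph 0 -> 1 -> ... -> 9, identity hash.
succ = {i: [i + 1] for i in range(9)}
succ[9] = []
count = bitstate_dfs(0, succ, lambda s: s, 16)       # table large enough: no collisions
count_lossy = bitstate_dfs(0, succ, lambda s: s, 4)  # too-small table: states 4..9 collide with 0..3
```

With m = 16 all ten states are visited; with m = 4 the search silently stops after four states, illustrating exactly the kind of miss the analysis below quantifies.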

Analysis of the Basic Method Is the claim true? Let U be the namespace of the states, with |U| >> n. For complete reachability analysis the requirement is m = O(n log |U|). Why?

Analysis of the Basic Method contd.. From a probabilistic point of view, the number of mappings from the set S = {1,…,n} to {1,…,m} is m^n, of which m!/(m-n)! are one-to-one; for n << m the fraction of one-to-one mappings is approximately e^(-n^2/2m). Thus partial reachability can be achieved with O(n log n) bits of memory: first hash the n reachable states into the set {1,…,m} with m = O(n^2), so that collisions are unlikely, and then do complete reachability analysis treating the namespace as having size m (each stored value then takes O(log n) bits).
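As a quick numerical sanity check (mine, not from the slides), the exact fraction of one-to-one mappings, (m!/(m-n)!)/m^n, can be compared against the birthday-style approximation e^(-n(n-1)/2m); the sizes n = 100 and m = 10^6 are hypothetical example values.

```python
import math

def injective_prob(n, m):
    """Exact probability that n uniform hash values in {1,...,m} are
    pairwise distinct: (m!/(m-n)!) / m^n, computed as a running product
    prod_{i=0}^{n-1} (m-i)/m to avoid huge factorials."""
    p = 1.0
    for i in range(n):
        p *= (m - i) / m
    return p

n, m = 100, 10**6                            # hypothetical example sizes
exact = injective_prob(n, m)
approx = math.exp(-n * (n - 1) / (2 * m))    # birthday-style approximation
```

For these sizes both values are about 0.995, i.e., a table quadratically larger than the state count keeps the collision probability small, which is the point of the m = O(n^2) step above.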

Analysis of the Basic Method contd.. Holzmann goes a step further and uses m = O(n). The assumption here is that there exists a hash function that keeps the collision probability arbitrarily small under this constraint. This is not supported by the above analysis; it can hold only if the hash function exploits some particular structure of the state space U, and that assumption is not general enough to carry over to algorithms for finding strongly connected components.

The Algorithms S is a stack that stores the path from the root to the present node. Q is a queue that holds the reachable members of F in postorder. These two data structures are sequentially accessed and therefore do not count toward the randomly accessed memory bound. M is a bit array indexed by the hash values 1,…,m and is used for marking states.

Algorithm A: part 1
1. Initialize: S := [s0], M := 0, Q := null;
2. Loop: while S != null do begin
     v := top(S);
     if M[h(w)] = 1 for all w in succ(v) then begin
       pop v from S;
       if v in F then insert v into Q;
     end
     else begin
       let w be the first member of succ(v) with M[h(w)] = 0;
       M[h(w)] := 1;
       push w into S;
     end
   end

Algorithm A: part 2
1. Initialize: S := null, M := 0.
2. Loop: while Q != null do begin
     f := head(Q); remove f from Q;
     push f into S;
     while S != null do begin
       v := top(S);
       if f in succ(v) then halt and return "YES";
       if M[h(w)] = 1 for all w in succ(v) then pop v from S
       else begin
         let w be the first member of succ(v) with M[h(w)] = 0;
         M[h(w)] := 1;
         push w into S;
       end
     end
   end
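The two passes above can be sketched as runnable Python, under the simplifying assumption of an explicit successor dictionary and an identity hash (so no collisions occur and the behaviour is exact); with a lossy hash the same code may miss an accepting cycle but never reports a spurious one. The graph encoding and parameter names are illustrative choices, not from the paper.

```python
from collections import deque

def algorithm_a(s0, succ, F, h, m):
    """Pass 1 collects the reachable accepting states in postorder;
    pass 2 runs a single shared marked search, seeded from each accepting
    state f in that order, looking for an edge back to the current seed f."""
    # Pass 1: DFS from s0, postorder collection into Q.
    M = [0] * m
    S, Q = [s0], deque()
    M[h(s0) % m] = 1
    while S:
        v = S[-1]
        w = next((u for u in succ[v] if M[h(u) % m] == 0), None)
        if w is None:
            S.pop()
            if v in F:
                Q.append(v)          # postorder: appended when finished
        else:
            M[h(w) % m] = 1
            S.append(w)
    # Pass 2: fresh mark array, shared across all seeds (the amortization
    # that keeps total work linear).
    M = [0] * m
    S = []
    while Q:
        f = Q.popleft()
        S.append(f)
        while S:
            v = S[-1]
            if f in succ[v]:
                return True          # "YES": f lies on a cycle
            w = next((u for u in succ[v] if M[h(u) % m] == 0), None)
            if w is None:
                S.pop()
            else:
                M[h(w) % m] = 1
                S.append(w)
    return False

# Example: 0 -> 1 -> 2 -> 1, with 1 accepting, has an accepting cycle.
has_cycle = algorithm_a(0, {0: [1], 1: [2], 2: [1]}, {1}, lambda s: s, 16)
no_cycle = algorithm_a(0, {0: [1], 1: [2], 2: []}, {2}, lambda s: s, 16)
```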

Lemma 1: Let f_1,…,f_k be the members of Q after the first DFS, i.e., the members of F that are reachable from s0, listed in postorder (f_1 is the first member of F to be reached in postorder, f_k the last). If for some pair f_i, f_j with i < j there is a path from f_i to f_j, then f_i belongs to a nontrivial strongly connected component.

Proof of Lemma 1 Suppose there is a path from f_i to f_j. If no node on this path were marked before f_i, the DFS would have reached f_j from f_i, and f_j would precede f_i in the postorder, contradicting i < j. Therefore some node p on the path was marked before f_i. If p came before f_i in the postorder, then f_j, which is reachable from p, would also come before f_i in the postorder, again a contradiction. Since p was marked before f_i but comes after f_i in the postorder, p must be an ancestor of f_i. Thus f_i can reach an ancestor of itself and therefore belongs to a nontrivial strongly connected component.

Theorem 1: If the second DFS halts and returns "YES", then some reachable node of F belongs to a nontrivial strongly connected component. Conversely, if some reachable node of F belongs to a nontrivial strongly connected component, then the second DFS will return "YES".

Proof of Theorem 1 Suppose the second DFS returns "YES". It is then building a tree with root f_j and has discovered an edge back to f_j, so f_j lies on a cycle. For the converse, let f_j be the reachable member of F with the smallest index j that belongs to a nontrivial strongly connected component, and consider a path p from f_j back to itself. If a node of p were reachable from some f_i with i < j, then f_i would also reach f_j, and by Lemma 1 f_i would belong to a nontrivial strongly connected component, contradicting the choice of f_j. Thus no node of p is marked when f_j is pushed, so the search from f_j traverses p and the back edge to f_j is eventually found.

Algorithm B
1. Initialize: S1 := [s0], S2 := null, M1 := M2 := 0.
2. While S1 != null do begin
     x := top(S1);
     if there is a y in succ(x) with M1[h(y)] = 0 then begin
       let y be the first such member of succ(x);
       M1[h(y)] := 1;
       push y into S1;
     end
     else begin
       pop x from S1;
       if x in F then begin
         push x into S2;
         (inner search continues on the next slide)
       end
     end
   end

Algorithm B contd..
     while S2 != null do begin
       v := top(S2);
       if x in succ(v) then halt and return "YES";
       if M2[h(w)] = 1 for all w in succ(v) then pop v from S2
       else begin
         let w be the first member of succ(v) with M2[h(w)] = 0;
         M2[h(w)] := 1;
         push w into S2;
       end
     end

Algorithm B contd.. The above algorithm requires twice as much randomly accessed memory as Algorithm A. If the automaton is found to be non-empty, an accepted word can be extracted from the stacks S1 and S2; in verification terms, if a protocol is incorrect, the erroneous path can be reproduced. Both algorithms may err because of hash collisions, meaning they can miss some errors, but they will never declare a correct protocol wrong, so they essentially behave like debuggers.
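To illustrate the counterexample extraction just described, here is a hedged Python sketch of Algorithm B that on success returns the contents of S1 and S2 as the error path; the explicit successor dictionary, identity hash, and example graph are my own illustrative choices, not from the paper.

```python
def algorithm_b(s0, succ, F, h, m):
    """Interleaved version: whenever the outer DFS pops an accepting state x,
    an inner marked search from x looks for an edge back to x. On success the
    two stacks spell out a counterexample: S1 + [x] is a path from s0 to x,
    and S2 is a path from x around to a node with an edge back to x."""
    M1, M2 = [0] * m, [0] * m      # twice the mark bits of Algorithm A
    S1 = [s0]
    M1[h(s0) % m] = 1
    while S1:
        x = S1[-1]
        y = next((u for u in succ[x] if M1[h(u) % m] == 0), None)
        if y is not None:
            M1[h(y) % m] = 1
            S1.append(y)
            continue
        S1.pop()                   # x finished by the outer DFS
        if x in F:
            S2 = [x]               # inner search, seeded at the accepting state
            while S2:
                v = S2[-1]
                if x in succ[v]:
                    return S1 + [x], S2   # non-empty: counterexample found
                w = next((u for u in succ[v] if M2[h(u) % m] == 0), None)
                if w is None:
                    S2.pop()
                else:
                    M2[h(w) % m] = 1
                    S2.append(w)
    return None                    # no accepting cycle detected

# Example: 0 -> 1 -> 2 -> 1 with 1 accepting; the returned pair is the
# prefix to the accepting state and the cycle through it.
witness = algorithm_b(0, {0: [1], 1: [2], 2: [1]}, {1}, lambda s: s, 16)
empty = algorithm_b(0, {0: [1], 1: []}, set(), lambda s: s, 16)
```

Because a reported cycle is an actual walk through the graph, any "YES" answer is genuine; only misses can be caused by collisions, matching the debugger-like guarantee stated above.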

Questions and Answers