CS 267: Automated Verification Lecture 3: Fixpoints and Temporal Properties Instructor: Tevfik Bultan

What is a Fixpoint (aka Fixed Point)
Given a function F : D → D, an element x ∈ D is a fixpoint of F if and only if F(x) = x.

Temporal Properties ≡ Fixpoints [Emerson and Clarke 80]
Here are some interesting CTL equivalences:
AG p = p ∧ AX AG p
EG p = p ∧ EX EG p
AF p = p ∨ AX AF p
EF p = p ∨ EX EF p
p AU q = q ∨ (p ∧ AX (p AU q))
p EU q = q ∨ (p ∧ EX (p EU q))
Note that we wrote the CTL temporal operators in terms of themselves and the EX and AX operators.

Functionals
Given a transition system T = (S, I, R), we will define functions from sets of states to sets of states:
– F : 2^S → 2^S
For example, one such function is the EX operator (which computes the precondition of a set of states):
– EX : 2^S → 2^S, which can be defined as: EX(p) = { s | ∃s′. (s, s′) ∈ R and s′ ∈ p }
Abuse of notation: I am using p to denote the set of states which satisfy the property p (i.e., the truth set of p).
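As a concrete illustration (not from the original slides), here is a minimal explicit-state sketch of the EX functional in Python; the helper name ex and the data representation are my own assumptions:

# States are plain strings; the transition relation R is a set of (s, s') pairs.
def ex(R, p):
    """EX(p): states that have at least one successor in p (the pre-image of p under R)."""
    return {s for (s, s_next) in R if s_next in p}

# Example usage: with R = {("s1", "s2"), ("s2", "s2")}, ex(R, {"s2"}) evaluates to {"s1", "s2"}.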

Functionals
Now, we can think of all temporal operators also as functions from sets of states to sets of states. For example:
AX p = ¬ EX(¬ p), or, if we use the set notation, AX p = S − EX(S − p)
Abuse of notation: I will use the set and logic notations interchangeably.
Logic      Set
p ∨ q      p ∪ q
p ∧ q      p ∩ q
¬ p        S − p
False      ∅
True       S
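Continuing the same hypothetical sketch (again, not from the original slides), AX can be obtained from EX by the duality above, and the logic/set correspondence is plain Boolean set algebra:

ex = lambda R, y: {s for (s, s_next) in R if s_next in y}   # EX, as defined earlier

def ax(S, R, p):
    """AX(p): states all of whose successors are in p, via AX p = S - EX(S - p)."""
    return S - ex(R, S - p)

# Logic-to-set correspondence on sets of states:
#   p or q -> p | q,   p and q -> p & q,   not p -> S - p,   False -> set(),   True -> S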

Lattices
The sets of states of the transition system form a lattice:
– lattice: 2^S
– partial order: ⊆
– bottom element: ∅ (alternative notation: ⊥)
– top element: S (alternative notation: ⊤)
– least upper bound (lub, aka join) operator: ∪
– greatest lower bound (glb, aka meet) operator: ∩

Lattices
In general, a lattice is a partially ordered set with a least upper bound operation and a greatest lower bound operation.
The least upper bound a ⊔ b is the smallest element such that a ⊑ a ⊔ b and b ⊑ a ⊔ b.
The greatest lower bound a ⊓ b is the biggest element such that a ⊓ b ⊑ a and a ⊓ b ⊑ b.
A partial order is a reflexive (for all x, x ⊑ x), transitive (for all x, y, z, x ⊑ y ∧ y ⊑ z ⇒ x ⊑ z), and antisymmetric (for all x, y, x ⊑ y ∧ y ⊑ x ⇒ x = y) relation.

Complete Lattices
2^S forms a lattice with the partial order defined as the subset-or-equal relation, the least upper bound operation defined as set union, and the greatest lower bound operation defined as set intersection.
In fact, (2^S, ⊆, ∅, S, ∪, ∩) is a complete lattice, since every set of elements from this lattice has a least upper bound and a greatest lower bound.
Also, note that the top and bottom elements can be defined as:
⊥ = ∅ = ⋂ { y | y ∈ 2^S }
⊤ = S = ⋃ { y | y ∈ 2^S }
This definition is valid for any complete lattice.

An Example Lattice
{ ∅, {0}, {1}, {2}, {0,1}, {0,2}, {1,2}, {0,1,2} }
partial order: ⊆ (subset relation)
bottom element: ∅ = ⊥
top element: {0,1,2} = ⊤
lub: ∪ (union), glb: ∩ (intersection)
[Hasse diagram for the example lattice, showing the transitive reduction of the corresponding partial order relation: {0,1,2} = ⊤ at the top; {0,1}, {0,2}, {1,2} below it; {0}, {1}, {2} below those; and ∅ = ⊥ at the bottom]
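As an aside (not in the original slides), the following Python sketch enumerates this example lattice, computes its bottom and top elements as the intersection and union of all elements, and lists the covering pairs that form the edges of the Hasse diagram; the helper names powerset and hasse_edges are my own:

from itertools import combinations

def powerset(xs):
    """All subsets of xs, as frozensets."""
    xs = list(xs)
    return [frozenset(c) for r in range(len(xs) + 1) for c in combinations(xs, r)]

L = powerset({0, 1, 2})
bottom = frozenset.intersection(*L)   # glb of all elements: frozenset() (the empty set)
top = frozenset.union(*L)             # lub of all elements: frozenset({0, 1, 2})

# Covering pairs (a, b): a is a proper subset of b with nothing strictly in between;
# these are exactly the edges drawn in the Hasse diagram.
hasse_edges = [(a, b) for a in L for b in L
               if a < b and not any(a < c < b for c in L)]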

Temporal Properties ≡ Fixpoints
Based on the equivalence EF p = p ∨ EX EF p, we observe that EF p is a fixpoint of the following function:
F y = p ∨ EX y   (we can also write it as λy. p ∨ EX y)
F(EF p) = EF p
In fact, EF p is the least fixpoint of F, which is written as:
EF p = μy. F y = μy. p ∨ EX y   (μ means least fixpoint)

EF p = μy. p ∨ EX y
Let's prove this. First we have the equivalence EF p = p ∨ EX EF p. Why? Because according to the semantics of EF, EF p holds in a state either if p holds in that state, or if that state has a next state in which EF p holds.
From this equivalence we know that EF p is a fixpoint of the function λy. p ∨ EX y, and since the least fixpoint is the smallest fixpoint, we have:
μy. p ∨ EX y ⊆ EF p

EF p = μy. p ∨ EX y
Next we need to prove that EF p ⊆ μy. p ∨ EX y to complete the proof.
Suppose z is a fixpoint of λy. p ∨ EX y. Then we know that z = p ∪ EX z, which means that EX z ⊆ z, i.e., every state with a successor in z is itself in z. Equivalently, a state outside of z has no successor in z, so no path starting from a state that is outside of z can ever reach a state in z. Since we also have p ⊆ z, any path that can reach p must start from a state in z. Hence, we can conclude that EF p ⊆ z.
Since we showed that EF p is contained in every fixpoint of the function λy. p ∨ EX y, we get EF p ⊆ μy. p ∨ EX y, which completes the proof.

Temporal Properties ≡ Fixpoints
Based on the equivalence EG p = p ∧ EX EG p, we observe that EG p is a fixpoint of the following function:
F y = p ∧ EX y   (we can also write it as λy. p ∧ EX y)
F(EG p) = EG p
In fact, EG p is the greatest fixpoint of F, which is written as:
EG p = νy. F y = νy. p ∧ EX y   (ν means greatest fixpoint)

EG p = νy. p ∧ EX y
Let's prove this too. First we have the equivalence EG p = p ∧ EX EG p. Why? Because according to the semantics of EG, EG p holds in a state if and only if p holds in that state and that state has a next state in which EG p holds.
From this equivalence we know that EG p is a fixpoint of the function λy. p ∧ EX y, and since the greatest fixpoint is the biggest fixpoint, we have:
EG p ⊆ νy. p ∧ EX y

EG p = νy. p ∧ EX y
Next we need to prove that νy. p ∧ EX y ⊆ EG p to complete the proof.
Suppose z is a fixpoint of λy. p ∧ EX y. Then we know that z = p ∩ EX z, which means that z ⊆ p and z ⊆ EX z. Hence, p holds in every state in z, and every state in z has a next state that is also in z. Therefore, from any state in z we can build a path that starts at that state and on which p holds in every state. This means that every state in z satisfies EG p, i.e., z ⊆ EG p.
Since we showed that any fixpoint of λy. p ∧ EX y is contained in EG p, we get νy. p ∧ EX y ⊆ EG p, which completes the proof.

Fixpoint Characterizations
Fixpoint Characterization              Equivalence
AG p   = νy. p ∧ AX y                  AG p   = p ∧ AX AG p
EG p   = νy. p ∧ EX y                  EG p   = p ∧ EX EG p
AF p   = μy. p ∨ AX y                  AF p   = p ∨ AX AF p
EF p   = μy. p ∨ EX y                  EF p   = p ∨ EX EF p
p AU q = μy. q ∨ (p ∧ AX y)            p AU q = q ∨ (p ∧ AX (p AU q))
p EU q = μy. q ∨ (p ∧ EX y)            p EU q = q ∨ (p ∧ EX (p EU q))
All of these fixpoint characterizations can be proved based on the semantics of the temporal operators (as we did for EF p and EG p).

Monotonicity
Function F is monotonic if and only if, for any x and y, x ⊆ y ⇒ F x ⊆ F y.
Note that all the functions we used for representing temporal operators are monotonic:
λy. p ∧ AX y    λy. p ∧ EX y
λy. p ∨ AX y    λy. p ∨ EX y
λy. q ∨ (p ∧ AX y)    λy. q ∨ (p ∧ EX y)
For all these functions, if you give a bigger y as input, the output can only get bigger (or stay the same); it never shrinks.

Monotonicity
One can also define non-monotonic functions, for example: λy. p ∨ EX ¬y
This function is not monotonic: giving a bigger y as input can produce a smaller result.
For functions that are non-monotonic, the fixpoint computation techniques we are going to discuss will not work; for such functions a fixpoint may not even exist.
The functions we defined for the temporal operators are all monotonic because there is no negation in front of the input variable y. In general, if the input variable y appears under an even number of negations, you will get a monotonic function.
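The following quick check (a hypothetical example, reusing the explicit-state EX sketch from earlier) contrasts the monotonic functional λy. p ∨ EX y with the non-monotonic λy. p ∨ EX ¬y on a two-state system:

ex = lambda R, y: {s for (s, s_next) in R if s_next in y}   # EX, as before

S = {"s1", "s2"}
R = {("s1", "s2"), ("s2", "s2")}
p = {"s1"}

f_mono = lambda y: p | ex(R, y)          # y is not negated: monotonic
f_neg  = lambda y: p | ex(R, S - y)      # y appears under a complement: not monotonic

small, big = set(), {"s2"}               # small is a subset of big
assert f_mono(small) <= f_mono(big)      # monotonic: the output can only grow
assert not (f_neg(small) <= f_neg(big))  # here a bigger input produced a smaller output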

Least Fixpoint
Given a monotonic function F, its least fixpoint exists, and it is the greatest lower bound (glb) of all the reductive elements:
μy. F y = ⋂ { y | F y ⊆ y }

Let's prove this property. Let us define z as z = ⋂ { y | F y ⊆ y }.
We will first show that z is a fixpoint of F, and then we will show that it is the least fixpoint, which will complete the proof.
Based on the definition of z, we know that for any y with F y ⊆ y, we have z ⊆ y.
Since F is monotonic, z ⊆ y ⇒ F z ⊆ F y. But since F y ⊆ y, we then have F z ⊆ y. That is, for all y with F y ⊆ y, we have F z ⊆ y.
This implies that F z ⊆ ⋂ { y | F y ⊆ y }, and based on the definition of z, we get F z ⊆ z.

μy. F y = ⋂ { y | F y ⊆ y }
Since F is monotonic and F z ⊆ z, we have F(F z) ⊆ F z, which means that F z ∈ { y | F y ⊆ y }. Then, by the definition of z, we get z ⊆ F z.
Since we showed that F z ⊆ z and z ⊆ F z, we conclude that F z = z, i.e., z is a fixpoint of the function F.
For any fixpoint y of F we have F y = y, which implies F y ⊆ y. So any fixpoint of F is a member of the set { y | F y ⊆ y }, and z is smaller than or equal to any member of the set { y | F y ⊆ y }, since it is the greatest lower bound of all the elements in that set. Hence, z is the least fixpoint of F.

Computing the Least Fixpoint
The least fixpoint μy. F y is the limit of the following sequence (assuming F is ∪-continuous):
∅, F ∅, F^2 ∅, F^3 ∅, ...
F is ∪-continuous if and only if p1 ⊆ p2 ⊆ p3 ⊆ ... implies that F(⋃_i p_i) = ⋃_i F(p_i).
If S is finite, then we can compute the least fixpoint using the sequence ∅, F ∅, F^2 ∅, F^3 ∅, ... This sequence is guaranteed to converge if S is finite, and it converges to the least fixpoint.
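A minimal sketch of this iteration in Python (the helper name lfp is my own assumption); it treats F as a Python function on finite sets and stops as soon as two consecutive iterates coincide, which is the termination criterion discussed below:

def lfp(F):
    """Least fixpoint of a monotonic F over a finite powerset: iterate F starting from the empty set."""
    y = set()
    while True:
        y_next = F(y)
        if y_next == y:      # F^j(empty) == F^(j+1)(empty): we have reached the least fixpoint
            return y
        y = y_next

For example, with the explicit-state ex helper and sets R and p from the earlier sketches, lfp(lambda y: p | ex(R, y)) computes EF p.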

Computing the Least Fixpoint
Given a monotonic and union-continuous function F:
μy. F y = ⋃_i F^i(∅)
We can prove this as follows. First, we show by induction that for all i, F^i(∅) ⊆ μy. F y.
For i = 0, we have F^0(∅) = ∅ ⊆ μy. F y.
Assuming F^i(∅) ⊆ μy. F y, applying the function F to both sides and using the monotonicity of F, we get F(F^i(∅)) ⊆ F(μy. F y), and since μy. F y is a fixpoint of F, we get F^(i+1)(∅) ⊆ μy. F y, which completes the induction.

Computing the Least Fixpoint
So, we showed that for all i, F^i(∅) ⊆ μy. F y.
If we take the least upper bound of all the elements in the sequence F^i(∅), we get ⋃_i F^i(∅), and using the above result, we have:
⋃_i F^i(∅) ⊆ μy. F y
Now, using union-continuity, we can conclude that
F(⋃_i F^i(∅)) = ⋃_i F(F^i(∅)) = ⋃_i F^(i+1)(∅) = ∅ ∪ ⋃_i F^(i+1)(∅) = ⋃_i F^i(∅)
So we showed that ⋃_i F^i(∅) is a fixpoint of F and ⋃_i F^i(∅) ⊆ μy. F y; hence we conclude that μy. F y = ⋃_i F^i(∅).

Computing the Least Fixpoint
If there exists a j where F^j(∅) = F^(j+1)(∅), then μy. F y = F^j(∅).
We proved earlier that for all i, F^i(∅) ⊆ μy. F y.
If F^j(∅) = F^(j+1)(∅), then F^j(∅) is a fixpoint of F, so the least fixpoint satisfies μy. F y ⊆ F^j(∅); since we also know that F^j(∅) ⊆ μy. F y, we conclude that μy. F y = F^j(∅).

EF Fixpoint Computation
EF p = μy. p ∨ EX y is the limit of the sequence:
∅, p ∪ EX ∅, p ∪ EX(p ∪ EX ∅), p ∪ EX(p ∪ EX(p ∪ EX ∅)), ...
which is equivalent to
∅, p, p ∪ EX p, p ∪ EX(p ∪ EX p), ...

EF Fixpoint Computation
[Figure: a four-state transition system with states s1, s2, s3, s4, where p holds in s1 and s4]
Start: ∅
1st iteration: p ∪ EX(∅) = {s1,s4} ∪ EX(∅) = {s1,s4} ∪ ∅ = {s1,s4}
2nd iteration: p ∪ EX(p ∪ EX ∅) = {s1,s4} ∪ EX({s1,s4}) = {s1,s4} ∪ {s3} = {s1,s3,s4}
3rd iteration: p ∪ EX(p ∪ EX(p ∪ EX ∅)) = {s1,s4} ∪ EX({s1,s3,s4}) = {s1,s4} ∪ {s2,s3,s4} = {s1,s2,s3,s4}
4th iteration: p ∪ EX(p ∪ EX(p ∪ EX(p ∪ EX ∅))) = {s1,s4} ∪ EX({s1,s2,s3,s4}) = {s1,s4} ∪ {s1,s2,s3,s4} = {s1,s2,s3,s4}
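The same computation written as an inline Python iteration (equivalent to the lfp sketch above). The transition relation below is a hypothetical reconstruction, since the original figure is not reproduced here; it is chosen only so that the EX values match the iterations shown above:

S = {"s1", "s2", "s3", "s4"}
R = {("s1", "s2"), ("s2", "s3"), ("s3", "s4"), ("s4", "s3")}
p = {"s1", "s4"}                                            # p holds in s1 and s4

ex = lambda R, y: {s for (s, s_next) in R if s_next in y}   # EX, as before

y = set()                                # start from the bottom element
while True:
    y_next = p | ex(R, y)                # apply the functional λy. p ∪ EX y
    if y_next == y:
        break
    y = y_next

# The iterates are {}, {s1,s4}, {s1,s3,s4}, {s1,s2,s3,s4}, matching the slide;
# y ends up as {"s1", "s2", "s3", "s4"} = EF p.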

EF Fixpoint Computation
[Figure: EF(p) depicted as a growing chain of nested regions labeled p, EX(p), EX(EX(p)), ...]
EF(p) ≡ states that can reach p ≡ p ∪ EX(p) ∪ EX(EX(p)) ∪ ...

Greatest Fixpoint
Given a monotonic function F, its greatest fixpoint exists, and it is the least upper bound (lub) of all the extensive elements:
νy. F y = ⋃ { y | y ⊆ F y }
This can be proved using a proof similar to the one we used for the dual result on least fixpoints.

Computing the Greatest Fixpoint
The greatest fixpoint νy. F y is the limit of the following sequence (assuming F is ∩-continuous):
S, F S, F^2 S, F^3 S, ...
F is ∩-continuous if and only if, for any sequence p1, p2, p3, ... with p_(i+1) ⊆ p_i for all i, F(⋂_i p_i) = ⋂_i F(p_i).
If S is finite, then we can compute the greatest fixpoint using the sequence S, F S, F^2 S, F^3 S, ... This sequence is guaranteed to converge if S is finite, and it converges to the greatest fixpoint.
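Dually, a minimal sketch of this iteration (the helper name gfp is my own assumption), starting from the full state set and stopping when two consecutive iterates coincide:

def gfp(F, S):
    """Greatest fixpoint of a monotonic F over the finite powerset of S: iterate F starting from S."""
    y = set(S)
    while True:
        y_next = F(y)
        if y_next == y:      # F^j(S) == F^(j+1)(S): we have reached the greatest fixpoint
            return y
        y = y_next

With the same hypothetical helpers as before, gfp(lambda y: p & ex(R, y), S) computes EG p.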

Computing the Greatest Fixpoint
Given a monotonic and intersection-continuous function F:
νy. F y = ⋂_i F^i(S)
If there exists a j where F^j(S) = F^(j+1)(S), then νy. F y = F^j(S).
Again, these can be proved using proofs similar to the ones we used for the dual results for the least fixpoint.

EG Fixpoint Computation
Similarly, EG p = νy. p ∧ EX y is the limit of the sequence:
S, p ∩ EX S, p ∩ EX(p ∩ EX S), p ∩ EX(p ∩ EX(p ∩ EX S)), ...
which is equivalent to
S, p, p ∩ EX p, p ∩ EX(p ∩ EX p), ...

EG Fixpoint Computation
[Figure: the four-state transition system with states s1, s2, s3, s4, where p holds in s1, s3, and s4]
Start: S = {s1,s2,s3,s4}
1st iteration: p ∩ EX S = {s1,s3,s4} ∩ EX({s1,s2,s3,s4}) = {s1,s3,s4} ∩ {s1,s2,s3,s4} = {s1,s3,s4}
2nd iteration: p ∩ EX(p ∩ EX S) = {s1,s3,s4} ∩ EX({s1,s3,s4}) = {s1,s3,s4} ∩ {s2,s3,s4} = {s3,s4}
3rd iteration: p ∩ EX(p ∩ EX(p ∩ EX S)) = {s1,s3,s4} ∩ EX({s3,s4}) = {s1,s3,s4} ∩ {s2,s3,s4} = {s3,s4}
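The same computation in Python, again using the hypothetical four-state system reconstructed for the EF example (now with p holding in s1, s3, and s4); it reproduces the iterations shown above:

S = {"s1", "s2", "s3", "s4"}
R = {("s1", "s2"), ("s2", "s3"), ("s3", "s4"), ("s4", "s3")}
p = {"s1", "s3", "s4"}                                      # p holds in s1, s3, and s4

ex = lambda R, y: {s for (s, s_next) in R if s_next in y}   # EX, as before

y = set(S)                               # start from the top element
while True:
    y_next = p & ex(R, y)                # apply the functional λy. p ∩ EX y
    if y_next == y:
        break
    y = y_next

# The iterates are S, {s1,s3,s4}, {s3,s4}, {s3,s4}, matching the slide;
# y ends up as {"s3", "s4"} = EG p.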

EG Fixpoint Computation
[Figure: EG(p) depicted as the limit of the shrinking chain of iterates p ⊇ p ∩ EX(p) ⊇ p ∩ EX(p ∩ EX(p)) ⊇ ...]
EG(p) ≡ states that can avoid ever reaching ¬p