
1 Computer Aided Verification 計算機輔助驗證 Model Checking (Part I) 模型檢驗 (一)
Pao-Ann Hsiung Department of Computer Science and Information Engineering National Chung Cheng University, Taiwan 熊博安 國立中正大學 資訊工程研究所 These slide contents are adapted from the slides of Professors Edmund Clarke and Thomas Henzinger.

2 Contents What is Model Checking? Formal System Modeling
Formal Specification

3 What is Model Checking? Cindy Crawford
Unfortunately, not that kind of model!!

4 Temporal Logic Model Checking
Model checking is an automatic verification technique for finite state concurrent systems. Developed independently by Clarke and Emerson and by Queille and Sifakis in early 1980’s. Specifications are written in propositional temporal logic. Verification procedure is an exhaustive search of the state space of the design.

5 Some Advantages of Model Checking
No proofs!!! Fast. Counterexamples. No problem with partial specifications. Logics can easily express many concurrency properties.

6 Main Disadvantage
State Explosion Problem: too many processes, large data paths. Much progress has been made on this problem recently!

7 Basic Temporal Operators
The symbol “p” is an atomic proposition, e.g. DeviceEnabled. Fp - p holds sometime in the future. Gp - p holds globally in the future. Xp - p holds next time. pUq - p holds until q holds.

8 Model of Computation: Microwave Oven Example
[State-transition graph whose states are labeled with the atomic propositions Start, Close, Heat, and Error (or their negations), starting from the state ~Start, ~Close, ~Heat, ~Error.]

9 Temporal Logic The oven doesn’t heat up until the door is closed.
Not heat_up holds until door_closed (~ heat_up) U door_closed

10 Model Checking Problem
Let M be a state-transition graph. Let ƒ be the specification in temporal logic. Find all states s of M such that M, s |= ƒ. Efficient Algorithms: CE81, CES83

11 State Transition Graph
The EMC System: [a Preprocessor translates the design into a State Transition Graph of roughly 10^4 to 10^5 states; the Model Checker (EMC) takes this graph and the Specification and reports True or Counterexamples.]

12 Breakthrough! Ken McMillan implemented our model checking algorithm using Binary Decision Diagrams in 1987. Now able to handle much larger examples!!

13 An Alternative Approach to Model Checking
Both the system and its specification are modeled as automata. These automata are compared to determine if the system behavior conforms to the specification. Different notions of conformance have been explored: Language Inclusion Refinement orderings Observational equivalence

14 Implementation and Specification
Mimp corresponds to the implementation: [an automaton over the events a, b, c]. Mspec corresponds to the specification "event c must happen at least once": [an automaton that accepts exactly the sequences containing at least one c].

15 The Behavior Conformance Problem
Given two automata Mimp and Mspec, check if L(Mimp) ⊆ L(Mspec): if a sequence is accepted by Mimp, then it is also accepted by Mspec. This can be determined algorithmically.
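For the finite-word case, the containment check can be made concrete. The Python fragment below is an illustrative sketch, not from the slides; all names are mine. It assumes Mspec is given as a deterministic, complete automaton over the same alphabet as Mimp, complements it implicitly by flipping acceptance, and searches the product for a reachable pair that Mimp accepts but Mspec rejects.

    # Sketch: check L(imp) subset-of L(spec) for finite-word automata.
    # imp  = (initial state, dict (state, letter) -> set of successor states, accepting set)
    # spec = (initial state, dict (state, letter) -> unique successor state, accepting set)
    # Assumption: spec is deterministic and complete over the shared alphabet.
    def inclusion_holds(imp, spec):
        i0, i_trans, i_acc = imp
        s0, s_trans, s_acc = spec
        seen, frontier = {(i0, s0)}, [(i0, s0)]
        while frontier:
            p, q = frontier.pop()
            if p in i_acc and q not in s_acc:      # imp accepts a word that spec rejects
                return False
            for (state, letter), targets in i_trans.items():
                if state != p:
                    continue
                for p2 in targets:
                    pair = (p2, s_trans[(q, letter)])
                    if pair not in seen:
                        seen.add(pair)
                        frontier.append(pair)
        return True

    # The specification "event c must happen at least once" as a deterministic automaton:
    spec = ("no_c",
            {("no_c", "a"): "no_c", ("no_c", "b"): "no_c", ("no_c", "c"): "yes_c",
             ("yes_c", "a"): "yes_c", ("yes_c", "b"): "yes_c", ("yes_c", "c"): "yes_c"},
            {"yes_c"})
    imp = ("s0", {("s0", "a"): {"s1"}, ("s1", "b"): {"s2"}, ("s2", "c"): {"s3"}}, {"s3"})
    print(inclusion_holds(imp, spec))              # True: every accepted word contains a c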

16 Combating the State Explosion Problem
Binary Decision Diagrams can be used to represent state transition systems more efficiently. The partial order reduction can be used to reduce the number of states that must be enumerated. Other techniques for alleviating state explosion include: Abstraction. Compositional reasoning. Symmetry. Cone of influence reduction. Semantic minimization.

17 Model Checker Performance
Model checkers today can routinely handle systems with between 100 and 300 state variables. Systems with enormous numbers of reachable states have been checked. By using appropriate abstraction techniques, systems with an essentially unlimited number of states can be checked.

18 Notable Examples- IEEE Futurebus+
In 1992 Clarke and his students at CMU used SMV to verify the IEEE Futurebus+ cache coherence protocol. They found a number of previously undetected errors in the design of the protocol. This was the first time that formal methods had been used to find errors in an IEEE standard. Although the development of the protocol began in 1988, all previous attempts to validate it were based entirely on informal techniques.

19 Notable Examples-IEEE SCI
In 1992 Dill and his students at Stanford used Murphi to verify the cache coherence protocol of the IEEE Scalable Coherent Interface. They found several errors, ranging from uninitialized variables to subtle logical errors. The errors also existed in the complete protocol, although it had been extensively discussed, simulated, and even implemented.

20 Notable Examples-PowerScale
In 1995 researchers from Bull and Verimag used LOTOS to describe the processors, memory controller, and bus arbiter of the PowerScale multiprocessor architecture. They identified four correctness requirements for proper functioning of the arbiter. The properties were formalized using bisimulation relations between finite labeled transition systems. Correctness was established automatically in a few minutes using the CÆSAR/ ALDÉBARAN toolbox.

21 Notable Examples - HDLC
A High-level Data Link Controller was being designed at AT&T in Madrid in 1996. Researchers at Bell Labs offered to check some properties of the design using the FormalCheck verifier. Within five hours, six properties were specified and five were verified. The sixth property failed, uncovering a bug that would have reduced throughput or caused lost transmissions!

22 Notable Examples PowerPC 620 Microprocessor
Richard Raimi used Motorola’s Verdict model checker to debug a hardware laboratory failure. Initial silicon of the PowerPC 620 microprocessor crashed during boot of an operating system. In a matter of seconds, Verdict found a BIU deadlock causing the failure.

23 Notable Examples-Analog Circuits
In 1994, Bosscher, Polak, and Vaandrager won a best-paper award for proving manually the correctness of a control protocol used in Philips stereo components. In 1995, Ho and Wong-Toi verified an abstraction of this protocol automatically using HyTech. Later in 1995, Daws and Yovine used Kronos to check all the properties stated and hand proved by Bosscher, et al.

24 Notable Examples - ISDN/ISUP
The NewCoRe Project (89-92) was the first application of formal verification in a software project within AT&T. A special purpose model checker was used in the development of the CCITT ISDN User Part Protocol. Five “verification engineers” analyzed 145 requirements. A total of 7,500 lines of SDL source code was verified. 112 errors were found; about 55% of the original design requirements were logically inconsistent.

25 Notable Examples - Building
In 1995 the Concurrency Workbench was used to analyze an active structural control system to make buildings more resistant to earthquakes. The control system sampled the forces being applied to the structure and used hydraulic actuators to exert countervailing forces. A timing error was discovered that could have caused the controller to worsen, rather than dampen, the vibration experienced during earthquakes.

26 Model Checking Systems
There are many other successful examples of the use of model checking in hardware and protocol verification. The fact that industry (INTEL, IBM, MOTOROLA) is starting to use model checking is encouraging. Below are some well-known model checkers, categorized by whether the specification is a formula or an automaton.

27 Temporal Logic Model Checkers
The first two model checkers were EMC and Caesar. SMV is the first model checker to use BDDs. Spin uses the partial order reduction to reduce the state explosion problem for software systems. Verus, Kronos, and UPPAAL check properties of real-time systems. HyTech is designed for reasoning about hybrid systems.

28 Behavior Conformance Checkers
The Cospan/FormalCheck system is based on showing inclusion between ω-automata. FDR checks refinement between CSP programs; recently, it was used to debug security protocols. The Concurrency Workbench can be used to determine if two systems are observationally equivalent.

29 Combination Checkers Berkeley's HSIS combines model checking with language inclusion. Stanford's STeP system combines model checking with deductive methods. VIS integrates model checking with logic synthesis and simulation. The PVS theorem prover has a model checker for the modal mu-calculus.

30 Directions for Future Research
Investigate the use of abstraction, compositional reasoning, and symmetry to reduce the state explosion problem. Develop methods for verifying parameterized designs. Develop practical tools for real-time and hybrid systems. Combine with deductive verification. Develop tool interfaces suitable for system designers.

31 Model Checking (Adapted from Tom Henzinger's Slides)

32 Model checking, narrowly interpreted:
Decision procedures for checking if a given Kripke structure is a model for a given formula of a modal logic.

33 Why is this of interest to us?
Because the dynamics of a discrete system can be captured by a Kripke structure. Because some dynamic properties of a discrete system can be stated in modal logics. Model checking = System verification

34 Model checking, generously interpreted:
Algorithms for system verification which operate on a system model (semantics) rather than a system description (syntax).

35 There are many different model-checking problems:
for different (classes of) system models for different (classes of) system properties

36 A specific model-checking problem is defined by I |= S, where I is the "implementation" (the system model, more detailed), S is the "specification" (the system property, more abstract), and |= is the satisfaction relation ("satisfies", "implements", "refines").

37 Characteristics of system models which favor model checking over other verification techniques
ongoing input/output behavior (not: single input, single result) concurrency (not: single control flow) control intensive (not: lots of data manipulation)

38 Examples -control logic of hardware designs -communication protocols -device drivers !

39 Paradigmatic example: mutual-exclusion protocol
P1:  loop  out: x1 := 1; last := 1  req: await x2 = 0 or last = 2  in: x1 := 0  end loop.
||
P2:  loop  out: x2 := 1; last := 2  req: await x1 = 0 or last = 1  in: x2 := 0  end loop.
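As an illustration of how such a protocol is turned into a state graph and checked, here is a small Python sketch using my own encoding, not taken from the slides. It treats each labeled statement as one atomic step (in particular the assignment pair x1 := 1; last := 1), enumerates the reachable states, and asserts that the two processes are never at location in simultaneously.

    # State = (location of P1, location of P2, x1, x2, last); locations are 'out', 'req', 'in'.
    def successors(state):
        pc1, pc2, x1, x2, last = state
        succs = []
        if pc1 == 'out':                                   # P1: x1 := 1; last := 1
            succs.append(('req', pc2, 1, x2, 1))
        elif pc1 == 'req' and (x2 == 0 or last == 2):      # P1: await passed, enter
            succs.append(('in', pc2, x1, x2, last))
        elif pc1 == 'in':                                  # P1: x1 := 0, leave
            succs.append(('out', pc2, 0, x2, last))
        if pc2 == 'out':                                   # P2: x2 := 1; last := 2
            succs.append((pc1, 'req', x1, 1, 2))
        elif pc2 == 'req' and (x1 == 0 or last == 1):      # P2: await passed, enter
            succs.append((pc1, 'in', x1, x2, last))
        elif pc2 == 'in':                                  # P2: x2 := 0, leave
            succs.append((pc1, 'out', x1, 0, last))
        return succs

    init = ('out', 'out', 0, 0, 1)                         # initial value of last chosen arbitrarily here
    reachable, frontier = {init}, [init]
    while frontier:                                        # explore the reachable state graph
        for t in successors(frontier.pop()):
            if t not in reachable:
                reachable.add(t)
                frontier.append(t)

    # Safety: the two processes are never in the critical section at the same time.
    assert all(not (s[0] == 'in' and s[1] == 'in') for s in reachable)
    print(len(reachable), "reachable states; mutual exclusion holds")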

40 Model-checking problem
I |= S, where I is the system model, S is the system property, and |= is the satisfaction relation.

42 Important decisions when choosing a system model
-variable-based vs. event-based -interleaving vs. true concurrency -synchronous vs. asynchronous interaction -clocked vs. speed-independent progress -etc.

43 Particular combinations of choices yield
CSP Petri nets I/O automata Reactive modules etc.

44 While the choice of system model is important for ease of modeling in a given situation,
the only thing that is important for model checking is that the system model can be translated into some form of state-transition graph.

45 [Example state-transition graph: states q1, q2, q3 with observations [q1] = {a}, [q2] = {a,b}, [q3] = {b}, and transitions q1 → q2, q1 → q3, q3 → q1, q2 → q2.]

46 State-Transition Graphs Kripke Structures (KS)
Q: set of states, e.g. {q1, q2, q3}
A: set of observations, e.g. {a, b}
→ ⊆ Q × Q: transition relation, e.g. q1 → q2
[·]: Q → 2^A: observation function, e.g. [q1] = {a}
K = (Q, A, →, [·])
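A Kripke structure of this kind is easy to represent explicitly. The dictionary encoding below is a minimal sketch of the example graph above (the encoding and names are mine, chosen only for illustration); later sketches in this transcript reuse it.

    # The example Kripke structure: states, observation function, transition relation.
    K = {
        "states": {"q1", "q2", "q3"},
        "obs": {"q1": {"a"}, "q2": {"a", "b"}, "q3": {"b"}},
        "trans": {("q1", "q2"), ("q1", "q3"), ("q3", "q1"), ("q2", "q2")},
    }

    def post(K, q):
        """States reachable from q in one transition step."""
        return {t for (s, t) in K["trans"] if s == q}

    print(post(K, "q1"))      # {'q2', 'q3'}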

47 Kripke Structure of Programs
The program repeat p := true; p := false; end yields a two-state Kripke structure that alternates between a state where p holds and a state where ¬p holds.

48 Mutual Exclusion KS (N = noncritical, T = trying, C = critical)
[Kripke structure whose states record the locations of both processes and the value of turn: (N1,N2) with turn=0; (T1,N2), (C1,N2), (T1,T2), (C1,T2) with turn=1; (N1,T2), (N1,C2), (T1,T2), (T1,C2) with turn=2. No state combines C1 and C2.]

49 The translation from a system description to a state-transition graph usually involves an exponential blow-up !!! e.g., n boolean variables → 2^n states. This is called the "state-explosion problem."

50 State-transition graphs are not necessarily finite-state, but they don’t handle well:
-recursion (need push-down models) -environment interaction (need game models) -process creation

51 Labeled Transition Systems (LTS)
Q: set of states, e.g. {q1, q2, q3}
Act: set of actions, e.g. {a, b}
→ ⊆ Q × Act × Q: transition relation, e.g. (q1, a, q2)
L = (Q, Act, →)

52 Vending Machine LTS

53 Kripke Transition Systems
KTS = KS + LTS

54 Model-checking problem
I |= S, where I is the system model, S is the system property, and |= is the satisfaction relation.

55 Three important decisions when choosing system properties
operational vs. declarative: automata vs. logic
may vs. must: branching vs. linear time
prohibiting bad vs. desiring good behavior: safety vs. liveness
The three decisions are orthogonal, and they lead to substantially different model-checking problems.

56 Safety vs. liveness Safety: something “bad” will never happen Liveness: something “good” will happen (but we don’t know when)

57 Safety vs. liveness for sequential programs
Safety: the program will never produce a wrong result (“partial correctness”) Liveness: the program will produce a result (“termination”)

58 Safety vs. liveness for sequential programs
Safety: the program will never produce a wrong result ("partial correctness"); proved by induction on control flow. Liveness: the program will produce a result ("termination"); proved by well-founded induction on data.

59 Safety vs. liveness for state-transition graphs
Safety: those properties whose violation always has a finite witness (“if something bad happens on an infinite run, then it happens already on some finite prefix”) Liveness: those properties whose violation never has a finite witness (“no matter what happens along a finite run, something good could still happen later”)

60 [The example graph from before.] Run: q1 → q3 → q1 → q3 → q1 → q2 → q2 → ... Trace: a, b, a, b, a, {a,b}, {a,b}, ...

61 State-transition graph S = (Q, A, →, [·])
Finite runs: finRuns(S) ⊆ Q*
Infinite runs: infRuns(S) ⊆ Q^ω
Finite traces: finTraces(S) ⊆ (2^A)*
Infinite traces: infTraces(S) ⊆ (2^A)^ω
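To make these definitions concrete, the short sketch below reuses the dictionary encoding of the example graph from earlier (the helper names are mine) and enumerates finRuns up to a bounded length together with the finite traces they induce through the observation function.

    def finite_runs(K, start, k):
        """All runs from `start` with at most k states."""
        runs, frontier = [], [[start]]
        while frontier:
            r = frontier.pop()
            runs.append(r)
            if len(r) < k:
                frontier += [r + [t] for (s, t) in K["trans"] if s == r[-1]]
        return runs

    def trace_of(K, run):
        """The finite trace induced by a finite run."""
        return [K["obs"][q] for q in run]

    for r in finite_runs(K, "q1", 4):
        print(r, trace_of(K, r))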

62 This is much easier. Safety: the properties that can be checked on finRuns Liveness: the properties that cannot be checked on finRuns (they need to be checked on infRuns)

63 Example: Mutual exclusion
It cannot happen that both processes are in their critical sections simultaneously. Safety

64 Example: Bounded overtaking
Whenever process P1 wants to enter the critical section, then process P2 gets to enter at most once before process P1 gets to enter. Safety

65 Example: Starvation freedom
Whenever process P1 wants to enter the critical section, provided process P2 never stays in the critical section forever, P1 gets to enter eventually. Liveness

66 [The example graph from before: its infinite runs, infRuns, are determined by its finite runs, finRuns (limit closure).]

67 For state-transition graphs, all properties are safety properties !

68 Example: Starvation freedom
Whenever process P1 wants to enter the critical section, provided process P2 never stays in the critical section forever, P1 gets to enter eventually. Liveness

69 [The example graph, with a fairness constraint on the transition q1 → q2: this transition cannot be ignored forever.]

70 [The example graph with the fairness constraint.] Without fairness: infRuns = q1 (q3 q1)* (q2)^ω ∪ (q1 q3)^ω. With fairness: infRuns = q1 (q3 q1)* (q2)^ω.

71 Two important types of fairness
1. Weak (Büchi) fairness: a specified set of transitions cannot be enabled forever without being taken.
2. Strong (Streett) fairness: a specified set of transitions cannot be enabled infinitely often without being taken.

72 [The three-state example graph: excluding the run that forever alternates between q1 and q3 requires strong fairness on the transition q1 → q2, since that transition is enabled infinitely often but not continuously.]

73 [A two-state graph with [q1] = {a} and [q2] = {a,b}: here weak fairness on the transition q1 → q2 suffices, since that transition stays enabled as long as the run remains in q1.]

74 Weak fairness is sufficient for asynchronous models ("no process waits forever if it can move"). Strong fairness is necessary for modeling synchronous interaction (rendezvous). Strong fairness makes model checking more difficult.

75 Fair state-transition graph S = (Q, A, →, [·], WF, SF)
WF: set of weakly fair actions
SF: set of strongly fair actions
where each action is a subset of →

76 Fairness changes only infRuns, not finRuns.
⇒ Fairness can be ignored for checking safety properties.

77 Two remarks The vast majority of properties to be verified are safety. While nobody will ever observe the violation of a true liveness property, fairness is a useful abstraction that turns complicated safety into simple liveness.

78 Three important decisions when choosing system properties
operational vs. declarative: automata vs. logic
may vs. must: branching vs. linear time
prohibiting bad vs. desiring good behavior: safety vs. liveness
The three decisions are orthogonal, and they lead to substantially different model-checking problems.

79 Branching vs. linear time
Branching time: something may (or may not) happen (e.g., every req may be followed by grant) Linear time: something must (or must not) happen (e.g., every req must be followed by grant)

80 One is rarely interested in may properties,
but certain may properties are easy to model check, and they imply interesting must properties. (This is because unlike must properties, which refer only to observations, may properties can refer to states.)

81 Fair state-transition graph S = (Q, A, →, [·], WF, SF)
Finite runs: finRuns(S) ⊆ Q*
Infinite runs: infRuns(S) ⊆ Q^ω
Finite traces: finTraces(S) ⊆ (2^A)*
Infinite traces: infTraces(S) ⊆ (2^A)^ω

82 Linear time: the properties that can be checked on infTraces
Branching time: the properties that cannot be checked on infTraces

83 (Linear vs. Branching)
Safety: finTraces (linear), finRuns (branching)
Liveness: infTraces (linear), infRuns (branching)

84 [Two graphs with the same traces but different runs: in one, an a-state has a single a-successor that branches to a b-state and a c-state; in the other, the a-state has two a-successors, one leading to a b-state and one to a c-state.]

85 "Observation a may occur" is the same as "it is not the case that a must not occur": a linear-time property.

86 We may reach an a from which we must not reach a b .
Branching

87 [The same two graphs: same traces, different runs, i.e. different trace trees.]

88 Linear time is conceptually simpler than branching time (words vs. trees). Branching time is often computationally more efficient. (Because branching-time algorithms can work with given states, whereas linear-time algorithms often need to "guess" sets of possible states.)

89 Three important decisions when choosing system properties
operational vs. declarative: automata vs. logic
may vs. must: branching vs. linear time
prohibiting bad vs. desiring good behavior: safety vs. liveness
The three decisions are orthogonal, and they lead to substantially different model-checking problems.

90 Logics
Safety: SafeTL
Liveness: LTL (linear time), CTL (branching time)

91 Automata
Safety: finite automata; Liveness: ω-automata
Linear: language containment; Branching: simulation
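For the branching case, conformance is usually established by exhibiting a simulation relation. The sketch below (my own encoding and naming, using the same dictionary format as the earlier Kripke-structure example) computes the largest simulation of an implementation by a specification via greatest-fixpoint refinement.

    def simulation(impl, spec):
        """Pairs (p, q) such that spec state q simulates impl state p."""
        sim = {(p, q)
               for p in impl["states"] for q in spec["states"]
               if impl["obs"][p] == spec["obs"][q]}        # observations must match
        changed = True
        while changed:                                     # greatest-fixpoint refinement
            changed = False
            for (p, q) in list(sim):
                # every impl step p -> p' must be matched by some spec step q -> q'
                matched = all(any(s == q and (p2, t) in sim for (s, t) in spec["trans"])
                              for (r, p2) in impl["trans"] if r == p)
                if not matched:
                    sim.discard((p, q))
                    changed = True
        return sim

    # An implementation conforms iff its initial state is simulated by the spec's initial state,
    # e.g. ("q1", "q1") in simulation(K, K)    # True: every structure simulates itself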

92 Automata
Safety: finite automata; Liveness: ω-automata
Linear: language containment for word automata; Branching: language containment for tree automata

93 System property: 2x2x2 choices
-safety (finite runs) vs. liveness (infinite runs) -linear time (traces) vs. branching time (runs) -logic (declarative) vs. automata (executable)

94 Defining a logic. 1. Syntax: what are the formulas? 2. Semantics: what are the models? Does model M satisfy formula φ? M |= φ

95 Propositional logics:
1. boolean variables (a, b) & boolean operators (∧, ¬); 2. model = truth-value assignment for the variables.
Propositional modal (e.g. temporal) logics: 1. additionally, modal operators (e.g. □, ◇); 2. model = set of (e.g. temporally) related propositional models (observations), i.e. a state-transition graph ("Kripke structure").

96 CTL (Computation Tree Logic)
-safety & liveness -branching time -logic [Clarke & Emerson; Queille & Sifakis 1981]

97 CTL Syntax
φ ::= a | φ ∧ φ | ¬φ | ∃◯φ | φ ∃U φ | ∃□φ
a: boolean variable (atomic observation); ∧, ¬: boolean operators; ∃◯, ∃U, ∃□: modal operators

98 CTL Model: a pair (K, q), where K is a fair state-transition graph and q is a state of K.

99 CTL Semantics
(K,q) |= a iff a ∈ [q]
(K,q) |= φ ∧ ψ iff (K,q) |= φ and (K,q) |= ψ
(K,q) |= ¬φ iff not (K,q) |= φ
(K,q) |= ∃◯φ iff there exists q' such that q → q' and (K,q') |= φ
(K,q) |= φ ∃U ψ iff there exist q0, ..., qn such that q = q0 → q1 → ... → qn, for all 0 ≤ i < n (K,qi) |= φ, and (K,qn) |= ψ

100 CTL Semantics
(K,q) |= ∃□φ iff there exist q0, q1, ... such that 1. q = q0 → q1 → ... is an infinite fair run, and 2. for all i ≥ 0, (K,qi) |= φ
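These clauses translate directly into the classical labeling algorithm: compute, for each subformula, the set of states that satisfy it. Below is a minimal sketch (ignoring fairness constraints; the function names are mine) for the three primitive modalities over the dictionary-encoded transition relation used earlier: ∃◯ is one pre-image step, ∃U a least fixpoint, and ∃□ a greatest fixpoint.

    def pre_exists(trans, target):
        """States with at least one successor in `target` (the exists-pre-image)."""
        return {s for (s, t) in trans if t in target}

    def check_EX(trans, phi):                      # states satisfying EX phi
        return pre_exists(trans, phi)

    def check_EU(trans, phi, psi):                 # states satisfying phi EU psi: least fixpoint
        result = set(psi)
        while True:
            new = result | (phi & pre_exists(trans, result))
            if new == result:
                return result
            result = new

    def check_EG(trans, phi):                      # states satisfying EG phi (no fairness): greatest fixpoint
        result = set(phi)
        while True:
            new = result & pre_exists(trans, result)
            if new == result:
                return result
            result = new

    # Example on the graph K from earlier: states satisfying EG a
    # check_EG(K["trans"], {q for q in K["states"] if "a" in K["obs"][q]}) == {"q1", "q2"}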

101 Defined modalities (safety)
∃◯ (EX): exists next; ∀◯φ = ¬∃◯¬φ (AX): forall next
∃U (EU): exists until; ∃◇φ = true ∃U φ (EF): exists eventually
∀□φ = ¬∃◇¬φ (AG): forall always
φ ∀W ψ = ¬((¬ψ) ∃U (¬φ ∧ ¬ψ)) (AW): forall waiting-for (forall weak-until)
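Continuing the earlier labeling sketch, these derived safety modalities reduce to the primitives; for instance EF and AG can be computed as follows (again my naming, not from the slides).

    def check_EF(states, trans, phi):              # EF phi = true EU phi
        return check_EU(trans, set(states), phi)

    def check_AG(states, trans, phi):              # AG phi = not EF not phi
        return set(states) - check_EF(states, trans, set(states) - phi)

    # e.g. check_AG(K["states"], K["trans"], {q for q in K["states"] if "a" in K["obs"][q]}) == {"q2"}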

102 Defined modalities (liveness)
∃□ (EG): exists always; ∀◇φ = ¬∃□¬φ (AF): forall eventually
φ ∃W ψ = (φ ∃U ψ) ∨ ∃□φ: exists waiting-for (exists weak-until)
φ ∀U ψ = (φ ∀W ψ) ∧ ∀◇ψ: forall until

103 Important safety properties
Invariance: ∀□ a. Sequencing: a ∀W b ∀W c ∀W d = a ∀W (b ∀W (c ∀W d)).

104 Important safety properties: mutex protocol
Invariance: ∀□ ¬(in_cs1 ∧ in_cs2). Sequencing: ∀□ (req_cs1 ⇒ ((¬in_cs2) ∀W (in_cs2 ∀W ((¬in_cs2) ∀W in_cs1)))).

105 Branching properties. Deadlock freedom: ∀□ ∃◯ true. Possibility: ∀□ (a ⇒ ∃◇ b), e.g. ∀□ (req_cs1 ⇒ ∃◇ in_cs1).

106 Important liveness property
Response: ∀□ (a ⇒ ∀◇ b), e.g. ∀□ (req_cs1 ⇒ ∀◇ in_cs1).

107 If only universal properties are of interest,
why not omit the path quantifiers?

108 LTL (Linear Temporal Logic)
-safety & liveness -linear time -logic [Pnueli 1977; Lichtenstein & Pnueli 1982]

109 LTL Syntax φ ::= a | φ ∧ φ | ¬φ | ◯φ | φ U φ

110 LTL Model infinite trace t = t0 t1 t2 ...

111 Language of deadlock-free state-transition graph K at state q :
L(K,q): set of infinite traces of K starting at q
(K,q) |= ∀φ iff for all t ∈ L(K,q), t |= φ
(K,q) |= ∃φ iff there exists t ∈ L(K,q) such that t |= φ

112 LTL Semantics
t |= a iff a ∈ t0
t |= φ ∧ ψ iff t |= φ and t |= ψ
t |= ¬φ iff not t |= φ
t |= ◯φ iff t1 t2 ... |= φ
t |= φ U ψ iff there exists n ≥ 0 such that for all 0 ≤ i < n, ti ti+1 ... |= φ, and tn tn+1 ... |= ψ
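This semantics can be executed directly on ultimately periodic traces of the form u·v^ω, which is how counterexamples are usually presented. The following sketch (my own formula encoding, not from the slides) evaluates each subformula as a set of positions on the finite lasso u + v, treating until as a least fixpoint over the single-successor position graph.

    # Formulas as nested tuples: ("ap", "a"), ("true",), ("not", f), ("and", f, g),
    # ("next", f), ("until", f, g).  Eventually phi is ("until", ("true",), phi).
    def eval_ltl(formula, u, v):
        """True iff the trace u . v^omega satisfies the formula (u, v: lists of observation sets; v nonempty)."""
        trace = u + v
        n, loop = len(trace), len(u)

        def succ(i):                                       # successor position on the lasso
            return i + 1 if i + 1 < n else loop

        def sat(f):                                        # positions satisfying f
            op = f[0]
            if op == "ap":
                return {i for i in range(n) if f[1] in trace[i]}
            if op == "true":
                return set(range(n))
            if op == "not":
                return set(range(n)) - sat(f[1])
            if op == "and":
                return sat(f[1]) & sat(f[2])
            if op == "next":
                s = sat(f[1])
                return {i for i in range(n) if succ(i) in s}
            if op == "until":                              # least fixpoint of psi or (phi and next-until)
                phi, psi = sat(f[1]), sat(f[2])
                result = set(psi)
                while True:
                    new = result | {i for i in phi if succ(i) in result}
                    if new == result:
                        return result
                    result = new
            raise ValueError(f)

        return 0 in sat(formula)

    # Example: a U b holds on the trace {a}{a}({b})^omega but not on ({a})^omega.
    print(eval_ltl(("until", ("ap", "a"), ("ap", "b")), [{"a"}, {"a"}], [{"b"}]))   # True
    print(eval_ltl(("until", ("ap", "a"), ("ap", "b")), [], [{"a"}]))               # False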

113 Defined modalities
◯ (X): next; U (U): until
◇φ = true U φ (F): eventually
□φ = ¬◇¬φ (G): always
φ W ψ = (φ U ψ) ∨ □φ (W): waiting-for (weak-until)

114 Important properties
Invariance: □ a, e.g. □ ¬(in_cs1 ∧ in_cs2)
Sequencing: a W b W c W d, e.g. □ (req_cs1 ⇒ ((¬in_cs2) W (in_cs2 W ((¬in_cs2) W in_cs1))))
Response: □ (a ⇒ ◇ b), e.g. □ (req_cs1 ⇒ ◇ in_cs1)

115 Composed modalities. □◇ a: infinitely often a. ◇□ a: almost always a.

116 Where did fairness go ?

117 Unlike in CTL, fairness can be expressed in LTL !
So there is no need for fairness in the model.
Weak (Büchi) fairness: ¬◇□ (enabled ∧ ¬taken), i.e. □◇ (¬enabled ∨ taken)
Strong (Streett) fairness: (□◇ enabled) ⇒ (□◇ taken)

118 Starvation freedom, corrected
□ (in_cs2 ⇒ ◇ out_cs2) ⇒ □ (req_cs1 ⇒ ◇ in_cs1)

119 CTL cannot express fairness
[Illustrated on a three-state graph (q0, q1, q2 with observations a, a, b): fairness properties such as ◇□ a and □◇ b have no CTL equivalent.]

120 LTL cannot express branching
Possibility: ∀□ (a ⇒ ∃◇ b). So, LTL and CTL are incomparable. (There are branching logics that can express fairness, e.g. CTL* = CTL + LTL, but they lose the computational attractiveness of CTL.)

