Slide 1: Completeness and Complexity of Bounded Model Checking
Slide 2: Bounded Model Checking
The basic loop: start with k = 0; run BMC(M, φ, k); if it answers "yes", a counterexample of length k has been found; otherwise increase k (k++) and test "k ≥ ?" — if the bound has not been reached, repeat.
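The loop on this slide translates directly into code. The sketch below is illustrative only: the helper bmc_formula(M, phi, k) (an encoder of the k-step BMC query) and the bound ct (the "?" in the flowchart, discussed on the following slides) are assumptions of the sketch, not names from the presentation.

    from z3 import Solver, sat

    def bounded_model_check(M, phi, bmc_formula, ct):
        """Increase k until a counterexample is found or the bound ct is reached."""
        k = 0
        while True:
            s = Solver()
            s.add(bmc_formula(M, phi, k))      # hypothetical encoder of BMC(M, phi, k)
            if s.check() == sat:               # "yes" branch: a counterexample of length k
                return ("counterexample", k, s.model())
            if k >= ct:                        # the "k >= ?" test of the flowchart
                return ("no counterexample up to the bound", k, None)
            k += 1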
Slide 3: How big should k be?
- For every model M and LTL property φ there exists k s.t. M ⊨_k φ → M ⊨ φ.
- We call the minimal such k the Completeness Threshold (CT).
- Clearly, if M ⊨ φ then CT = 0.
- Conclusion: computing CT is at least as hard as model checking.
Slide 4: The Completeness Threshold
- Computing CT is as hard as model checking.
- The value of CT depends on the model M and the property φ.
- First strategy: find over-approximations of CT based on graph-theoretic properties of M.
Slide 5: Basic notions…
- Diameter d(M): the longest shortest path between any two reachable states.
- Recurrence diameter rd(M): the longest loop-free path between any two reachable states.
- For the example graph on the slide: d(M) = 2, rd(M) = 3.
- Initialized diameter d^I(M) and initialized recurrence diameter rd^I(M): the same notions, restricted to paths that start in an initial state.
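To make the two notions concrete, here is a small self-contained computation on an explicit graph. The graph below is an arbitrary example chosen so that d(M) = 2 and rd(M) = 3, matching the values quoted on the slide; it is not the slide's own figure.

    from collections import deque

    # Example graph as an adjacency map: 0 -> 1, 0 -> 2, 1 -> 2, 2 -> 3.
    succ = {0: [1, 2], 1: [2], 2: [3], 3: []}

    def diameter(succ):
        """d(M): longest shortest path between any two reachable states (BFS from every node)."""
        best = 0
        for src in succ:
            dist, q = {src: 0}, deque([src])
            while q:
                u = q.popleft()
                for v in succ[u]:
                    if v not in dist:
                        dist[v] = dist[u] + 1
                        q.append(v)
            best = max(best, max(dist.values()))
        return best

    def recurrence_diameter(succ):
        """rd(M): longest loop-free path; brute force, viable only for tiny explicit graphs."""
        def longest_from(u, visited):
            return max([1 + longest_from(v, visited | {v})
                        for v in succ[u] if v not in visited] or [0])
        return max(longest_from(u, {u}) for u in succ)

    print(diameter(succ), recurrence_diameter(succ))   # prints: 2 3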
Slide 6: The Completeness Threshold
- Theorem: for Gp properties, CT = d(M). A counterexample is an arbitrary path from s₀ to a state satisfying ¬p.
- Theorem: for Fp properties, CT = rd(M) + 1. A counterexample is a path from s₀ that closes a loop in which every state satisfies ¬p.
- Theorem: for an arbitrary LTL property, CT = ?
Slide 7: Generating the BMC formula (based on the Vardi-Wolper algorithm)
- Büchi automaton B = ⟨S, S₀, Δ, F, L⟩ (states, initial states, transition relation, accepting states, labeling).
- Let inf(W) be the set of states visited an infinite number of times by a run W.
- B accepts W iff there exists f ∈ F such that f ∈ inf(W), i.e., inf(W) ∩ F ≠ ∅.
Slide 8: LTL model checking
- Given M and φ, construct a Büchi automaton B¬φ.
- LTL model checking: is M × B¬φ empty?
- Emptiness checking: is there a path from s₀ to a loop that contains an accepting state?
Slide 9: Generating the BMC formula (based on the Vardi-Wolper algorithm)
- "Unroll" the product k times.
- Find a path from s₀ to a loop that contains at least one of the F states; that is, one of the states in the loop is accepting.
Slide 10: Generating the BMC formula
- Initial state: I(s₀)
- k transitions: ∧_{i=0..k-1} T(s_i, s_{i+1})
- Closing a cycle through an accepting state: ∨_{l=0..k} ( T(s_k, s_l) ∧ ∨_{j=l..k} F(s_j) ), where T(s_k, s_l) closes the loop and the inner disjunction requires one of the states in the loop to satisfy one of the F states.
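A sketch of how the three conjuncts on this slide can be assembled with the z3 solver. The encoders I, T and acc (initial states, transition relation and accepting states of the product) and the per-step variable vectors states[0..k] are assumed to be supplied by the caller; the names are illustrative, not from the presentation.

    from z3 import And, Or

    def bmc_lasso_formula(I, T, acc, states):
        """I(s0)  AND  k transitions  AND  a closed loop s_l..s_k containing an accepting state."""
        k = len(states) - 1
        init = I(states[0])
        path = And([T(states[i], states[i + 1]) for i in range(k)])
        loop = Or([And(T(states[k], states[l]),                        # closing the loop
                       Or([acc(states[j]) for j in range(l, k + 1)]))  # an accepting state on the loop
                   for l in range(k + 1)])
        return And(init, path, loop)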
Slide 11: Completeness Threshold for LTL
Let ⊗ denote the product M × B¬φ over which we search for the lasso.
- CT cannot be larger than rd^I(⊗) + 1.
- CT cannot be larger than d^I(⊗) + d(⊗).
- Result: CT ≤ min( rd^I(⊗) + 1, d^I(⊗) + d(⊗) ).
Slide 12: CT: examples
For the two example graphs shown on the slide:
- First graph: d^I + d = 6, while rd^I + 1 = 4.
- Second graph: d^I + d = 2, while rd^I + 1 = 4.
Which of the two bounds is tighter differs from graph to graph.
Slide 13: Computing CT (diameter)
Computing d(M) symbolically with QBF: find the minimal k such that for all states i, j, if j is reachable from i, then it is reachable in k or fewer steps. Equivalently: every (k+1)-long path s₀ … s_{k+1} can be replaced by a path of length at most k from s₀ to s_{k+1}.
Complexity: 2-exp.
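One way to write down the QBF query this slide alludes to (a reconstruction, assuming T is the transition relation and encoding "at most k steps" by allowing stuttering): d(M) is the minimal k for which

    \forall s_0,\dots,s_{k+1}.\;
      \Bigl(\bigwedge_{i=0}^{k} T(s_i, s_{i+1})\Bigr) \rightarrow
      \exists t_0,\dots,t_k.\;
      \Bigl(t_0 = s_0 \;\wedge\; \bigwedge_{i=0}^{k-1}\bigl(T(t_i, t_{i+1}) \vee t_i = t_{i+1}\bigr) \;\wedge\; t_k = s_{k+1}\Bigr)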
Slide 14: Computing CT (diameter)
Computing d(M) explicitly:
- Generate the graph.
- Find all-pairs shortest paths in O(|S|³) (the Floyd-Warshall algorithm).
- Find the longest among all shortest paths, O(|S|³).
Overall this is exp³ in the size of the representation of M, since the explicit graph can be exponentially larger than the symbolic description.
Why is there a complexity gap (2-exp vs. exp³)? QBF tries, in the worst case, all paths between every two states. Unlike Floyd-Warshall, QBF does not use transitivity information like d(s_i, s_j) ≤ d(s_i, s_k) + d(s_k, s_j).
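An illustrative Floyd-Warshall computation of the diameter for an explicitly given graph. The adjacency matrix below is the same arbitrary 4-state example used earlier; for a symbolic model with n state variables the explicit matrix can have 2^n rows, which is the source of the exp³ cost.

    INF = float("inf")

    def diameter_floyd_warshall(adj):
        """adj[i][j] is True iff there is an edge i -> j; returns the longest finite
        shortest-path distance over all pairs, i.e. the diameter of the explicit graph."""
        n = len(adj)
        dist = [[0 if i == j else (1 if adj[i][j] else INF) for j in range(n)] for i in range(n)]
        for k in range(n):                  # O(n^3): relax using dist[i][j] <= dist[i][k] + dist[k][j]
            for i in range(n):
                for j in range(n):
                    if dist[i][k] + dist[k][j] < dist[i][j]:
                        dist[i][j] = dist[i][k] + dist[k][j]
        return max(d for row in dist for d in row if d < INF)

    adj = [[False, True,  True,  False],
           [False, False, True,  False],
           [False, False, False, True ],
           [False, False, False, False]]
    print(diameter_floyd_warshall(adj))   # prints: 2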
Slide 15: Computing CT (recurrence diameter)
- Finding the longest loop-free path in a graph is NP-complete in the size of the graph.
- The graph can be exponential in the number of variables.
- Conclusion: in practice, computing the recurrence diameter is 2-exp in the number of variables.
- Computing rd(M) symbolically with SAT: find the largest k that satisfies
  ∧_{i=0..k-1} T(s_i, s_{i+1}) ∧ ∧_{0≤i<j≤k} (s_i ≠ s_j).
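A sketch of the SAT query the slide ends with, using z3: find the largest k for which a loop-free path of length k exists, i.e. k transitions plus pairwise-distinct states. The transition encoder T and the helper make_state (returning the list of boolean state variables of step i) are assumptions of the sketch, not part of the slides.

    from z3 import Solver, And, Or, Not, sat

    def recurrence_diameter_sat(make_state, T, k_max):
        """Largest k <= k_max such that  AND_{i<k} T(s_i, s_{i+1})  AND  AND_{i<j} (s_i != s_j)  is SAT."""
        best = 0
        for k in range(1, k_max + 1):
            states = [make_state(i) for i in range(k + 1)]
            s = Solver()
            s.add(And([T(states[i], states[i + 1]) for i in range(k)]))   # a k-step path
            for i in range(k + 1):                                        # all its states are distinct
                for j in range(i + 1, k + 1):
                    s.add(Or([Not(a == b) for a, b in zip(states[i], states[j])]))
            if s.check() == sat:
                best = k
            else:
                break   # no loop-free path of length k implies none of any greater length
        return best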
Slide 16: Complexity of BMC
- CT ≤ min( rd^I(⊗) + 1, d^I(⊗) + d(⊗) ).
- Computing CT is 2-exp.
- The value of CT can be exponential in the number of state variables.
- The BMC SAT formula grows linearly with k, which can be as high as CT.
- Conclusion: standard SAT-based BMC is worst-case 2-exp.
Slide 17: The complexity gap
- SAT-based BMC is 2-exp.
- LTL model checking is exponential in |φ| and linear in |M| (to be accurate, it is PSPACE-complete in |φ|).
So why use BMC?
- Finding bugs when k is small.
- In many cases rd(M) and d(M) are not exponential and are in fact rather small.
- SAT, in practice, is very efficient.
Slide 18: Closing the complexity gap
Why is there a complexity gap? Explicit LTL model checking uses a double DFS (2-DFS): DFS1 followed by DFS2, and every state is visited at most twice.
Slide 19: The Double-DFS algorithm

    DFS1(s) {
      push(s, Stack1); hash(s, Table1);
      for each t ∈ Succ(s)
        { if t ∉ Table1 then DFS1(t); }
      if s ∈ F then DFS2(s);
      pop(Stack1);
    }

    DFS2(s) {
      push(s, Stack2); hash(s, Table2);
      for each t ∈ Succ(s) do {
        if t is on Stack1 {
          output("bad cycle:");
          output(Stack1, Stack2, t);
          exit;
        }
        else if t ∉ Table2 then DFS2(t);
      }
      pop(Stack2);
    }

Upon finding a bad cycle, (Stack1, Stack2, t) determines a counterexample: a bad cycle reached from an initial state.
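A minimal runnable sketch of the nested DFS above, for an explicit product graph given as initial states, a successor map and an accepting set (all three are hypothetical inputs of the sketch, not part of the slides).

    def find_accepting_lasso(init, succ, accepting):
        visited1, visited2 = set(), set()
        stack1 = []   # current DFS1 path; DFS2 closes a bad cycle by hitting it

        def dfs2(s):
            visited2.add(s)
            for t in succ.get(s, []):
                if t in stack1:
                    return True                      # bad cycle found
                if t not in visited2 and dfs2(t):
                    return True
            return False

        def dfs1(s):
            visited1.add(s)
            stack1.append(s)
            for t in succ.get(s, []):
                if t not in visited1 and dfs1(t):
                    return True
            if s in accepting and dfs2(s):
                return True                          # accepting cycle reachable from an initial state
            stack1.pop()
            return False

        return any(dfs1(s) for s in init if s not in visited1)

    # Example: 0 -> 1 -> 2 -> 1 with accepting state 2 has an accepting lasso.
    print(find_accepting_lasso({0}, {0: [1], 1: [2], 2: [1]}, {2}))   # prints: True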
Slide 20: Closing the complexity gap
- 2-DFS: each state is visited at most twice.
- SAT: each state can potentially be visited an exponential number of times, because all paths are explored.
Slide 21: Closing the complexity gap (for Gp)
- Force a static order, following a forward traversal.
- Each time a state i is fully evaluated (assigned):
  - Prevent the search from revisiting it through deeper paths. E.g., if (x_i ∧ ¬y_i) is a visited state, then for every i < j ≤ CT add the state clause (¬x_j ∨ y_j).
  - When backtracking from state i, prevent the search from revisiting it in step i (add (¬x_i ∨ y_i)).
- If ¬p_i holds, stop and return "Counterexample found".
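A sketch of the state-clause idea above using z3. The two-bit state encoding (x, y), the bound CT and the helper name block_revisits are illustrative assumptions; the clause added for the visited state (x ∧ ¬y) is the (¬x_j ∨ y_j) clause from the slide.

    from z3 import Bool, Or, Not, Solver

    CT = 4   # assumed bound for the illustration
    x = [Bool(f"x_{i}") for i in range(CT + 1)]
    y = [Bool(f"y_{i}") for i in range(CT + 1)]

    s = Solver()
    # ... transition and property constraints of the unrolling would be added here ...

    def block_revisits(i, x_val, y_val):
        """After the state (x_val, y_val) has been fully evaluated at step i,
        forbid revisiting the same state at any deeper step j > i."""
        for j in range(i + 1, CT + 1):
            lit_x = Not(x[j]) if x_val else x[j]   # literal falsified by the visited value of x
            lit_y = Not(y[j]) if y_val else y[j]
            s.add(Or(lit_x, lit_y))                # the state clause for step j

    block_revisits(1, True, False)   # block the visited state (x ∧ ¬y) seen at step 1
    print(s.check())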
Slide 22: Work in progress
Challenges:
- Formally prove that the restricted version is 1-exp.
- Remove the requirement of a static order and stay 1-exp.
- Extend to full LTL.
- How to combine logic minimization and template clauses.
- Implementation & experiments.
Slide 23: Closing the complexity gap
Restricted SAT-BMC for LTL (a symbolic 2-DFS):
- Force a static order, following a forward traversal.
- Each time a state i is fully evaluated (assigned):
  - Prevent the search from revisiting it through deeper paths. E.g., if (x_i ∧ ¬y_i) is a visited state, then for every i < j ≤ CT add the state clause (¬x_j ∨ y_j); we denote this clause by SC_i^j.
  - When backtracking from state i, prevent the search from revisiting it in step i (add (¬x_i ∨ y_i)).
- Let last-accepting[i] = the index of the last accepting state ≤ i.
- If a conflict arises in step j due to a state clause SC_i^j such that i ≤ last-accepting[j−1] and SC_i^i is satisfied, return "Counterexample found".
Slide 24: Closing the complexity gap
Is restricted SAT better or worse than standard BMC?
Bad news:
- We gave up the main power of SAT: dynamic splitting heuristics.
- We may generate an exponential number of added constraints.
Good news:
- Single-exponential instead of double-exponential.
- No need to compute CT (instead of pre-computing CT we can maintain a list of visited states and add their negations "when needed").
Slide 25: Closing the complexity gap
Is restricted SAT better or worse than explicit LTL model checking? Not clear!
- Unlike DFS, SAT has heuristics for choosing how to progress.
- SAT can prune sets of states at once.
Slide 26: Comparing the algorithms…

                 2-DFS LTL MC    Restricted-SAT BMC    SAT-BMC
    Time         EXP             EXP²                  2-EXP
    Memory*      EXP             EXP²                  EXP
    Guidance     None            Restricted            Full
    Pruning      States          Sets of states

    * Assuming the SAT solver restricts the size of its added clauses.