1 Learning Assumptions for Compositional Verification J. M. Cobleigh, D. Giannakopoulou and C. S. Pasareanu Presented by: Sharon Shoham
2 Main Limitation of Model Checking: the state explosion problem. The number of states in the system model grows exponentially with the number of variables and the number of components in the system. One solution to the state explosion problem: abstraction. Another: compositional verification.
3 Compositional Verification. Inputs: a composite system M1 ║ M2 and a property P. Goal: check whether M1 ║ M2 ⊨ P. First attempt: a "divide and conquer" approach. Problem: it is usually impossible to verify each component separately, because a component is typically designed to satisfy its requirements only in specific environments (contexts).
4 Compositional Verification. The Assume-Guarantee (AG) paradigm introduces assumptions representing a component's environment. Instead of asking "does the component satisfy the property?", ask: "under assumption A on its environment, does the component guarantee the property?". The formula ⟨A⟩ M ⟨P⟩ means: whenever M is part of a system satisfying the assumption A, the system also guarantees P.
5 Useful AG Rule for Safety Properties. 1. Check that a component M1 guarantees P when it is part of a system satisfying assumption A (premise ⟨A⟩ M1 ⟨P⟩). 2. Discharge the assumption: show that the remaining component M2 (the environment) satisfies A (premise ⟨true⟩ M2 ⟨A⟩). Conclusion: M1 ║ M2 ⊨ P.
6 Assume-Guarantee. The crucial element is the assumption A. It has to be strong enough to eliminate violations of P, but also general enough to reflect the environment M2 appropriately. Defining such assumptions requires non-trivial human input. How can assumptions be constructed automatically?
7 Outline Motivation Setting Automatic Generation of Assumptions for the AG Rule Learning algorithm Assume-Guarantee with Learning Example
8 Labeled Transition Systems (LTS). Act: the set of observable actions; LTSs communicate using observable actions. τ: a local (internal) action. An LTS is M = (Q, q0, αM, δ), where Q is a finite non-empty set of states, q0 ∈ Q is the initial state, αM ⊆ Act is the set of observable actions (the alphabet), and δ ⊆ Q × (αM ∪ {τ}) × Q is the transition relation.
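The constructions on the following slides are illustrated below with small Python sketches; they are illustrative only, not the paper's implementation, and all names are made up. The sketches assume this minimal LTS representation:

```python
# Minimal LTS representation used by the sketches below (illustrative only).
from dataclasses import dataclass

TAU = "tau"  # the internal action; never part of the alphabet

@dataclass
class LTS:
    states: set    # Q: finite, non-empty set of states
    init: object   # q0: the initial state
    alphabet: set  # alpha(M) ⊆ Act: the observable actions
    delta: set     # transitions (q, a, q') with a in alphabet or TAU
```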
9 Labeled Transition Systems (LTS). [Figure: the Input LTS, with states 0, 1, 2 and the cycle 0 -in-> 1 -send-> 2 -ack-> 0.] A trace of an LTS M is a finite sequence of observable actions that M can perform starting at the initial state. L(M), the language of M, is the set of all traces of M. Example traces of Input: ⟨in⟩, ⟨in, send⟩, ⟨in, send, ack⟩, …
10 Parallel Composition M1 ║ M2. Components synchronize on common observable actions (communication); the remaining actions are interleaved. For M1 = (Q1, q01, αM1, δ1) and M2 = (Q2, q02, αM2, δ2), M1 ║ M2 = (Q, q0, αM, δ) with Q = Q1 × Q2, q0 = (q01, q02), and αM = αM1 ∪ αM2.
11 Transition Relation δ. Synchronization on a ∈ αM1 ∩ αM2: if (q1, a, q1′) ∈ δ1 and (q2, a, q2′) ∈ δ2, then ((q1, q2), a, (q1′, q2′)) ∈ δ. Interleaving on a ∈ (αM \ (αM1 ∩ αM2)) ∪ {τ}: if (q1, a, q1′) ∈ δ1 and a ∉ αM2, then ((q1, q2), a, (q1′, q2)) ∈ δ for any q2 ∈ Q2; if (q2, a, q2′) ∈ δ2 and a ∉ αM1, then ((q1, q2), a, (q1, q2′)) ∈ δ for any q1 ∈ Q1.
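A sketch of these two rules on the illustrative LTS class from above (a naive product over all state pairs, purely for illustration):

```python
# Sketch of parallel composition: synchronize on shared observable actions,
# interleave everything else (including TAU), as defined on this slide.
def compose(m1: LTS, m2: LTS) -> LTS:
    shared = m1.alphabet & m2.alphabet
    delta = set()
    for q1 in m1.states:
        for q2 in m2.states:
            for (p1, a, p1n) in m1.delta:
                if p1 != q1:
                    continue
                if a in shared:
                    # synchronization: both components move on a
                    for (p2, b, p2n) in m2.delta:
                        if p2 == q2 and b == a:
                            delta.add(((q1, q2), a, (p1n, p2n)))
                else:
                    # interleaving: only M1 moves (a not in alpha(M2), or a = TAU)
                    delta.add(((q1, q2), a, (p1n, q2)))
            for (p2, a, p2n) in m2.delta:
                if p2 == q2 and a not in shared:
                    # interleaving: only M2 moves
                    delta.add(((q1, q2), a, (q1, p2n)))
    states = {(q1, q2) for q1 in m1.states for q2 in m2.states}
    return LTS(states, (m1.init, m2.init), m1.alphabet | m2.alphabet, delta)
```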
12 Example: the Input LTS (states 0, 1, 2; actions in, send, ack) composed with the Output LTS (states a, b, c; actions send, ack, out). [Figure: the two LTSs side by side.]
13 [Figure: the product state space of Input ║ Output, states (0,a) through (2,c); the components synchronize on send and ack, while in and out are interleaved.]
14 [Figure: the reachable part of Input ║ Output, the cycle (0,a) -in-> (1,a) -send-> (2,b) -out-> (2,c) -ack-> (0,a).]
15 Safety Properties. Properties are also expressed as LTSs, but of a special kind, a safety LTS, which is deterministic: it contains no τ-transitions, and every state has at most one outgoing transition for each action: (q, a, q′), (q, a, q″) ∈ δ implies q′ = q″.
16 Safety Properties. For a safety LTS P, L(P) describes the set of legal (acceptable) behaviors over αP. M ⊨ P iff ∀σ ∈ L(M) : (σ↾αP) ∈ L(P). For Σ ⊆ Act, σ↾Σ is the trace obtained from σ by removing all occurrences of actions a ∉ Σ; for example, ⟨in, send, ack⟩↾{in, ack} = ⟨in, ack⟩. Equivalently, this is language containment: L(M)↾αP ⊆ L(P).
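For concreteness, a one-line sketch of the projection operation σ↾Σ used in this definition (the helper name is made up):

```python
# Sketch of trace projection: sigma-restriction drops every action not in sigma.
def project(trace, sigma):
    return tuple(a for a in trace if a in sigma)

# e.g. project(("in", "send", "ack"), {"in", "ack"}) == ("in", "ack")
```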
17 Example: does Input ║ Output ⊨ Order hold? The property Order is a safety LTS over {in, out} requiring in and out to alternate, i.e. L(Order) = Pref((in·out)*). [Figure: the Order LTS and the reachable part of Input ║ Output.]
18 Model Checking M ⊨ P. A safety LTS P is turned into an Error LTS P_err that "traps" violations with a special error state π. For P = (Q, q0, αP, δ), P_err = (Q ∪ {π}, q0, αP, δ′), where δ′ = δ ∪ {(q, a, π) | a ∈ αP and ∄q′ ∈ Q : (q, a, q′) ∈ δ}. The error LTS is complete; π is a dead-end state with no outgoing transitions.
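A sketch of this completion on the illustrative LTS class; PI is an assumed name for the error state π:

```python
# Sketch of the P_err construction: complete the safety LTS by routing every
# missing (state, action) pair to a fresh error state PI (assumed name for pi).
PI = "pi"

def make_error_lts(p: LTS) -> LTS:
    delta = set(p.delta)
    for q in p.states:
        for a in p.alphabet:
            if not any(src == q and act == a for (src, act, _) in p.delta):
                delta.add((q, a, PI))  # trap the violation
    return LTS(p.states | {PI}, p.init, p.alphabet, delta)  # PI is a dead end
```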
19 Example. [Figure: the Order LTS over {in, out} and its completion Order_err, in which every missing in/out transition leads to the error state π.]
20 Model Checking M ⊨ P. Theorem: M ⊨ P iff π is unreachable in M ║ P_err. Remark: the composition M ║ P_err is defined as before with a small exception: if the target state of a transition in P_err is π, then the target state of the corresponding transition in M ║ P_err is π as well. In automata terms: M ⊨ P iff L(A_M ∩ A_¬P) = ∅.
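A sketch of this check using the compose and make_error_lts helpers above: breadth-first reachability in M ║ P_err, where reaching any product state whose P_err component is PI plays the role of reaching π.

```python
# Sketch of the theorem: M |= P iff the error state is unreachable in M || P_err.
from collections import deque

def satisfies(m: LTS, p: LTS) -> bool:
    prod = compose(m, make_error_lts(p))
    seen, frontier = {prod.init}, deque([prod.init])
    while frontier:
        q = frontier.popleft()
        for (src, _, dst) in prod.delta:
            if src != q:
                continue
            if PI in dst:          # the P_err component hit the error state
                return False
            if dst not in seen:
                seen.add(dst)
                frontier.append(dst)
    return True
```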
21 Transition Relation δ, revisited for M ║ P_err: the same synchronization and interleaving rules as on slide 11 apply, with δ1 taken from M and δ2 from P_err; whenever the P_err component of a target state is π, the composed target state is identified with π.
22 Example. [Figure: Order_err composed with Input ║ Output.]
23 [Figure: the reachable part of (Input ║ Output) ║ Order_err.]
24 [Figure: exploring (Input ║ Output) ║ Order_err; the error state π is unreachable, so the property holds.]
25 [Figure: a case in which π is reachable; the trace leading to it, beginning with in, is reported as a counterexample.]
26 Assume-Guarantee Reasoning. Assumptions are also expressed as safety LTSs. ⟨A⟩ M ⟨P⟩ is true iff A ║ M ⊨ P, i.e. π is unreachable in A ║ M ║ P_err. The rule: premise 1, A ║ M1 ⊨ P; premise 2, M2 ⊨ A; conclusion, M1 ║ M2 ⊨ P. Here M1 and M2 are LTSs, and P and A are safety LTSs.
27 Outline Motivation Setting Automatic Generation of Assumptions for the AG Rule Learning algorithm Assume-Guarantee with Learning Example
28 Observation 1. The AG rule will return conclusive results with the weakest assumption Aw under which M1 satisfies P: under assumption Aw on M1's environment, M1 satisfies P, i.e. Aw ║ M1 ⊨ P. Weakest means: for every environment M2′, M1 ║ M2′ ⊨ P iff M2′ ⊨ Aw. Given Aw, to check whether M1 ║ M2 ⊨ P it suffices to check whether M2 meets the assumption Aw; Aw is a sufficient and necessary assumption on M1's environment.
29 How to Obtain Aw? Given module M1, property P and M1's interface with its environment Σ = (αM1 ∪ αP) ∩ αM2: Aw describes exactly all traces σ over Σ such that, in the context of σ, M1 satisfies P. Aw is expressible as a safety LTS and can be computed algorithmically. Drawback of using Aw: if the computation runs out of memory, no assumption is obtained. Recall: M1 ║ M2′ ⊨ P iff M2′ ⊨ Aw.
30 Observation 2. There is no need to use the weakest environment assumption Aw; the AG rule might be applicable with a stronger (less general) assumption. So instead of computing Aw: use a learning algorithm to learn Aw, and use the candidates Ai it produces as candidate assumptions, trying to apply the AG rule with each Ai.
31 Given a Candidate Assumption Ai. If Ai ║ M1 ⊨ P does not hold, the assumption Ai is not tight enough and needs to be strengthened.
32 Given a Candidate Assumption Ai. Suppose Ai ║ M1 ⊨ P holds. If M2 ⊨ Ai also holds, then M1 ║ M2 ⊨ P is verified. Otherwise the check returns some σ ∈ L(M2) with (σ↾Σ) ∉ L(Ai); let cex = σ↾Σ. Is P violated by M1 in the context of cex, i.e. does "cex ║ M1" ⊭ P? Yes: a real violation, M1 ║ M2 ⊨ P is falsified. No: a spurious violation; a better approximation of Aw is needed.
33 "cex ║ M1" ⊨ P? Compose M1 with an LTS A_cex over Σ: for cex = a1, …, ak, A_cex has Q = {q0, …, qk}, initial state q0, and δ = {(qi, ai+1, qi+1) | 0 ≤ i < k}. Model check A_cex ║ M1 ⊨ P, i.e. is π reachable in A_cex ║ M1 ║ P_err? Alternatively, simulate cex on M1 ║ P_err: cex ║ M1 ⊨ P iff simulating cex on M1 ║ P_err cannot lead to the error state π. If π can be reached, the violation is real; otherwise cex is spurious. This is the same as asking "cex ∈ L(Aw)?".
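A sketch of both views of this check with the helpers above: build the straight-line LTS A_cex for the trace and ask whether the error state is reachable; the names are assumptions, not the paper's code.

```python
# Sketch of the real-error check: build the straight-line LTS A_cex for the
# trace and ask whether cex || M1 still satisfies P (equivalently, cex in L(Aw)).
def trace_to_lts(cex, sigma) -> LTS:
    states = set(range(len(cex) + 1))
    delta = {(i, a, i + 1) for i, a in enumerate(cex)}
    return LTS(states, 0, set(sigma), delta)

def cex_is_real(cex, sigma, m1: LTS, p: LTS) -> bool:
    # real violation iff P is violated by M1 in the context of cex
    return not satisfies(compose(trace_to_lts(cex, sigma), m1), p)
```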
34 Assume-Guarantee Framework. [Figure: the learning loop. L* proposes a candidate assumption Ai. Step 1: model check Ai ║ M1 ⊨ P; if false, the counterexample (projected onto Σ) is returned to the learner to strengthen the assumption. If true, Step 2: model check M2 ⊨ Ai; if true, P holds in M1 ║ M2. If false, check whether the counterexample is real (cex ║ M1 ⊭ P?): if yes, P is violated in M1 ║ M2; if no, cex is returned to the learner to weaken the assumption. For Aw the rule gives conclusive results, so termination is guaranteed.]
35 Outline Motivation Setting Automatic Generation of Assumptions for the AG Rule Learning algorithm Assume-Guarantee with Learning Example
36 Learning Algorithm for DFA – L*. L*, by Angluin, improved by Rivest & Schapire, learns an unknown regular language U and produces a deterministic finite-state automaton (DFA) C such that L(C) = U. A DFA is M = (Q, q0, αM, δ, F), where Q, q0, αM, δ are as in a deterministic LTS and F ⊆ Q is the set of accepting states; L(M) = {σ | δ(q0, σ) ∈ F} (with δ extended to strings).
37 Learning Algorithm for DFA – L*. L* interacts with a Teacher that answers two types of questions. Membership queries: is the string σ in U? Conjectures (equivalence queries): for a candidate DFA Ci, is L(Ci) = U? The answers are either true, or false together with a counterexample. The conjectures C1, C2, … converge to C.
38 Learning Algorithm for DFA – L*. Myhill-Nerode theorem: for every regular set U ⊆ Σ* there exists a unique minimal deterministic automaton whose states are isomorphic to the equivalence classes of the relation w ≈ w′ iff ∀u ∈ Σ* : wu ∈ U ⇔ w′u ∈ U. Basic idea: learn the equivalence classes. Two prefixes are not in the same class iff there is a distinguishing suffix u.
39 General Method. L* maintains a table that records whether strings in (S ∪ S·Σ)·E belong to U, and makes membership queries to update it. Once all currently distinguishable strings are in different classes (the table is closed), it uses the table to build a candidate Ci and makes a conjecture: if the Teacher replies true, done; if the Teacher replies false, the counterexample is used to update the table.
40 Learning Algorithm for DFA – L*. Observation table (S, E, T): S ⊆ Σ* is a set of prefixes (representatives of the equivalence classes / states), E ⊆ Σ* is a set of distinguishing suffixes, and T is a mapping (S ∪ S·Σ)·E → {true, false}. [Table: rows are the prefixes in S and S·Σ, columns are the suffixes in E, and entry T(s·e) records whether s·e ∈ U (true) or s·e ∉ U (false).]
41 L* Algorithm. T : (S ∪ S·Σ)·E → {true, false}. (S, E, T) is closed if ∀s ∈ S ∀a ∈ Σ ∃s′ ∈ S ∀e ∈ E : T(sae) = T(s′e), i.e. every row sa of S·Σ has a matching row s′ in S: sa, which represents the next state reached from s after seeing a, is indistinguishable from s′ by any of the suffixes in E.
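Below is a small Python sketch of the observation table and this closedness check, written against a hypothetical membership oracle member(word) that stands in for the Teacher; all function names are my own.

```python
# Sketch of the observation table and its closedness check; words are tuples.
def row(T, s, E):
    return tuple(T[s + e] for e in E)

def fill_table(member, S, E, sigma):
    # T maps every word in (S ∪ S·Σ)·E to member(word)
    T = {}
    for s in list(S) + [s + (a,) for s in S for a in sigma]:
        for e in E:
            T[s + e] = member(s + e)
    return T

def find_unclosed(T, S, E, sigma):
    # return some sa whose row matches no row of S, or None if the table is closed
    rows_of_S = {row(T, s, E) for s in S}
    for s in S:
        for a in sigma:
            if row(T, s + (a,), E) not in rows_of_S:
                return s + (a,)
    return None
```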
42 Learning Algorithm for DFA – L*.
Initially: S = {λ}, E = {λ}
Loop {
  update T using membership queries
  while (S, E, T) is not closed {
    add sa to S, where sa has no matching row s′ in S
    update T using membership queries
  }
  from a closed (S, E, T) construct a candidate DFA C
  present an equivalence query: L(C) = U ?
}
43 Candidate DFA from a closed (S, E, T). States: the elements of S. Initial state: λ. Accepting states: s ∈ S such that T(s) (= T(s·λ)) = true. Transition relation: δ(s, a) = s′ where ∀e ∈ E : T(sae) = T(s′e); such an s′ exists because the table is closed.
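Continuing the sketch, a hypothetical build_candidate that constructs this DFA from a closed table (it assumes words are tuples, that λ = () is in S, and that the empty suffix stays in E, as in L*):

```python
# Sketch of the candidate construction: one state per distinct row of S.
def build_candidate(T, S, E, sigma):
    states = {row(T, s, E) for s in S}
    init = row(T, (), E)
    accepting = {row(T, s, E) for s in S if T[s + ()]}   # T(s) = T(s·lambda)
    delta = {}
    for s in S:
        for a in sigma:
            # closedness guarantees this row equals the row of some s' in S
            delta[(row(T, s, E), a)] = row(T, s + (a,), E)
    return states, init, accepting, delta
```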
44 Learning Algorithm for DFA – L* (continued). After the equivalence query L(C) = U?: if C is correct, return C; else add a suffix e ∈ Σ* that witnesses the counterexample to E and repeat the loop.
45 Choosing e. The suffix e must be such that adding it to E eliminates the current counterexample in the next conjecture; e is a suffix of cex that witnesses a difference between L(Ci) and U. Adding e to E causes a new prefix (state) to be added to S, splitting states.
46 Characteristics of L*. It terminates with the minimal automaton C for U. Each candidate Ci is smallest: any DFA consistent with the table has at least as many states as Ci, and |C1| < |C2| < … < |C|. It produces at most n candidates, where n = |C|, and makes O(kn² + n log m) membership queries, where m is the size of the largest counterexample and k is the size of Σ.
47 Outline Motivation Setting Automatic Generation of Assumptions for the AG Rule Learning algorithm Assume-Guarantee with Learning Example
48 Assume-Guarantee with Learning. Reminder: use a learning algorithm to learn Aw, and use the candidates it produces as candidate assumptions Ai for the AG rule. In order to use L* to produce assumptions Ai: show that L(Aw) is regular, translate the DFA into a safety LTS (the assumption), and implement the Teacher.
49 L(Aw) is Regular. Aw is expressible as a safety LTS. Translation of a safety LTS A into a DFA C: A = (Q, q0, αM, δ) becomes C = (Q, q0, αM, δ, Q), i.e. every state is accepting. Hence there exists a DFA C such that L(C) = L(Aw), and L* can be used to learn a DFA that accepts L(Aw).
50 DFA to Safety LTS. The DFAs Ci returned by L* are complete, minimal and, in this setting, also prefix-closed (the weakest assumption is prefix-closed): if σ ∈ L(Ci), then every prefix of σ is also in L(Ci). Ci contains a single non-accepting state q_nf, and no accepting state is reachable from q_nf. To get a safety LTS, simply remove the non-accepting state and all its incoming transitions: for Ci = (Q ∪ {q_nf}, q0, αM, δ, Q), Ai = (Q, q0, αM, δ ∩ (Q × αM × Q)).
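A sketch of this conversion on the hypothetical candidate representation used above: drop the non-accepting sink and every transition touching it.

```python
# Sketch of the conversion: remove the single non-accepting sink state of the
# prefix-closed candidate DFA together with every transition into or out of it.
def dfa_to_safety_lts(candidate, sigma) -> LTS:
    states, init, accepting, delta = candidate  # as returned by build_candidate
    kept = set(accepting)                       # Q: the accepting states
    kept_delta = {(q, a, q2) for (q, a), q2 in delta.items()
                  if q in kept and q2 in kept}  # delta ∩ (Q × alphabet × Q)
    return LTS(kept, init, set(sigma), kept_delta)  # assumes init is accepting
```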
51 Implementing the Teacher. Aw is unknown. Membership query: σ ∈ L(Aw)? Check whether "σ ║ M1" ⊨ P, either by model checking (is π reachable in Aσ ║ M1 ║ P_err?) or by simulation (is the error state π reachable when simulating σ on M1 ║ P_err?). This works because σ ∈ L(Aw) iff, in the context of σ, M1 satisfies P. Equivalence query: L(Ci) = L(Aw)? Translate Ci into a safety LTS Ai and use it as the candidate assumption for the AG rule.
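Putting the earlier sketches together, the Teacher's membership query could look roughly like this (make_membership_oracle and its parameters are assumed names); the equivalence query is answered by the AG checks shown on the following slides.

```python
# Sketch of the Teacher's membership query: sigma is in L(Aw) iff, in the
# context of sigma, M1 satisfies P; built from the earlier illustrative helpers.
def make_membership_oracle(m1: LTS, p: LTS, interface_alphabet):
    def member(word):
        return satisfies(compose(trace_to_lts(word, interface_alphabet), m1), p)
    return member
```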
52–57 [Figures: the framework diagram of slide 34, annotated step by step to show how the AG checks answer the equivalence query L(Ai) = L(Aw). Aw contains exactly the traces that can be composed with M1 without violating P. Step 1, Ai ║ M1 ⊨ P, acts as the check L(Ai) ⊆ L(Aw): if it fails, the returned counterexample (projected onto Σ) is in L(Ai) but not in L(Aw) and is used to strengthen the assumption; if it holds, L(Ai) ⊆ L(Aw). Step 2, M2 ⊨ Ai, together with the real-error check, acts as the check L(Ai) ⊇ L(Aw): a spurious counterexample is a trace in L(Aw) but not in L(Ai) and is used to weaken the assumption.]
58 Equivalence Query - Summary. Answering one equivalence query costs two applications of model checking (step 1, Ai ║ M1 ⊨ P, and step 2, M2 ⊨ Ai) plus one more model checking run or simulation for the real-error check on the counterexample.
59 Equivalence Query - Summary. If L(Ai) = L(Aw): the AG rule returns conclusive results, and there is no need to continue with L*. If L(Ai) ≠ L(Aw): the checks may still be conclusive, in which case L* can be terminated and aborted; otherwise a counterexample is returned, by step 1 if L(Ai) ⊈ L(Aw), or by step 2 if L(Ai) ⊊ L(Aw).
60 Assume-Guarantee Framework. [Figure: the complete framework diagram of slide 34, now with the counterexample handling filled in.]
61 Characteristics of Framework. The AG rule uses the conjectures produced by L* as candidate assumptions Ai, and L* uses the AG checks as its Teacher. Since L* terminates, the framework is guaranteed to terminate: at the latest it terminates when Aw is produced, and it may terminate before Aw is produced.
62 Characteristics of Framework. L* makes at most n = |Aw| equivalence queries and O(kn² + n log m) membership queries, where m is the size of the largest counterexample and k is the size of Σ. A membership query costs 1 model checking run or simulation; an equivalence query costs up to 3 model checking runs. In total: O(kn² + n log m) simulations plus O(n) model checking applications, and each model checking run involves either M1 or M2, never both.
63 Outline Motivation Setting Automatic Generation of Assumptions for the AG Rule Learning algorithm Assume-Guarantee with Learning Example
64 Example. Check: Input ║ Output ⊨ Order? Here M1 = Input, M2 = Output and P = Order. The assumption's alphabet is Σ = (αM1 ∪ αP) ∩ αM2 = {send, out, ack}. [Figures: the Input, Output and Order_err LTSs.]
65 Membership Queries. S = set of prefixes, E = set of suffixes, Σ = {send, out, ack}. Initially S = {λ} and E = {λ}, so the table has rows λ, ack, out and send, and each query "is this trace in L(Aw)?" is answered by simulating the trace on Input ║ Order_err.
66 [Figure: the product Input ║ Order_err used for the simulations; for the first queried trace the error state π is unreachable, so the trace is in L(Aw).]
67 The first query is therefore answered yes, and the corresponding table entry is true.
68–72 The remaining rows are filled in the same way. [Figures: simulating ack and send on Input ║ Order_err cannot reach π, so T(ack) = T(send) = true, while simulating out does reach π, so out ∉ L(Aw) and T(out) = false. The table now reads T(λ) = true, T(ack) = true, T(out) = false, T(send) = true.]
73–74 Candidate Construction. The row of out (false) matches no row of S, so out is added to S, and the new rows out·ack, out·out and out·send are filled in by further membership queries; since Aw is prefix-closed, they are all false.
75 The table is now closed. The candidate DFA has 2 states, one per distinct row; the non-accepting state is not added to the assumption, so A1 is a single-state safety LTS with self-loops on ack and send.
76 Conjectures. Step 1: check A1 ║ Input ⊨ Order. [Figure: A1 places no constraint on ack and send, so A1 ║ Input behaves like Input, and composing with Order_err reaches the error state π.]
77 Step 1 fails with counterexample c = in, send, ack, in. Its projection onto Σ, c↾Σ = send, ack, lies in L(A1) \ L(Aw) and is returned to L*.
78–80 Membership Queries. L* adds a distinguishing suffix taken from the counterexample send, ack to E; the table is then no longer closed, so send is added to S and the new rows send·ack, send·out and send·send are filled in by further membership queries.
81 The table is closed again. The candidate DFA now has 3 states; the non-accepting state is not added, so the assumption A2 has 2 states. [Figure: A2, in which ack loops on the initial state, send leads to the second state, and out leads back to the initial state.]
82 Conjectures. Step 1: A2 ║ Input ⊨ Order holds. Step 2: Output ⊨ A2 holds as well (note that the earlier counterexample projection send, ack is not in L(A2)). Conclusion: the property Order holds on Input ║ Output.
83 Another Example. Replace Output by Output', which may perform a second send before out. Step 2 now fails: Output' ⊭ A2 with counterexample c = send, send, out. Is it a real error? Simulate c on Input ║ Order_err.
84 [Figure: simulating send, send, out on Input ║ Order_err cannot reach π (Input must perform ack before a second send, and ack is not offered by the trace), so this is not a real error.]
85 The spurious counterexample c ∈ L(Aw) \ L(A2) is returned to L*, which eventually produces a larger assumption A4; with A4 both premises of the rule hold, so the property Order holds on Input ║ Output'.
86 Conclusion Generate assumptions for assume-guarantee reasoning in an incremental and fully automatic fashion, using learning. Each iteration of assume-guarantee may conclude that the required property is satisfied or violated in the system. Assumption generation converges to an assumption that is necessary and sufficient for the property to hold in the specific system.
87 The End