© 2007 Carnegie Mellon University Optimized L*-based Assume-Guarantee Reasoning Sagar Chaki, Ofer Strichman March 27, 2007
2 Motivation: Reasoning by Decomposition
Let M1 and M2 be two NFAs, and let p be a property expressed as an NFA. Is L(M1 × M2) ⊆ L(p)? (Our notation: M1 × M2 ⊨ p.) Q: What if this is too hard to compute? A: Decompose.
3 Assume-Guarantee Reasoning
An Assume-Guarantee rule, where M1 and M2 are NFAs with alphabets Σ1 and Σ2:

A × M1 ≼ p
M2 ≼ A
----------------
M1 × M2 ≼ p

≡

L(A × (M1 × ¬p)) = ∅ ?
M2 ≼ A
---------------------
L((M1 × ¬p) × M2) = ∅ ?

This rule is sound and complete for ≼ being trace containment, simulation, etc. There always exists such an assumption A (e.g., M2). We need to find an A such that M1 × A is easier to compute than M1 × M2.
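The rule's soundness can be sanity-checked on finite languages, reading × as intersection over a common alphabet and the preorder as trace containment. A minimal sketch (the sets below are illustrative, not from the paper):

```python
# Toy check of the A-G rule's soundness with x = language intersection
# and trace containment as the preorder (finite languages for brevity).
def rule_holds(A, M1, M2, p):
    premise1 = (A & M1) <= p          # A x M1 satisfies p
    premise2 = M2 <= A                # M2 satisfies A
    conclusion = (M1 & M2) <= p       # M1 x M2 satisfies p
    return conclusion if (premise1 and premise2) else True

M1 = {"ab", "ba", "aa"}
M2 = {"ab", "bb"}
p  = {"ab", "aa", "ba"}
A  = {"ab", "bb", "ba"}               # an assumption covering M2
assert rule_holds(A, M1, M2, p)
assert rule_holds(M2, M1, M2, p)      # A = M2 always works as an assumption
```

The point of the exercise is only the shape of the rule: whenever both premises hold, the conclusion holds.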
4 Learning the Assumption
Q: How can we find such an assumption A? A: Learn it with L*. The L* algorithm is due to Angluin [87], later improved by Rivest & Schapire [93]; the latter is what we use.
5 L*: Learning with a Teacher
L* interacts with a teacher for an unknown regular language U over alphabet Σ:
- Membership query: "is s ∈ U?" Answered yes/no.
- Candidate query: "is L(A) = U?" Answered yes, or no with a counterexample:
  - negative feedback: a counterexample in L(A) − U,
  - positive feedback: a counterexample in U − L(A).
Eventually L* finds the minimal DFA A such that L(A) = U.
6 L*
So L* receives positive and negative feedback as input. In each iteration the size of the automaton increases by at least one state, but the feedback of the current iteration is not necessarily eliminated in that iteration. L* creates a sequence of DFAs A1, A2, …, until converging to an A such that L(A) = U.
7 L*
For all i, |Ai| < |Ai+1|, so L* produces at most n candidates, where n = |A|. A is minimal: for any DFA B such that L(A) = L(B), |A| ≤ |B|.
8 Trying to distinguish between:
M1 × M2 ⊨ p is the same as L((M1 × ¬p) × M2) = ∅. Two cases (shown as Venn diagrams of L(M1 × ¬p) and L(M2)): M1 × M2 ⊭ p when the two languages intersect, and M1 × M2 ⊨ p when they are disjoint.
9 On the way we can …
Find an assumption A such that L(M2) ⊆ L(A) ⊆ Σ* − L(M1 × ¬p). Our HOPE: A is 'simpler' to represent than M2, i.e., |M1 × ¬p × A| << |M1 × ¬p × M2|. Such an A is an 'acceptable' assumption:

L(A × (M1 × ¬p)) = ∅ ?
M2 ⊨ A
----------------
L((M1 × ¬p) × M2) = ∅ ?
10 How ?
Learn the language U = Σ* − L(M1 × ¬p). It is well defined, and we can construct a teacher for it: a membership query is answered by simulating the string on M1 × ¬p.
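Answering a membership query by simulation might look like this (the NFA encoding is assumed, not from the paper):

```python
# Sketch: answering "is t in U = Sigma* - L(M1 x not-p)?" by simulating t
# on the NFA for M1 x not-p (subset simulation) and negating the result.
def nfa_accepts(trans, inits, finals, word):
    """trans: dict (state, letter) -> set of successor states."""
    current = set(inits)
    for letter in word:
        current = {q for s in current for q in trans.get((s, letter), set())}
    return bool(current & set(finals))

# Toy NFA over {a,b} accepting exactly the words containing "ab"
trans = {(0, "a"): {0, 1}, (0, "b"): {0}, (1, "b"): {2},
         (2, "a"): {2}, (2, "b"): {2}}

def member_U(t):                      # t in U iff M1 x not-p rejects t
    return not nfa_accepts(trans, {0}, {2}, t)

assert member_U("bba")       # no "ab" factor: rejected by the NFA, so in U
assert not member_U("bab")   # contains "ab": accepted, so outside U
```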
11 How ?
Learn the language U = Σ* − L(M1 × ¬p). A counterexample t to M2 ⊨ A is then a real one: with L(A) = U, t ∈ L(M2) − L(A) implies t ∈ L(M1 × ¬p × M2).
12 L* - when M1 × M2 ⊨ p
A conjecture query: is A acceptable? If L(A × (M1 × ¬p)) ≠ ∅, we obtain a counterexample t; check whether t ∈ L(M2) …
13 L* - when M1 × M2 ⊨ p
Check whether t ∈ L(M2): if yes, t is a real counterexample. Otherwise …
14 L* - when M1 × M2 ⊨ p
… L* receives negative feedback: t should be removed from A. Whether the next candidate avoids L(M1 × ¬p) is a matter of luck!
15 L* - when M1 × M2 ⊨ p
A conjecture query: is A acceptable? If the check M2 ⊨ A fails, L* receives positive feedback: the counterexample should be added to A. Unless …
17 L* - when M1 × M2 ⊭ p
A conjecture query: is A acceptable? L* receives positive feedback: the counterexample t should be added to A. Therefore, check whether t ∈ L(M1 × ¬p); if yes, this is a real counterexample.
18 A-G with Learning
The loop, driven by L*'s candidate assumptions A:
1. Model check A × M1 ⊨ p. If it fails, the counterexample is negative feedback to L*.
2. Otherwise check M2 ⊨ A. If it holds: true, M1 × M2 ⊨ p.
3. If M2 ⊨ A fails with counterexample t, check t against M1 × ¬p: if t ∈ L(M2) and t ∈ L(M1 × ¬p), then false, M1 × M2 ⊭ p; otherwise t is positive feedback to L*.
19 L* - membership queries
#queries = O(kn² + n log m), where m is the size of the largest counterexample, k the size of the alphabet, and n the number of states in A. Minimizing the number of membership queries is one of the subjects of this work.
20 This work
We improve the A-G framework with three optimizations:
1. Feedback reuse: reduce the number of candidate queries.
2. Lazy learning: reduce the number of membership queries.
3. Incremental alphabet: reduce the size of A, the number of membership queries, and the number of conjectures.
As a result: reduced overall verification time of component-based systems. We will talk in detail about the third optimization only.
21 Optimization 2: Lazy Learning
Current method: 1. Learn A. 2. Check L(A × (M1 × ¬p)) = ∅ ?
Here (M1 × ¬p) is external to the learner: the learner interacts with (M1 × ¬p) only through membership queries.
22 A-G with Learning
The same loop as before, with L*'s feedback made explicit: a counterexample to A × M1 ⊨ p is removed from L(A); a counterexample to M2 ⊨ A that is not in L(M1 × ¬p) is added to L(A).
23 Optimization 2: Lazy Learning
Our 'lazy' method: the learner uses information about (M1 × ¬p) to reduce the number of membership queries. In particular, it does not consider transitions that cannot synchronize with (M1 × ¬p).
24 Optimization 2: Lazy Learning
Claim: the sequence of assumptions is the same as in standard L*. Saving: we drop membership queries whose results we can compute in advance.
25 Optimization 3: Incremental Alphabet
Choosing Σ = (Σ1 ∪ Σp) ∩ Σ2 always works; we call this the "full interface alphabet". But there may be a smaller Σ that also works. We wish to find a small such Σ using iterative refinement:
- Start with Σ = ∅.
- Is the current Σ adequate?
  - no: update Σ and repeat.
  - yes: continue as usual.
26 Optimization 3: incremental alphabet
Claim: removing letters from the global alphabet over-approximates the product, i.e., shrinking Σ can only enlarge L(A × B). Example: if Σ = {a,b} then 'bb' ∉ L(A × B), but if Σ = {b} then 'bb' ∈ L(A × B).
28 A-G with Learning
The same loop, now starting the learner with Σ(A) = ∅ and growing the assumption alphabet incrementally.
29 Optimization 3: check whether t ∈ L(M1 × ¬p)
We first check the counterexample t with the full alphabet Σ: if t ∈ L(M1 × ¬p) under the full alphabet, it is a real counterexample!
30 Optimization 3: check whether t ∈ L(M1 × ¬p)
We first check with the full alphabet Σ, then with the reduced alphabet ΣA. If t ∉ L(M1 × ¬p) under both, t is positive feedback; proceed as usual.
31 Optimization 3: check whether t ∈ L(M1 × ¬p)
If t ∉ L(M1 × ¬p) under the full alphabet but t ∈ L(M1 × ¬p) under the reduced alphabet ΣA, there is no positive feedback: t is spurious, and we must refine ΣA.
32 Optimization 3: Refinement
There are various letters we can add to ΣA in order to eliminate a spurious counterexample t. But adding a letter for each spurious counterexample is wasteful. Better: find a small set of letters that eliminates all the spurious counterexamples seen so far.
33 Optimization 3: Refinement
So we face the following problem: "Given a set of sets of letters, find the smallest set of letters that intersects all of them." This is a minimum-hitting-set problem.
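The hitting-set problem can be sketched by brute force over subsets (fine for small alphabets; the paper solves it via 0-1 ILP, and the sets below are illustrative):

```python
# Minimum hitting set by exhaustive search: smallest subset of `universe`
# that intersects every set in `sets`.
from itertools import combinations

def min_hitting_set(universe, sets):
    for k in range(len(universe) + 1):
        for cand in combinations(sorted(universe), k):
            if all(set(cand) & s for s in sets):
                return set(cand)
    return None

# Each inner set = letters whose addition eliminates one spurious cex
sets = [{"a", "b"}, {"b", "c"}, {"a", "c"}]
hs = min_hitting_set({"a", "b", "c"}, sets)
assert len(hs) == 2 and all(hs & s for s in sets)   # no single letter suffices
```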
34 Optimization 3: Refinement
A naïve solution:
- Find for each counterexample the set of letters that eliminates it, by explicit traversal of M1 × ¬p.
- Formulate the problem "find the smallest set of letters that intersects all these sets" as a 0-1 ILP problem.
35 Optimization 3: Incremental Alphabet
Alternative solution: integrate the two stages. Formulate the problem "find the smallest set of letters that eliminates all these counterexamples" directly as a 0-1 ILP problem.
36 Optimization 3: Incremental Alphabet
Let M1 × ¬p be an automaton over states p, q, r, and let M2 be an automaton over states x, y, z. [The automata figures and transition labels were lost in extraction.] Introduce a variable for each state pair: (p,x), (p,y), …, and a choice variable A(l) for each letter l. Initial constraint: (p,x), the initial state pair, is always reachable. Final constraint: ¬(r,z); final state pairs must be unreachable.
37 Optimization 3: Incremental Alphabet
Some sample transition constraints (the letter inside each A(·) was lost in extraction):
(p,x) ∧ ¬A(·) ⇒ (q,x)
(p,x) ∧ ¬A(·) ⇒ (p,y)
(q,x) ⇒ (r,y); (q,x) ∧ ¬A(·) ⇒ (r,x) ∧ (q,y)
Find a solution that minimizes the sum of the A(·) choice variables. In this case the minimum sets both choice variables to TRUE, so both letters enter the updated alphabet.
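The 0-1 minimization above can be sketched by brute force over the choice variables instead of ILP. The instance is illustrative (the slide's actual letters and transitions were lost): each rule (u, l, v) says state pair v becomes reachable from u unless letter l is added to the assumption alphabet, and we want the smallest alphabet making the final pair unreachable.

```python
# Brute-force rendering of the "smallest adequate alphabet" optimization.
from itertools import combinations

def closure(rules, init, blocked):
    """Implication closure: reachable pairs when letters in `blocked`
    are observed (their un-synchronized moves are disabled)."""
    reach = {init}
    changed = True
    while changed:
        changed = False
        for u, letter, v in rules:
            if u in reach and letter not in blocked and v not in reach:
                reach.add(v)
                changed = True
    return reach

def smallest_alphabet(letters, rules, init, final):
    for k in range(len(letters) + 1):
        for a in combinations(sorted(letters), k):
            if final not in closure(rules, init, set(a)):
                return set(a)          # final pair unreachable: A suffices
    return set(letters)

rules = [("px", "a", "qx"), ("px", "b", "py"), ("qx", "a", "rx"),
         ("py", "b", "pz"), ("pz", "a", "rz")]
A = smallest_alphabet({"a", "b"}, rules, "px", "rz")
assert A == {"a"}    # observing 'a' alone cuts every path to the final pair
```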
38 Experimental Results: Overall
[Table: for 10 benchmarks plus an average row, the number of candidate queries, number of membership queries, |Σ|, and running times under configurations T1, T2, and T3. The column boundaries were lost in extraction and the individual numbers cannot be reliably reconstructed.]
39 Experimental Results: Optimization 3
[Same table as slide 38, highlighting the columns relevant to Optimization 3; numbers unrecoverable from the extraction.]
40 Experimental Results: Optimization 2
[Same table as slide 38, highlighting the columns relevant to Optimization 2; numbers unrecoverable from the extraction.]
41 Experimental Results: Optimization 1
[Same table as slide 38, highlighting the columns relevant to Optimization 1; numbers unrecoverable from the extraction.]
42 Related Work
- The original work, by Cobleigh, Giannakopoulou, Pasareanu et al. (NASA); applications to simulation & deadlock.
- Symbolic approach: Alur et al.
- Heuristic approach to Optimization 3: Gheorghiu.
43 Some usage examples
The general method described here was initiated by NASA to verify safety-critical code; some of their slides are recycled here. Our examples:
- OpenSSL: check the order of messages in the handshake step of the protocol.
- Linux driver: verify that acquire/release of locks is done in an order that prevents deadlocks.
- … looking eagerly for more.
44 Ames Rover Executive
- Executes flexible plans for autonomy, branching on state/temporal conditions.
- Multi-threaded system: communication through shared variables, synchronization through mutexes and condition variables.
- Several synchronization issues: mutual exclusion, data races; properties specified by the developer.
45 Membership Queries: Example
Remove the trace 'send,ack'. Membership queries:
is 'ack' ∈ L(M1 × ¬p)? is 'send' ∈ L(M1 × ¬p)? is 'out' ∈ L(M1 × ¬p)? is 'send,send' ∈ L(M1 × ¬p)? is 'send,out' ∈ L(M1 × ¬p)?
#membership queries = O(#states · |Σ| · #experiments). [The candidate automata A1 and A2 in the figure were lost in extraction.]
46 Membership Queries
Continuing the example: remove the counterexample c = 'send,out'; add c = 'send,send,out'. [The intermediate candidate automata A1-A4 in the figure were lost in extraction.]
47 Learning with L*
What happens in L* once a positive/negative trace is found? L* finds a suffix of the trace that distinguishes Ai from A at one of Ai's states. This forces a split of that state, so all transitions must be re-computed, and L* initiates experiments. The process continues until consistency with the experiments is achieved: the result is Ai+1.
48 L* - membership queries
L* initiates its own set of experiments in order to make the assumption consistent with the experiments. For this, L* needs to be able to answer, for a given string s, "is s ∈ L(M1 × ¬p)?" (why the asymmetry?). This is called a membership query.
49 Optimization 2: Lazy Learning
Suppose that L* computes the transitions from a new state sA. This state corresponds to a set of states in M1, denoted by M1(sA). Let Σ1(sA) be the set of transitions (= letters) enabled from M1(sA). Extend sA only with letters from Σ1(sA), rather than from all of Σ; other letters go into an accepting sink state.
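The lazy restriction can be sketched as follows (the encoding of M1 is assumed, not from the paper): only letters enabled in the corresponding M1-states need membership queries; the rest provably lead to the accepting sink.

```python
# Sketch of lazy learning's query filter: when extending a new assumption
# state, query only letters with an outgoing M1-transition from the
# corresponding set of M1-states; skip the membership queries for the rest.
def enabled_letters(m1_trans, m1_states):
    """Letters with at least one outgoing M1-transition from m1_states."""
    return {letter for (s, letter) in m1_trans if s in m1_states}

def extensions_to_query(alphabet, m1_trans, m1_states):
    allowed = enabled_letters(m1_trans, m1_states)
    skipped = set(alphabet) - allowed     # answered without any query
    return allowed, skipped

m1_trans = {(0, "a"): {1}, (1, "b"): {0}}
allowed, skipped = extensions_to_query({"a", "b", "c"}, m1_trans, {0})
assert allowed == {"a"} and skipped == {"b", "c"}
```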
50 L* - learning a regular language
L* queries a teacher for an unknown regular language U ("is s ∈ U?" answered yes/no; "is L(A) = U?" answered yes or with a counterexample) and outputs a minimal DFA A such that L(A) = U. Our case is more problematic: U is not fixed.
51 Overall: A-G with Learning
Start the learner with L(A) = ∅, then loop:
- Model check A × M1 ⊨ p. On a counterexample, ask: is it a real error? If yes: false, M1 × M2 ⊭ p. If no: remove it from L(A) and continue learning.
- Otherwise check M2 ⊨ A. If it holds: true, M1 × M2 ⊨ p. On a counterexample t, ask: is t ∈ L(M1 × ¬p)? If yes: false, a real error. If no: add t to L(A) and continue learning.
52 Learning Assumptions
Let Aw be the weakest assumption: for all environments E, M1 ∥ E satisfies p iff E satisfies Aw. Equivalently, for every s ∈ L(Aw), in the context of s, M1 ⊨ p. Its alphabet is Σ(Aw) = (Σ(M1) ∪ Σ(p)) ∩ Σ(M2). Conjectures are intermediate assumptions Ai, and the framework may terminate before L* computes Aw.
53 Answering a Candidate Query
We are given an assumption DFA A.
- Check L(M1 × A) = ∅. If no, with counterexample CE: return CE, projected onto the assumption alphabet, as negative (strengthening) feedback to the MAT (the minimally adequate teacher).
- If yes, check M2 ⊨ A. If yes: done, L(M1 × M2) = ∅.
- If M2 ⊨ A fails with counterexample CE': check L(M1 × {CE'}) = ∅.
  - If no, with counterexample CE'': done, CE'' is a counterexample to L(M1 × M2) = ∅.
  - If yes: return CE', projected onto the assumption alphabet, as positive (weakening) feedback to the MAT.
54 Modified Candidate Query
As on the previous slide, but with an additional check of L(M1 × {CE'}) = ∅ under the current (reduced) alphabet: when the counterexample turns out spurious, it triggers a refinement Σ = Update(CE') rather than a verdict.
55 Transition constraints
(States as in slide 36; the letter inside each A(·) was lost in extraction, except one surviving A(b).)
(p,x) ∧ ¬A(·) ⇒ (q,x); (p,x) ∧ ¬A(·) ⇒ (p,y)
(q,x) ⇒ (r,y); (q,x) ∧ ¬A(·) ⇒ (r,x) ∧ (q,y)
(p,y) ⇒ (q,z); (p,y) ∧ ¬A(·) ⇒ (q,y) ∧ (p,z)
(r,y) ∧ ¬A(·) ⇒ (r,z); (r,x) ∧ ¬A(·) ⇒ (r,y)
(q,y) ∧ ¬A(·) ⇒ (q,z); (q,y) ∧ ¬A(·) ⇒ (r,y)
(q,z) ∧ ¬A(b) ⇒ (r,z); (p,z) ∧ ¬A(·) ⇒ (q,z)
Find a solution that minimizes the sum of the A(·) choice variables. In this case it must set both choice variables to TRUE, so both letters enter the updated alphabet.
56 L* Observation Table
An observation table OT = (S, E, T):
- S ⊆ Σ*, E ⊆ Σ*;
- S' = {s·σ | s ∈ S ∧ σ ∈ Σ};
- T : (S ∪ S') × E → {0,1}, with T(s,e) = 1 iff s·e ∈ U.
Definition: for all si, sj ∈ S ∪ S', si ≡ sj iff ∀e ∈ E. T(si,e) = T(sj,e).
Example: Σ = {a,b} and {a, ab, aba} ⊆ U. [The example table's rows were garbled in extraction.]
57 L* Observation Table
An OT is consistent if ∀si, sj ∈ S. ¬(si ≡ sj); L* always maintains a consistent OT. An OT is closed if ∀s' ∈ S', ∃s ∈ S. s ≡ s'. Any consistent OT can be extended to a closed and consistent OT; this requires adding rows to the table with membership queries and extending S with new states. [The example tables were garbled in extraction.]
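The two conditions amount to comparing table rows over E; a sketch with an assumed toy table (not the slide's example):

```python
# Closedness and consistency of an observation table, with each string's
# "row" being its tuple of T-values over the experiments E.
def row(T, s, E):
    return tuple(T[(s, e)] for e in E)

def is_consistent(T, S, E):
    rows = [row(T, s, E) for s in S]
    return len(set(rows)) == len(rows)       # distinct states, distinct rows

def is_closed(T, S, S_prime, E):
    state_rows = {row(T, s, E) for s in S}
    return all(row(T, s, E) in state_rows for s in S_prime)

E = ["", "a"]
S, Sp = ["", "a"], ["aa", "ab", "b"]
T = {("", ""): 0, ("", "a"): 1, ("a", ""): 1, ("a", "a"): 0,
     ("aa", ""): 0, ("aa", "a"): 1, ("ab", ""): 0, ("ab", "a"): 0,
     ("b", ""): 0, ("b", "a"): 1}
assert is_consistent(T, S, E)
assert not is_closed(T, S, Sp, E)   # the row of "ab" matches no state row
```

Closing this table would promote "ab" into S, exactly as in the procedure on the next slides.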
58 Overall L* Procedure
OT = (S, E, T) with S = E = {ε};
do forever {
  *close OT;
  construct an assumption A from OT;
  *make a candidate query with A;
  if the answer is "yes", exit with A;
  otherwise let CE be the counterexample;
  construct an experiment e from CE;
  E = E ∪ {e};
}
* - explanation follows
59 Closing the OT
Input: a consistent OT = (S, E, T). Output: a closed and consistent OT.
while (OT is not closed) {
  let s' ∈ S' be a state such that ∀s ∈ S. ¬(s ≡ s');
  S = S ∪ {s'};
  S' = (S' \ {s'}) ∪ {s'·σ | σ ∈ Σ};
}
60 Candidate Construction
Given a closed and consistent OT = (S, E, T), the candidate is a DFA A = (S, Σ, δ, F) such that:
- ∀si, sj ∈ S, ∀σ ∈ Σ: (si, σ, sj) ∈ δ iff si·σ ≡ sj;
- F = {s ∈ S | T(s, ε) = 1}.
[The example table and resulting automaton were lost in extraction.]
61 Learning an experiment from CE
Let CE = σ1 … σk be the counterexample to a candidate query with A; CE is in the symmetric difference of U and L(A). For i = 0 to k:
- let CEi be the prefix of CE of length i, and CE^i the suffix of CE of length k − i;
- let si be the state of A reached by simulating CEi;
- let bi = 1 if si · CE^i ∈ U and 0 otherwise (reading si as its access string).
Then b0 = 1 iff CE ∈ U, and bk = 1 iff CE ∈ L(A), so b0 ≠ bk, hence ∃j. bj ≠ bj+1. Find such a j and return the experiment e = CE^(j+1).
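This extraction can be sketched with a linear scan (Rivest & Schapire binary-search the same sequence bi). The one-state candidate and target language below are illustrative:

```python
# Extract a distinguishing experiment e from counterexample CE.
def find_experiment(CE, run_state, access, in_u):
    """run_state(i): candidate state after CE[:i]; access(s): that state's
    access string; in_u: membership oracle. Returns the new suffix e."""
    k = len(CE)

    def b(i):
        return in_u(access(run_state(i)) + CE[i:])

    assert b(0) != b(k)               # CE is in the symmetric difference
    for j in range(k):
        if b(j) != b(j + 1):
            return CE[j + 1:]         # e = suffix of length k - (j+1)

def run_state(i): return "q0"               # one-state candidate, accepts all
def access(s): return ""                    # its access string: the empty word
def in_u(w): return w.count("a") % 2 == 0   # U: even number of a's

e = find_experiment("ab", run_state, access, in_u)
assert e == "b"    # "b" distinguishes the candidate's single state from U
```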
62 Learning Assumptions
For simplicity, assume from here on that M1 = M1' × ¬p. To answer a membership query with t ∈ Σ*: let {t} be the DFA over Σ that accepts only the string t. The answer is Yes if L(M1 × {t}) = ∅ (i.e., t ∉ L(M1)), and No otherwise.