© 2007 Carnegie Mellon University Optimized L*-based Assume-Guarantee Reasoning Sagar Chaki, Ofer Strichman March 27, 2007.

2 Motivation: Reasoning by Decomposition
Let M1 and M2 be two NFAs, and let p be a property expressed as an NFA.
Is L(M1 × M2) ⊆ L(p)? (Our notation: M1 × M2 ⊨ p.)
Q: What if this is too hard to compute? A: Decompose.

3 Assume-Guarantee Reasoning
An Assume-Guarantee rule, where M1 and M2 are NFAs with alphabets Σ1 and Σ2:

  A × M1 ≼ p    M2 ≼ A
  --------------------
      M1 × M2 ≼ p

which is equivalent to

  A × (M1 × ¬p) ≼ ∅    M2 ≼ A
  ---------------------------
      (M1 × ¬p) × M2 ≼ ∅

This rule is sound and complete for ≼ being trace containment, simulation, etc. There always exists such an assumption A (e.g., M2 itself). We need to find an A such that M1 × A is easier to compute than M1 × M2.

4 Learning the Assumption
Q: How can we find such an assumption A? A: Learn it with L*.
The L* algorithm is by Angluin [87], later improved by Rivest & Schapire [93]; the latter is what we use.

5 L*
L* interacts with a Teacher for an unknown regular language U:
- Membership query: is s ∈ U? The Teacher answers yes/no.
- Candidate query: is L(A) = U? The Teacher answers yes, or returns a counterexample σ:
  - No, σ ∈ L(A) − U: negative feedback.
  - No, σ ∈ U − L(A): positive feedback.
L* outputs a DFA A such that L(A) = U; in fact, L* finds the minimal such A.

6 L*
So L* receives as input positive and negative feedback. In each iteration the size of the automaton increases by at least one state, but the feedback is not necessarily eliminated in the current iteration. L* creates a sequence of DFAs A1, A2, …, until converging to an A such that L(A) = U.

7 L*
For all i, |Ai| < |Ai+1|. L* produces at most n candidates, where n = |A|. A is minimal: for any DFA B such that L(A) = L(B), |A| ≤ |B|.

8 Trying to distinguish between L(M1 × ¬p) and L(M2). M1 × M2 ⊨ p is the same as L((M1 × ¬p) × M2) = ∅: either the two languages are disjoint (M1 × M2 ⊨ p), or they intersect (M1 × M2 ⊭ p).

9 On the way we can …
Find an assumption A such that L(M2) ⊆ L(A) ⊆ Σ* − L(M1 × ¬p).
Our HOPE: A is 'simpler' to represent than M2, i.e., |M1 × ¬p × A| << |M1 × ¬p × M2|. Such an A is an 'acceptable' assumption.

10 How?
Learn the language U = Σ* − L(M1 × ¬p). This is well defined, and we can construct a teacher for it: a membership query for σ is answered by simulating σ on M1 × ¬p.
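The membership check on this slide can be sketched in Python: a query "σ ∈ U?" is answered by simulating σ on the NFA M1 × ¬p and negating the result. The dict-based NFA encoding and the toy "error" automaton below are illustrative assumptions, not taken from the slides.

```python
def nfa_accepts(delta, init, final, word):
    """Simulate a word on an NFA; delta maps (state, letter) -> set of states."""
    current = set(init)
    for letter in word:
        current = set().union(*(delta.get((q, letter), ()) for q in current))
    return bool(current & set(final))

def membership_query(word, m1_not_p):
    """Is word in U = Sigma* - L(M1 x ~p)?  Yes iff M1 x ~p rejects it."""
    delta, init, final = m1_not_p
    return not nfa_accepts(delta, init, final, word)

# Toy product automaton M1 x ~p accepting exactly the traces that reach an
# error (hypothetical example): state 1 is the error state.
delta = {(0, 'ok'): {0}, (0, 'err'): {1}, (1, 'ok'): {1}, (1, 'err'): {1}}
m1_not_p = (delta, {0}, {1})
print(membership_query(('ok', 'ok'), m1_not_p))   # True: trace avoids the error
print(membership_query(('ok', 'err'), m1_not_p))  # False: trace reaches the error
```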

11 How?
Learn the language U = Σ* − L(M1 × ¬p). This is well defined, and we can construct a teacher for it. A counterexample σ to M2 ⊨ A is a real one: σ ∈ L(M2) − L(A) implies σ ∈ L(M1 × ¬p × M2).

12 L* - when M1 × M2 ⊨ p
A conjecture (candidate) query: is A acceptable? Check whether σ ∈ L(M2) …

13 L* - when M1 × M2 ⊨ p
Check whether σ ∈ L(M2) … If yes, σ is a real counterexample. Otherwise …

14 L* - when M1 × M2 ⊨ p
L* receives negative feedback: σ should be removed from A. A matter of luck!

15 L* - when M1 × M2 ⊨ p
A conjecture query: is A acceptable? L* receives positive feedback: σ should be added to A. Unless…


17 L* - when M1 × M2 ⊭ p
A conjecture query: is A acceptable? L* receives positive feedback: σ should be added to A. Therefore, check whether σ ∈ L(M1 × ¬p); if yes, this is a real counterexample.

18 A-G with Learning
The overall loop:
- L* proposes a candidate assumption A.
- Model check A × M1 ⊨ p. On a counterexample σ: if σ ⊨ M2, report false with σ (M1 × M2 ⊭ p); otherwise σ is negative feedback to L*.
- If it holds, check M2 ⊨ A. If that holds too: M1 × M2 ⊨ p (true). On a counterexample σ: if σ ⊨ M1 × ¬p, report false with σ (M1 × M2 ⊭ p); otherwise σ is positive feedback to L*.

19 L* - membership queries
# queries = O(kn² + n log m), where m is the size of the largest counterexample, k is the size of the alphabet, and n is the number of states in A. Minimizing the number of membership queries is one of the subjects of this work.

20 This work
In this work we improve the A-G framework with three optimizations:
1. Feedback reuse: reduce the number of candidate queries.
2. Lazy learning: reduce the number of membership queries.
3. Incremental alphabet: reduce the size of A, the number of membership queries, and the number of conjectures.
As a result: reduced overall verification time of component-based systems. We will talk in detail about the third optimization only.

21 Optimization 2: Lazy Learning
Current method:
1. Learn A.
2. Check if L(A × (M1 × ¬p)) = ∅.
(M1 × ¬p) is external to the learner; the learner interacts with (M1 × ¬p) only through membership queries.

22 A-G with Learning
The same loop, with the feedback spelled out: negative feedback removes σ from L(A); positive feedback adds σ to L(A).

23 Optimization 2: Lazy Learning
Our 'lazy' method: the learner uses information about (M1 × ¬p) to reduce the number of membership queries. In particular, it does not consider transitions that cannot synchronize with (M1 × ¬p).

24 Optimization 2: Lazy Learning
Claim: the sequence of assumptions is the same as in L*.
Saving: we drop membership queries whose result we can compute in advance.

25 Optimization 3: Incremental Alphabet
Choosing Σ = (Σ1 ∪ Σp) ∩ Σ2 always works; we call (Σ1 ∪ Σp) ∩ Σ2 the "full interface alphabet". But there may be a smaller Σ that also works. We wish to find such a small Σ using iterative refinement:
Start with Σ = ∅. Is the current Σ adequate?
- If no: update Σ and repeat.
- If yes: continue as usual.

26 Optimization 3: incremental alphabet
Claim: removing letters from the global alphabet over-approximates the product. Example: let A accept the single word 'a' and B accept 'bb'. If Σ = {a,b} then 'bb' ∉ L(A × B); if Σ = {b} then 'bb' ∈ L(A × B). Decreasing Σ can only grow L(M).
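The claim can be checked mechanically. Below is a sketch under an assumed trace-style composition: a word is in the product iff each component accepts its projection, and a letter removed from the global alphabet becomes a silent move. The two toy automata mirror the slide's example: A accepts 'a', B accepts 'bb'.

```python
def eps_close(delta, states, removed):
    """Close a state set under transitions labeled with removed letters
    (removing a letter from the alphabet turns it into a silent move)."""
    stack, seen = list(states), set(states)
    while stack:
        q = stack.pop()
        for (p, a), succs in delta.items():
            if p == q and a in removed:
                for r in succs:
                    if r not in seen:
                        seen.add(r)
                        stack.append(r)
    return seen

def project_accepts(delta, init, final, word, sigma):
    """NFA acceptance after restricting the global alphabet to sigma."""
    removed = {a for (_, a) in delta} - set(sigma)
    current = eps_close(delta, set(init), removed)
    for letter in word:
        step = set().union(*(delta.get((q, letter), ()) for q in current))
        current = eps_close(delta, step, removed)
    return bool(current & set(final))

def in_product(word, automata, sigma):
    """Trace-style product: word is in L(A1 x A2 x ...) iff every component
    accepts the projection of word onto its own (reduced) alphabet."""
    for delta, init, final in automata:
        letters = {a for (_, a) in delta} & set(sigma)
        local = tuple(a for a in word if a in letters)
        if not project_accepts(delta, init, final, local, sigma):
            return False
    return True

A = ({(0, 'a'): {1}}, {0}, {1})                 # accepts 'a'
B = ({(0, 'b'): {1}, (1, 'b'): {2}}, {0}, {2})  # accepts 'bb'
print(in_product(('b', 'b'), [A, B], {'a', 'b'}))  # False: 'bb' not in L(A x B)
print(in_product(('b', 'b'), [A, B], {'b'}))       # True: smaller alphabet grows the product
```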


28 A-G with Learning
The same loop, except that learning with L* is now over an assumption alphabet ΣA, initialized to ΣA = ∅.

29 Optimization 3: Check if σ ⊨ M1 × ¬p
We first check σ with the full alphabet Σ. If σ ∈ L(M1 × ¬p): a real counterexample!

30 Optimization 3: Check if σ ⊨ M1 × ¬p
We first check σ with the full alphabet Σ; suppose σ ∉ L(M1 × ¬p). Then we check with the reduced alphabet ΣA: if σ's projection is also outside L(M1 × ¬p), this is positive feedback, and we proceed as usual.

31 Optimization 3: Check if σ ⊨ M1 × ¬p
We first check σ with the full alphabet Σ; suppose σ ∉ L(M1 × ¬p). Then we check with the reduced alphabet ΣA: if σ's projection is inside L(M1 × ¬p), there is no positive feedback: σ is spurious, and we must refine ΣA.

32 Optimization 3: Refinement
There are various letters that we can add to ΣA in order to eliminate σ. But adding a letter for each spurious counterexample is wasteful. Better: find a small set of letters that eliminates all the spurious counterexamples seen so far.

33 Optimization 3: Refinement
So we face the following problem: "Given a set of sets of letters, find the smallest set of letters that intersects all of them." This is a minimum-hitting-set problem.
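A brute-force rendering of the problem, fine at the alphabet sizes in this setting (the talk solves it as a 0-1 ILP instead). The three letter sets below are made-up illustrative data, not the paper's.

```python
from itertools import chain, combinations

def min_hitting_set(sets):
    """Smallest set of letters that intersects every set in `sets`,
    found by trying candidate subsets in order of increasing size."""
    universe = sorted(set(chain.from_iterable(sets)))
    for size in range(len(universe) + 1):
        for cand in combinations(universe, size):
            if all(set(cand) & s for s in sets):
                return set(cand)

# One set of eliminating letters per spurious counterexample (assumed data).
eliminating = [{'send', 'ack'}, {'ack', 'out'}, {'send', 'out'}]
hs = min_hitting_set(eliminating)
print(hs)  # a minimum hitting set of size 2
```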

34 Optimization 3: Refinement
A naïve solution:
- Find for each counterexample the set of letters that eliminates it: an explicit traversal of M1 × ¬p.
- Formulate the problem "find the smallest set of letters that intersects all these sets": a 0-1 ILP problem.

35 Optimization 3: Incremental Alphabet
Alternative solution: integrate the two stages. Formulate the problem "find the smallest set of letters that eliminates all these counterexamples" directly as a 0-1 ILP problem.

36 Optimization 3: Incremental Alphabet
Let M1 × ¬p be an automaton with states p, q, r, and let σ be a trace with states x, y, z (the transition letters, lost in the transcript, are rendered below as α and β).
Introduce a variable for each state-pair: (p,x), (p,y), …
Introduce choice variables A(α) and A(β), one per candidate letter.
Initial constraint: (p,x). The initial state-pair is always reachable.
Final constraint: ¬(r,z). The final state-pairs must be unreachable.

37 Optimization 3: Incremental Alphabet
Some sample transition constraints (letters lost in the transcript are rendered as α, β):
(p,x) ∧ ¬A(α) ⟹ (q,x)
(p,x) ∧ ¬A(β) ⟹ (p,y)
(q,x) ⟹ (r,y); (q,x) ∧ ¬A(β) ⟹ (r,x) ∧ (q,y)
Find a solution that minimizes A(α) + A(β). In this case it sets A(α) = A(β) = TRUE; the updated alphabet is ΣA = {α, β}.

38 Experimental Results: Overall

39 Experimental Results: Optimization 3

40 Experimental Results: Optimization 2

41 Experimental Results: Optimization 1

[Four tables of candidate queries, membership queries, |Σ|, and running-time ratios for configurations T1, T2, T3; the numeric data did not survive the transcript.]

42 Related Work
- NASA: the original work, by Cobleigh, Giannakopoulou, Păsăreanu et al.; applications to simulation & deadlock.
- Symbolic approach: Alur et al.
- Heuristic approach to optimization 3: Gheorghiu.

43 Some usage examples
The general method described here was initiated by NASA to verify some safety-critical code; some of their slides are recycled here.
Our examples:
- OpenSSL: check the order of messages in the handshake step of the protocol.
- Linux driver: verify that acquire/release of locks is done in an order that prevents deadlocks.
- … looking eagerly for more.

44 Ames Rover Executive
- Executes flexible plans for autonomy, branching on state / temporal conditions.
- Multi-threaded system: communication through shared variables, synchronization through mutexes and condition variables.
- Several synchronization issues: mutual exclusion, data races; properties specified by the developer.

45 Membership Queries Example
Remove ⟨send,ack⟩. Membership queries:
- is ⟨ack⟩ ∈ L(M1 × ¬p)?
- is ⟨send⟩ ∈ L(M1 × ¬p)?
- is ⟨out⟩ ∈ L(M1 × ¬p)?
- is ⟨send,send⟩ ∈ L(M1 × ¬p)?
- is ⟨send,out⟩ ∈ L(M1 × ¬p)?
# membership queries = O(#states · |Σ| · #experiments)
[The candidate automata A1 and A2 over {ack, send, out} did not survive the transcript.]

46 Membership Queries
Remove ⟨send,ack⟩; remove c = ⟨send,out⟩; add c = ⟨send,send,out⟩.
[The candidate automata A1 through A4 over {ack, send, out} did not survive the transcript.]

47 Learning with L*
What happens in L* once a positive/negative trace is found? L* finds a suffix of the trace that distinguishes Ai from A, starting from one of Ai's states. This forces a split of that state, and all transitions need to be re-computed, so L* initiates experiments. The process continues until consistency with the experiments is achieved: the result is Ai+1.

48 L* - membership queries
L* initiates its own set of experiments in order to make the assumption consistent with them. For this, L* needs to be able to answer, for a given string s, "is s ∈ L(M1 × ¬p)?" (why the asymmetry?). This is called a membership query.

49 Optimization 2: Lazy Learning
Suppose that L* computes the transitions from a new state sA. This state corresponds to a set of states in M1, denoted by M1(sA). Let Σ1(sA) be the set of transitions (= letters) enabled from M1(sA). Extend sA only with letters from Σ1(sA), rather than from Σ; other letters go into an accepting sink-state.
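The lazy filter reduces to one set comprehension under an assumed dict-based transition relation: only letters enabled from the M1-states corresponding to sA are used to extend sA.

```python
def enabled_letters(delta, m1_states):
    """Sigma1(s_A): letters with an outgoing transition from some state in
    M1(s_A); the lazy learner extends s_A with these letters only."""
    return {a for (q, a) in delta if q in m1_states}

# Toy fragment of M1 x ~p (assumed): from state 0 only 'send' is enabled.
delta = {(0, 'send'): {1}, (1, 'ack'): {0}, (1, 'out'): {2}}
print(enabled_letters(delta, {0}))     # {'send'}
print(enabled_letters(delta, {0, 1}))  # {'send', 'ack', 'out'}
```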

50 L* - learning a regular language
L* interacts with a Teacher for an unknown regular language U: membership queries "s ∈ U?" are answered yes/no, and candidate queries "L(A) = U?" are answered yes or with a counterexample. L* outputs the minimal DFA A such that L(A) = U. Our case is more problematic: U is not fixed.

51 Overall: A-G with Learning
Start the learner with L(A) = ∅.
- Model check A × M1 ⊨ p. On a counterexample σ, ask whether it is a real error (is σ ⊨ M2?): if yes, report false with σ; if no, remove σ from L(A).
- If A × M1 ⊨ p holds, check M2 ⊨ A. If it holds: M1 × M2 ⊨ p (true). On a counterexample σ, ask whether it is a real error (is σ ⊨ M1 × ¬p?): if yes, M1 × M2 ⊭ p (false, with σ); if no, add σ to L(A).

52 Learning Assumptions
Let Aw be the weakest assumption: for all environments E, M1 ‖ E satisfies p iff E satisfies Aw. Equivalently, s ∈ L(Aw) iff, in the context of s, M1 ⊨ p.
Σ(Aw) = (Σ(M1) ∪ Σ(p)) ∩ Σ(M2).
Conjectures are intermediate assumptions Ai; the framework may terminate before L* computes Aw.

53 Answering a Candidate Query
We are given an assumption DFA A.
- Check L(M1 × A) = ∅. If no, with counterexample CE: return CE↓Σ as negative (strengthening) feedback to the MAT.
- If yes, check M2 ⊨ A. If yes: done, L(M1 × M2) = ∅.
- If no, with counterexample CE': check L(M1 × {CE'}Σ2) = ∅. If yes: return CE'↓Σ as positive (weakening) feedback to the MAT. If no, with counterexample CE'': done, CE'' is a counterexample to L(M1 × M2) = ∅.

54 Modified Candidate Query
The flow of the previous slide, with one extra check before returning positive feedback: check L(M1 × {CE'}Σ) = ∅ with the full alphabet Σ. If yes, return CE'↓Σ as positive (weakening) feedback to the MAT; if no, the counterexample is spurious: Σ = Update(CE').

55 Transition constraints
(Letters lost in the transcript are rendered as α and β.)
(p,x) ∧ ¬A(α) ⟹ (q,x)
(p,x) ∧ ¬A(β) ⟹ (p,y)
(q,x) ⟹ (r,y); (q,x) ∧ ¬A(β) ⟹ (r,x) ∧ (q,y)
(p,y) ⟹ (q,z); (p,y) ∧ ¬A(α) ⟹ (q,y) ∧ (p,z)
(r,y) ∧ ¬A(β) ⟹ (r,z); (r,x) ∧ ¬A(α) ⟹ (r,y)
(q,y) ∧ ¬A(β) ⟹ (q,z); (q,y) ∧ ¬A(α) ⟹ (r,y)
(q,z) ∧ ¬A(β) ⟹ (r,z); (p,z) ∧ ¬A(α) ⟹ (q,z)
Find a solution that minimizes A(α) + A(β). In this case it must set A(α) = A(β) = TRUE; the updated alphabet is ΣA = {α, β}.

56 L* Observation Table
An observation table OT = (S, E, T):
S ⊆ Σ*, E ⊆ Σ*
S' = {s·σ | s ∈ S ∧ σ ∈ Σ}
T : (S ∪ S') × E → {0,1}, with T(s,e) = 1 iff s·e ∈ U
Definition: for all si, sj ∈ S ∪ S': si ≡ sj iff ∀e ∈ E. T(si,e) = T(sj,e).
Example: Σ = {a,b}, with a, ab, aba ∈ U. [The example table's entries did not survive the transcript.]

57 L* Observation Table
An OT is consistent if for all distinct si, sj ∈ S: ¬(si ≡ sj). L* always maintains a consistent OT.
An OT is closed if ∀s' ∈ S', ∃s ∈ S: s ≡ s'. Any consistent OT can be extended to a closed and consistent OT; this requires adding rows to the table with membership queries and extending S with new states. [The two example tables did not survive the transcript.]

58 Overall L* Procedure
OT = (S,E,T) with S = E = {ε};
do forever {
    *close the OT;
    construct an assumption A from the OT;
    *make an assumption (candidate) query with A;
    if the answer is "yes", exit with A;
    otherwise let CE be the counterexample;
    construct an experiment e from CE;
    E = E ∪ {e};
}
* - explanation follows
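The loop on this slide can be made concrete. The sketch below is not the paper's implementation: for the counterexample step it uses the Maler-Pnueli treatment (add all suffixes of CE as experiments), which keeps the table consistent without a separate step, instead of the Rivest & Schapire suffix search the talk uses. The target language (even number of a's) is an illustrative assumption.

```python
from itertools import product

def lstar(alphabet, member, equivalent):
    S, E = [()], [()]                      # row labels (prefixes) and experiments
    row = lambda s: tuple(member(s + e) for e in E)

    def close():                           # the "close OT" step of the slide
        while True:
            rows = {row(s) for s in S}
            ext = next((s + (a,) for s in S for a in alphabet
                        if row(s + (a,)) not in rows), None)
            if ext is None:
                return
            S.append(ext)

    while True:
        close()
        # Hypothesis DFA: states are the (distinct) rows of S.
        delta = {(row(s), a): row(s + (a,)) for s in S for a in alphabet}
        accept = {row(s) for s in S if member(s)}
        hyp = (delta, row(()), accept)
        ce = equivalent(hyp)               # candidate query
        if ce is None:
            return hyp
        E.extend(ce[i:] for i in range(len(ce)) if ce[i:] not in E)

def run(hyp, word):
    delta, state, accept = hyp
    for a in word:
        state = delta[(state, a)]
    return state in accept

def brute_force_teacher(alphabet, member, max_len=6):
    """Candidate queries answered by exhaustive search (fine for toy targets)."""
    def equivalent(hyp):
        for n in range(max_len + 1):
            for w in product(alphabet, repeat=n):
                if run(hyp, w) != member(w):
                    return w
        return None
    return equivalent

member = lambda w: w.count('a') % 2 == 0   # U: even number of a's (assumed example)
hyp = lstar(('a', 'b'), member, brute_force_teacher(('a', 'b'), member))
print(len({q for (q, _) in hyp[0]}))       # 2: the learned DFA is minimal
```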

59 Closing the OT
Input: a consistent OT = (S,E,T). Output: a closed and consistent OT.
while (OT is not closed)
    let s' ∈ S' be a state such that ∀s ∈ S: ¬(s ≡ s');
    S = S ∪ {s'};
    S' = S' \ {s'} ∪ {s'·σ | σ ∈ Σ};
end

60 Candidate Construction
OT = (S, E, T) is closed and consistent. The candidate is a DFA A = (S, Σ, δ, F) such that:
∀si, sj ∈ S, ∀σ ∈ Σ: (si, σ, sj) ∈ δ iff si·σ ≡ sj
F = {s ∈ S | T(s, ε) = 1}
[The example table and automaton did not survive the transcript.]

61 Learning an experiment from CE
Let CE = σ1, …, σk be the counterexample for the assumption query with A; CE is in the symmetric difference of U and L(A).
For i = 0 to k:
    let CE_i = the prefix of CE of length i, and CE^i = the suffix of CE of length k − i;
    s_i = the state of A reached by simulating CE_i (states of A are named by strings in S);
    b_i = 1 if s_i·CE^i ∈ U, and 0 otherwise.
b_0 = 1 iff CE ∈ U, and b_k = 1 iff CE ∈ L(A). Hence b_0 ≠ b_k, so ∃j: b_j ≠ b_{j+1}. Find such a j and return the experiment e = CE^{j+1}.
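A linear-scan rendering of this construction (Rivest & Schapire locate j by binary search, which gives the log m factor; a scan is simpler to read). States of the hypothesis are identified with their access strings so that s_i·CE^i can be fed to the membership oracle. The two-state hypothesis and the target "contains aa" are illustrative assumptions, not the paper's example.

```python
def find_experiment(hyp_delta, init, member, ce):
    """Find j with b_j != b_{j+1} and return the suffix e = CE^(j+1)."""
    k = len(ce)
    def b(i):
        state = init                      # states are access strings
        for a in ce[:i]:
            state = hyp_delta[(state, a)]
        return member(state + ce[i:])     # b_i = [s_i . CE^i in U]
    assert b(0) != b(k)                   # ce lies in the symmetric difference
    for j in range(k):
        if b(j) != b(j + 1):
            return ce[j + 1:]

# Target U: words containing 'aa' (assumed). Hypothesis: a wrong 2-state DFA
# remembering only "last letter was a"; its access strings are () and ('a',).
member = lambda w: any(w[i] == w[i + 1] == 'a' for i in range(len(w) - 1))
hyp_delta = {((), 'a'): ('a',), ((), 'b'): (),
             (('a',), 'a'): ('a',), (('a',), 'b'): ()}
ce = ('a', 'a', 'b')                      # in U but rejected by the hypothesis
print(find_experiment(hyp_delta, (), member, ce))  # ('b',)
```

The returned suffix ('b',) distinguishes the access strings ('a','a') and ('a',), which the wrong hypothesis had merged into one state.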

62 Learning Assumptions
For simplicity, assume from here on that M1 = M1' × ¬p.
To answer a membership query with t ∈ Σ*:
Let {t}Σ = the DFA over alphabet Σ that accepts only the string t.
Answer = Yes if L(M1 × {t}Σ) = ∅ (i.e., t ∉ L(M1)), No otherwise.
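The construction {t}Σ is just a chain of states with a rejecting sink; a sketch under an assumed dict encoding:

```python
def trace_dfa(t, sigma):
    """{t}_Sigma: the DFA over `sigma` accepting exactly the trace t.
    States 0..len(t) form a chain; -1 is a rejecting sink."""
    delta = {}
    for i in range(len(t) + 1):
        for a in sigma:
            delta[(i, a)] = i + 1 if i < len(t) and a == t[i] else -1
    for a in sigma:
        delta[(-1, a)] = -1
    return delta, 0, {len(t)}

def dfa_accepts(dfa, word):
    delta, state, final = dfa
    for a in word:
        state = delta[(state, a)]
    return state in final

d = trace_dfa(('send', 'ack'), {'send', 'ack', 'out'})
print(dfa_accepts(d, ('send', 'ack')))  # True
print(dfa_accepts(d, ('send', 'out')))  # False
```

The membership query for t then reduces to the emptiness check of the product M1 × {t}Σ, as on the slide.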