1 Learning Assumptions for Compositional Verification J. M. Cobleigh, D. Giannakopoulou and C. S. Pasareanu Presented by: Sharon Shoham

2 Main Limitation of Model Checking: the state explosion problem. The number of states in the system model grows exponentially with the number of variables and the number of components in the system. One solution to the state explosion problem: Abstraction. Another: Compositional Verification.

3 Compositional Verification. Inputs: a composite system M1 ║ M2 and a property P. Goal: check whether M1 ║ M2 ⊨ P. First attempt: a "divide and conquer" approach. Problem: it is usually impossible to verify each component separately, because a component is typically designed to satisfy its requirements only in specific environments (contexts).

4 Compositional Verification. The Assume-Guarantee (AG) paradigm introduces assumptions representing a component's environment. Instead of asking "Does the component satisfy the property?", ask: "Under assumption A on its environment, does the component guarantee the property?" Written ⟨A⟩ M ⟨P⟩: whenever M is part of a system satisfying the assumption A, the system also guarantees P.

5 A Useful AG Rule for Safety Properties. 1. Check that component M1 guarantees P when it is part of a system satisfying assumption A: A ║ M1 ⊨ P. 2. Discharge the assumption: show that the remaining component M2 (the environment) satisfies A: M2 ⊨ A. Conclusion: M1 ║ M2 ⊨ P.

6 Assume-Guarantee. The crucial element is the assumption A. It has to be strong enough to eliminate violations of P, but also general enough to reflect the environment M2 appropriately. Defining such assumptions requires non-trivial human input. How can assumptions be constructed automatically?

7 Outline Motivation Setting Automatic Generation of Assumptions for the AG Rule Learning algorithm Assume-Guarantee with Learning Example

8 Labeled Transition Systems (LTS). Act – the set of observable actions; LTSs communicate using observable actions. τ – a local (internal) action. An LTS is M = (Q, q0, αM, δ) where Q is a finite non-empty set of states, q0 ∈ Q is the initial state, αM ⊆ Act is the set of observable actions (the alphabet of M), and δ ⊆ Q × (αM ∪ {τ}) × Q is the transition relation.

9 Labeled Transition Systems (LTS). [diagram: the Input LTS, with states 0, 1, 2 and transitions 0 --in--> 1, 1 --send--> 2, 2 --ack--> 0] A trace of an LTS M is a finite sequence of observable actions that M can perform starting at the initial state. L(M), the language of M, is the set of all traces of M. Example traces of Input: ⟨in⟩, ⟨in, send⟩, ⟨in, send, ack⟩, …

10 Parallel Composition M1 ║ M2. Components synchronize on common observable actions (communication); the remaining actions are interleaved. Given M1 = (Q1, q0_1, αM1, δ1) and M2 = (Q2, q0_2, αM2, δ2), M1 ║ M2 = (Q, q0, αM, δ) with Q = Q1 × Q2, q0 = (q0_1, q0_2), and αM = αM1 ∪ αM2.

11 Transition Relation δ. Synchronization on a ∈ αM1 ∩ αM2: if (q1, a, q1') ∈ δ1 and (q2, a, q2') ∈ δ2, then ((q1, q2), a, (q1', q2')) ∈ δ. Interleaving on a ∈ (αM \ (αM1 ∩ αM2)) ∪ {τ}: if (q1, a, q1') ∈ δ1 and a ∉ αM2, then ((q1, q2), a, (q1', q2)) ∈ δ for any q2 ∈ Q2; symmetrically, if (q2, a, q2') ∈ δ2 and a ∉ αM1, then ((q1, q2), a, (q1, q2')) ∈ δ for any q1 ∈ Q1.
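
To make the composition concrete, here is a minimal sketch in Python (not from the paper); it assumes an LTS is encoded as a dictionary with keys states, init, alphabet and delta (a set of (source, action, target) triples), with "tau" marking internal moves.

```python
TAU = "tau"  # internal action; never part of an LTS's observable alphabet

def compose(m1, m2):
    """Parallel composition: synchronize on shared observable actions, interleave the rest."""
    shared = m1["alphabet"] & m2["alphabet"]
    delta = set()
    for (q1, a, q1p) in m1["delta"]:
        if a in shared:
            # synchronization: both components move together on a common action
            for (q2, b, q2p) in m2["delta"]:
                if b == a:
                    delta.add(((q1, q2), a, (q1p, q2p)))
        else:
            # interleaving: m1 moves alone (a is local to m1, or tau)
            for q2 in m2["states"]:
                delta.add(((q1, q2), a, (q1p, q2)))
    for (q2, a, q2p) in m2["delta"]:
        if a not in shared:
            # interleaving: m2 moves alone
            for q1 in m1["states"]:
                delta.add(((q1, q2), a, (q1, q2p)))
    return {"states": {(q1, q2) for q1 in m1["states"] for q2 in m2["states"]},
            "init": (m1["init"], m2["init"]),
            "alphabet": m1["alphabet"] | m2["alphabet"],
            "delta": delta}
```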

12 Example: composing the Input LTS (states 0, 1, 2; actions in, send, ack) with the Output LTS (states a, b, c; actions send, out, ack). [diagram: Input and Output]

13 [diagram: the full state space of Input ║ Output, with states (0,a) through (2,c) and the synchronized and interleaved transitions]

14 [diagram: the reachable part of Input ║ Output, after removing unreachable states]

15 Safety Properties. Also expressed as LTSs, but of a special kind. A safety LTS is deterministic: it contains no τ-transitions, and every state has at most one outgoing transition for each action: (q, a, q'), (q, a, q'') ∈ δ ⇒ q' = q''.

16 Safety Properties. For a safety LTS P, L(P) describes the set of legal (acceptable) behaviors over αP. M ⊨ P iff ∀ σ ∈ L(M): (σ↾αP) ∈ L(P). For Σ ⊆ Act, σ↾Σ is the trace obtained from σ by removing all occurrences of actions a ∉ Σ. Example: ⟨in, send, ack⟩↾{in, ack} = ⟨in, ack⟩. This is language containment: L(M)↾αP ⊆ L(P).
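
A small projection helper as a sketch (the trace in the assert is the example trace assumed above, taken from the Input LTS):

```python
def project(trace, sigma):
    """The projection trace|sigma: drop every action that is not in sigma."""
    return tuple(a for a in trace if a in sigma)

# matches the example on the slide
assert project(("in", "send", "ack"), {"in", "ack"}) == ("in", "ack")
```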

17 Example: does Input ║ Output ⊨ Order? The Order property requires in and out to alternate, starting with in (L(Order) = Pref((in·out)*)), so the check asks whether every trace of Input ║ Output, projected onto {in, out}, lies in L(Order). [diagram: the Order LTS and Input ║ Output]

18 Model Checking M ⊨ P. A safety LTS P is turned into an error LTS P_err that "traps" violations with a special error state π. If P = (Q, q0, αP, δ), then P_err = (Q ∪ {π}, q0, αP, δ'), where δ' = δ ∪ {(q, a, π) | a ∈ αP and there is no q' ∈ Q with (q, a, q') ∈ δ}. The error LTS is complete; π is a dead-end state with no outgoing transitions.
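
A sketch of this completion step, reusing the dictionary-based LTS encoding from the composition sketch above; the error-state name "pi" is an illustrative choice.

```python
ERR = "pi"  # the error state (written pi on the slides)

def error_lts(p):
    """Complete the safety LTS P into P_err: every missing (state, action) move goes to pi."""
    delta = set(p["delta"])
    for q in p["states"]:
        for a in p["alphabet"]:
            if not any(src == q and act == a for (src, act, _) in p["delta"]):
                delta.add((q, a, ERR))  # trap the violation
    # pi itself gets no outgoing transitions: it is a dead-end state
    return {"states": p["states"] | {ERR}, "init": p["init"],
            "alphabet": p["alphabet"], "delta": delta}
```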

19 Example. [diagram: the Order safety LTS over {in, out} and its error LTS Order_err, in which every missing transition leads to the error state π]

20 Model Checking M ⊨ P. Theorem: M ⊨ P iff π is unreachable in M ║ P_err. Remark: the composition M ║ P_err is defined as before, with a small exception: if the target state of a transition in P_err is π, so is the target state of the corresponding transitions in the composed system M ║ P_err. The automata-theoretic analogue: M ⊨ φ iff L(A_M ∩ A_¬φ) = ∅.
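
The check itself can be sketched as a breadth-first search over M ║ P_err, built from the compose and error_lts helpers above; it assumes αP is contained in the alphabet of M (as in the slides' examples), and returns a violating trace rather than a plain yes/no so that later steps can reuse it as a counterexample.

```python
from collections import deque

def find_counterexample(m, p):
    """Return a trace of M || P_err that reaches pi, or None if M |= P."""
    prod = compose(m, error_lts(p))
    init = prod["init"]
    seen, queue = {init}, deque([(init, ())])
    while queue:
        state, trace = queue.popleft()
        if state[1] == ERR:            # the P_err component is pi: property violated
            return trace
        for (src, a, dst) in prod["delta"]:
            if src == state and dst not in seen:
                seen.add(dst)
                queue.append((dst, trace + (a,)))
    return None                        # pi unreachable: M |= P

def satisfies(m, p):
    return find_counterexample(m, p) is None
```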

21 Transition Relation δ of M ║ P_err (δ1 from M, δ2 from P_err). Synchronization on a ∈ αM1 ∩ αM2: if (q1, a, q1') ∈ δ1 and (q2, a, q2') ∈ δ2, then ((q1, q2), a, (q1', q2')) ∈ δ — except that whenever the P_err target is π, the composed target state is π. Interleaving on a ∈ (αM \ (αM1 ∩ αM2)) ∪ {τ}: if (q1, a, q1') ∈ δ1 and a ∉ αM2, then ((q1, q2), a, (q1', q2)) ∈ δ for any q2; symmetrically for δ2.

22 Example. [diagram: Order_err composed with Input ║ Output — the full product]

23 Example. [diagram: the reachable part of (Input ║ Output) ║ Order_err]

24 [diagram: exploring (Input ║ Output) ║ Order_err — π is unreachable, so the property holds]

25 [diagram: a check in which π is reachable — a counterexample trace is reported]

26 Assume-Guarantee Reasoning. Assumptions are also expressed as safety LTSs. ⟨A⟩ M ⟨P⟩ is true iff A ║ M ⊨ P, i.e. π is unreachable in A ║ M ║ P_err. The rule (M1, M2 are LTSs; P, A are safety LTSs): premises A ║ M1 ⊨ P and M2 ⊨ A; conclusion M1 ║ M2 ⊨ P.

27 Outline Motivation Setting Automatic Generation of Assumptions for the AG Rule Learning algorithm Assume-Guarantee with Learning Example

28 Observation 1. The AG rule will return conclusive results with the weakest assumption A_w under which M1 satisfies P: under assumption A_w on M1's environment, M1 satisfies P, i.e. A_w ║ M1 ⊨ P. Weakest means: for every environment M2', M1 ║ M2' ⊨ P IFF M2' ⊨ A_w — a sufficient and necessary assumption on M1's environment. Given A_w, to check whether M1 ║ M2 ⊨ P it suffices to check whether M2 meets the assumption A_w.

29 How to Obtain A_w? Given module M1, property P, and M1's interface with its environment Σ = (αM1 ∪ αP) ∩ αM2: A_w describes exactly all traces σ over Σ such that in the context of σ, M1 satisfies P. A_w is expressible as a safety LTS and can be computed algorithmically. Drawback of using A_w: if the computation runs out of memory, no assumption is obtained. (Recall: M1 ║ M2' ⊨ P IFF M2' ⊨ A_w.)

30 Observation 2. There is no need to use the weakest environment assumption A_w; the AG rule might already be applicable with a stronger (less general) assumption. So, instead of computing A_w directly: use a learning algorithm to learn A_w, and use the candidates A_i produced by the learning algorithm as candidate assumptions — try to apply the AG rule with each A_i.

31 Given a Candidate Assumption A_i. If A_i ║ M1 ⊨ P does not hold, the assumption A_i is not tight enough — we need to strengthen A_i.

32 Given a Candidate Assumption A_i. Suppose A_i ║ M1 ⊨ P holds. If M2 ⊨ A_i also holds, then M1 ║ M2 ⊨ P is verified. Otherwise there is a counterexample: σ ∈ L(M2) but cex = (σ↾Σ) ∉ L(A_i). Is P violated by M1 in the context of cex, i.e. does "cex ║ M1" ⊭ P? Yes: a real violation — M1 ║ M2 ⊨ P is falsified. No: a spurious violation — we need to find a better approximation of A_w.

33 “cex ║ M 1 ” ² P ? Compose M 1 with LTS A cex over αΣ cex=a 1,…a k  A cex : Q={q 0,…,q k }, q 0 = q 0 δ ={ (q i,a i+1,q i+1 ) | 0  i < k } Model check A cex ║ M 1 ² P Is  reachable in A cex ║ M 1 ║ P err ? Alternatively: simulate cex on M 1 ║ P err cex ║ M 1 ² P iff when cex is simulated on M 1 ║ P err it cannot lead to  (error) state. yes no Real violation Spurious cex q0q0 a1 a1 a2a2 q1q1 qkqk ak ak … “cex ∊ L(A w )” ?

34 Assume-Guarantee Framework. [diagram: the learning loop — Learning produces a candidate A_i; Model Checking step 1 asks A_i ║ M1 ⊨ P (if false, the counterexample is used to strengthen the assumption); step 2 asks M2 ⊨ A_i (if true, P holds in M1 ║ M2; if false, check whether cex ║ M1 ⊭ P — yes means P is violated in M1 ║ M2, no means the counterexample is used to weaken the assumption)]. For A_w, conclusive results are guaranteed ⇒ termination!

35 Outline Motivation Setting Automatic Generation of Assumptions for the AG Rule Learning algorithm Assume-Guarantee with Learning Example

36 Learning Algorithm for DFAs – L*. L* (by Angluin, improved by Rivest & Schapire) learns an unknown regular language U and produces a Deterministic Finite-state Automaton (DFA) C such that L(C) = U. A DFA is M = (Q, q0, αM, δ, F): Q, q0, αM, δ are as in a deterministic LTS, and F ⊆ Q is the set of accepting states. L(M) = {σ | δ(q0, σ) ∈ F}.

37 Learning Algorithm for DFAs – L*. L* interacts with a Teacher that answers two types of questions: membership queries — is the string σ in U? — and conjectures (also called equivalence queries) — for a candidate DFA C_i, is L(C_i) = U? Answers are either true, or false together with a counterexample. The conjectures C_1, C_2, … converge to C.

38 Learning Algorithm for DFAs – L*. Myhill-Nerode Theorem: for every regular set U ⊆ Σ* there exists a unique minimal deterministic automaton whose states are isomorphic to the set of equivalence classes of the relation w ≈ w' iff ∀ u ∈ Σ*: wu ∈ U ⇔ w'u ∈ U. Basic idea: learn the equivalence classes. Two prefixes are not in the same class iff there is a distinguishing suffix u.

39 General Method. L* maintains a table that records whether strings in Σ* belong to U, and makes membership queries to update it. Once all currently distinguishable strings are in different classes (the table is closed), it uses the table to build a candidate C_i and makes a conjecture: if the Teacher replies true, we are done; if the Teacher replies false, the counterexample is used to update the table.

40 Learning Algorithm for DFAs – L*. Observation Table (S, E, T): S ⊆ Σ* — prefixes (representatives of equivalence classes / states); E ⊆ Σ* — (distinguishing) suffixes; T: (S ∪ S·Σ)·E → {true, false}, where T(s·e) = true iff s·e ∈ U. [table: rows indexed by S and S·Σ, columns indexed by E]

41 L* Algorithm. T: (S ∪ S·Σ)·E → {true, false}. (S, E, T) is closed if ∀ s ∈ S ∀ a ∈ Σ ∃ s' ∈ S ∀ e ∈ E: T(sae) = T(s'e). Intuition: sa represents the next state reached from s after seeing a; closedness says sa is indistinguishable from some s' ∈ S by any of the suffixes, i.e. every row sa of S·Σ has a matching row s' in S.
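
A sketch of the closedness test, assuming the table T is a dictionary from strings (tuples of actions) in (S ∪ S·Σ)·E to booleans, with S and E kept as lists of tuples:

```python
def row(T, s, E):
    """The row of string s: its T-values for all suffixes in E."""
    return tuple(T[s + e] for e in E)

def find_unclosed(S, E, T, alphabet):
    """Return some s·a whose row matches no row of S, or None if (S, E, T) is closed."""
    rows_of_S = {row(T, s, E) for s in S}
    for s in S:
        for a in alphabet:
            if row(T, s + (a,), E) not in rows_of_S:
                return s + (a,)
    return None
```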

42 Learning Algorithm for DFAs – L*. Initially S = {λ}, E = {λ}. Loop { update T using membership queries; while (S, E, T) is not closed { add sa to S, where sa has no matching row s' in S, and update T using membership queries }; from the closed (S, E, T) construct a candidate DFA C; present an equivalence query: L(C) = U? }

43 Learning Algorithm for DFAs – L*. Candidate DFA from a closed (S, E, T): the states are (the rows of) S; the initial state is λ; the accepting states are the s ∈ S with T(s) [= T(s·λ)] = true; the transition relation is δ(s, a) = s', where s' ∈ S satisfies ∀ e ∈ E: T(sae) = T(s'e) — such an s' exists because the table is closed.
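
Candidate construction from a closed table can be sketched as follows (same table encoding as above; E is assumed to be an ordered list whose first entry is the empty suffix λ = ()):

```python
def make_candidate(S, E, T, alphabet):
    """Build the candidate DFA: states are the distinct rows of S."""
    states = {row(T, s, E) for s in S}
    init = row(T, (), E)                               # the row of lambda
    accepting = {r for r in states if r[E.index(())]}  # T(s) = T(s . lambda) = true
    delta = {}
    for s in S:
        for a in alphabet:
            # well-defined because the table is closed: row(s.a) equals the row of some s' in S
            delta[(row(T, s, E), a)] = row(T, s + (a,), E)
    return {"states": states, "init": init, "accepting": accepting,
            "alphabet": set(alphabet), "delta": delta}
```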

44 Learning Algorithm for DFAs – L*. The full loop: initially S = {λ}, E = {λ}. Loop { update T using membership queries; while (S, E, T) is not closed { add sa to S where sa has no matching row in S, and update T using membership queries }; from the closed (S, E, T) construct a candidate DFA C; present an equivalence query: L(C) = U?; if C is correct, return C; else add a suffix e ∈ Σ* of the counterexample that witnesses the difference to E }
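
Putting the pieces together, here is a sketch of the top-level loop. It assumes two teacher callbacks, is_member and check_conjecture (illustrative names), and handles a counterexample by adding all of its suffixes to E — a simple, sound variant; the Rivest-Schapire optimization discussed on the next slide instead adds a single distinguishing suffix.

```python
def lstar(alphabet, is_member, check_conjecture):
    S, E = [()], [()]                           # prefixes and suffixes; both start with lambda
    T = {}
    while True:
        # fill the table with membership queries
        for s in S + [p + (a,) for p in S for a in alphabet]:
            for e in E:
                if s + e not in T:
                    T[s + e] = is_member(s + e)
        sa = find_unclosed(S, E, T, alphabet)
        if sa is not None:                      # table not closed: add the new representative
            S.append(sa)
            continue
        candidate = make_candidate(S, E, T, alphabet)
        ok, cex = check_conjecture(candidate)
        if ok:
            return candidate
        # add suffixes of the counterexample as new distinguishing experiments
        for i in range(len(cex)):
            suffix = tuple(cex[i:])
            if suffix not in E:
                E.append(suffix)
```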

45 Choosing e. The suffix e must be such that adding it to E eliminates the current counterexample in the next conjecture: e is a suffix of cex that witnesses a difference between L(C_i) and U. Adding e to E causes the addition of a prefix (state) to S — splitting states.

46 Characteristics of L*. It terminates with the minimal automaton C for U. Each candidate C_i is smallest: any DFA consistent with the table has at least as many states as C_i, and |C_1| < |C_2| < … < |C|. It produces at most n candidates, where n = |C|. The number of membership queries is O(kn² + n log m), where m is the size of the largest counterexample and k is the size of Σ.

47 Outline Motivation Setting Automatic Generation of Assumptions for the AG Rule Learning algorithm Assume-Guarantee with Learning Example

48 Assume-Guarantee with Learning. Reminder: use the learning algorithm to learn A_w, and use the candidates it produces as candidate assumptions A_i for the AG rule. In order to use L* to produce the assumptions A_i we must: show that L(A_w) is regular; translate the DFAs into safety LTSs (assumptions); and implement the Teacher.

49 L(A_w) is Regular. A_w is expressible as a safety LTS. Translation of a safety LTS A into a DFA C: A = (Q, q0, αM, δ) ⇒ C = (Q, q0, αM, δ, Q), i.e. all states are accepting. Hence there exists a DFA C with L(C) = L(A_w), and L* can be used to learn a DFA that accepts L(A_w).

50 DFA to Safety LTS. The DFAs C_i returned by L* are complete, minimal and — in this setting — also prefix-closed (if σ ∈ L(C_i), then every prefix of σ is also in L(C_i)), because the weakest assumption is prefix-closed. Therefore C_i contains a single non-accepting state q_nf, and no accepting state is reachable from q_nf. To get a safety LTS, simply remove the non-accepting state and all its incoming transitions: C_i = (Q ∪ {q_nf}, q0, αM, δ, Q) ⇒ A_i = (Q, q0, αM, δ ∩ (Q × αM × Q)).
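
The conversion can be sketched directly on the candidate DFA produced above, under the slide's assumption that the DFA is complete, prefix-closed, and has a single non-accepting trap state:

```python
def dfa_to_safety_lts(dfa):
    """Drop the non-accepting trap state and every transition into it."""
    good = dfa["accepting"]
    delta = {(q, a, qp) for ((q, a), qp) in dfa["delta"].items()
             if q in good and qp in good}
    return {"states": set(good), "init": dfa["init"],
            "alphabet": set(dfa["alphabet"]), "delta": delta}
```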

51 Implementing the Teacher. A_w itself is unknown. Membership query: σ ∈ L(A_w)? Check whether "σ ║ M1" ⊨ P (σ ∈ L(A_w) iff in the context of σ, M1 satisfies P), either by model checking — is π reachable in A_σ ║ M1 ║ P_err? — or by simulation — is the error state π reachable when simulating σ on M1 ║ P_err? Equivalence query: L(C_i) = L(A_w)? Translate C_i into a safety LTS A_i and use it as the candidate assumption for the AG rule.
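
Finally, the Teacher can be sketched by wiring together the helpers above (membership by checking the trace in context, equivalence via the two AG steps); m1, m2, p and the interface alphabet sigma are assumed to be given, and the function names are illustrative rather than the paper's.

```python
def make_teacher(m1, m2, p, sigma):
    def is_member(trace):
        # membership query: trace is in L(A_w) iff, in the context of trace, M1 satisfies P
        return satisfies(compose(trace_lts(trace, sigma), m1), p)

    def check_conjecture(candidate_dfa):
        a_i = dfa_to_safety_lts(candidate_dfa)
        # Step 1: A_i || M1 |= P?  A counterexample lies in L(A_i) \ L(A_w): strengthen A_i.
        cex = find_counterexample(compose(a_i, m1), p)
        if cex is not None:
            return False, project(cex, sigma)
        # Step 2: M2 |= A_i?  (A_i is a safety LTS, so it can play the role of the property.)
        cex = find_counterexample(m2, a_i)
        if cex is None:
            return True, None                   # P holds in M1 || M2
        t = project(cex, sigma)
        if not is_member(t):
            raise RuntimeError("real error: P is violated in M1 || M2, witness " + str(t))
        return False, t                         # spurious: t is in L(A_w) \ L(A_i), weaken A_i

    return is_member, check_conjecture
```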

52 [diagram: the framework loop]. Key fact: A_w contains all the traces that can be composed with M1 without violating P. The two model-checking steps together answer the equivalence query "L(A_i) = L(A_w)?".

53 Step 1 (A_i ║ M1 ⊨ P) answers "L(A_i) ⊆ L(A_w)?": a counterexample from step 1 yields cex ∈ L(A_i) and cex ∉ L(A_w).

54 If step 1 succeeds, then L(A_i) ⊆ L(A_w) has been established.

55 Step 2 (M2 ⊨ A_i) is then performed; any counterexample is analysed to decide whether it is a real error.

56 Step 2's counterexample analysis answers "L(A_i) ⊇ L(A_w)?".

57 A spurious counterexample from step 2 yields cex ∉ L(A_i) and cex ∈ L(A_w), i.e. a witness that L(A_i) ⊉ L(A_w).

58 Equivalence Query – Summary. An equivalence query costs two applications of model checking (steps 1 and 2) plus one more model-checking run or simulation (the counterexample analysis).

59 Equivalence Query – Summary. If L(A_i) = L(A_w): the AG rule returns conclusive results, so there is no need to continue with L*. If L(A_i) ≠ L(A_w): the checks may still be conclusive, in which case we can terminate and abort L*; otherwise a counterexample cex is returned — by step 1 if L(A_i) ⊈ L(A_w), or by step 2 if L(A_i) ⊊ L(A_w).

60 Assume-Guarantee Framework. [diagram: the complete loop — L* produces A_i; step 1: A_i ║ M1 ⊨ P (a counterexample strengthens the assumption); step 2: M2 ⊨ A_i (true ⇒ P holds in M1 ║ M2; false ⇒ check cex ║ M1 ⊭ P — a real error means P is violated in M1 ║ M2, otherwise the counterexample weakens the assumption)]

61 Characteristics of the Framework. AG uses the conjectures produced by L* as candidate assumptions A_i, and L* uses the AG checks as its Teacher. Since L* terminates, the framework is guaranteed to terminate: at the latest it terminates when A_w is produced, and possibly before A_w is produced!

62 Characteristics of the Framework. L* makes at most n = |A_w| equivalence queries, and O(kn² + n log m) membership queries (m: size of the largest counterexample; k: size of Σ). Each membership query costs one model-checking run or simulation; each equivalence query costs up to three model-checking runs. In total: O(kn² + n log m) simulations plus O(n) model-checking applications — and each model-checking run involves either M1 or M2, never both!

63 Outline Motivation Setting Automatic Generation of Assumptions for the AG Rule Learning algorithm Assume-Guarantee with Learning Example

64 Example. Check: Input ║ Output ⊨ Order? Here M1 = Input, M2 = Output, and P = Order. The assumption's alphabet is Σ = (αM1 ∪ αP) ∩ αM2 = {send, out, ack}. [diagram: the Input, Output and Order_err LTSs]

65 Membership Queries. Σ = {send, out, ack}; S = set of prefixes, E = set of suffixes. Each query "σ ∈ L(A_w)?" is answered by simulating σ on Input ║ Order_err. [table: the observation table, initially with rows λ, ack, out, send and column λ]

66 [diagram: simulating λ on Input ║ Order_err — π is unreachable, so λ ∈ L(A_w)]

67 Membership Queries. The answer is yes, so T(λ) = true. [table]

68 Membership Queries. The prefix ⟨ack⟩ is queried next: simulating it on Input ║ Order_err cannot reach π, so T(ack) = true. [table]

69 Membership Queries. The prefix ⟨out⟩ is queried: is ⟨out⟩ ∈ L(A_w)? Simulate it on Input ║ Order_err. [table]

70 [diagram: simulating ⟨out⟩ on Input ║ Order_err — π is reachable (out occurs before in), so ⟨out⟩ ∉ L(A_w)]

71 Membership Queries. The answer is no, so T(out) = false. [table]

72 Membership Queries. The first column is now filled: T(λ) = true, T(ack) = true, T(out) = false, T(send) = true. [table]

73 Candidate Construction. The row of out differs from the row of λ, so out is added to S, and the rows out·ack, out·out, out·send are filled in by further membership queries. [table]

74 Candidate Construction. Since A_w is prefix-closed and out ∉ L(A_w), every extension of out is also rejected: T(out·ack) = T(out·out) = T(out·send) = false. [table]

75 Candidate Construction. The table is closed! The candidate DFA has 2 states; the non-accepting state is not added to the assumption, so the assumption A_1 is the single-state safety LTS with self-loops on ack and send. [table and diagram of A_1]

76 Conjectures. Step 1: A_1 ║ Input ⊨ Order? Note that A_1 ║ Input ≡ Input, since A_1 allows ack and send unconditionally. [diagram: A_1 ║ Input ║ Order_err]

77 Conjectures. Step 1 fails: A_1 ║ Input ⊭ Order, with counterexample c = ⟨in, send, ack, in⟩. Return to L*: c↾Σ = ⟨send, ack⟩ ∈ L(A_1) \ L(A_w). [diagram: A_1 ║ Input ║ Order_err]

78 Membership Queries. The counterexample ⟨send, ack⟩ is returned to L*, which adds a distinguishing suffix of it to E and continues with membership queries. [table]

79 Membership Queries. The new column is filled in with further membership queries. [table]

80 Membership Queries. The table is extended further: the prefix send is added to S and its extensions send·ack, send·out, send·send are queried. [table]

81 Candidate Construction. The table is closed! The new candidate has 3 states; the non-accepting state is not added to the assumption, yielding the two-state assumption A_2 over {send, out, ack}. [table and diagram of A_2]

82 Conjectures. Recall that for A_1, Step 1 failed with counterexample c = ⟨in, send, ack, in⟩ and c↾Σ = ⟨send, ack⟩ ∈ L(A_1) \ L(A_w) was returned to L*; note c↾Σ ∉ L(A_2). Now, for A_2: Step 1: A_2 ║ Input ⊨ Order — true. Step 2: Output ⊨ A_2 — true. Therefore the property Order holds on Input ║ Output. [diagrams: A_1 and A_2]

83 Another Example. Now consider a modified output component Output'. Step 2: Output' ⊨ A_2 fails with counterexample c = ⟨send, send, out⟩. Is this a real error? Simulate c↾Σ = ⟨send, send, out⟩ on Input ║ Order_err. [diagram: Output' and A_2]

84 Another Example. Simulating c↾Σ = ⟨send, send, out⟩ on Input ║ Order_err: π is unreachable, so this is not a real error. [diagram]

85 Another Example. The spurious counterexample c↾Σ ∈ L(A_w) \ L(A_2) is returned to L* to weaken the assumption. Eventually L* produces the assumption A_4, with which both steps of the AG rule succeed: the property Order holds on Input ║ Output'. [diagram: A_4 and A_w]

86 Conclusion. The framework generates assumptions for assume-guarantee reasoning in an incremental and fully automatic fashion, using learning. Each iteration of the assume-guarantee check may already conclude that the required property is satisfied or violated in the system. Assumption generation converges to an assumption that is necessary and sufficient for the property to hold in the specific system.

87 The End