
1 Learning Based Assume-Guarantee Reasoning
Corina Păsăreanu, Perot Systems Government Services, NASA Ames Research Center
Joint work with:
Dimitra Giannakopoulou (RIACS/NASA Ames)
Howard Barringer (U. of Manchester)
Jamie Cobleigh (U. of Massachusetts Amherst/MathWorks)
Mihaela Gheorghiu (U. of Toronto)

2 Thanks
Eric Madelaine, Monique Simonetti (INRIA)

3 Context
Objective: an integrated environment that supports software development and verification/validation throughout the lifecycle; detect integration problems early, prior to coding.
Approach:
- Compositional ("divide and conquer") verification, for increased scalability, at design level.
- Use design-level artifacts to improve/aid coding and testing.
[Figure: lifecycle from Requirements through Design, Coding, Testing, to Deployment; design-level models M1, M2 map to implementations C1, C2; the cost of detecting/fixing defects increases along the lifecycle, so integration issues are handled early.]

4 Compositional Verification
Does the system made up of M1 and M2 satisfy property P?
- Checking P on the entire system: too many states!
- Use the natural decomposition of the system into its components to break up the verification task.
Check components in isolation: does M1 satisfy P?
- Typically a component is designed to satisfy its requirements in specific contexts/environments.
Assume-guarantee reasoning:
- Introduces an assumption A representing M1's "context".

5 Assume-Guarantee Rules
Reason about triples ⟨A⟩ M ⟨P⟩: the formula is true if, whenever M is part of a system that satisfies A, the system must also guarantee P.
Simplest assume-guarantee rule, ASYM:
1. ⟨A⟩ M1 ⟨P⟩
2. ⟨true⟩ M2 ⟨A⟩ ("discharges" the assumption)
3. (conclude) ⟨true⟩ M1 || M2 ⟨P⟩
How do we come up with the assumption? (Usually a difficult manual process.)
Solution: use a learning algorithm.

6 Outline
- Framework for learning based assume-guarantee reasoning [TACAS'03]: automates rule ASYM
- Extension with symmetric [SAVCBS'03] and circular rules
- Extension with alphabet refinement [TACAS'07]
- Implementation and experiments
- Other extensions
- Related work
- Conclusions

7 Formalisms
Components modeled as finite state machines (FSMs):
- FSMs assembled with the parallel composition operator "||", which synchronizes shared actions and interleaves the remaining actions.
A safety property P is an FSM:
- P describes all legal behaviors.
- Perr is the complement of P: determinize and complete P with an "error" state; bad behaviors lead to error.
- Component M satisfies P iff the error state is unreachable in (M || Perr).
Assume-guarantee reasoning:
- Assumptions and guarantees are FSMs.
- ⟨A⟩ M ⟨P⟩ holds iff the error state is unreachable in (A || M || Perr).
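The semantics above can be sketched in a few lines of Python. This is our illustrative encoding, not LTSA's: the names FSM, compose, error_reachable, and triple_holds are ours, and the error state is modeled simply as a state named 'err' in the property automaton.

```python
from collections import deque

class FSM:
    def __init__(self, alphabet, transitions, initial):
        # transitions: dict mapping (state, action) -> next state
        self.alphabet = set(alphabet)
        self.trans = dict(transitions)
        self.initial = initial

def compose(m1, m2):
    """Parallel composition m1 || m2: shared actions synchronize,
    actions outside a component's alphabet interleave freely."""
    trans = {}
    seen, queue = set(), deque([(m1.initial, m2.initial)])
    while queue:
        s1, s2 = queue.popleft()
        if (s1, s2) in seen:
            continue
        seen.add((s1, s2))
        for a in m1.alphabet | m2.alphabet:
            # a component not knowing the action stays put; one that knows
            # it but has no transition blocks the synchronized move
            n1 = m1.trans.get((s1, a), s1 if a not in m1.alphabet else None)
            n2 = m2.trans.get((s2, a), s2 if a not in m2.alphabet else None)
            if n1 is None or n2 is None:
                continue
            trans[((s1, s2), a)] = (n1, n2)
            queue.append((n1, n2))
    return FSM(m1.alphabet | m2.alphabet, trans, (m1.initial, m2.initial))

def error_reachable(m, is_error):
    """Breadth-first search for a state satisfying is_error."""
    seen, queue = set(), deque([m.initial])
    while queue:
        s = queue.popleft()
        if s in seen:
            continue
        seen.add(s)
        if is_error(s):
            return True
        for (src, a), dst in m.trans.items():
            if src == s:
                queue.append(dst)
    return False

def triple_holds(A, M, perr):
    """<A> M <P> holds iff 'err' is unreachable in A || M || Perr."""
    return not error_reachable(compose(compose(A, M), perr),
                               lambda s: s[1] == 'err')
```

For example, encoding the Order property ("in and out alternate") as a Perr with transitions to 'err' lets triple_holds check whether a permissive component satisfies it under a restrictive assumption.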

8 Example
[Figure: FSMs Input (actions in, send, ack) and Output (actions out, send, ack), composed with ||, checked against property Order (in and out must alternate; violations lead to state err).]

9 The Weakest Assumption
Given component M, property P, and the interface of M with its environment, generate the weakest environment assumption WA such that ⟨WA⟩ M ⟨P⟩ holds.
Weakest means that for all environments E:
⟨true⟩ M || E ⟨P⟩ IFF ⟨true⟩ E ⟨WA⟩

10 Learning for Assume-Guarantee Reasoning
Use an off-the-shelf learning algorithm to build an appropriate assumption for rule ASYM:
1. ⟨A⟩ M1 ⟨P⟩
2. ⟨true⟩ M2 ⟨A⟩
3. ⟨true⟩ M1 || M2 ⟨P⟩
- The process is iterative.
- Assumptions are generated by querying the system, and are gradually refined.
- Queries are answered by model checking.
- Refinement is based on counterexamples obtained by model checking.
- Termination is guaranteed.

11 Learning with L*
The L* algorithm is due to Angluin, improved by Rivest & Schapire. It learns an unknown regular language U (over an alphabet Σ) and produces a DFA A such that L(A) = U.
L* uses a teacher to answer two types of questions:
- membership query "is string s in U?": answered true or false;
- conjecture "is L(Ai) = U?": if true, L* outputs a DFA A with L(A) = U; if false, the teacher returns a counterexample string t for L* to add to or remove from the conjectured language.
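A compact, illustrative sketch of L* follows. For brevity it uses the simpler Maler–Pnueli-style counterexample handling (add all suffixes of the counterexample as distinguishing experiments) rather than Rivest–Schapire's optimized analysis, and a brute-force bounded equivalence teacher; all names are ours.

```python
def lstar(alphabet, member, equivalent):
    """member(word) -> bool; equivalent(accepts) -> (True, None) or (False, cex).
    Words are tuples of actions. Returns (accepts, number_of_states)."""
    S, E = [()], [()]      # access prefixes / distinguishing suffixes
    T = {}                 # cached answers to membership queries

    def row(s):
        return tuple(T.setdefault(s + e, member(s + e)) for e in E)

    while True:
        # closedness: every one-step extension of S must match a row of S
        while True:
            rows_S = {row(s) for s in S}
            missing = [s + (a,) for s in S for a in alphabet
                       if row(s + (a,)) not in rows_S]
            if not missing:
                break
            S.append(missing[0])
        # conjecture DFA: states are the (distinct) row signatures
        delta = {(row(s), a): row(s + (a,)) for s in S for a in alphabet}
        init = row(())
        accept = {row(s) for s in S if row(s)[0]}   # E[0] is the empty suffix

        def accepts(word, d=delta, q0=init, acc=accept):
            q = q0
            for a in word:
                q = d[(q, a)]
            return q in acc

        ok, cex = equivalent(accepts)
        if ok:
            return accepts, len({row(s) for s in S})
        # refine: add all suffixes of the counterexample as experiments
        E.extend(cex[i:] for i in range(len(cex)) if cex[i:] not in E)
```

On the classic target "even number of a's" over {a, b}, this converges to the minimal 2-state DFA.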

12 Learning Assumptions
Use L* to generate candidate assumptions, over the alphabet ΣA = (ΣM1 ∪ ΣP) ∩ ΣM2. Both kinds of questions are answered by model checking:
- Membership query for string s: the answer is the result of checking ⟨s⟩ M1 ⟨P⟩.
- Conjecture Ai:
  1. Check premise 1, ⟨Ai⟩ M1 ⟨P⟩. If it fails with counterexample t, remove t/ΣA and continue learning.
  2. Check premise 2, ⟨true⟩ M2 ⟨Ai⟩. If it holds, P holds in M1 || M2; stop.
  3. If premise 2 fails with counterexample t, analyze t: if ⟨t/ΣA⟩ M1 ⟨P⟩ fails, P is violated; otherwise the counterexample is spurious, so add t/ΣA and continue learning.
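The control flow of this iteration can be written down as a small driver. The callback names are hypothetical: in the real framework each check_* call is a model-checking run (as on the previous slides) and `learner` is the L* instance.

```python
def learn_assumption(learner, check_p1, check_p2, real_violation):
    """check_p1(A): does <A> M1 <P> hold?      -> (bool, counterexample or None)
       check_p2(A): does <true> M2 <A> hold?   -> (bool, counterexample or None)
       real_violation(t): does <t> M1 <P> fail, i.e. is t a real error?"""
    while True:
        A = learner.conjecture()
        ok, t = check_p1(A)
        if not ok:                  # A admits a trace violating P:
            learner.remove(t)       # t must leave the conjectured language
            continue
        ok, t = check_p2(A)
        if ok:                      # both premises hold
            return ("P holds in M1 || M2", A)
        if real_violation(t):       # counterexample analysis
            return ("P violated", t)
        learner.add(t)              # spurious: t must enter the language
```

With scripted checkers this loop terminates either with an assumption witnessing that P holds, or with a real counterexample trace.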

13 Characteristics
- Terminates with the minimal automaton A for U.
- Generates DFA candidates of increasing size: |A1| < |A2| < … < |A|.
- Produces at most n candidates, where n = |A|.
- Number of queries: O(kn² + n log m), where m is the size of the largest counterexample and k is the size of the alphabet.

14 Example
[Figure: for the Input/Output/Order example, the computed assumption A2 has two states over the actions in, out, send, ack.]

15 Extension to n Components
To check if M1 || M2 || … || Mn satisfies P:
1. ⟨A⟩ M1 ⟨P⟩
2. ⟨true⟩ M2 || … || Mn ⟨A⟩
3. ⟨true⟩ M1 || M2 || … || Mn ⟨P⟩
- Decompose the system into M1 and M'2 = M2 || … || Mn.
- Apply the learning framework recursively for the 2nd premise of the rule; A plays the role of the property.
At each recursive invocation for Mj and M'j = Mj+1 || … || Mn, use learning to compute Aj such that:
- ⟨Aj⟩ Mj ⟨Aj-1⟩ is true
- ⟨true⟩ Mj+1 || … || Mn ⟨Aj⟩ is true
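One way to sketch this recursive scheme is as a fold over the component list. This is a simplification under the assumption that each learning step succeeds; the hypothetical callback learn_assumption(M, target) stands for the whole 2-component framework, and holds(M, target) for the final ⟨true⟩ Mn ⟨An-1⟩ check.

```python
def check_compositionally(components, P, learn_assumption, holds):
    """components = [M1, ..., Mn]. Each learned Aj satisfies <Aj> Mj <target>
    and then becomes the target (the 'property') for the remaining components;
    the last component discharges the final assumption directly."""
    target = P
    for j, Mj in enumerate(components):
        rest = components[j + 1:]
        if not rest:
            return holds(Mj, target)            # <true> Mn <A(n-1)>
        target = learn_assumption(Mj, target)   # <Aj> Mj <target>
```

With three components, the chain of obligations bottoms out in a single check of M3 against the assumption learned for M2 (which was learned against the assumption for M1).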

16 Symmetric Rules
Assumptions for both components at the same time: early termination; smaller assumptions.
Example symmetric rule, SYM:
1. ⟨A1⟩ M1 ⟨P⟩
2. ⟨A2⟩ M2 ⟨P⟩
3. L(coA1 || coA2) ⊆ L(P)
(conclude) ⟨true⟩ M1 || M2 ⟨P⟩
where coAi is the complement of Ai, for i = 1, 2.
Requirements on alphabets: ΣP ⊆ ΣM1 ∪ ΣM2, and ΣAi ⊆ (ΣM1 ∩ ΣM2) ∪ ΣP for i = 1, 2.
The rule is sound and complete; completeness is needed to guarantee termination.
Straightforward extension to n components.

17 Learning Framework for Rule SYM
Two L* instances run side by side: one learns A1, checked against ⟨A1⟩ M1 ⟨P⟩; the other learns A2, checked against ⟨A2⟩ M2 ⟨P⟩. When both premises hold, check L(coA1 || coA2) ⊆ L(P):
- if true, P holds in M1 || M2;
- if false, counterexample analysis either shows that P is violated in M1 || M2, or produces counterexamples for the learners to add/remove, and the process repeats.

18 Circular Rule
Rule CIRC, from [Grumberg & Long, CONCUR'91]:
1. ⟨A1⟩ M1 ⟨P⟩
2. ⟨A2⟩ M2 ⟨A1⟩
3. ⟨true⟩ M1 ⟨A2⟩
(conclude) ⟨true⟩ M1 || M2 ⟨P⟩
Similar to rule ASYM applied recursively to 3 components, where the first and last component coincide; hence the learning framework is similar.
Straightforward extension to n components.

19 Outline
- Framework for assume-guarantee reasoning [TACAS'03]: uses a learning algorithm to compute assumptions; automates rule ASYM
- Extension with symmetric [SAVCBS'03] and circular rules
- Extension with alphabet refinement [TACAS'07]
- Implementation and experiments
- Other extensions
- Related work
- Conclusions

20 Assumption Alphabet Refinement
So far the assumption alphabet was fixed during learning: ΣA = (ΣM1 ∪ ΣP) ∩ ΣM2.
[SPIN'06]: a subset of this alphabet
- may be sufficient to prove the desired property;
- may lead to a smaller assumption.
How do we compute a good subset of the assumption alphabet?
Solution: iterative alphabet refinement.
- Start with a small (or empty) alphabet.
- Add actions as necessary; the actions are discovered by analysis of counterexamples obtained from model checking.

21 Learning with Alphabet Refinement
1. Initialize Σ to a subset of the full alphabet ΣA = (ΣM1 ∪ ΣP) ∩ ΣM2.
2. If learning with Σ returns true, return true (P holds). END.
3. If learning returns false with counterexample c, perform extended counterexample analysis on c:
   - if c is real, return false (P violated). END.
   - if c is spurious, add more actions from ΣA to Σ and go to step 2.
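The steps above amount to an outer driver around the learning framework. In this sketch the callback names are hypothetical: learn(sigma) runs the whole learning-based check with assumption alphabet sigma, and analyze(c) is the extended counterexample analysis, returning either "real" or the set of actions to add.

```python
def learn_with_refinement(sigma, learn, analyze):
    """sigma: initial (possibly empty) subset of the interface alphabet."""
    while True:
        ok, c = learn(frozenset(sigma))
        if ok:
            return True                  # P holds in M1 || M2
        verdict = analyze(c)
        if verdict == "real":
            return False                 # genuine property violation
        sigma = set(sigma) | verdict     # spurious: grow the alphabet, retry
```

Termination follows the argument on the next slide: each spurious round adds at least one fresh action and the interface alphabet is finite.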

22 Extended Counterexample Analysis
Let ΣA = (ΣM1 ∪ ΣP) ∩ ΣM2 be the full interface alphabet and Σ ⊆ ΣA the current alphabet.
When ⟨true⟩ M2 ⟨A⟩ fails with counterexample t, the original analysis checks ⟨t/Σ⟩ M1 ⟨P⟩. If this check fails with counterexample c, additionally check ⟨t/ΣA⟩ M1 ⟨P⟩:
- if this also fails, report a real error (c);
- if it holds, the counterexample is spurious: the refiner compares t/ΣA and c/ΣA, adds actions on which they differ to Σ, and restarts learning.
(With Σ = ΣA, this reduces to the original counterexample analysis.)

23 Characteristics
Initialization of Σ: the empty set, or the property alphabet ΣP ∩ ΣA.
The refiner compares t/ΣA and c/ΣA. Heuristics:
- AllDiff: adds all actions in the symmetric difference of the trace alphabets.
- Forward: scans the traces in parallel, forward, adding the first action on which they differ.
- Backward: symmetric to the previous, scanning backward.
Termination: each refinement adds at least one new action, and the interface alphabet is finite.
Generalization to n components: through recursive invocation.
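The three heuristics are easy to state on projected traces (here, lists of action names); each returns the set of actions to add. The prefix fallback in forward is our own choice for illustration; the paper's exact handling of that corner case may differ.

```python
def all_diff(t, c):
    # all actions in the symmetric difference of the two trace alphabets
    return set(t) ^ set(c)

def forward(t, c):
    # scan both traces in parallel, forward; take the first disagreement
    for a, b in zip(t, c):
        if a != b:
            return {a, b}
    return set(t) ^ set(c)   # fallback when one trace is a prefix of the other

def backward(t, c):
    # symmetric to forward, scanning from the end of the traces
    return forward(t[::-1], c[::-1])
```

On traces that disagree in the middle, forward and backward pick up different actions, which is why they can behave differently in practice.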

24 Implementation & Experiments
Implementation in the LTSA tool:
- learning using rules ASYM, SYM and CIRC;
- supports reasoning about two and n components;
- alphabet refinement for all the rules.
Experiments:
- compare the effectiveness of the different rules;
- measure the effect of alphabet refinement;
- measure scalability as compared to non-compositional verification.

25 Case Studies
Model of the Ames K9 Rover Executive:
- executes flexible plans for autonomy;
- consists of a main Executive thread and an ExecCondChecker thread for monitoring state conditions;
- checked for a specific shared variable: if the Executive reads its value, the ExecCondChecker should not read it before the Executive clears it.
Model of the JPL MER Resource Arbiter:
- local management of resource contention between resource consumers (e.g. science instruments, communication systems);
- consists of k user threads and one server thread (the arbiter);
- checked mutual exclusion between resources.

26 Results
- Rule ASYM is more effective than rules SYM and CIRC.
- The recursive version of ASYM is the most effective when reasoning about more than two components.
- Alphabet refinement improves learning based assume-guarantee verification significantly.
- Backward refinement is slightly better than the other refinement heuristics.
Learning based assume-guarantee reasoning:
- can incur significant time penalties;
- is not always better than non-compositional (monolithic) verification;
- is sometimes significantly better in terms of memory.

27 Analysis Results

Case     |     ASYM              |  ASYM + refinement    |  Monolithic
         | |A|   Mem     Time    | |A|   Mem     Time    |  Mem    Time
---------+-----------------------+-----------------------+---------------
MER 2    |  40    8.65    21.90  |   6    1.23    1.60   |   1.04    0.04
MER 3    | 501  240.06      --   |   8    3.54    4.76   |   4.05    0.111
MER 4    | 273  101.59      --   |  10    9.61   13.68   |  14.29    1.46
MER 5    | 200   78.10      --   |  12   19.03   35.23   |  14.24   27.73
MER 6    | 162   84.95      --   |  14   47.09   91.82   |    --   600
K9 Rover |  11    2.65     1.82  |   4    2.37    2.53   |   6.27    0.015

|A| = assumption size; Mem = memory (MB); Time = time (seconds); -- = reached the time (30 min) or memory (1 GB) limit.

28 Other Extensions
- Design-level assumptions used to check implementations in an assume-guarantee way [ICSE'04]: allows detection of integration problems during unit verification/testing.
- Extension of the SPIN model checker to perform learning based assume-guarantee reasoning [SPIN'06]: our approach can use any model checker.
- Similar extension for the Ames Java PathFinder tool (ongoing work): support compositional reasoning about Java code/UML statecharts; support for interface synthesis, i.e. compute an assumption for M1 that works for any M2.
- Compositional verification of C code (collaboration with CMU): uses predicate abstraction to extract FSMs from C components.
More info on my webpage: http://ase.arc.nasa.gov/people/pcorina/

29 Applications
- Support for compositional verification: property decomposition; assumptions for assume-guarantee reasoning.
- Assumptions may be used for component documentation.
- Software patches: an assumption used as a "patch" that corrects a component's errors.
- Runtime monitoring of the environment: the assumption monitors the actual environment during deployment and may trigger recovery actions.
- Interface synthesis.
- Component retrieval, component adaptation, sub-module construction, incremental re-verification, etc.

30 Related Work
Assume-guarantee frameworks:
- Jones 83; Pnueli 84; Clarke, Long & McMillan 89; Grumberg & Long 91; …
- Tool support: MOCHA; Calvin (static checking of Java); …
We were the first to propose learning based assume-guarantee reasoning; since then, other frameworks have been developed:
- Alur et al. 05, 06: symbolic BDD implementation for NuSMV (extended with hyper-graph partitioning for model decomposition)
- Sharygina et al. 05: checks component compatibility after component updates
- Chaki et al. 05: checking of simulation conformance (rather than trace inclusion)
- Sinha & Clarke 07: SAT based compositional verification using lazy learning
- …
Interface synthesis using learning: Alur et al. 05.
Learning with optimal alphabet refinement: developed independently by Chaki & Strichman 07.
CEGAR (counterexample guided abstraction refinement): our alphabet refinement is similar in spirit, with important differences:
- alphabet refinement works on actions, rather than predicates;
- it is applied compositionally, in an assume-guarantee style;
- it computes under-approximations (of assumptions) rather than behavioral over-approximations.
Permissive interfaces (Henzinger et al. 05): uses CEGAR to compute interfaces.

31 Conclusion and Future Work
Learning based assume-guarantee reasoning:
- uses L* for automatic derivation of assumptions;
- applies to FSMs and safety properties;
- asymmetric, symmetric, and circular rules; can accommodate other rules;
- alphabet refinement to compute small assumption alphabets that are sufficient for verification.
Experiments: significant memory gains, but can incur serious time overhead. Should be viewed as a heuristic, to be used in conjunction with other techniques, e.g. abstraction.
Future work:
- look beyond safety (learning for infinitary regular sets);
- optimizations to overcome the time overhead, e.g. re-use of learning results across refinement stages;
- CEGAR to compute assumptions as abstractions of environments;
- more experiments.

