A testing scenario for probabilistic automata Marielle Stoelinga UC Santa Cruz Frits Vaandrager University of Nijmegen.

Presentation transcript:

A testing scenario for probabilistic automata Marielle Stoelinga UC Santa Cruz Frits Vaandrager University of Nijmegen

Characterization of process equivalences
process equivalences
– bisimulation / ready trace / ... equivalence
– algebraic / logical / denotational / ... definitions
are these reasonable?
– justify equivalences via testing scenarios / button-pushing experiments
– comparative concurrency theory
– De Nicola & Hennessy [DH86], Milner [Mil80], van Glabbeek [vGl01]
testing scenarios:
– define an intuitive, fundamental notion of observation
– processes that cannot be distinguished by observation are deemed to be equivalent
– justify a process equivalence ≡: P ≡ Q iff Obs(P) = Obs(Q)
– ≡ does not distinguish too much / too little
P ≡ Q ??

Main results
testing scenarios exist in the non-probabilistic case for
– trace equivalence
– bisimulation
– ...
we define
– observations of a probabilistic automaton (PA)
– probabilities are observed through statistical methods (hypothesis testing)
characterization result
– Obs(P) = Obs(Q) iff trd(P) = trd(Q), for P, Q finitely branching
– trd(P): the trace distributions of P, an extension of traces to PAs [Segala]
– this justifies trace distribution equivalence in terms of observations

Model for testing scenarios
a machine M
– a black box
– inside: a process described by an LTS P
display
– showing the current action
buttons
– for user interaction
[figure: machine with display and buttons]

Model for testing scenarios
an observer
– records what s/he sees (over time), including the buttons pressed
define Obs_M(P):
– observations of P = what the observer records when LTS P is inside M
[figure: machine with display and buttons]

Model for testing scenarios
processes (LTSs) with the same observations are deemed to be equivalent
characterization results: Obs_M(P) = Obs_M(Q) iff P ≡ Q
– ≡ does not distinguish too much / too little
[figure: machine with display and buttons]

Trace Machine (TM)
– no buttons for interaction
– display shows the current action

Trace Machine (TM)
– no buttons for interaction
– display shows the current action
with P inside M, an observer sees one of ε, a, ab, ac
Obs_TM(P) = traces of P
the testing scenario justifies trace (language) equivalence
[figure: automaton P: an a-transition followed by a choice between b and c]

Trace Machine (TM)
no distinguishing observation between the two automata below
[figure: the automaton that does a and then chooses between b and c, and the automaton that chooses between a·b and a·c; both have traces ε, a, ab, ac]
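To see this concretely, one can enumerate the traces of the two LTSs. The sketch below is not part of the presentation; the dictionary encoding of the transition relation, the traces helper, and the two example automata are my own illustrative choices.

```python
def traces(lts, state, k):
    """All traces of length <= k of an LTS given as {state: [(action, next_state), ...]}."""
    result = {""}  # the empty trace (epsilon)
    if k == 0:
        return result
    for action, nxt in lts.get(state, []):
        result |= {action + t for t in traces(lts, nxt, k - 1)}
    return result

# a.(b + c): one a-transition, then a choice between b and c
P = {"s0": [("a", "s1")], "s1": [("b", "s2"), ("c", "s3")]}
# a.b + a.c: a nondeterministic choice between two a-transitions
Q = {"t0": [("a", "t1"), ("a", "t2")], "t1": [("b", "t3")], "t2": [("c", "t4")]}

print(traces(P, "s0", 2) == traces(Q, "t0", 2))  # True: both have traces {"", "a", "ab", "ac"}
```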

Trace Distribution Machine (TDM)
– reset button: start over
– repeat experiments
– each experiment yields a trace of the same length k (wlog)
– observe the frequencies of traces
[figure: machine with display and reset button; automaton P: h with probability 1/2 followed by d, t with probability 1/2 followed by l, so the length-2 traces are hd and tl]

Trace Distribution Machine (TDM)
100 experiments, each of length 2
observed sequence: hd, tl, tl, hd, tl, ..., hd
frequencies: hd 48, tl 52, other traces (hh, dh, hl, ...) 0
[figure: machine with reset button; automaton P: h (1/2) then d, t (1/2) then l]

Trace Distribution Machine (TDM)
with many experiments: #hd ≈ #tl
100 experiments, length 2; sequence hd, tl, tl, hd, tl, ..., hd
frequencies: hd 48, tl 52, other 0
not every sequence of experiments is an observation
[figure: machine with reset button; automaton P: h (1/2) then d, t (1/2) then l]

Trace Distribution Machine (TDM)
with many experiments: #hd ≈ #tl
– hd, tl, tl, hd, tl, ..., hd (frequencies: hd 48, tl 52, other 0) ∈ Obs(P): frequencies likely
– hd, hd, hd, hd, ..., hd (frequencies: hd 100, tl 0, other 0) ∉ Obs(P): frequencies unlikely
[figure: machine with reset button; automaton P: h (1/2) then d, t (1/2) then l]
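A minimal sketch of what the trace distribution machine does on this coin automaton, added for illustration and not part of the slides (the function names and the use of Python's random module are my own): it repeats experiments and tallies trace frequencies, which will almost always satisfy #hd ≈ #tl.

```python
import random
from collections import Counter

def run_experiment():
    """One experiment on the coin automaton: trace hd with probability 1/2, tl with probability 1/2."""
    return "hd" if random.random() < 0.5 else "tl"

def observe(m):
    """Press reset m times, i.e. run m experiments, and return the trace frequencies."""
    return Counter(run_experiment() for _ in range(m))

freqs = observe(100)
print(freqs)                                  # e.g. Counter({'tl': 52, 'hd': 48})
print(abs(freqs["hd"] - freqs["tl"]) <= 20)   # usually True: #hd and #tl stay close together
```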

Trace Distribution Machine (TDM)
nondeterministic choice: in each experiment one transition is chosen, probabilistically
in large outcomes: 1/2 #c + 1/3 #d ≈ #b
use statistics:
– b, b, b, ..., b ∉ Obs(P): frequencies unlikely
– b, d, c, b, b, b, c, ... ∈ Obs(P): frequencies likely
[figure: machine with reset button; automaton P with a nondeterministic choice between two transitions: the left one yields b with 1/3 and c with 2/3, the right one yields b with 1/4 and d with 3/4]

Observations of the TDM
Obs_TDM(P) = { σ | σ is likely to be produced by P }
[figure: machine with reset button; coin automaton P]

Observations of the TDM
perform m experiments (m resets); each experiment yields a trace of length k (wlog)
this gives a sequence σ ∈ (Act^k)^m
Obs(P) = { σ ∈ (Act^k)^m | σ is likely to be produced by P, k, m ∈ ℕ }
what is 'likely'? use hypothesis testing
[figure: machine with reset button; coin automaton P]

Which frequencies are likely?
I have a sequence σ = h,t,t,t,h,t,... ∈ {h,t}^100
I claim: I generated σ with automaton P. Do you believe me
– if σ contains 15 h's?
– if σ contains 42 h's?
[figure: automaton P: h with probability 1/2, t with probability 1/2]

Which frequencies are likely?
I have a sequence σ = h,t,t,t,h,t,... ∈ {h,t}^100
I claim: I generated σ with automaton P. Do you believe me
– if σ contains 15 h's?
– if σ contains 42 h's?
use hypothesis testing: fix a confidence level α ∈ (0,1)
null hypothesis H_0 = 'σ is generated by P'
[figure: coin automaton P; acceptance region K on the #h axis: accept H_0 inside K, reject H_0 outside]

Which frequencies are likely?
I have a sequence σ = h,t,t,t,h,t,... ∈ {h,t}^100
I claim: I generated σ with automaton P. Do you believe me
– if σ contains 15 h's?
– if σ contains 42 h's?
use hypothesis testing: fix a confidence level α ∈ (0,1)
null hypothesis H_0 = 'σ is generated by P'
choose the region K such that
– P_H0[K] > 1 - α: the probability of false rejection is ≤ α
– for distributions other than H_0, P[K] is minimal: the probability of false acceptance is minimal
[figure: coin automaton P; acceptance region K on the #h axis: accept H_0 inside K, reject H_0 outside]

Which frequencies are likely?
I have a sequence σ = h,t,t,t,h,t,... ∈ {h,t}^100
I claim: I generated σ with automaton P. Do you believe me
– if σ contains 15 h's? NO
– if σ contains 42 h's? YES
use hypothesis testing: fix a confidence level α ∈ (0,1)
null hypothesis H_0 = 'σ is generated by P'
choose the region K such that
– P_H0[K] > 1 - α: the probability of false rejection is ≤ α
– for distributions other than H_0, P[K] is minimal: the probability of false acceptance is minimal
[figure: coin automaton P; acceptance region K on the #h axis: accept H_0 inside K, reject H_0 outside]
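For this fair-coin example the region K can be computed from the Binomial(m, 1/2) distribution of #h under H_0. The sketch below is my own illustration and not from the slides; it assumes SciPy is available and simply searches for the smallest symmetric interval around the mean whose probability is at least 1 - α.

```python
from scipy.stats import binom

def acceptance_region(m, p=0.5, alpha=0.05):
    """Smallest symmetric interval [lo, hi] around the mean of Binomial(m, p) with
    probability at least 1 - alpha; sequences whose #h falls outside it are rejected."""
    mean = m * p
    for r in range(m + 1):
        lo, hi = int(mean - r), int(mean + r)
        if binom.cdf(hi, m, p) - binom.cdf(lo - 1, m, p) >= 1 - alpha:
            return lo, hi
    return 0, m

print(acceptance_region(100))  # (40, 60): matches the 40 <= freq(h) <= 60 bound on the next slides
```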

Example: Observations for α = 0.05
Obs(P) = { σ ∈ (Act^k)^m | σ is likely to be produced by P, k, m ∈ ℕ }
for k = 1 and m = 100: σ ∈ (Act)^100 is an observation iff 40 ≤ freq_σ(h) ≤ 60
[figure: coin automaton P; acceptance region K_{1,100} = [40, 60] on the #h axis]

Example: Observations for α = 0.05
Obs(P) = { σ ∈ (Act^k)^m | σ is likely to be produced by P, k, m ∈ ℕ }
for k = 1 and m = 100: σ ∈ (Act)^100 is an observation iff 40 ≤ freq_σ(h) ≤ 60
for k = 1 and m = 200: σ ∈ (Act)^200 is an observation iff 88 ≤ freq_σ(h) ≤ 112
etc. ...
[figure: coin automaton P; acceptance regions K_{1,m}]

Example: Observations for α = 0.05
Obs(P) = { σ ∈ (Act^k)^m | σ is likely to be produced by P, k, m ∈ ℕ }
for k = 1 and m = 100: σ ∈ (Act)^100 is an observation iff 40 ≤ freq_σ(h) ≤ 60
for k = 1 and m = 200: σ ∈ (Act)^200 is an observation iff 88 ≤ freq_σ(h) ≤ 112
etc. ...
[figure: expected frequency E_P with the allowed deviation around it (= 12 for m = 200)]

With nondeterminism
σ = b,c,c,d,b,d,...,c ∈ Obs(P) ??
[figure: automaton P with a nondeterministic choice between two transitions: the left one yields b with 1/3 and c with 2/3, the right one yields b with 1/4 and d with 3/4]

With nondeterminism
σ = b,c,c,d,b,d,...,c ∈ Obs(P) ??
fix schedulers p_1, p_2, p_3, ..., p_100
– p_i = P[take the left transition in experiment i]
– 1 - p_i = P[take the right transition in experiment i]
H_0: σ is generated by P under p_1, p_2, p_3, ..., p_100
acceptance region K_{p_1,...,p_100}
[figure: automaton P; left transition: b 1/3, c 2/3; right transition: b 1/4, d 3/4]

With nondeterminism
σ = b,c,c,d,b,d,...,c ∈ Obs(P) ??
fix schedulers p_1, p_2, p_3, ..., p_100
– p_i = P[take the left transition in experiment i]
– 1 - p_i = P[take the right transition in experiment i]
H_0: σ is generated by P under p_1, p_2, p_3, ..., p_100
acceptance region K_{p_1,...,p_100}
σ ∈ Obs(P) iff σ ∈ K_{p_1,...,p_100} for some p_1, p_2, p_3, ..., p_100
[figure: automaton P; left transition: b 1/3, c 2/3; right transition: b 1/4, d 3/4]

With nondeterminism
fix p_1, p_2, p_3, ..., p_100
compute P_{p_1,...,p_100}[σ] for every σ
– e.g. for p_i = 1/2: P_{p_1,...,p_100}[c,c,...,c] = (1/2 · 2/3)^100
expected frequencies E_{p_1,...,p_100}:
– c: Σ_i 2/3 p_i
– d: Σ_i 3/4 (1 - p_i)
– b: Σ_i (1/3 p_i + 1/4 (1 - p_i))
as before: acceptance region K_{p_1,...,p_100}
[figure: automaton P; left transition (taken with probability p_i): b 1/3, c 2/3; right transition (probability 1 - p_i): b 1/4, d 3/4]
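Read concretely, the expected-frequency formulas above can be evaluated for any choice of scheduler probabilities p_1, ..., p_m. This small sketch is my own and not from the slides:

```python
def expected_frequencies(ps):
    """Expected numbers of b, c, d outcomes over len(ps) experiments, where p_i is the
    probability of the left transition (b: 1/3, c: 2/3) in experiment i and
    1 - p_i that of the right transition (b: 1/4, d: 3/4)."""
    return {
        "c": sum(2/3 * p for p in ps),
        "d": sum(3/4 * (1 - p) for p in ps),
        "b": sum(1/3 * p + 1/4 * (1 - p) for p in ps),
    }

print(expected_frequencies([0.5] * 100))  # about 33.3 c's, 37.5 d's, 29.2 b's
```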

With nondeterminism
fix p_1, p_2, p_3, ..., p_100
– compute P_{p_1,...,p_100}[σ] for every σ
– expected frequency E
as before: acceptance region K_{p_1,...,p_100}
– H_0: σ is generated by P under p_1, p_2, p_3, ..., p_100
– allow the observed frequencies to deviate less than r from E
– K_{p_1,...,p_100} = sphere_r(E)
– with r minimal such that P_{p_1,...,p_100}[sphere_r(E)] > 1 - α
[figure: automaton P; left transition: b 1/3, c 2/3; right transition: b 1/4, d 3/4]
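One way to picture the minimal radius r is to estimate it by simulation. The following rough sketch is my own, not from the slides: under the fixed scheduler p_i = 1/2 it samples many 100-experiment outcomes and takes the 95th percentile of the max-norm deviation from E as an estimate of r (for α = 0.05).

```python
import random

ps = [0.5] * 100                                      # scheduler: p_i = 1/2 in every experiment
E = {"c": sum(2/3 * p for p in ps),
     "d": sum(3/4 * (1 - p) for p in ps),
     "b": sum(1/3 * p + 1/4 * (1 - p) for p in ps)}   # expected frequencies, as on the previous slide

def sample_counts(ps):
    """Sample one outcome: how often b, c, d occur over len(ps) experiments."""
    counts = {"b": 0, "c": 0, "d": 0}
    for p in ps:
        if random.random() < p:                       # left transition: b with 1/3, c with 2/3
            counts["b" if random.random() < 1/3 else "c"] += 1
        else:                                         # right transition: b with 1/4, d with 3/4
            counts["b" if random.random() < 1/4 else "d"] += 1
    return counts

def max_deviation(counts):
    return max(abs(counts[a] - E[a]) for a in "bcd")

deviations = sorted(max_deviation(sample_counts(ps)) for _ in range(10000))
print(deviations[int(0.95 * len(deviations))])        # estimated minimal radius r
```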

Example: Observations
observations for k = 1, m = 100
– σ contains b, c only, with 54 ≤ freq_σ(c) ≤ 78: take p_i = 1 for all i
– σ contains b, d only, with 62 ≤ freq_σ(d) ≤ 88: take p_i = 0 for all i
[figure: automaton P; left transition: b 1/3, c 2/3; right transition: b 1/4, d 3/4]

Example: Observations
observations for k = 1, m = 100
– σ contains b, c only, with 54 ≤ freq_σ(c) ≤ 78: take p_i = 1 for all i
– σ contains b, d only, with 62 ≤ freq_σ(d) ≤ 88: take p_i = 0 for all i
observations for k = 1, m = ...
– ... ≤ freq_σ(c) ≤ 71 and 70 ≤ freq_σ(d) ≤ 80: take p_i = 1/2 for all i (expected: 66 c's, 74 d's, 60 b's)
– (these are not all observations; they form a sphere)
[figure: automaton P; left transition: b 1/3, c 2/3; right transition: b 1/4, d 3/4]

Main result
the TDM characterizes trace distribution equivalence
Obs_TDM(P) = Obs_TDM(Q) iff trd(P) = trd(Q), if P, Q are finitely branching
this justifies trace distribution equivalence in an observational way

Main result
the TDM characterizes trace distribution equivalence ≡_TD
Obs_TDM(P) = Obs_TDM(Q) iff trd(P) = trd(Q)
the 'only if' part is immediate, the 'if' part is hard
– find a distinguishing observation if P, Q have different trace distributions
ingredients, for P, Q finitely branching:
– IAP: P, Q have the same infinite trace distributions iff P, Q have the same finite trace distributions
– the set of trace distributions is a polyhedron
– a law of large numbers for random variables with different distributions
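The law of large numbers is what eventually yields a distinguishing observation: if P and Q assign different probabilities to some trace, the frequency intervals that are likely for each of them stop overlapping once enough experiments are run. A sketch of my own (not from the slides), assuming SciPy, for two coins with head-probabilities 0.5 and 0.4:

```python
from scipy.stats import binom

def likely_interval(m, p, alpha=0.05):
    """Central interval for #heads out of m flips, containing probability at least 1 - alpha."""
    return binom.ppf(alpha / 2, m, p), binom.ppf(1 - alpha / 2, m, p)

for m in (100, 500, 2000):
    print(m, likely_interval(m, 0.5), likely_interval(m, 0.4))
# As m grows the two intervals separate, so some frequency is an observation of one
# automaton but not of the other: a distinguishing observation.
```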

Observations, α = 0.05
Obs(P) = { σ ∈ (Act^k)^m | σ is likely to be produced by P }
Obs(P) = { σ ∈ (Act^k)^m | freq_σ ∈ K }
for k = 1 and m = 100: σ ∈ (Act)^100 is an observation iff 40 ≤ freq_σ(h) ≤ 60
[figure: coin automaton; acceptance region K]

Nondeterministic case
an outcome is a sequence σ = β_1, ..., β_m of traces
fix adversaries and take the expected frequencies
for γ ∈ Act^k: freq_γ(β), the frequency of trace γ in the outcome
require freq_σ ∈ K
we consider only the frequency of traces in an outcome
[figure: coin automaton]

Trace Distribution Machine (TDM)
reset button: start over; repeating experiments yields a sequence of traces
in large outcomes: #hd ≈ #tl
use statistics:
– hd, hd, hd, ..., hd ∉ Obs(P): too unlikely
– hd, tl, tl, hd, ..., tl, hd ∈ Obs(P): likely
[figure: machine with reset button; automaton P: h (1/2) then d, t (1/2) then l]

[figure: expected frequency for the h/t coin automaton]

Process equivalences
[figure: two processes P and Q]

Testing scenarios
a black box with display and buttons; inside: a process described by an LTS P
display: the current action
what do we see (over time)? Obs_M(P)
P, Q are deemed equivalent iff Obs_M(P) = Obs_M(Q)
desired characterization: Obs_M(P) = Obs_M(Q) iff P ≡ Q
[figure: machine with display and buttons]

Observations, α = 0.05
Obs(P) = { σ ∈ (Act^k)^m | σ is likely to be produced by P }
for k = 1 and m = 99: expectation E = (33, 33, 33)
Obs(P) = { σ ∈ (Act)^99 | |freq_σ - E| < 15 }
[figure: automaton with three actions, each with probability 1/3]