Symbolic dynamics of Markov chains P S Thiagarajan School of Computing National University of Singapore Joint work with: Manindra Agrawal, S Akshay, Blaise Genest

Acknowledgements Samarjit and Javier IAS TÜV SÜD Foundation

Agenda Study complex dynamical systems – verification of temporal logic specification High dimensional Continuous time; continuous value domains. – Differential equations – Hybrid automata Networks of such dynamical systems

Strategy Approximate the dynamics. – Discretize the time and value domains Today’s first talk will have this flavor. – Sample the dynamics and do a statistical analysis. Today’s second talk To handle large networks: Deterministic synchronizations Tomorrow’s talk

Probabilistic dynamics Illustrate these ideas in the setting of probabilistic dynamical systems. – Yield useful approximations in the presence of intractability and lack of knowledge. – Scalability can be achieved through sampling (simulation) based statistical analysis techniques. Statistical model checking Key application: (networks of) biochemical networks.

Study I: Symbolic dynamics of Markov chains Well-known probabilistic dynamical systems. Rich theory Widely applied. Our focus (finite state, discrete time): – Symbolic dynamics Discretize the probability value space [0, 1].

Symbolic dynamics of Markov chains Analyze the symbolic dynamics via: – classical (linear) temporal logic specifications. Probabilities are sneaked in through the atomic propositions. – Model checking methods Details in: – LICS’2012 – Journal paper under review.

Two views of Markov chains View I – a finite state probabilistic transition system. View II – a linear transform of probability distributions over the states.

The transition system view [Figure: an example Markov chain drawn as a probabilistic transition system, with branching probabilities 2/5 and 3/5.]

The transition system view Outcomes: Infinite paths Basic cylinder: The set of infinite paths that have a common finite prefix. Path space: the σ-algebra generated by the basic cylinders. Probabilities are assigned to basic cylinders in a natural way. Extends canonically to a probability measure over the path space.

The transition system view Pr(B) = 1 × 2/5 × 1 × 1 = 2/5 B – the set of all paths that share the finite prefix shown in the figure.
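A minimal sketch of how a basic cylinder gets its probability. The 4-state transition matrix below is a hypothetical stand-in for the chain in the figure (only its 2/5 and 3/5 branching is taken from the slide), and cylinder_prob is an illustrative helper, not code from the talk.

import numpy as np

# Hypothetical 4-state chain; every row of a stochastic matrix sums to 1.
M = np.array([
    [0.0, 2/5, 3/5, 0.0],   # state 0 branches with probabilities 2/5 and 3/5
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
    [1.0, 0.0, 0.0, 0.0],
])

def cylinder_prob(M, prefix):
    """Probability of the basic cylinder: all infinite paths sharing `prefix`."""
    p = 1.0
    for s, t in zip(prefix, prefix[1:]):
        p *= M[s, t]
    return p

# One basic cylinder of this hypothetical chain: 1 * 2/5 * 1 * 1 = 2/5
print(cylinder_prob(M, [3, 0, 1, 2, 3]))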

Measurable properties The state 3 will be visited infinitely often. PCTL, CTL, …. Baier, C. and Katoen, J.-P. Principles of Model Checking

VIEW II The Markov chain transforms (linearly) a given distribution over its nodes to a new one. The graph of the chain is represented as a stochastic matrix – (all row sums are 1). The Perron-Frobenius theory (for non-negative matrices) applies.

VIEW II Leads to natural sub-classes: – Irreducible and aperiodic, – periodic, … transient states, recurrent states…. Stationary distributions….

VIEW II [Figure: the example chain with branching probabilities 2/5 and 3/5, now viewed as a stochastic matrix acting on distributions.]

Trajectory of distributions The Markov chain acts as a (linear) transformer of probability distributions, via its transition matrix M. Each initial distribution μ generates a trajectory of distributions: μ, μM, μM², …
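A minimal sketch of View II, using the same hypothetical matrix as in the earlier sketch: right-multiplication by M pushes a distribution forward, so an initial μ generates the trajectory μ, μM, μM², …

import numpy as np

M = np.array([
    [0.0, 2/5, 3/5, 0.0],
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
    [1.0, 0.0, 0.0, 0.0],
])

def trajectory(mu0, M, steps):
    """Yield mu, mu M, mu M^2, ... (row-vector convention)."""
    mu = np.asarray(mu0, dtype=float)
    for _ in range(steps + 1):
        yield mu
        mu = mu @ M

for k, mu in enumerate(trajectory([1.0, 0.0, 0.0, 0.0], M, 5)):
    print(k, np.round(mu, 3))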

Trajectory of distributions Dynamics: – The set of trajectories generated from an initial set of distributions – This set can be infinite We can ask whether all (or none) of the trajectories satisfy some desired dynamical property. What we can ask in this setting is incomparable with the properties specifiable in View I.

Trajectory of distributions Eventually, the probability of being in state 1 is always greater than that of being in state 3. There is a future time at which 90% of the probability mass lies on {1, 4}.

Symbolic dynamics A trajectory: – an infinite sequence of probability distributions (over a finite set of nodes). – The alphabet of probability distributions -over a finite set of nodes- is (uncountably) infinite. Hence a trajectory is an infinite sequence over an infinite alphabet. But exactly tracking probability distributions may be neither necessary nor possible. – Biochemical networks – Sensor networks…

Symbolic dynamics We propose to reason about sequences of distributions in a symbolic way. – Using a finite alphabet of discretized distributions.

Symbolic dynamics A classical tradition in dynamical systems theory. – Jacques Hadamard (1898), Morse and Hedlund (1921), … – Shannon, Smale, … – Significant applications in data storage and transmission (via coding theory) and in linear algebra.

Symbolic dynamics We know how to do this for: – Timed automata – Hybrid automata, in some restricted settings. For finite state Markov chains this has been done only in very restricted settings and under unnatural restrictions (Chadha et al. 2011, Korthikanti et al. 2010). We shall instead give up on bisimulation-based equivalence classes: instead, fix a discretization of [0, 1] – With no restrictions, handle all finite state Markov chains.

Symbolic dynamics [Figure: the state space is partitioned into finitely many blocks a, b, c, d, e; the orbit x, F(x), F²(x), F³(x), … is recorded by the sequence of blocks it visits – a finite alphabet!]

Symbolic dynamics Each block is a letter. Each trajectory is now recorded as a sequence of letters taken from a finite alphabet. If each block is a bisimulation equivalence class, the partition is called a Markov partition! Study the system dynamics in terms of these sequences. – Sofic shift sequences – Shift sequences of finite type.

Symbolic dynamics We know how to get "Markov partitions" for: – Timed automata – Hybrid automata, in a few settings. For finite state Markov chains this has been done only in very restricted settings and under unnatural restrictions (Chadha et al. 2011, Korthikanti et al. 2010). We do not look for bisimulations of finite index: instead, we fix a discretization of [0, 1] – Handle all finite state Markov chains; no restrictions.

Symbolic dynamics of Markov chains Discretization – We partition [0, 1] into a finite set of intervals, say a, b, c.

Symbolic dynamics of Markov chains Each (probability) value is mapped to (identified with) the interval it falls in.

Each distribution μ maps to D = Γ(μ), a unique n-tuple of intervals. μ ∈ D means Γ(μ) = D.

Symbolic dynamics of Markov chains Each distribution maps to D, a unique n-tuple of intervals. Several distributions can map to the same D; in fact Γ⁻¹(D) can be infinite.

Symbolic dynamics of Markov chains Discretization – We partition [0, 1] into a finite set of intervals. Γ⁻¹(D) can also be empty.

Symbolic dynamics of Markov chains Discretized distribution – A tuple of intervals D is a discretized distribution (D-distribution for short) if Γ⁻¹(D) ≠ ∅.

Symbolic dynamics of Markov chains The set of discretized distributions is finite
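A sketch of the discretization map Γ, under an assumed partition of [0, 1] into three named intervals; both the partition and the helper names are illustrative, not taken from the talk.

import itertools

# Assumed discretization of [0, 1] into three (closed) intervals named a, b, c.
INTERVALS = {"a": (0.0, 1/3), "b": (1/3, 2/3), "c": (2/3, 1.0)}

def gamma(mu):
    """Map a distribution to the n-tuple of interval names its entries fall in."""
    def letter(p):
        for name, (lo, hi) in INTERVALS.items():
            if lo <= p <= hi:           # boundary values take the first matching interval
                return name
        raise ValueError(p)
    return tuple(letter(p) for p in mu)

def realizable(D):
    """Gamma^{-1}(D) is non-empty iff the intervals of D admit entries summing to 1."""
    lo = sum(INTERVALS[d][0] for d in D)
    hi = sum(INTERVALS[d][1] for d in D)
    return lo <= 1.0 <= hi

print(gamma([0.5, 0.1, 0.0, 0.4]))                 # ('b', 'a', 'a', 'b')
# Finitely many discretized distributions over 4 nodes:
print(sum(realizable(D) for D in itertools.product(INTERVALS, repeat=4)))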

Symbolic dynamics of Markov chains A trajectory of M starting from a distribution μ: μ, μ₁, μ₂, … induces a symbolic trajectory – an ω-word over the finite alphabet of discretized distributions: ξ = Γ(μ) Γ(μ₁) Γ(μ₂) …
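Combining the two previous sketches (same hypothetical chain and assumed discretization): apply Γ to every distribution along the trajectory to obtain the first letters of the symbolic trajectory ξ.

import numpy as np

M = np.array([[0.0, 2/5, 3/5, 0.0],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0, 0.0]])
CUTS = [0.0, 1/3, 2/3, 1.0]   # assumed discretization of [0, 1] into three intervals

def gamma(mu):
    """Map each probability to the index of the interval of [0, 1] it falls in."""
    return tuple(min(int(np.searchsorted(CUTS, p, side="right")) - 1, len(CUTS) - 2)
                 for p in mu)

def symbolic_trajectory(mu0, steps):
    """First `steps` + 1 letters of xi = Gamma(mu) Gamma(mu M) Gamma(mu M^2) ..."""
    mu, letters = np.asarray(mu0, dtype=float), []
    for _ in range(steps + 1):
        letters.append(gamma(mu))
        mu = mu @ M
    return letters

print(symbolic_trajectory([1.0, 0.0, 0.0, 0.0], 6))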

Symbolic dynamics of Markov chains Fix a set of initial distributions IN. We use a D-distribution D_in to specify IN; that is, IN = Γ⁻¹(D_in), the set of distributions that map to D_in. This can be an infinite set. We can fix the set of initial distributions in many other ways.

Symbolic dynamics of Markov chains The symbolic dynamics of (M, D_in) is the ω-language L_M = { ξ_μ : μ ∈ IN }. We wish to reason about this ω-language.

Symbolic dynamics of Markov chains The discretization need not be uniform; it can be different for each dimension (node). The trivial discretization {[0, 1]} can be used to mask out "don't care" nodes – dimension reduction.

A temporal logic to reason about the symbolic dynamics I, a discretization of [0, 1]: – (i, d) is an atomic proposition, for node i and interval d – "In the current distribution μ, μ(i) falls in the interval d." – Equivalently: in the current discretized distribution D, D(i) = d. Probabilistic linear-time temporal logic LTL_I
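A sketch of the syntax and of the semantics of the atomic propositions, reconstructed from the slide's description (the precise grammar in the paper may differ):

\[
\varphi \;::=\; (i, d) \;\mid\; \neg\varphi \;\mid\; \varphi \vee \varphi \;\mid\; \mathsf{X}\,\varphi \;\mid\; \varphi \,\mathsf{U}\, \varphi
\qquad (i \text{ a node},\; d \text{ an interval of } \mathcal{I})
\]
\[
\xi, k \models (i, d) \;\text{ iff }\; \xi(k)(i) = d,
\quad\text{i.e., the current discretized distribution } D = \xi(k) \text{ satisfies } D(i) = d.
\]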

The verification problem Given – a Markov chain M, – a discretization I, – an initial set of distributions represented as D_in, – and a specification φ as an LTL_I formula, determine if M, D_in ⊨ φ. In other words, – ξ, 0 ⊨ φ for every ξ in L_M – does every symbolic trajectory of M satisfy φ? – i.e., is it the case that L_M ⊆ L_φ?

Example formulas Whenever the probability of node i is "high", the probability of node j is "low": The D-distribution (d₁, d₂, …, dₙ) appears infinitely often:
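Plausible renderings of the two example formulas (here high and low name assumed intervals of the discretization; the exact formulas on the slide were lost):

\[
\mathsf{G}\big( (i, \mathit{high}) \rightarrow (j, \mathit{low}) \big)
\qquad\qquad
\mathsf{G}\,\mathsf{F}\big( (1, d_1) \wedge (2, d_2) \wedge \cdots \wedge (n, d_n) \big)
\]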

Example formulas Extending with the FO theory of reals, we can express much more: – e.g., infinitely often, the probability of node i is at least twice the sum of the probabilities of all other nodes. Logics based on path spaces (PCTL etc.) and logics based on sequences of probability distributions are incomparable.

Examples We have modeled a simple version of the Google PageRank algorithm. We have also modeled a small pharmacokinetics system for drug delivery.

The verification problem Given M, I, D_in, φ, determine if M, D_in ⊨ φ, i.e., L_M ⊆ L_φ. – If L_M is ω-regular and effectively computable, then we can use standard model checking techniques. – But L_M is not always ω-regular!

The verification problem L_M is not always ω-regular.

ε-approximations Fix a small ε > 0 (a fraction of the length of an interval of I).

ε-approximations The ε-nbhd of a concrete distribution μ, written Nbr_ε(μ), is the set of all D-distributions which are at most ε away from μ. – A set of D-distributions is an ε-neighborhood iff it equals Nbr_ε(μ) for some distribution μ.
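A sketch of Nbr_ε(μ) under an assumed L∞ (per-coordinate) distance between distributions; the paper's actual metric may differ. A D-distribution D lies in Nbr_ε(μ) iff, after shrinking each interval of D to the ε-band around the corresponding entry of μ, the resulting box still contains a point whose coordinates sum to 1.

import itertools

INTERVALS = {"a": (0.0, 1/3), "b": (1/3, 2/3), "c": (2/3, 1.0)}   # assumed discretization

def in_eps_nbhd(D, mu, eps):
    """Is the D-distribution D at most eps away (L-infinity) from the distribution mu?"""
    lo_sum, hi_sum = 0.0, 0.0
    for d, p in zip(D, mu):
        lo, hi = INTERVALS[d]
        lo, hi = max(lo, p - eps), min(hi, p + eps)   # intersect D(i) with the eps-band
        if lo > hi:                                   # empty intersection: too far on node i
            return False
        lo_sum, hi_sum = lo_sum + lo, hi_sum + hi
    return lo_sum <= 1.0 <= hi_sum                    # some point of the box sums to 1

def nbr(mu, eps):
    """Nbr_eps(mu): all D-distributions at most eps away from mu."""
    return [D for D in itertools.product(INTERVALS, repeat=len(mu))
            if in_eps_nbhd(D, mu, eps)]

print(nbr([0.34, 0.33, 0.33], 0.02))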

ε-approximations Suppose D, D′ belong to a final class. Then there exist μ in D and μ′ in D′ such that the distance between μ and μ′ is at most 2ε. Consequently, for every μ in D and μ′ in D′, the distance between μ and μ′ is at most 2ε + γ, where γ depends only on the discretization.

A basic property of the symbolic dynamics There exists a computable constant K and a computable, ordered collection of ε-nbhds {F₀, F₁, …, F_{θ−1}}, called the final classes, such that: – After K steps, ξ will forever cycle through the ordered final classes. In other words, ξ(k) ∈ F_{k mod θ} for k > K. [Figure: after K steps, M carries the trajectory cyclically through the final classes F₀ → F₁ → … → F_{θ−1} → F₀ → …]
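A numerical illustration of this transient + steady-state behavior, using the hypothetical chain from the earlier sketches (it is irreducible and aperiodic): the distributions μMᵏ converge to the stationary distribution π, so after enough steps the symbolic trajectory stays within one ε-neighborhood of π; for periodic chains the trajectory cycles, as in the property above.

import numpy as np

M = np.array([[0.0, 2/5, 3/5, 0.0],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0, 0.0]])

# Push an initial distribution forward many steps.
mu = np.array([1.0, 0.0, 0.0, 0.0])
for _ in range(60):
    mu = mu @ M

# Stationary distribution: the left eigenvector of M for eigenvalue 1 (Perron-Frobenius).
vals, vecs = np.linalg.eig(M.T)
pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
pi = pi / pi.sum()

print("mu M^60 ~", np.round(mu, 3))
print("pi      ~", np.round(pi, 3))   # the trajectory has settled near pi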

ε-approximations of trajectories A symbolic trajectory ξ′ is an ε-approximation of ξ (the symbolic trajectory generated by μ) iff: – ξ(k) = ξ′(k) for all 0 ≤ k ≤ K, – for all k > K, ξ′(k) and ξ(k) belong to the same final class.

Two approximate verification problems Determine if: – M, D_in ⊨∃ε φ, i.e., for every μ ∈ D_in, there exists an ε-approximation of ξ_μ (the symbolic trajectory generated by μ) that is in L_φ. – M, D_in ⊨∀ε φ, i.e., for every μ ∈ D_in, every ε-approximation of ξ_μ is in L_φ.

Two approximate verification problems Proposition – M, D_in ⊨∀ε φ implies L_M ⊆ L_φ – M, D_in ⊭∃ε φ implies L_M ⊈ L_φ

Two approximate verification problems If M, D_in ⊨∃ε φ and M, D_in ⊭∀ε φ, then M ε-approximately meets φ. If we want to do better, we can set ε′ = ε/2 and iterate (incremental overhead).

The main result Theorem: Checking M, D_in ⊨∃ε φ and checking M, D_in ⊨∀ε φ are both effectively solvable problems.

The main result The proof first assumes a single initial distribution and proceeds to establish the theorem for: – Irreducible, aperiodic chains – Irreducible, periodic chains – General chains This is then extended to a set of initial distributions D_in. The main step is the transient + steady-state characterization of the symbolic dynamics.

Additional results The convex hull of a finite set of distributions can also be handled (as the set of initial distributions). The atomic propositions can be constraints expressed in the first order theory of reals. Restrictions on the eigenvalues can yield ω-regular behaviors (e.g., eigenvalues that are distinct and real).

Summary Discretizing the value space [0, 1] leads to an interesting and useful symbolic dynamics of finite state Markov chains. Extensions: – Optimizations – Implementations – Case studies – Interval Markov chains