Autonomous Cyber-Physical Systems: Probabilistic Models

Autonomous Cyber-Physical Systems: Probabilistic Models. CS 599, Spring 2019. Instructor: Jyo Deshmukh. This lecture also draws on some sources other than the textbooks; a full bibliography is included at the end of the slides.

Layout
- Markov Chains
- Continuous-time Markov Chains

Probabilistic Models
The component models we have studied so far were either deterministic or nondeterministic. The goal of such models is to represent computation or the time-evolution of a physical phenomenon, but they do not do a great job of capturing uncertainty. Uncertainty can usually be modeled with probabilities, so probabilistic models let us account for the likelihood of environment behaviors. Machine learning/AI algorithms also require probabilistic modeling!

Markov chains
- Stochastic process: a finite or infinite collection of random variables, indexed by time. It represents a numeric value of some system changing randomly over time; the value at each time point is a random number with some distribution, and the distribution at any time may depend on some or all previous times.
- Markov chain: a special case of a stochastic process.
- Markov property: a process satisfies the Markov property if predictions about its future can be made based only on its current state (i.e. the future and past states of the process are conditionally independent given the present state). In other words, the distribution of future values depends only on the current value/state.

Discrete-time Markov chain (DTMC)
Time-homogeneous MC: the transition probabilities are the same at every step, i.e. they do not depend on the time index.
A Discrete-Time Markov Chain (DTMC) is described as a tuple $(Q, P, I, AP, L)$:
- $Q$ is a finite set of states
- $P : Q \times Q \to [0,1]$ is a transition probability function, with $\sum_{q' \in Q} P(q, q') = 1$ for every $q \in Q$
- $I : Q \to [0,1]$ is the initial distribution, such that $\sum_{q \in Q} I(q) = 1$
- $AP$ is a set of Boolean propositions, and $L : Q \to 2^{AP}$ is a function that assigns some subset of the Boolean propositions to each state

Markov chain example: Driver modeling
Atomic propositions: $p$: Feeling sleepy; $q$: Checking cellphone.
[State diagram: four states Accelerate, Constant Speed, Brake, and Idling, with edges labeled by transition probabilities and each state labeled by the literals over {p, q} that hold there.]

Markov chain: Transition probability matrix
The transition probability function of the driver model can be written as a matrix (rows: source state, columns: target state), with states ordered A(ccelerate), C(onstant Speed), B(rake), I(dling); each row sums to 1:

        A      C      B      I
  A    0.3    0.2    0.4    0.1
  C    0.1    0.4    0.5    0
  B    0.4    0.05   0.05   0.5
  I    0.2    0      0      0.8
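To make this concrete, here is a minimal Python sketch (assuming numpy) that encodes the matrix above and samples a random run of the driver model. The initial distribution and the state labeling are illustrative assumptions, not fixed by the slides.

```python
import numpy as np

states = ["Accelerate", "ConstantSpeed", "Brake", "Idling"]

# P[i][j] = probability of moving from states[i] to states[j];
# each row sums to 1 (values as reconstructed on the matrix slide).
P = np.array([
    [0.3, 0.2,  0.4,  0.1],
    [0.1, 0.4,  0.5,  0.0],
    [0.4, 0.05, 0.05, 0.5],
    [0.2, 0.0,  0.0,  0.8],
])

# Assumed initial distribution: start in Accelerate with probability 1.
I = np.array([1.0, 0.0, 0.0, 0.0])

def simulate(n_steps, rng=np.random.default_rng()):
    """Sample one n-step run of the DTMC."""
    i = rng.choice(len(states), p=I)       # draw initial state from I
    run = [states[i]]
    for _ in range(n_steps):
        i = rng.choice(len(states), p=P[i])  # draw next state from row P[i]
        run.append(states[i])
    return run

print(simulate(5))
```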

Markov Chain Analysis
- Transition probability matrix $M$, where $M(q, q') = P(q, q')$.
- Chapman-Kolmogorov equation: let $P^n(q, q')$ denote the probability of going from state $q$ to $q'$ in $n$ steps; then $P^{m+n}(q, q') = \sum_{q''} P^m(q, q'')\, P^n(q'', q')$.
- Corollary: $P^k(q, q') = (M^k)(q, q')$, i.e. the $(q, q')$ entry of the $k$-th matrix power of $M$.
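A quick numerical check of the Chapman-Kolmogorov equation, sketched in Python with numpy and the driver-model matrix reconstructed on the previous slide:

```python
import numpy as np

P = np.array([
    [0.3, 0.2,  0.4,  0.1],
    [0.1, 0.4,  0.5,  0.0],
    [0.4, 0.05, 0.05, 0.5],
    [0.2, 0.0,  0.0,  0.8],
])

m, n = 2, 3
P_m  = np.linalg.matrix_power(P, m)      # m-step transition probabilities
P_n  = np.linalg.matrix_power(P, n)      # n-step transition probabilities
P_mn = np.linalg.matrix_power(P, m + n)  # (m+n)-step transition probabilities

# Chapman-Kolmogorov: P^(m+n)(q,q') = sum over q'' of P^m(q,q'') * P^n(q'',q')
assert np.allclose(P_mn, P_m @ P_n)

# State distribution after k steps, starting from initial distribution I:
I = np.array([1.0, 0.0, 0.0, 0.0])
print(I @ np.linalg.matrix_power(P, 10))
```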

Continuous Time Markov Chains
Time in a DTMC is discrete. CTMCs use a dense model of time: transitions can occur at any time, and the "dwell time" in a state is (negative) exponentially distributed.
An exponentially distributed random variable $X$ with rate $\lambda > 0$ has probability density function (pdf) $f_X$ defined as follows:
$f_X(x) = \begin{cases} \lambda e^{-\lambda x} & \text{if } x > 0 \\ 0 & \text{if } x \le 0 \end{cases}$

Exponential distribution properties
The cumulative distribution function (CDF) of $X$ is:
$F_X(d) = P(X \le d) = \int_{-\infty}^{d} f_X(x)\, dx = \int_0^d \lambda e^{-\lambda x}\, dx = \big[-e^{-\lambda x}\big]_0^d = 1 - e^{-\lambda d}$
I.e. there is zero probability of taking a transition out of a state within duration $d = 0$, but the probability tends to 1 as $d \to \infty$.
Fun exercise: show that the above CDF is memoryless, i.e. $P(X > t + d \mid X > t) = P(X > d)$.
Fun exercise 2: if $X$ and $Y$ are r.v.s negatively exponentially distributed with rates $\lambda$ and $\mu$, then $P(X \le Y) = \frac{\lambda}{\lambda + \mu}$.
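Both exercises are easy to check empirically. A Monte Carlo sketch (assuming numpy; the rates $\lambda = 2$ and $\mu = 3$ and the times $t, d$ are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
lam, mu = 2.0, 3.0
N = 1_000_000

X = rng.exponential(scale=1.0 / lam, size=N)  # numpy's scale = 1/rate
Y = rng.exponential(scale=1.0 / mu, size=N)

# Memorylessness: P(X > t + d | X > t) should equal P(X > d) = exp(-lam * d)
t, d = 0.5, 0.3
print(np.mean(X[X > t] > t + d), np.mean(X > d), np.exp(-lam * d))

# Race of two exponentials: P(X <= Y) should equal lam / (lam + mu) = 0.4
print(np.mean(X <= Y), lam / (lam + mu))
```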

CTMC example
A CTMC is a tuple $(Q, P, I, r, AP, L)$:
- $Q$ is a finite set of states
- $P : Q \times Q \to [0,1]$ is a transition probability function
- $I : Q \to [0,1]$ is the initial distribution, with $\sum_{q \in Q} I(q) = 1$
- $AP$ is a set of Boolean propositions, and $L : Q \to 2^{AP}$ is a function that assigns some subset of the Boolean propositions to each state
- $r : Q \to \mathbb{R}_{>0}$ is the exit-rate function
Interpretation: the residence time in state $q$ is (negative) exponentially distributed with rate $r(q)$. The bigger the exit rate, the shorter the average residence time.
[State diagram: three states $lane_{i-1}$, $lane_i$, $lane_{i+1}$, with transition probabilities on the edges and exit rates at the states.]

CTMC example
Transition rate: $R(q, q') = P(q, q') \cdot r(q)$. The time to take transition $q \to q'$ is a random variable, (negative) exponentially distributed with rate $R(q, q')$.
The probability of going from state $lane_{i+1}$ to $lane_{i-1}$ (via $lane_i$) is:
$P\big(X_{i+1,i} \le X_{i+1,i+1} \,\cap\, X_{i,i-1} \le \min(X_{i,i}, X_{i,i+1})\big) = \frac{R_{i+1,i}}{R_{i+1,i+1} + R_{i+1,i}} \cdot \frac{R_{i,i-1}}{R_{i,i+1} + R_{i,i} + R_{i,i-1}}$
What is the probability of changing to some lane from $lane_i$ within $[0, t]$ seconds?
$\int_0^t r(lane_i)\, e^{-r(lane_i)\, x}\, dx = 1 - e^{-r(lane_i)\, t}$
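Because residence times are exponential and jumps follow $P$, a CTMC trajectory can be sampled by alternating a dwell-time draw with a jump draw. A minimal Python sketch; the lane states, exit rates, and jump probabilities below are made-up illustrative values, since the diagram's exact numbers are not fully recoverable:

```python
import numpy as np

states = ["lane_prev", "lane_cur", "lane_next"]  # hypothetical lane states
r = np.array([0.5, 0.8, 0.6])                    # assumed exit rate r(q) per state

# Assumed jump probabilities P(q, q'); each row sums to 1.
P = np.array([
    [0.0, 1.0, 0.0],
    [0.4, 0.3, 0.3],
    [0.0, 0.6, 0.4],
])

def simulate(horizon, rng=np.random.default_rng()):
    """Sample one trajectory of (time, state) pairs up to the time horizon."""
    t, i = 0.0, 1                               # start in lane_cur at time 0
    traj = [(t, states[i])]
    while True:
        t += rng.exponential(scale=1.0 / r[i])  # dwell time ~ Exp(r(q))
        if t > horizon:
            return traj
        i = rng.choice(len(states), p=P[i])     # jump to q' with prob. P(q, q')
        traj.append((t, states[i]))

print(simulate(10.0))
```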

Bibliography
Baier, Christel, and Joost-Pieter Katoen. Principles of Model Checking. MIT Press, 2008.
Continuous Time Markov Chains: https://resources.mpi-inf.mpg.de/departments/rg1/conferences/vtsa11/slides/katoen/lec01_handout.pdf