Autonomous Cyber-Physical Systems: Probabilistic Models
Spring CS 599. Instructor: Jyo Deshmukh. This lecture also uses some sources other than the textbooks; the full bibliography is included at the end of the slides.
Layout
Markov Chains
Continuous-Time Markov Chains
Probabilistic Models
The models for components that we have studied so far were either deterministic or nondeterministic. The goal of such models is to represent computation or the time-evolution of a physical phenomenon. These models do not do a great job of capturing uncertainty. We can usually model uncertainty using probabilities, so probabilistic models allow us to account for the likelihood of environment behaviors. Machine learning/AI algorithms also require probabilistic modeling!
Markov chains
Stochastic process: a finite or infinite collection of random variables, indexed by time. It represents the numeric value of some system changing randomly over time: the value at each time point is a random number with some distribution, and the distribution at any time may depend on some or all previous times.
Markov chain: a special case of a stochastic process.
Markov property: a process satisfies the Markov property if it can make predictions of the future based only on its current state (i.e., the future and past states of the process are independent given the present). In other words, the distribution of future values depends only on the current value/state.
Discrete-time Markov chain (DTMC)
Time-homogeneous MC: each step in the process takes the same time.
A Discrete-Time Markov chain (DTMC) is described as a tuple $(Q, P, I, AP, L)$:
$Q$ is a finite set of states
$P: Q \times Q \to [0,1]$ is a transition probability function
$I: Q \to [0,1]$ is the initial distribution, such that $\sum_{q \in Q} I(q) = 1$
$AP$ is a set of Boolean propositions, and $L: Q \to 2^{AP}$ is a function that assigns some subset of Boolean propositions to each state
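To make the definition concrete, here is a minimal Python sketch of a DTMC as the tuple (Q, P, I, AP, L), together with a function that samples a random run. The state names, probabilities, and labels below are made-up placeholders, not values from the lecture's examples.

```python
import random

# A minimal sketch of a DTMC (Q, P, I, AP, L); the states and numbers
# below are illustrative, not taken from the lecture's driver example.
Q = ["accelerate", "constant", "brake"]

# Transition probability function P: Q x Q -> [0, 1]; each row sums to 1.
P = {
    "accelerate": {"accelerate": 0.3, "constant": 0.6, "brake": 0.1},
    "constant":   {"accelerate": 0.2, "constant": 0.5, "brake": 0.3},
    "brake":      {"accelerate": 0.4, "constant": 0.4, "brake": 0.2},
}

# Initial distribution I: Q -> [0, 1], summing to 1.
I = {"accelerate": 0.5, "constant": 0.5, "brake": 0.0}

# Labeling function L: Q -> 2^AP over a (made-up) proposition set AP.
AP = {"moving"}
L = {"accelerate": {"moving"}, "constant": {"moving"}, "brake": set()}

def sample_run(steps, rng=random):
    """Sample a finite run of the DTMC: pick an initial state from I,
    then repeatedly pick a successor according to P (Markov property:
    each choice depends only on the current state)."""
    state = rng.choices(Q, weights=[I[q] for q in Q])[0]
    run = [state]
    for _ in range(steps):
        succs = list(P[state])
        state = rng.choices(succs, weights=[P[state][s] for s in succs])[0]
        run.append(state)
    return run

print(sample_run(5))
```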
Markov chain example: Driver modeling
[Figure: a four-state Markov chain over driver modes Accelerate, Constant Speed, Brake, and Idling, with edges labeled by transition probabilities; states are labeled with two Boolean propositions, "checking cellphone" and "feeling sleepy", or their negations.]
Markov chain: Transition probability matrix
[Figure: the same driver Markov chain shown next to its transition probability matrix, with rows and columns indexed by A (Accelerate), C (Constant Speed), B (Brake), and I (Idling).]
Markov Chain Analysis
Transition probability matrix $M$, where $M(q, q') = P(q, q')$.
Chapman-Kolmogorov equation: let $P^m(q, q')$ denote the probability of going from state $q$ to $q'$ in $m$ steps; then
$P^{m+n}(q, q') = \sum_{q''} P^m(q, q'')\, P^n(q'', q')$
Corollary: $P^m(q, q') = M^m(q, q')$, i.e., the $m$-step transition probabilities are the entries of the $m$-th power of the matrix $M$.
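As an illustration of the Chapman-Kolmogorov equation, the following sketch checks that $M^{m+n} = M^m M^n$ and reads off an $m$-step transition probability as an entry of a matrix power. The 3-state matrix is an arbitrary placeholder, not the 4-state driver matrix from the slides.

```python
import numpy as np

# Illustrative 3-state transition matrix M (rows sum to 1).
M = np.array([
    [0.3, 0.6, 0.1],
    [0.2, 0.5, 0.3],
    [0.4, 0.4, 0.2],
])

# Chapman-Kolmogorov: M^(m+n) = M^m @ M^n, so the m-step transition
# probabilities P^m(q, q') are entries of the m-th matrix power.
m, n = 2, 3
lhs = np.linalg.matrix_power(M, m + n)
rhs = np.linalg.matrix_power(M, m) @ np.linalg.matrix_power(M, n)
assert np.allclose(lhs, rhs)

# Probability of going from state 0 to state 2 in 5 steps:
print(np.linalg.matrix_power(M, 5)[0, 2])
```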
Continuous Time Markov Chains
Time in a DTMC is discrete. CTMCs use a dense model of time:
transitions can occur at any time
the "dwell time" in a state is (negatively) exponentially distributed
An exponentially distributed random variable $X$ with rate $\lambda > 0$ has probability density function (pdf) $f_X(x)$ defined as follows:
$f_X(x) = \lambda e^{-\lambda x}$ if $x > 0$, and $f_X(x) = 0$ if $x \le 0$
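A one-function sketch of this pdf; the rate and evaluation points below are arbitrary choices for illustration.

```python
import math

def exp_pdf(x, lam):
    """pdf of an exponentially distributed r.v. with rate lam > 0:
    lam * exp(-lam * x) for x > 0, and 0 for x <= 0."""
    return lam * math.exp(-lam * x) if x > 0 else 0.0

print(exp_pdf(0.5, 2.0))   # density at x = 0.5 for rate 2
print(exp_pdf(-1.0, 2.0))  # 0: no mass on non-positive values
```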
Exponential distribution properties
The cumulative distribution function (CDF) of $X$ is then:
$F_X(d) = \Pr(X \le d) = \int_{-\infty}^{d} f_X(x)\,dx = \int_{0}^{d} \lambda e^{-\lambda x}\,dx = \left[-e^{-\lambda x}\right]_0^d = 1 - e^{-\lambda d}$
I.e., there is zero probability of taking a transition out of a state in duration $d = 0$, but the probability approaches 1 as $d \to \infty$.
Fun exercise: show that the above distribution is memoryless, i.e. $\Pr(X > t + d \mid X > t) = \Pr(X > d)$.
Fun exercise 2: if $X$ and $Y$ are random variables negatively exponentially distributed with rates $\lambda$ and $\mu$, then $\Pr(X \le Y) = \frac{\lambda}{\lambda + \mu}$.
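The sketch below checks these three facts numerically by Monte Carlo sampling: the CDF formula, memorylessness, and the race property from the second exercise. The rates $\lambda = 2$, $\mu = 3$ and the durations are arbitrary choices for illustration.

```python
import math
import random

lam, mu = 2.0, 3.0          # rates of X ~ Exp(lam) and Y ~ Exp(mu)
N = 200_000
rng = random.Random(0)

# CDF check: P(X <= d) should be close to 1 - exp(-lam * d).
d = 0.4
xs = [rng.expovariate(lam) for _ in range(N)]
print(sum(x <= d for x in xs) / N, 1 - math.exp(-lam * d))

# Memorylessness: P(X > t + d | X > t) should equal P(X > d) = exp(-lam * d).
t = 0.3
tail = [x for x in xs if x > t]
print(sum(x > t + d for x in tail) / len(tail), math.exp(-lam * d))

# Race property: P(X <= Y) should equal lam / (lam + mu).
ys = [rng.expovariate(mu) for _ in range(N)]
print(sum(x <= y for x, y in zip(xs, ys)) / N, lam / (lam + mu))
```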
CTMC example
A CTMC is described as a tuple $(Q, P, I, r, AP, L)$:
$Q$ is a finite set of states
$P: Q \times Q \to [0,1]$ is a transition probability function
$I: Q \to [0,1]$ is the initial distribution, such that $\sum_{q \in Q} I(q) = 1$
$AP$ is a set of Boolean propositions, and $L: Q \to 2^{AP}$ is a function that assigns some subset of Boolean propositions to each state
$r: Q \to \mathbb{R}_{>0}$ is the exit-rate function
Interpretation: the residence time in state $q$ is negatively exponentially distributed with rate $r(q)$; the bigger the exit rate, the shorter the average residence time.
[Figure: a three-state CTMC over lane-change states $\mathit{lane}_{i-1}$, $\mathit{lane}_i$, and $\mathit{lane}_{i+1}$, with edges labeled by transition probabilities and states annotated with exit rates.]
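A minimal simulation sketch of such a CTMC, assuming a small lane-change model in the spirit of the figure; the transition probabilities and exit rates are illustrative placeholders, not the figure's actual values.

```python
import random

# Sketch of simulating a CTMC (Q, P, I, r, AP, L); values are illustrative.
Q = ["lane_prev", "lane_i", "lane_next"]

P = {   # embedded DTMC: where to jump next (rows sum to 1)
    "lane_prev": {"lane_prev": 0.5, "lane_i": 0.5},
    "lane_i":    {"lane_prev": 0.3, "lane_i": 0.4, "lane_next": 0.3},
    "lane_next": {"lane_i": 0.6, "lane_next": 0.4},
}
r = {"lane_prev": 1.0, "lane_i": 5.0, "lane_next": 1.0}  # exit rates

def simulate(start, horizon, rng=random):
    """Residence time in state q is Exp(r(q)); the larger the exit rate,
    the shorter the average dwell time.  Successors are drawn from P(q, .)."""
    t, state, trace = 0.0, start, []
    while t < horizon:
        dwell = rng.expovariate(r[state])
        trace.append((state, t, min(t + dwell, horizon)))
        t += dwell
        succs = list(P[state])
        state = rng.choices(succs, weights=[P[state][s] for s in succs])[0]
    return trace

for state, t0, t1 in simulate("lane_i", 3.0):
    print(f"{state}: [{t0:.2f}, {t1:.2f})")
```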
CTMC example
Transition rate $R(q, q') = P(q, q') \cdot r(q)$.
The transition $q \to q'$ is a random variable, negatively exponentially distributed with rate $R(q, q')$.
The probability to go from state $\mathit{lane}_{i+1}$ to $\mathit{lane}_{i-1}$ is
$\Pr\big( X_{i+1,i} \le X_{i+1,i+1} \,\wedge\, X_{i,i-1} \le \min(X_{i,i}, X_{i,i+1}) \big) = \frac{R(i{+}1, i)}{R(i{+}1, i{+}1) + R(i{+}1, i)} \cdot \frac{R(i, i{-}1)}{R(i, i{+}1) + R(i, i) + R(i, i{-}1)}$
where $X_{q, q'}$ denotes the exponentially distributed duration of transition $q \to q'$ (a race between the outgoing transitions).
What is the probability of changing to some lane from $\mathit{lane}_i$ in $[0, t]$ seconds?
$\int_0^t r(\mathit{lane}_i)\, e^{-r(\mathit{lane}_i)\, x}\, dx = 1 - e^{-r(\mathit{lane}_i)\, t}$
[Figure: the same lane-change CTMC, with edges labeled by transition probabilities and states annotated with exit rates.]
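The sketch below computes both quantities for the illustrative lane-change model used above (again with placeholder numbers, not the figure's values): the probability that a particular transition wins the race out of $\mathit{lane}_i$, and the probability of taking some transition out of $\mathit{lane}_i$ within $[0, t]$.

```python
import math

# Transition rates R(q, q') = P(q, q') * r(q); the next state is decided by
# a race of exponentials, so jump probabilities are ratios of rates.
# Placeholder values for state lane_i (illustrative, not from the figure):
P_lane_i = {"lane_prev": 0.3, "lane_i": 0.4, "lane_next": 0.3}
r_lane_i = 5.0

R_lane_i = {q2: p * r_lane_i for q2, p in P_lane_i.items()}

# Probability that the next transition from lane_i goes to lane_prev:
print(R_lane_i["lane_prev"] / sum(R_lane_i.values()))

# Probability of taking some transition out of lane_i within [0, t]:
t = 0.5
print(1 - math.exp(-r_lane_i * t))
```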
Bibliography
Baier, Christel, and Joost-Pieter Katoen. Principles of Model Checking. MIT Press, 2008.
Continuous Time Markov Chains: