
Probabilistic Reasoning over Time


1 Probabilistic Reasoning over Time
Russell and Norvig, Ch. 15
CMSC421 – Fall 2005 [Edited by J. Wiebe]

2 Markov Processes for us
So far, this class has considered Markov processes only as mathematical motivation for the PageRank algorithm. That view is impoverished from an AI reasoning point of view: in practice Markov assumptions are still made, but reasoning becomes much richer in the context of full probability models.

3 Temporal Probabilistic Agent
[Diagram: an agent coupled to its environment, receiving percepts at times t1, t2, t3, … through its sensors and acting back on the environment through its actuators.]

4 Time and Uncertainty
The world changes; we need to track and predict it. Examples: diabetes management, traffic monitoring.
Basic idea: copy the state and evidence variables for each time step.
X_t – set of unobservable state variables at time t, e.g., BloodSugar_t, StomachContents_t
E_t – set of observable evidence variables at time t, e.g., MeasuredBloodSugar_t, PulseRate_t, FoodEaten_t
Assumes discrete time steps.

5 States and Observations
The process of change is viewed as a series of snapshots, each describing the state of the world at a particular time.
Each time slice involves a set of random variables indexed by t:
the set of unobservable state variables X_t
the set of observable evidence variables E_t
The observation at time t is E_t = e_t for some set of values e_t.
The notation X_a:b denotes the set of variables from X_a to X_b.

6 Stationary Process/Markov Assumption
Markov assumption: X_t depends only on a bounded subset of the previous states.
First-order Markov process: P(X_t | X_0:t-1) = P(X_t | X_t-1)
kth-order Markov process: X_t depends on the previous k time steps.
Sensor Markov assumption: P(E_t | X_0:t, E_0:t-1) = P(E_t | X_t)
Assume a stationary process: the transition model P(X_t | X_t-1) and the sensor model P(E_t | X_t) are the same for all t.
In a stationary process, the changes in the world state are governed by laws that do not themselves change over time.

7 Complete Joint Distribution
Given:
Transition model: P(X_t | X_t-1)
Sensor model: P(E_t | X_t)
Prior probability: P(X_0)
Then we can specify the complete joint distribution:
P(X_0:t, E_1:t) = P(X_0) Π_{i=1..t} P(X_i | X_i-1) P(E_i | X_i)
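A minimal sketch of this factorization in Python, using the umbrella model from the next slide (R&N use a uniform prior P(R_0) = <0.5, 0.5>; the function name and state encoding are mine):

import numpy as np

prior = np.array([0.5, 0.5])                # P(X0); 0 = rain, 1 = no rain
T = np.array([[0.7, 0.3],                   # P(X_t | X_t-1 = rain)
              [0.3, 0.7]])                  # P(X_t | X_t-1 = no rain)
O = np.array([[0.9, 0.1],                   # P(E_t | X_t = rain); 0 = umbrella
              [0.2, 0.8]])                  # P(E_t | X_t = no rain)

def joint(states, evidence):
    # P(x_0:t, e_1:t) = P(x_0) * prod_i P(x_i | x_i-1) * P(e_i | x_i)
    p = prior[states[0]]
    for i in range(1, len(states)):
        p *= T[states[i - 1], states[i]] * O[states[i], evidence[i - 1]]
    return p

# Rain on days 1 and 2 with the umbrella seen both days:
print(joint([0, 0, 0], [0, 0]))             # 0.5 * (0.7 * 0.9)**2 ≈ 0.1985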

8 Example
[Network diagram: Rain_t-1 → Rain_t → Rain_t+1, with each Rain node the parent of the corresponding Umbrella node (Umbrella_t-1, Umbrella_t, Umbrella_t+1).]
Transition model P(R_t = T | R_t-1):
  R_t-1 = T: 0.7
  R_t-1 = F: 0.3
Sensor model P(U_t = T | R_t):
  R_t = T: 0.9
  R_t = F: 0.2
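As a quick illustration of the generative process this network encodes, here is a sketch that samples a weather/umbrella sequence from the transition and sensor models (the uniform prior and the fixed seed are illustrative assumptions):

import numpy as np

rng = np.random.default_rng(0)              # fixed seed for reproducibility
prior = np.array([0.5, 0.5])                # assumed uniform P(R_0)
T = np.array([[0.7, 0.3], [0.3, 0.7]])      # P(R_t | R_t-1)
O = np.array([[0.9, 0.1], [0.2, 0.8]])      # P(U_t | R_t)

r = rng.choice(2, p=prior)                  # sample the initial weather
for t in range(1, 6):
    r = rng.choice(2, p=T[r])               # weather follows the transition model
    u = rng.choice(2, p=O[r])               # umbrella follows the sensor model
    print(t, "rain" if r == 0 else "dry", "umbrella" if u == 0 else "none")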

9 Inference Tasks
Filtering or monitoring: P(X_t | e_1,…,e_t) – computing the current belief state, given all evidence to date.
Prediction: P(X_t+k | e_1,…,e_t) for k > 0 – computing the probability of some future state.
Smoothing: P(X_k | e_1,…,e_t) for k < t – computing the probability of a past state (hindsight).
Most likely explanation: arg max_{x_1,…,x_t} P(x_1,…,x_t | e_1,…,e_t) – given a sequence of observations, find the sequence of states that is most likely to have generated those observations.

10 Examples
Filtering: What is the probability that it is raining today, given all the umbrella observations up through today?
Prediction: What is the probability that it will rain the day after tomorrow, given all the umbrella observations up through today?
Smoothing: What is the probability that it rained yesterday, given all the umbrella observations through today?
Most likely explanation: If the umbrella appeared the first three days but not on the fourth, what is the most likely weather sequence to produce these umbrella sightings? (A sketch of this query follows below.)
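For the last query, a minimal Viterbi-style sketch (R&N develop this algorithm later in ch. 15; the code below is an illustrative assumption, maximizing over x_1..x_t with x_0 marginalized into P(X_1)):

import numpy as np

prior = np.array([0.5, 0.5])                # P(R_0); 0 = rain, 1 = dry
T = np.array([[0.7, 0.3], [0.3, 0.7]])      # P(R_t | R_t-1)
O = np.array([[0.9, 0.1], [0.2, 0.8]])      # P(U_t | R_t); 0 = umbrella

def most_likely_sequence(ev):
    # Track the max-probability path into each state, with backpointers.
    m = (T.T @ prior) * O[:, ev[0]]         # m_1(x1) = P(x1) P(e1 | x1)
    back = []
    for e in ev[1:]:
        scores = T * m[:, None]             # scores[x, x'] = P(x' | x) m(x)
        back.append(scores.argmax(axis=0))  # best predecessor of each x'
        m = scores.max(axis=0) * O[:, e]
    path = [int(m.argmax())]                # best final state, then backtrack
    for bp in reversed(back):
        path.append(int(bp[path[-1]]))
    return path[::-1]

# Umbrella on days 1-3 but not day 4:
print(most_likely_sequence([0, 0, 0, 1]))   # [0, 0, 0, 1]: rain, rain, rain, dry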

11 Filtering
We use recursive estimation to compute P(X_t+1 | e_1:t+1) as a function of e_t+1 and P(X_t | e_1:t). We can write this as follows:
P(X_t+1 | e_1:t+1) = α P(e_t+1 | X_t+1) Σ_{x_t} P(X_t+1 | x_t) P(x_t | e_1:t)
This leads to a recursive definition: f_1:t+1 = FORWARD(f_1:t, e_t+1)
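A minimal sketch of the FORWARD step for the umbrella model, normalizing in place of carrying α explicitly (the function name is mine):

import numpy as np

prior = np.array([0.5, 0.5])                # P(X0); 0 = rain, 1 = no rain
T = np.array([[0.7, 0.3], [0.3, 0.7]])      # P(X_t | X_t-1)
O = np.array([[0.9, 0.1], [0.2, 0.8]])      # P(E_t | X_t); 0 = umbrella

def forward(f, e):
    # f_1:t+1 = alpha * P(e_t+1 | X_t+1) * sum_x P(X_t+1 | x) f_1:t(x)
    f = O[:, e] * (T.T @ f)
    return f / f.sum()                      # normalization plays the role of alpha

f = prior
for e in [0, 0]:                            # umbrella observed on days 1 and 2
    f = forward(f, e)
print(f)                                    # ≈ [0.883, 0.117], as on R&N p. 543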

12 Example from R&N p. 543

13 Smoothing
Compute P(X_k | e_1:t) for 0 ≤ k < t.
Using a backward message b_k+1:t = P(e_k+1:t | X_k), we obtain P(X_k | e_1:t) = α f_1:k × b_k+1:t
The backward message can be computed using
P(e_k+1:t | X_k) = Σ_{x_k+1} P(e_k+1 | x_k+1) P(e_k+2:t | x_k+1) P(x_k+1 | X_k)
This leads to a recursive definition: b_k+1:t = BACKWARD(b_k+2:t, e_k+1:t)
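A matching sketch of the BACKWARD step and the smoothed estimate for day 1 of the umbrella example (again normalizing instead of carrying α explicitly; function names are mine):

import numpy as np

prior = np.array([0.5, 0.5])                # P(X0); 0 = rain, 1 = no rain
T = np.array([[0.7, 0.3], [0.3, 0.7]])      # P(X_t | X_t-1)
O = np.array([[0.9, 0.1], [0.2, 0.8]])      # P(E_t | X_t); 0 = umbrella

def forward(f, e):
    f = O[:, e] * (T.T @ f)                 # filtering step from the previous slide
    return f / f.sum()

def backward(b, e):
    # b_k+1:t(x) = sum_x' P(e_k+1 | x') b_k+2:t(x') P(x' | x)
    return T @ (O[:, e] * b)

ev = [0, 0]                                 # umbrella on days 1 and 2
f = forward(prior, ev[0])                   # f_1:1
b = backward(np.ones(2), ev[1])             # b_2:2, starting from b_t+1:t = 1
s = f * b                                   # P(X_1 | e_1:2) ∝ f_1:1 × b_2:2
print(s / s.sum())                          # ≈ [0.883, 0.117], as on R&N p. 545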

14 Example from R&N p. 545

15 Probabilistic Temporal Models
Hidden Markov Models (HMMs)
Kalman filters
Dynamic Bayesian Networks (DBNs)

