1
QUIZ!!
T/F: Rejection sampling without weighting is not consistent. FALSE
T/F: Rejection sampling (often) converges faster than forward sampling. FALSE
T/F: Likelihood weighting (often) converges faster than rejection sampling. TRUE
T/F: The Markov blanket of X contains other children of parents of X. FALSE
T/F: The Markov blanket of X contains other parents of children of X. TRUE
T/F: Gibbs sampling requires you to weight samples by their likelihood. FALSE
T/F: In Gibbs sampling, it is a good idea to reject the first M < N samples. TRUE
Decision networks:
T/F: Utility nodes never have parents. FALSE
T/F: Value of perfect information (VPI) is always non-negative. TRUE
2
CSE 511a: Artificial Intelligence, Spring 2013. Lecture 19: Hidden Markov Models, 04/10/2013. Robert Pless, via Kilian Q. Weinberger; slides adapted from Dan Klein, UC Berkeley.
3
Recap: Decision Diagrams
Chance nodes: Weather, Forecast. Decision node: Umbrella. Utility node: U.
Utility U(A, W): leave/sun = 100, leave/rain = 0, take/sun = 20, take/rain = 70
P(W): sun = 0.7, rain = 0.3
P(F | W = sun): good = 0.8, bad = 0.2
P(F | W = rain): good = 0.1, bad = 0.9
4
Example: MEU Decisions
Evidence: Forecast = bad.
Utility U(A, W): leave/sun = 100, leave/rain = 0, take/sun = 20, take/rain = 70
P(W | F = bad): sun = 0.34, rain = 0.66
Compare Umbrella = leave vs. Umbrella = take; optimal decision = take (worked out below).
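A worked version of the expected-utility comparison the slide implies, using the utility table and P(W | F = bad) shown above:
EU(leave | F = bad) = Σ_w P(w | F = bad) U(leave, w) = 0.34·100 + 0.66·0 = 34
EU(take | F = bad) = 0.34·20 + 0.66·70 = 6.8 + 46.2 = 53
MEU(F = bad) = max(34, 53) = 53, achieved by taking the umbrella.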
5
Value of Information
Assume we have evidence E = e. Value if we act now: MEU(e).
Assume we then see that E' = e'. Value if we act after seeing it: MEU(e, e').
But E' is a random variable whose value is unknown, so we don't know what e' will be.
Expected value if E' is revealed and then we act: average MEU(e, e') over P(e' | e).
Value of information: how much MEU goes up by revealing E' first. VPI = "value of perfect information". (Definitions written out below.)
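The standard definitions behind the slide's labels (the original formulas were in graphics that did not survive extraction):
MEU(e) = max_a Σ_s P(s | e) U(s, a)
MEU(e, e') = max_a Σ_s P(s | e, e') U(s, a)
MEU(e | E') = Σ_{e'} P(e' | e) MEU(e, e')
VPI(E' | e) = MEU(e | E') - MEU(e)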
6
VPI Example: Weather
Utility U(A, W): leave/sun = 100, leave/rain = 0, take/sun = 20, take/rain = 70
Forecast distribution P(F): good = 0.59, bad = 0.41
Quantities to compare: MEU with no evidence, MEU if the forecast is bad, MEU if the forecast is good (worked out below).
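A worked sketch of the VPI computation, using the CPTs from the decision-diagram slide above (P(W) and P(F | W)) and Bayes' rule for P(W | F); the intermediate numbers are arithmetic from those tables, not values printed on this slide:
MEU with no evidence: EU(leave) = 0.7·100 + 0.3·0 = 70, EU(take) = 0.7·20 + 0.3·70 = 35, so MEU(∅) = 70 (leave).
Forecast good: P(sun | good) = 0.8·0.7 / 0.59 ≈ 0.95, so EU(leave | good) ≈ 95, EU(take | good) ≈ 22.5, MEU(good) ≈ 95.
Forecast bad: MEU(bad) = 53 (take), from the earlier MEU slide.
VPI(F) = [0.59·95 + 0.41·53] - 70 ≈ 77.8 - 70 ≈ 7.8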
7
VPI Properties
Nonnegative.
Nonadditive: consider, e.g., obtaining E_j twice.
Order-independent.
(Formal statements below.)
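The three properties written out in the usual notation (standard statements; they were not spelled out in the extracted slide text):
Nonnegative: for all E' and e, VPI_e(E') ≥ 0
Nonadditive: VPI_e(E_j, E_k) ≠ VPI_e(E_j) + VPI_e(E_k) in general
Order-independent: VPI_e(E_j, E_k) = VPI_e(E_j) + VPI_{e,E_j}(E_k) = VPI_e(E_k) + VPI_{e,E_k}(E_j)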
8
Now for something completely different 8
9
“Our youth now love luxury. They have bad manners, contempt for authority; they show disrespect for their elders and love chatter in place of exercise; they no longer rise when elders enter the room; they contradict their parents, chatter before company; gobble up their food and tyrannize their teachers.” 9
10
“Our youth now love luxury. They have bad manners, contempt for authority; they show disrespect for their elders and love chatter in place of exercise; they no longer rise when elders enter the room; they contradict their parents, chatter before company; gobble up their food and tyrannize their teachers.” – Socrates 469–399 BC 10
11
Adding time! 11
12
Reasoning over Time
Often, we want to reason about a sequence of observations: speech recognition, robot localization, user attention, medical monitoring.
Need to introduce time into our models.
Basic approach: hidden Markov models (HMMs). More general: dynamic Bayes' nets.
13
Markov Model 13
14
Markov Models
A Markov model is a chain-structured BN: X_1 → X_2 → X_3 → X_4 → …
Each node is identically distributed (stationarity). The value of X at a given time is called the state.
Parameters: the transition probabilities (or dynamics) P(X_t | X_{t-1}) specify how the state evolves over time (plus the initial probabilities P(X_1)).
15
Conditional Independence
Basic conditional independence: the past and the future are independent given the present; each time step depends only on the previous one.
This is called the (first-order) Markov property (in symbols below).
Note that the chain is just a (growing) BN; we can always use generic BN reasoning on it if we truncate the chain at a fixed length.
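The first-order Markov property stated on the slide, written out:
P(X_{t+1} | X_1, …, X_t) = P(X_{t+1} | X_t)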
16
Example: Markov Chain
Weather: states X = {rain, sun}.
Transitions: from sun, stay sunny with probability 0.9, switch to rain with probability 0.1 (the full transition diagram is not recoverable from the extracted text).
Initial distribution: 1.0 sun.
What's the probability distribution after one step? (Answer below.)
This is a CPT, not a BN!
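The answer to the slide's question, a one-line computation under the transition reading above:
P(X_2 = sun) = P(sun | sun) · P(X_1 = sun) = 0.9·1.0 = 0.9, and P(X_2 = rain) = 0.1.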
17
Mini-Forward Algorithm
Question: what is the probability of being in state x at time t?
Slow answer: enumerate all sequences of length t that end in x, and add up their probabilities.
18
Mini-Forward Algorithm
Question: what's P(X) on some day t? This is an instance of variable elimination: forward simulation, pushing the distribution from each day to the next (the sun/rain trellis figure is not shown). See the recurrence and sketch below.
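The forward-simulation recurrence, plus a minimal Python sketch. The recurrence is the standard one; the numbers in the example use the 0.9/0.1 from-sun row read off the earlier slide together with an assumed 0.3/0.7 from-rain row, which does not appear in the extracted text.
P(x_t) = Σ_{x_{t-1}} P(x_t | x_{t-1}) · P(x_{t-1})

def mini_forward(init, transition, t):
    """Push an initial distribution t steps through a Markov chain.

    init:       dict state -> P(X_1 = state)
    transition: dict (prev, next) -> P(X_t = next | X_{t-1} = prev)
    Returns the distribution over X_t as a dict.
    """
    dist = dict(init)
    states = list(init)
    for _ in range(t - 1):
        dist = {x: sum(transition[(xp, x)] * dist[xp] for xp in states)
                for x in states}
    return dist

# Weather example: 0.9/0.1 from sun comes from the slide; the from-rain
# row (0.3/0.7) is an assumption for illustration only.
T = {("sun", "sun"): 0.9, ("sun", "rain"): 0.1,
     ("rain", "sun"): 0.3, ("rain", "rain"): 0.7}
print(mini_forward({"sun": 1.0, "rain": 0.0}, T, 2))   # {'sun': 0.9, 'rain': 0.1}
print(mini_forward({"sun": 1.0, "rain": 0.0}, T, 50))  # approaches the stationary distribution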
19
Example
From an initial observation of sun, and from an initial observation of rain: the distributions P(X_1), P(X_2), P(X_3), …, P(X_∞) (bar charts not shown; both sequences end up at the same distribution, which is the point of the next slide).
20
Stationary Distributions
If we simulate the chain long enough, what happens? Uncertainty accumulates; eventually, we have no idea what the state is!
Stationary distributions: for most chains, the distribution we end up in is independent of the initial distribution. It is called the stationary distribution of the chain.
Usually, we can only predict a short time out.
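The defining condition, not spelled out on the slide: a stationary distribution π is a fixed point of the transition update,
π(x) = Σ_{x'} P(x | x') · π(x')  for every state x, together with the normalization Σ_x π(x) = 1.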
21
Web Link Analysis
PageRank over a web graph: each web page is a state.
Initial distribution: uniform over pages.
Transitions: with probability c, jump uniformly to a random page (dotted lines, not all shown); with probability 1 - c, follow a random outlink (solid lines).
Stationary distribution: will spend more time on highly reachable pages, e.g. many ways to get to the Acrobat Reader download page; somewhat robust to link spam.
Google 1.0 returned the set of pages containing all your keywords in decreasing rank; now all search engines use link analysis along with many other factors (PageRank is actually getting less important over time). (A power-iteration sketch follows.)
Page, Lawrence; Brin, Sergey; Motwani, Rajeev; Winograd, Terry (1999). The PageRank Citation Ranking: Bringing Order to the Web. Technical Report, Stanford InfoLab.
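A minimal power-iteration sketch of the random-surfer chain described above. The toy graph, the jump probability c = 0.15, and the iteration count are illustrative assumptions, not values from the slide.

def pagerank(links, c=0.15, iters=100):
    """Stationary distribution of the random-surfer Markov chain.

    links: dict page -> list of outlinked pages.
    With probability c the surfer jumps to a uniformly random page,
    otherwise it follows a uniformly random outlink.
    """
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}      # start uniform
    for _ in range(iters):
        new = {p: c / n for p in pages}     # random-jump mass
        for p in pages:
            out = links[p] or pages         # dangling page: jump anywhere
            share = (1 - c) * rank[p] / len(out)
            for q in out:
                new[q] += share
        rank = new
    return rank

print(pagerank({"a": ["b"], "b": ["a", "c"], "c": ["a"]}))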
22
Hidden Markov Model 22
23
Hidden Markov Models
Markov chains are not so useful for most agents: eventually you don't know anything anymore. We need observations to update our beliefs.
Hidden Markov models (HMMs): an underlying Markov chain over states S, and you observe outputs (effects) at each time step.
As a Bayes' net: X_1 → X_2 → … → X_5, with each state X_t emitting an observation E_t (X_t → E_t).
24
Example
An HMM is defined by:
Initial distribution: P(X_1)
Transitions: P(X_t | X_{t-1})
Emissions: P(E_t | X_t)
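Together these three pieces give the joint distribution of the Bayes' net shown above (the standard HMM factorization, not written out on the extracted slide):
P(X_{1:T}, E_{1:T}) = P(X_1) · Π_{t=2}^{T} P(X_t | X_{t-1}) · Π_{t=1}^{T} P(E_t | X_t)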
25
Ghostbusters HMM
P(X_1) = uniform (1/9 in each cell of the grid).
P(X | X') = usually move clockwise, but sometimes move in a random direction or stay in place.
P(R_{ij} | X) = same sensor model as before: red means close, green means far away.
(The grids illustrating P(X_1) and P(X | X' = …), with entries such as 1/6, 1/2, and 0, are not recoverable here.)
26
Conditional Independence
HMMs have two important independence properties:
Markov hidden process: the future depends on the past only through the present.
The current observation is independent of everything else given the current state.
Quiz: does this mean that observations are independent given no evidence? [No, they are correlated by the hidden state.]
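The two properties in symbols (standard HMM statements, added here for reference):
(X_{t+1:T}, E_{t+1:T}) ⊥ (X_{1:t-1}, E_{1:t-1}) | X_t   (Markov hidden process)
E_t ⊥ all other states and observations | X_t            (current observation given current state)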
27
Real HMM Examples
Speech recognition HMMs: observations are acoustic signals (continuous valued); states are specific positions in specific words (so, tens of thousands).
Machine translation HMMs: observations are words (tens of thousands); states are translation options.
Robot tracking: observations are range readings (continuous); states are positions on a map (continuous).
28
Filtering / Monitoring
Filtering, or monitoring, is the task of tracking the distribution B(X) (the belief state) over time.
We start with B(X) in an initial setting, usually uniform. As time passes, or as we get observations, we update B(X).
The Kalman filter was invented in the 1960s and first implemented as a method of trajectory estimation for the Apollo program.
29
Example: Robot Localization (t = 0)
Sensor model: never more than 1 mistake. Motion model: may fail to execute the action with small probability.
(Probability map not shown.) Example from Michael Pfeiffer.
30
Example: Robot Localization (t = 1). (Probability map not shown.)
31
Example: Robot Localization (t = 2). (Probability map not shown.)
32
Example: Robot Localization (t = 3). (Probability map not shown.)
33
Example: Robot Localization (t = 4). (Probability map not shown.)
34
Example: Robot Localization (t = 5). (Probability map not shown.)
35
Inference Recap: Simple Cases
Two one-node updates: a single observation (X_1 → E_1) and a single time step (X_1 → X_2). (Derivations not shown.)
36
Passage of Time
Assume we have a current belief P(X_t | evidence to date). Then, after one time step passes:
P(X_{t+1} | e_{1:t}) = Σ_{x_t} P(X_{t+1} | x_t) · P(x_t | e_{1:t})
Or, compactly: B'(X_{t+1}) = Σ_{x_t} P(X_{t+1} | x_t) · B(x_t)
Basic idea: beliefs get "pushed" through the transitions.
With the "B" notation, we have to be careful about which time step t the belief is about, and which evidence it includes.
37
Example: Passage of Time
As time passes, uncertainty "accumulates". Belief maps at T = 1, T = 2, T = 5 (not shown). Transition model: ghosts usually go clockwise.
38
Observation
Assume we have a current belief P(X_t | previous evidence), B'(X_t) = P(X_t | e_{1:t-1}). Then:
P(X_t | e_{1:t}) ∝ P(e_t | X_t) · P(X_t | e_{1:t-1})
Or, compactly: B(X_t) ∝ P(e_t | X_t) · B'(X_t)
Basic idea: beliefs are reweighted by the likelihood of the evidence. Unlike the passage of time, we have to renormalize.
39
Example: Observation
As we get observations, beliefs get reweighted and uncertainty "decreases". Belief maps before and after the observation (not shown).
40
Example HMM
42
The Forward Algorithm
We are given evidence at each time step and want to know the belief P(X_t | e_{1:t}).
We can derive the following updates over f_t(x_t) = P(x_t, e_{1:t}):
f_t(x_t) = P(e_t | x_t) · Σ_{x_{t-1}} P(x_t | x_{t-1}) · f_{t-1}(x_{t-1})
We can normalize as we go if we want to have P(x | e) at each time step, or just once at the end.
43
Online Belief Updates
Every time step, we start with the current P(X | evidence).
We update for time: P(X_{t+1} | e_{1:t}) = Σ_{x_t} P(X_{t+1} | x_t) · P(x_t | e_{1:t})
We update for evidence: P(X_{t+1} | e_{1:t+1}) ∝ P(e_{t+1} | X_{t+1}) · P(X_{t+1} | e_{1:t})
The forward algorithm does both at once (and doesn't normalize).
Problem: space is O(|X|) and time is O(|X|^2) per time step. (A minimal sketch follows.)
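A minimal Python sketch of one online update (elapse time, then observe), matching the two formulas above. The toy weather/umbrella numbers at the bottom are illustrative assumptions, not values from the slides.

def elapse_time(belief, transition, states):
    """P(X_{t+1} | e_{1:t}) = sum over x of P(X_{t+1} | x) * P(x | e_{1:t})."""
    return {x2: sum(transition[(x1, x2)] * belief[x1] for x1 in states)
            for x2 in states}

def observe(belief, emission, evidence, states):
    """P(X_t | e_{1:t}) proportional to P(e_t | X_t) * P(X_t | e_{1:t-1}), renormalized."""
    unnorm = {x: emission[(x, evidence)] * belief[x] for x in states}
    z = sum(unnorm.values())
    return {x: p / z for x, p in unnorm.items()}

# Illustrative example (all numbers assumed): rain/sun weather, umbrella observations.
states = ["rain", "sun"]
T = {("rain", "rain"): 0.7, ("rain", "sun"): 0.3,
     ("sun", "rain"): 0.3, ("sun", "sun"): 0.7}
E = {("rain", "umbrella"): 0.9, ("rain", "no umbrella"): 0.1,
     ("sun", "umbrella"): 0.2, ("sun", "no umbrella"): 0.8}
b = {"rain": 0.5, "sun": 0.5}
b = observe(elapse_time(b, T, states), E, "umbrella", states)
print(b)  # belief after one elapse-time + observe step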
44
Next Lecture: Sampling! (Particle Filtering) 44
46
Filtering
Filtering is the inference process of finding a distribution over X_T given e_1 through e_T: P(X_T | e_{1:T}).
We first compute P(X_1 | e_1).
For each t from 2 to T, we have P(X_{t-1} | e_{1:t-1}), and then:
Elapse time: compute P(X_t | e_{1:t-1}).
Observe: compute P(X_t | e_{1:t-1}, e_t) = P(X_t | e_{1:t}).