Presentation on theme: "HMMs Tamara Berg CS 590-133 Artificial Intelligence Many slides throughout the course adapted from Svetlana Lazebnik, Dan Klein, Stuart Russell, Andrew."— Presentation transcript:

1 HMMs Tamara Berg CS 590-133 Artificial Intelligence Many slides throughout the course adapted from Svetlana Lazebnik, Dan Klein, Stuart Russell, Andrew Moore, Percy Liang, Luke Zettlemoyer, Rob Pless, Kilian Weinberger, Deva Ramanan

2 2048 Game AI (minimax with alpha/beta pruning)

3 Announcements HW2 graded –mean = 14.40, median = 16 Grades to date after 2 HWs, 1 midterm (60% / 40%) –mean = 77.42, median = 81.36

4 Announcements HW4 will be online tonight (due April 3) Midterm 2 next Thursday –Review will be Monday, 5pm in FB009

5 Information Session: BS/MS Program Thursday, March 20, SN011, 5:30-6:30

6 Recap from before spring break

7 Bayes Nets Directed acyclic graph G = (X, E) with nodes X and edges E Each node is associated with a random variable

8 Example Joint probability: P(X1, …, Xn) = Πi P(Xi | Parents(Xi))

9 Independence in a BN Important question about a BN: –Are two nodes independent given certain evidence? –If yes, can prove using algebra (tedious in general) –If no, can prove with a counterexample –Example: the chain X → Y → Z –Question: are X and Z necessarily independent? Answer: no. Example: low pressure (X) causes rain (Y), which causes traffic (Z). X can influence Z, and Z can influence X (via Y) –Addendum: they could still be independent: how?
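
A quick way to settle such questions is brute-force enumeration. Below is a minimal sketch for the chain X → Y → Z (low pressure → rain → traffic); all the probability tables are hypothetical numbers invented for illustration. It confirms that X and Z are dependent, while by construction P(Z | X, Y) = P(Z | Y), so X is independent of Z given Y.

    # Hypothetical CPTs for the chain X -> Y -> Z
    p_x = {1: 0.3, 0: 0.7}                       # P(X): low pressure or not
    p_y_given_x = {1: {1: 0.9, 0: 0.1},          # P(Y | X): rain given pressure
                   0: {1: 0.2, 0: 0.8}}
    p_z_given_y = {1: {1: 0.8, 0: 0.2},          # P(Z | Y): traffic given rain
                   0: {1: 0.1, 0: 0.9}}

    def joint(x, y, z):
        return p_x[x] * p_y_given_x[x][y] * p_z_given_y[y][z]

    p_z = sum(joint(x, y, 1) for x in (0, 1) for y in (0, 1))
    p_z_given_x1 = sum(joint(1, y, 1) for y in (0, 1)) / p_x[1]
    print(p_z, p_z_given_x1)   # 0.387 vs. 0.73: X and Z are dependent

For the addendum: with degenerate tables (e.g. P(Y | X) identical for both values of X), the same check would report X and Z as independent.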

10 Bayes Ball Shade all observed nodes. Place balls at the starting node, let them bounce around according to some rules, and ask whether any ball reaches the goal node. We need to know what happens when a ball arrives at a node on its way to the goal node.

11 (figure-only slide)

12 Inference Inference: calculating some useful quantity from a joint probability distribution Examples: –Posterior probability: P(Q | e1, …, ek) –Most likely explanation: argmax over q of P(q | e1, …, ek) (example network: Burglary, Earthquake, Alarm, JohnCalls, MaryCalls)

13 Inference Algorithms Exact algorithms –Inference by Enumeration –Variable Elimination algorithm Approximate (sampling) algorithms –Forward sampling –Rejection sampling –Likelihood weighting sampling

14 Example BN – Naïve Bayes C = class, F1 … Fn = features We only specify the parameters: –the prior over class labels, P(C) –how each feature depends on the class, P(Fi | C)

15 HW4!

16 Slide from Dan Klein

17–18 (figure-only slides)

19 Percentage of documents in training set labeled as spam/ham Slide from Dan Klein

20 In the documents labeled as spam, occurrence percentage of each word (e.g. # times “the” occurred/# total words). Slide from Dan Klein

21 In the documents labeled as ham, occurrence percentage of each word (e.g. # times “the” occurred/# total words). Slide from Dan Klein

22 Parameter estimation Parameter estimate: P(word | spam) = (# of word occurrences in spam messages) / (total # of words in spam messages) Parameter smoothing: dealing with words that were never seen or were seen too few times –Laplacian smoothing: pretend you have seen every vocabulary word one more time (or m more times) than you actually did: P(word | spam) = (# of word occurrences in spam messages + 1) / (total # of words in spam messages + V), where V is the total number of unique words
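
A minimal sketch of both estimates; the word counts and vocabulary below are hypothetical, not HW data.

    # Hypothetical tally of word counts over spam training messages
    spam_counts = {"the": 310, "free": 42, "viagra": 17}
    total = sum(spam_counts.values())
    vocab = set(spam_counts) | {"hello"}         # "hello" never seen in spam
    V = len(vocab)                               # total number of unique words

    def p_mle(word):
        return spam_counts.get(word, 0) / total              # 0 for unseen words!

    def p_laplace(word, m=1):
        # Pretend every vocabulary word was seen m more times than it was
        return (spam_counts.get(word, 0) + m) / (total + m * V)

    print(p_mle("hello"), p_laplace("hello"))    # 0.0 vs. a small nonzero value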

23 Classification The class that maximizes: c* = argmax over classes c of P(c) · Πi P(fi | c)

24 Summary of model and parameters Naïve Bayes model: P(C, w1, …, wn) = P(C) Πi P(wi | C) Model parameters: –prior: P(spam), P(¬spam) –likelihood of spam: P(w1 | spam), P(w2 | spam), …, P(wn | spam) –likelihood of ¬spam: P(w1 | ¬spam), P(w2 | ¬spam), …, P(wn | ¬spam)

25–28 Classification (slides 25–28 are a progressive build of this list) In practice: –Multiplying lots of small probabilities can result in floating-point underflow –Since log(xy) = log(x) + log(y), we can sum log probabilities instead of multiplying probabilities –Since log is a monotonic function, the class with the highest score does not change –So, what we usually compute in practice is: c* = argmax over c of [ log P(c) + Σi log P(wi | c) ]
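
As a sketch of this recipe (the priors and word likelihoods are hypothetical smoothed estimates):

    import math

    log_prior = {"spam": math.log(0.4), "ham": math.log(0.6)}
    log_lik = {"spam": {"free": math.log(0.05), "hello": math.log(0.01)},
               "ham":  {"free": math.log(0.001), "hello": math.log(0.03)}}

    def classify(words):
        # argmax over c of [ log P(c) + sum_i log P(w_i | c) ]
        scores = {c: log_prior[c] + sum(log_lik[c][w] for w in words)
                  for c in ("spam", "ham")}
        return max(scores, key=scores.get)

    print(classify(["free", "free", "hello"]))   # -> "spam"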

29 Today: Adding time!

30 Reasoning over Time Often, we want to reason about a sequence of observations –Speech recognition –Robot localization –Medical monitoring –DNA sequence recognition Need to introduce time into our models Basic approach: hidden Markov models (HMMs) More general: dynamic Bayes' nets

31 Reasoning over time Basic idea: –Agent maintains a belief state representing probabilities over possible states of the world –From the belief state and a transition model, the agent can predict how the world might evolve in the next time step –From observations and an emission model, the agent can update the belief state

32 Markov Model

33 Markov Models A Markov model is a chain-structured BN (X1 → X2 → X3 → X4 → …) –Value of X at a given time is called the state –As a BN, each edge carries P(Xt | Xt-1) –Parameters: initial probabilities and transition probabilities or dynamics (specify how the state evolves over time)

34 Conditional Independence Basic conditional independence: –Past and future are independent given the present –Each time step only depends on the previous –This is called the (first-order) Markov property Note that the chain is just a (growing) BN –We can always use generic BN reasoning on it if we truncate the chain at a fixed length

35 Example: Markov Chain Weather: –States: X = {rain, sun} –Transitions: stay in the same state with probability 0.9, switch with probability 0.1 –Initial distribution: 1.0 sun –What's the probability distribution after one step? P(X2 = sun) = 0.9, P(X2 = rain) = 0.1

36 Mini-Forward Algorithm Question: What's P(X) on some day t? Forward simulation over the trellis (sun/rain at each step): P(xt) = Σ over xt-1 of P(xt | xt-1) P(xt-1)
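
A minimal forward-simulation sketch, assuming the symmetric stay-0.9 / switch-0.1 transition probabilities read off the diagram above:

    T = {"sun": {"sun": 0.9, "rain": 0.1},
         "rain": {"rain": 0.9, "sun": 0.1}}

    def step(belief):
        # P(x_t) = sum over x_{t-1} of P(x_t | x_{t-1}) P(x_{t-1})
        return {x: sum(T[prev][x] * p for prev, p in belief.items())
                for x in ("sun", "rain")}

    belief = {"sun": 1.0, "rain": 0.0}           # initial distribution
    for t in range(100):
        belief = step(belief)
    print(belief)   # approaches {sun: 0.5, rain: 0.5} regardless of the start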

37 Example (figure: P(X1), P(X2), P(X3), …, P(X∞) starting from an initial observation of sun and from an initial observation of rain; both sequences converge to the same distribution)

38 Stationary Distributions If we simulate the chain long enough: –What happens? –Uncertainty accumulates –Eventually, we have no idea what the state is! Stationary distributions: –For most chains, the distribution we end up in is independent of the initial distribution –Called the stationary distribution of the chain –Usually, can only predict a short time out
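
For a finite chain, the stationary distribution can also be computed directly: it is the left eigenvector of the transition matrix with eigenvalue 1. A sketch with the same assumed 0.9/0.1 weather chain:

    import numpy as np

    P = np.array([[0.9, 0.1],    # rows: from state, columns: to state
                  [0.1, 0.9]])
    vals, vecs = np.linalg.eig(P.T)              # left eigenvectors of P
    pi = np.real(vecs[:, np.argmax(np.isclose(vals, 1.0))])
    pi = pi / pi.sum()                           # normalize to a distribution
    print(pi)                                    # [0.5 0.5]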

39 Hidden Markov Model

40 Hidden Markov Models Markov chains not so useful for most agents –Eventually you don't know anything anymore –Need observations to update your beliefs Hidden Markov models (HMMs) –Underlying Markov chain over states S –You observe outputs (effects) at each time step –As a Bayes' net: X1 → X2 → … → X5, with each Xt emitting Et

41 Example An HMM is defined by: –Initial distribution: P(X1) –Transitions: P(Xt | Xt-1) –Emissions: P(Et | Xt)

42 Conditional Independence HMMs have two important independence properties: –Markov hidden process: future depends on the past via the present –Current observation independent of all else given current state Quiz: does this mean that observations are independent given no evidence? –[No, correlated by the hidden state]

43 Real HMM Examples Speech recognition HMMs: –Observations are acoustic signals (continuous valued) –States are specific positions in specific words (so, tens of thousands) Machine translation HMMs: –Observations are words (tens of thousands) –States are translation options Robot tracking: –Observations are range readings (continuous) –States are positions on a map (continuous)

44 Filtering / Monitoring Filtering, or monitoring, is the task of tracking the distribution B(X) (the belief state) over time We start with B(X) in an initial setting, usually uniform As time passes, or we get observations, we update B(X) The Kalman filter was invented in the 60’s and first implemented as a method of trajectory estimation for the Apollo program

45 Example: Robot Localization t=0 Sensor model: never more than 1 mistake Motion model: may not execute action with small prob. (figure: belief over grid cells, shaded by probability from 0 to 1) Example from Michael Pfeiffer

46 Example: Robot Localization t=1 (figure: updated belief map)

47 Example: Robot Localization t=2 (figure: updated belief map)

48 Example: Robot Localization t=3 (figure: updated belief map)

49 Example: Robot Localization t=4 (figure: updated belief map)

50 Example: Robot Localization t=5 (figure: updated belief map)

51 Filtering Filtering – update belief state given all evidence (observations) to date Recursive estimation – given the result of filtering up to time t, the agent computes the result for t+1 from the new evidence et+1 Filtering consists of two parts: –Projecting the current belief state distribution forward from t to t+1 –Updating the belief using the new evidence

52 Forward Algorithm New belief state = (normalizing constant) × (emission model) × Σ (transition model) × (current belief state): B(Xt+1) = α P(et+1 | Xt+1) Σ over xt of P(Xt+1 | xt) B(xt) The normalizing constant α makes the probabilities sum to 1.
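
A sketch of this two-step update in code, using the rain/sun transition and umbrella emission tables from the variable-elimination example later in the deck:

    T = {"rain": {"rain": 0.9, "sun": 0.1},               # P(X_{t+1} | X_t)
         "sun":  {"rain": 0.2, "sun": 0.8}}
    E = {"rain": {"umbrella": 0.9, "no umbrella": 0.1},   # P(e | X)
         "sun":  {"umbrella": 0.2, "no umbrella": 0.8}}

    def forward(belief, evidence):
        # 1) Elapse time: B'(x') = sum over x of P(x' | x) B(x)
        predicted = {x2: sum(T[x1][x2] * belief[x1] for x1 in belief)
                     for x2 in T}
        # 2) Observe: multiply by P(e | x'), then renormalize
        unnorm = {x: E[x][evidence] * predicted[x] for x in predicted}
        z = sum(unnorm.values())                 # 1/alpha, the normalizer
        return {x: p / z for x, p in unnorm.items()}

    belief = {"rain": 1/3, "sun": 2/3}           # P(X1 | e1 = umbrella)
    print(forward(belief, "no umbrella"))        # ~{rain: 0.087, sun: 0.913}

The final line matches the variable-elimination numbers on slides 86–90 after normalization.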

53 Decoding: Finding the most likely sequence Query: most likely sequence given the evidence: argmax over x1:t of P(x1:t | e1:t) (HMM diagram: X1 → … → X5 with emissions E1 … E5) [video]

54 Speech Recognition with HMMs

55–61 (figure-only slides)

62 Particle Filtering

63 Representation: Particles Our representation of P(X) is now a list of N particles (samples) –P(x) is approximated by the fraction of particles with value x –For now, all particles have a weight of 1 Particles: (3,3) (2,3) (3,3) (3,2) (3,3) (3,2) (2,1) (3,3) (2,1)

64 Particle Filtering: Elapse Time Each particle is moved by sampling its next position based on the transition model This captures the passage of time

65 Particle Filtering: Observe Based on observations: –Similar to likelihood weighting, we down-weight our samples based on the evidence –Note that the weights won't sum to one, since most samples have been down-weighted

66 Particle Filtering: Resample Rather than tracking weighted samples, we resample: N times, we choose from our weighted sample distribution (i.e., draw with replacement) This is equivalent to renormalizing the distribution Now the update is complete for this time step; continue with the next one Old particles: (3,3) w=0.1 (2,1) w=0.9 (3,1) w=0.4 (3,2) w=0.3 (2,2) w=0.4 (1,1) w=0.4 (3,1) w=0.4 (2,1) w=0.9 (3,2) w=0.3 New particles: (2,1) w=1 (3,2) w=1 (2,2) w=1 (2,1) w=1 (1,1) w=1 (3,1) w=1 (2,1) w=1 (1,1) w=1
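
Putting elapse, observe, and resample together: a self-contained sketch in which sample_transition and evidence_weight are stand-in models invented for illustration.

    import random

    def sample_transition(x):
        # Hypothetical motion model: stay put or drift right on a small grid
        return (min(x[0] + random.choice([0, 1]), 3), x[1])

    def evidence_weight(x, e):
        # Hypothetical sensor model: weight falls off with distance to reading e
        return 1.0 / (1 + abs(x[0] - e[0]) + abs(x[1] - e[1]))

    def particle_filter_step(particles, e):
        # 1) Elapse time: move each particle by sampling the transition model
        particles = [sample_transition(x) for x in particles]
        # 2) Observe: weight each particle by the evidence likelihood
        weights = [evidence_weight(x, e) for x in particles]
        # 3) Resample: draw N new unweighted particles with replacement
        return random.choices(particles, weights=weights, k=len(particles))

    particles = [(3, 3), (2, 3), (3, 3), (3, 2), (3, 3), (3, 2), (2, 1), (3, 3), (2, 1)]
    print(particle_filter_step(particles, e=(3, 3)))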

67 Robot Localization In robot localization: –We know the map, but not the robot's position –Observations may be vectors of range-finder readings –State space and readings are typically continuous (works basically like a very fine grid), so we cannot store B(X) –Particle filtering is a main technique [Demos, Fox]

68 SLAM SLAM = Simultaneous Localization And Mapping –We do not know the map or our location –Our belief state is over maps and positions! –Main techniques: Kalman filtering (Gaussian HMMs) and particle methods DP-SLAM, Ron Parr

69 Particle Filters for Localization and Video Tracking

70 End of Part II! Now we’re done with our unit on probabilistic reasoning Last part of class: machine learning & applications

71 Exact Inference

72 The Forward Algorithm We are given evidence at each time and want to know P(Xt | e1:t) We can derive the following recursive update: P(xt+1, e1:t+1) = P(et+1 | xt+1) Σ over xt of P(xt+1 | xt) P(xt, e1:t) We can normalize as we go if we want to have P(x | e) at each time step, or just once at the end…

73 Online Belief Updates Every time step, we start with current P(X | evidence) We update for time: P(xt | e1:t-1) = Σ over xt-1 of P(xt | xt-1) P(xt-1 | e1:t-1) We update for evidence: P(xt | e1:t) ∝ P(et | xt) P(xt | e1:t-1) The forward algorithm does both at once (and doesn't normalize) Problem: space is |X| and time is |X|² per time step

74 Best Explanation Queries Query: most likely sequence: argmax over x1:t of P(x1:t | e1:t) (HMM diagram: X1 → … → X5 with emissions E1 … E5)

75 State Path Trellis State trellis: graph of states and transitions over time –Each arc represents some transition –Each arc has weight P(xt | xt-1) P(et | xt) –Each path is a sequence of states –The product of weights on a path is the sequence's probability Can think of the Forward algorithm as computing sums of all paths in this graph (trellis: sun/rain at each time step)

76 Viterbi Algorithm Dynamic program over the same trellis, replacing sum with max: m(x1) = P(x1) P(e1 | x1); m(xt) = P(et | xt) max over xt-1 of P(xt | xt-1) m(xt-1)
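
A sketch of Viterbi with back-pointers, reusing the umbrella-world tables (the prior is the one from the variable-elimination example):

    T = {"rain": {"rain": 0.9, "sun": 0.1}, "sun": {"rain": 0.2, "sun": 0.8}}
    E = {"rain": {"umbrella": 0.9, "no umbrella": 0.1},
         "sun":  {"umbrella": 0.2, "no umbrella": 0.8}}
    prior = {"rain": 0.1, "sun": 0.9}

    def viterbi(observations):
        states = list(prior)
        # m[x] = probability of the best state sequence ending in x
        m = {x: prior[x] * E[x][observations[0]] for x in states}
        back = []                                # back-pointers per time step
        for e in observations[1:]:
            prev = {x: max(states, key=lambda xp: m[xp] * T[xp][x])
                    for x in states}
            m = {x: m[prev[x]] * T[prev[x]][x] * E[x][e] for x in states}
            back.append(prev)
        best = max(states, key=m.get)            # best final state
        path = [best]
        for prev in reversed(back):              # follow back-pointers
            path.append(prev[path[-1]])
        return list(reversed(path)), m[best]

    print(viterbi(["umbrella", "umbrella", "no umbrella"]))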

77 Example

78 Summary Filtering: –Push belief through time –Incorporate observations Inference: –Forward Algorithm (/ Variable Elimination) –Particle Filter (/ Sampling) Algorithms: –Forward Algorithm (belief given evidence) –Viterbi Algorithm (most likely sequence)

79 Inference Recap: Simple Cases (two base cases: a single observation, X1 → E1, and a single time step, X1 → X2)

80 Passage of Time Assume we have current belief P(X | evidence to date): B(Xt) = P(Xt | e1:t) Then, after one time step passes: P(Xt+1 | e1:t) = Σ over xt of P(Xt+1 | xt) P(xt | e1:t) Or, compactly: B'(Xt+1) = Σ over xt of P(Xt+1 | xt) B(xt) Basic idea: beliefs get "pushed" through the transitions –With the "B" notation, we have to be careful about what time step t the belief is about, and what evidence it includes

81 Observation Assume we have current belief P(X | previous evidence): B'(Xt+1) = P(Xt+1 | e1:t) Then: P(Xt+1 | e1:t+1) ∝ P(et+1 | Xt+1) P(Xt+1 | e1:t) Or: B(Xt+1) ∝ P(et+1 | Xt+1) B'(Xt+1) Basic idea: beliefs are reweighted by the likelihood of the evidence Unlike the passage of time, we have to renormalize

82 Example HMM

83 The Forward Algorithm We are given evidence at each time and want to know P(Xt | e1:t) We can derive the following recursive update: P(xt+1, e1:t+1) = P(et+1 | xt+1) Σ over xt of P(xt+1 | xt) P(xt, e1:t) We can normalize as we go if we want to have P(x | e) at each time step, or just once at the end…

84 Online Belief Updates Every time step, we start with current P(X | evidence) We update for time: P(xt | e1:t-1) = Σ over xt-1 of P(xt | xt-1) P(xt-1 | e1:t-1) We update for evidence: P(xt | e1:t) ∝ P(et | xt) P(xt | e1:t-1) The forward algorithm does both at once Problem: space is |X| and time is |X|² per time step

85 Exact Inference

86 Just like Variable Elimination Chain X1 → X2 → X3 → X4 with evidence e1, ¬e2, e3, e4 Prior P(X1): rain 0.1, sun 0.9 Emission probability P(E | X): rain/umbrella 0.9, rain/no umbrella 0.1, sun/umbrella 0.2, sun/no umbrella 0.8 Joining the prior with the evidence e1 gives P(X1, e1): rain 0.09, sun 0.18

87 Just like Variable Elimination Remaining chain: (X1, e1) → X2 → X3 → X4 with evidence ¬e2, e3, e4 Transition probability P(Xt+1 | Xt): rain→rain 0.9, rain→sun 0.1, sun→rain 0.2, sun→sun 0.8 Joining P(X1, e1) with the transition factor gives P(X2, X1, e1): X1=rain, X2=rain: 0.081; X1=rain, X2=sun: 0.009; X1=sun, X2=rain: 0.036; X1=sun, X2=sun: 0.144

88 Just like Variable Elimination Marginalize out X1: P(X2, e1): rain 0.117 (= 0.081 + 0.036), sun 0.153 (= 0.009 + 0.144) Remaining chain: (X2, e1) → X3 → X4 with evidence ¬e2, e3, e4

89 Just like Variable Elimination Current factor P(X2, e1): rain 0.117, sun 0.153 Next, join with the emission probability P(E | X): rain/umbrella 0.9, rain/no umbrella 0.1, sun/umbrella 0.2, sun/no umbrella 0.8

90 Just like Variable Elimination Multiplying by P(¬e2 | X2) gives P(X2, e1, ¬e2): rain 0.117 × 0.1 = 0.0117, sun 0.153 × 0.8 = 0.1224 Remaining chain: (X2, e1, ¬e2) → X3 → X4 with evidence e3, e4
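
The whole walkthrough fits in a few lines; this sketch reproduces the numbers on slides 86–90.

    prior = {"rain": 0.1, "sun": 0.9}
    T = {"rain": {"rain": 0.9, "sun": 0.1}, "sun": {"rain": 0.2, "sun": 0.8}}
    E = {"rain": {"umbrella": 0.9, "no umbrella": 0.1},
         "sun":  {"umbrella": 0.2, "no umbrella": 0.8}}

    # P(X1, e1): join the prior with the evidence e1 = umbrella
    f1 = {x: prior[x] * E[x]["umbrella"] for x in prior}      # rain 0.09, sun 0.18
    # P(X2, e1): join with the transition factor, then sum out X1
    f2 = {x2: sum(f1[x1] * T[x1][x2] for x1 in f1) for x2 in prior}
    print(f2)                                                 # rain 0.117, sun 0.153
    # P(X2, e1, ¬e2): join with the evidence ¬e2 = no umbrella
    f3 = {x: f2[x] * E[x]["no umbrella"] for x in f2}
    print(f3)                                                 # rain 0.0117, sun 0.1224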

91 Approximate Inference

92 HMMs: MLE Queries HMMs defined by –States X –Observations E –Initial distribution: P(X1) –Transitions: P(Xt | Xt-1) –Emissions: P(Et | Xt) Query: most likely explanation: argmax over x1:t of P(x1:t | e1:t)

93 (figure-only slide)

94 Particle Filtering Sometimes |X| is too big to use exact inference –|X| may be too big to even store B(X) –E.g. X is continuous –|X|² may be too big to do updates Solution: approximate inference –Track samples of X, not all values –Samples are called particles –Time per step is linear in the number of samples –But: the number needed may be large –In memory: list of particles, not states This is how robot localization works in practice (figure: grid of approximate belief values, e.g. 0.0, 0.1, 0.2, 0.5)

