Slide 1: Probability and Time: Hidden Markov Models (HMMs)
Computer Science CPSC 322, Lecture 32 (Textbook Chpt 6.5.2)
Nov 25, 2013

Slide 2: Lecture Overview
- Recap: Markov Models and Markov Chains
- Hidden Markov Models

Slide 3: Answering Queries under Uncertainty
Models: Probability Theory; Static Belief Networks & Variable Elimination; Dynamic Bayesian Networks; Markov Chains; Hidden Markov Models
Applications: spam filters, diagnostic systems (e.g., medicine), natural language processing, student tracing in tutoring systems, monitoring (e.g., credit cards), bioinformatics, robotics

Slide 4: Stationary Markov Chain (SMC)
A stationary Markov chain satisfies, for all t > 0:
- Markov assumption: P(S_{t+1} | S_0, …, S_t) = P(S_{t+1} | S_t)
- Stationarity: P(S_{t+1} | S_t) is the same for every t
So we only need to specify P(S_0) and P(S_{t+1} | S_t).
- Simple model, easy to specify
- Often the natural model
- The network can extend indefinitely
Variations of SMCs are at the core of most Natural Language Processing (NLP) applications!
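As an illustrative sketch (not from the slides): a stationary Markov chain needs only an initial distribution and one transition table, and can then be unrolled for as many steps as we like. The two states and all probabilities below are made up purely for illustration.

```python
import numpy as np

# Hypothetical two-state chain over S in {"rain", "sun"}; numbers are illustrative only.
states = ["rain", "sun"]
P_S0 = np.array([0.5, 0.5])            # P(S_0)
P_trans = np.array([[0.7, 0.3],        # P(S_{t+1} | S_t = rain)
                    [0.2, 0.8]])       # P(S_{t+1} | S_t = sun)

rng = np.random.default_rng(0)
s = rng.choice(2, p=P_S0)              # sample S_0
sequence = [states[s]]
for t in range(10):                    # the chain can extend indefinitely
    s = rng.choice(2, p=P_trans[s])    # sample S_{t+1} given S_t
    sequence.append(states[s])
print(sequence)
```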

Slide 5: Lecture Overview
- Recap: Markov Models and Markov Chains
- Hidden Markov Models

Slide 6: How can we minimally extend Markov chains while maintaining the Markov and stationarity assumptions?
A useful situation to model is one in which the reasoning system does not have access to the states, but can make observations that give some information about the current state.

Slide 7: Clicker question (options; see the model on the next slide)
A. 2 x h   B. h x h   C. k x h   D. k x k

Slide 8: Hidden Markov Model
A Hidden Markov Model (HMM) starts with a Markov chain and adds a noisy observation about the state at each time step:
- P(S_0) specifies the initial conditions
- P(S_{t+1} | S_t) specifies the dynamics
- P(O_t | S_t) specifies the sensor model
|domain(S)| = k, |domain(O)| = h
A. 2 x h   B. h x h   C. k x h   D. k x k

Slide 9: Hidden Markov Model
A Hidden Markov Model (HMM) starts with a Markov chain and adds a noisy observation about the state at each time step:
- P(S_0) specifies the initial conditions
- P(S_{t+1} | S_t) specifies the dynamics
- P(O_t | S_t) specifies the sensor model
|domain(S)| = k, |domain(O)| = h
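To make the three components concrete, here is a minimal sketch (not from the slides) of an HMM represented as NumPy arrays. With k states and h observation values, the tables have k, k x k, and k x h entries respectively; the uniform numbers are placeholders.

```python
import numpy as np

k, h = 3, 2                              # assumed sizes, for illustration only
P_S0 = np.full(k, 1.0 / k)               # P(S_0): initial conditions, k entries
P_dyn = np.full((k, k), 1.0 / k)         # P(S_{t+1} | S_t): dynamics, k x k entries
P_obs = np.full((k, h), 1.0 / h)         # P(O_t | S_t): sensor model, k x h entries

# Each row of P_dyn and P_obs is a distribution and must sum to 1.
assert np.allclose(P_dyn.sum(axis=1), 1) and np.allclose(P_obs.sum(axis=1), 1)
```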

Slide 10: Example: Localization for a "Pushed Around" Robot
- Localization (where am I?) is a fundamental problem in robotics
- Suppose a robot is in a circular corridor with 16 locations
- There are four doors, at positions 2, 4, 7, and 11
- The robot initially doesn't know where it is
- The robot is pushed around; after a push it can stay in the same location or move one step left or right
- The robot has a noisy sensor telling it whether it is in front of a door

Slide 11: This scenario can be represented as…
Example stochastic dynamics P(Loc_{t+1} | Loc_t): when pushed, the robot stays in the same location with p = 0.2, and moves one step left or right with equal probability (0.4 each).
Clicker question: which distribution over Loc_{t+1} is correct when Loc_t = 10? (options A, B, C shown as plots on the slide)

Slide 12: This scenario can be represented as…
Example stochastic dynamics P(Loc_{t+1} | Loc_t): when pushed, the robot stays in the same location with p = 0.2, and moves one step left or right with equal probability (0.4 each).
Initial distribution P(Loc_1): the robot initially doesn't know where it is (uniform over the 16 locations).

Slide 13: This scenario can be represented as…
Example noisy sensor P(O_t | Loc_t), telling the robot whether it is in front of a door:
- If it is in front of a door: P(O_t = T) = 0.8
- If it is not in front of a door: P(O_t = T) = 0.1
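As a sketch of how the numbers on Slides 10 to 13 could be encoded (the 0.4 left/right values follow from "equal probability" after the 0.2 stay probability):

```python
import numpy as np

N = 16                       # circular corridor with 16 locations
doors = {2, 4, 7, 11}        # door positions from the slide

# Dynamics P(Loc_{t+1} | Loc_t): stay with 0.2, move one step left or right
# with equal probability (0.4 each, splitting the remaining 0.8).
P_dyn = np.zeros((N, N))
for loc in range(N):
    P_dyn[loc, loc] = 0.2
    P_dyn[loc, (loc - 1) % N] = 0.4
    P_dyn[loc, (loc + 1) % N] = 0.4

# Sensor model P(O_t = T | Loc_t): 0.8 in front of a door, 0.1 otherwise.
P_door = np.array([0.8 if loc in doors else 0.1 for loc in range(N)])

# Initial distribution: the robot doesn't know where it is (uniform).
P_loc0 = np.full(N, 1.0 / N)
```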

Slide 14: Useful inference in HMMs
Localization: the robot starts at an unknown location and is pushed around t times; it wants to determine where it is.
In general: compute the posterior distribution over the current state given all evidence to date, P(S_t | O_0, …, O_t).
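A minimal sketch of how this posterior can be computed, alternating a prediction step with the dynamics and an update step with the sensor likelihood. The function is generic; the commented call assumes the P_loc0, P_dyn, and P_door arrays from the Slide 13 sketch, and the observation sequence is made up.

```python
def filter_belief(prior, observations, P_dyn, P_door):
    """Compute P(Loc_t | o_0, ..., o_t) for a sequence of door observations (True/False).

    prior: NumPy distribution over locations (e.g. the uniform P_loc0).
    Sensor update at each step; a dynamics (push) prediction between observations.
    """
    belief = prior
    for i, obs in enumerate(observations):
        if i > 0:
            belief = belief @ P_dyn                      # predict across one push
        likelihood = P_door if obs else (1.0 - P_door)   # P(O = obs | Loc)
        belief = belief * likelihood                     # weight by the sensor model
        belief = belief / belief.sum()                   # normalize
    return belief

# Hypothetical usage with the Slide 13 arrays:
# belief = filter_belief(P_loc0, [True, False, True], P_dyn, P_door)
```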

Slide 15: Example: Robot Localization
Suppose a robot wants to determine its location based on its actions and its sensor readings.
Three actions: goRight, goLeft, Stay.
This can be represented by an augmented HMM.

Slide 16: Robot Localization: Sensor and Dynamics Model
Sample sensor model: assume the same as for the pushed-around robot.
Sample stochastic dynamics P(Loc_{t+1} | Action_t, Loc_t):
- P(Loc_{t+1} = L | Action_t = goRight, Loc_t = L) = 0.1
- P(Loc_{t+1} = L+1 | Action_t = goRight, Loc_t = L) = 0.8
- P(Loc_{t+1} = L+2 | Action_t = goRight, Loc_t = L) = …
- P(Loc_{t+1} = L' | Action_t = goRight, Loc_t = L) = … for all other locations L'
All location arithmetic is modulo 16.
The action goLeft works the same way, but to the left.

Slide 17: Dynamics Model: More Details
Sample stochastic dynamics P(Loc_{t+1} | Action_t, Loc_t):
- P(Loc_{t+1} = L | Action_t = goRight, Loc_t = L) = 0.1
- P(Loc_{t+1} = L+1 | Action_t = goRight, Loc_t = L) = 0.8
- P(Loc_{t+1} = L+2 | Action_t = goRight, Loc_t = L) = …
- P(Loc_{t+1} = L' | Action_t = goRight, Loc_t = L) = … for all other locations L'
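The overshoot and "all other locations" probabilities are left unspecified on the slides above. The sketch below shows how the action-conditioned table P(Loc_{t+1} | Action_t, Loc_t) could be built; P_OVERSHOOT is a placeholder value (an assumption, chosen only so each row sums to 1), and the Stay action is omitted because the slide does not define its dynamics.

```python
import numpy as np

N = 16
P_STAY, P_FORWARD = 0.1, 0.8          # values from the slide
P_OVERSHOOT = 0.074                   # placeholder: the slide leaves this blank
P_OTHER = (1.0 - P_STAY - P_FORWARD - P_OVERSHOOT) / (N - 3)   # spread over the other 13 locations

def dynamics(action):
    """N x N matrix P(Loc_{t+1} | action, Loc_t) for goRight / goLeft, arithmetic modulo 16."""
    step = {"goRight": 1, "goLeft": -1}[action]
    P = np.full((N, N), P_OTHER)
    for L in range(N):
        P[L, L] = P_STAY                        # stays put
        P[L, (L + step) % N] = P_FORWARD        # moves one step in the intended direction
        P[L, (L + 2 * step) % N] = P_OVERSHOOT  # overshoots by one
    return P
```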

Slide 18: Robot Localization: Additional Sensor
Additional light sensor: there is light coming through an opening at location 10, modeled by P(L_t | Loc_t).
Information from the two sensors is combined: "sensor fusion".
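A small sketch of the fusion step, under the usual HMM assumption that the door and light observations are conditionally independent given the location. The light-sensor probabilities in P_light are placeholders, since the slide only says where the opening is; P_door refers to the array from the Slide 13 sketch.

```python
import numpy as np

N = 16
# Placeholder light-sensor model P(L_t = T | Loc_t): assumed high at the opening
# at location 10 and low elsewhere (exact numbers are not given on the slide).
P_light = np.full(N, 0.1)
P_light[10] = 0.9

def fused_likelihood(door_obs, light_obs, P_door, P_light):
    """P(door_obs, light_obs | Loc): product of the two sensor likelihoods,
    assuming the sensors are conditionally independent given the location."""
    l_door = P_door if door_obs else 1.0 - P_door
    l_light = P_light if light_obs else 1.0 - P_light
    return l_door * l_light

# This fused likelihood would replace the single-sensor likelihood in the filtering update.
```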

Slide 19: The robot starts at an unknown location and must determine where it is.
The model appears to be too ambiguous:
- the sensors are too noisy
- the dynamics are too stochastic to infer anything
But inference actually works pretty well. Let's check with the demo (/localization.html).
You can use standard Bnet inference; however, you typically take advantage of the fact that time moves forward (not covered in 322).

Slide 20: Sample scenario to explore in the demo
- Keep making observations without moving. What happens?
- Then keep moving without making observations. What happens?
- Assume you are at a certain position; alternate moves and observations…
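These two regimes can also be tried with the arrays from the Slide 13 sketch, treating "moving" as one push and "staying" as skipping the prediction step. The expected behaviour (repeated observations sharpen the belief, repeated unobserved moves diffuse it) is what the demo illustrates; the snippet is only a rough approximation of that scenario.

```python
# Assumes P_loc0, P_door, P_dyn from the Slide 13 sketch.
belief = P_loc0
for _ in range(5):                    # observe "door" repeatedly without moving
    belief = belief * P_door
    belief = belief / belief.sum()
print(belief.round(3))                # mass concentrates on the door locations 2, 4, 7, 11

for _ in range(5):                    # then get pushed repeatedly without observing
    belief = belief @ P_dyn
print(belief.round(3))                # the belief diffuses back toward uniform
```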

Slide 21: HMMs have many other applications…
- Natural Language Processing, e.g., speech recognition. States: phonemes / words; Observations: acoustic signal / phonemes
- Bioinformatics: gene finding. States: coding / non-coding region; Observations: DNA sequences
For these problems the critical inference is to find the most likely sequence of states given a sequence of observations.
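That most-likely-sequence query is commonly answered with dynamic programming (the Viterbi algorithm). A compact sketch over the array-based HMM representation used above, included only as illustration (this algorithm is beyond what the slide itself covers):

```python
import numpy as np

def most_likely_states(P_S0, P_dyn, P_obs, observations):
    """Viterbi-style sketch: most likely state sequence given observation indices.

    P_S0: (k,) initial distribution; P_dyn: (k, k) dynamics; P_obs: (k, h) sensor model.
    """
    k, T = len(P_S0), len(observations)
    best = np.zeros((T, k))              # best[t, s] = max prob of any path ending in s at t
    back = np.zeros((T, k), dtype=int)   # back[t, s] = best predecessor of s at time t

    best[0] = P_S0 * P_obs[:, observations[0]]
    for t in range(1, T):
        scores = best[t - 1][:, None] * P_dyn            # scores[s_prev, s]
        back[t] = scores.argmax(axis=0)
        best[t] = scores.max(axis=0) * P_obs[:, observations[t]]

    path = [int(best[-1].argmax())]                      # trace back the best sequence
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```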

Slide 22: Markov Models
- Markov Chains: the simplest possible dynamic Bnet
- Hidden Markov Models: add noisy observations about the state at time t
- Markov Decision Processes (MDPs): add actions and values (rewards)

Slide 23: Learning Goals for Today's Class
You can:
- Specify the components of a Hidden Markov Model (HMM)
- Justify and apply HMMs to robot localization
Clarification on the second learning goal from last class. You can:
- Justify and apply Markov chains to compute the probability of a natural-language sentence (NOT to estimate the conditional probabilities; see slide 18 of that lecture)

Slide 24: Next week (course map: Environment deterministic vs. stochastic; Representation / Reasoning Technique)
Static problems:
- Constraint Satisfaction: Vars + Constraints; techniques: Search, Arc Consistency, SLS
- Query: deterministic: Logics (Search); stochastic: Belief Nets (Var. Elimination), Markov Chains and HMMs
Sequential problems:
- Planning: deterministic: STRIPS (Search); stochastic: Decision Nets (Var. Elimination), Markov Decision Processes (Value Iteration)

Slide 25: Next Class
- One-off decisions (Textbook 9.2)
- Single-Stage Decision Networks (9.2.1)

Slide 26: People
Instructor: Giuseppe Carenini (office CICSR 105)
Teaching Assistants: Kamyar Ardekani, Tatsuro Oya, Xin Ru (Nancy) Wang