
HMMs and Particle Filters

Observations and Latent States. Plain Markov models don't get used much in AI, because they assume that you know exactly what state you are in at each time step. This is rarely true for AI agents. Instead, we will say that the agent has a set of possible latent states – states that are not directly observed or known to the agent. In addition, the agent has sensors that allow it to sense some aspects of the environment, that is, to take measurements or observations.

Hidden Markov Models. Suppose you are the parent of a college student, and would like to know how studious your child is. You can't observe them at all times, but you can periodically call and see if your child answers. (Diagram: a chain of hidden states H1, H2, H3, …, each taking the value Sleep or Study, with an observation O1, O2, O3 below each state recording whether the child answers the call.)

Hidden Markov Models. Here's the same model, with the probabilities written out in tables:
Prior P(H1): P(H1 = Sleep) = 0.5, P(H1 = Study) = 0.5
Transition P(H_t+1 | H_t): P(Sleep | Sleep) = 0.6, P(Study | Sleep) = 0.4; P(Sleep | Study) = 0.5, P(Study | Study) = 0.5
Emission P(O_t | H_t): P(Answer | Sleep) = 0.1, P(NoAnswer | Sleep) = 0.9; P(Answer | Study) = 0.8, P(NoAnswer | Study) = 0.2
(The same transition and emission tables apply at every time step.)

Hidden Markov Models. HMMs (and MMs) are a special type of Bayes Net, so everything you have learned about BNs applies here. (The slide shows the same chain diagram and probability tables as above.)

Quick Review of BNs for HMMs. (Diagram: the two-node fragments H1 → O1 and H1 → H2 from the chain above.)

Hidden Markov Models. (Diagram: the single fragment H1 → O1, shown with the prior, transition, and emission tables from above.)

Hidden Markov Models. (Diagram: the two-step fragment H1 → H2 with observations O1 and O2, using the same tables.)
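As an example of the kind of calculation these fragments support (a worked step of our own, using the tables above): before any calls are made, the marginal probability that the child is asleep at step 2 is P(H2 = Sleep) = P(H1 = Sleep) P(Sleep | Sleep) + P(H1 = Study) P(Sleep | Study) = 0.5 × 0.6 + 0.5 × 0.5 = 0.55.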

Quiz: Hidden Markov Models. Suppose a parent calls twice, once at time step 1 and once at time step 2. The first time, the child does not answer, and the second time the child does. Now what is P(H2 = Sleep)?

Answer: Hidden Markov Models. It's a pain to calculate by enumeration (a sketch of the calculation is below).
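A minimal sketch, not from the slides themselves, of the enumeration the answer refers to; it uses the Sleep/Study probabilities from the tables above, and the variable names are our own:

```python
# Enumeration for P(H2 = Sleep | O1 = NoAnswer, O2 = Answer),
# using the CPTs from the slides (Sleep/Study states, Answer/NoAnswer observations).

prior = {"Sleep": 0.5, "Study": 0.5}                      # P(H1)
trans = {"Sleep": {"Sleep": 0.6, "Study": 0.4},           # P(H_t+1 | H_t)
         "Study": {"Sleep": 0.5, "Study": 0.5}}
emit = {"Sleep": {"Answer": 0.1, "NoAnswer": 0.9},        # P(O_t | H_t)
        "Study": {"Answer": 0.8, "NoAnswer": 0.2}}

def joint(h1, h2, o1, o2):
    """P(H1=h1, O1=o1, H2=h2, O2=o2) from the chain factorization."""
    return prior[h1] * emit[h1][o1] * trans[h1][h2] * emit[h2][o2]

o1, o2 = "NoAnswer", "Answer"
# Sum out H1 for each value of H2, then normalize over H2.
unnorm = {h2: sum(joint(h1, h2, o1, o2) for h1 in prior) for h2 in prior}
total = sum(unnorm.values())
print({h2: p / total for h2, p in unnorm.items()})
# -> P(H2 = Sleep | evidence) ≈ 0.148, P(H2 = Study | evidence) ≈ 0.852
```

The normalization constant (0.216 here) is P(O1 = NoAnswer, O2 = Answer); dividing by it conditions on the evidence.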

Quiz: Complexity of Enumeration for HMMs

Answer: Complexity of Enumeration for HMMs. Enumeration sums over every joint assignment of the hidden variables, so with S possible states per time step and T time steps the sum has on the order of S^T terms; the cost grows exponentially with the length of the sequence. That is what motivates the specialized algorithm on the next slide.

Specialized Inference Algorithm: Dynamic Programming. Instead of enumerating every hidden-state sequence, we can compute the filtering distribution one time step at a time, reusing the result from the previous step (the forward algorithm); this brings the cost down to O(T × S^2).
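A minimal sketch of that dynamic-programming recursion (our own code, not the slides'), applied to the Sleep/Study model above; it reproduces the quiz answer:

```python
def forward_filter(observations, prior, trans, emit):
    """Forward algorithm: P(H_t | o_1..o_t) for each t, in O(T * S^2) time."""
    states = list(prior)
    # Initialize with the prior, reweighted by the first observation.
    belief = {h: prior[h] * emit[h][observations[0]] for h in states}
    norm = sum(belief.values())
    belief = {h: p / norm for h, p in belief.items()}
    beliefs = [belief]
    for o in observations[1:]:
        # Predict: push the current belief through the transition model.
        predicted = {h2: sum(belief[h1] * trans[h1][h2] for h1 in states) for h2 in states}
        # Update: reweight by the observation likelihood and normalize.
        updated = {h: predicted[h] * emit[h][o] for h in states}
        norm = sum(updated.values())
        belief = {h: p / norm for h, p in updated.items()}
        beliefs.append(belief)
    return beliefs

# Same query as the quiz: no answer at step 1, answer at step 2.
prior = {"Sleep": 0.5, "Study": 0.5}
trans = {"Sleep": {"Sleep": 0.6, "Study": 0.4}, "Study": {"Sleep": 0.5, "Study": 0.5}}
emit = {"Sleep": {"Answer": 0.1, "NoAnswer": 0.9}, "Study": {"Answer": 0.8, "NoAnswer": 0.2}}
print(forward_filter(["NoAnswer", "Answer"], prior, trans, emit)[-1])
# -> ≈ {'Sleep': 0.148, 'Study': 0.852}, matching the enumeration above
```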

Demo of HMM Robot Localization. YouTube demos from Udacity.com's AI course:
– HMM robot localization: iLy_rgY&feature=player_embedded
– 1-dimensional robot demo: EnYq8&feature=player_embedded

Particle Filter Demos
– Real robot localization with a particle filter: rc&feature=player_embedded
– 1-dimensional case: CzU&feature=player_embedded

Particle Filter Algorithm
Inputs:
– a set of particles S, each with a location s_i.loc and a weight s_i.w
– a control vector u (where the robot should move next)
– a measurement vector z (sensor readings)
Outputs:
– a new set of particles S', for the next iteration

Particle Filter Algorithm. The update loop: move each particle according to the control u, weight it by how likely the measurement z is from that particle's location, and resample a new particle set in proportion to those weights.
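A minimal sketch of one such update under illustrative assumptions (a 1-D robot whose sensor noisily measures its own location; the Gaussian noise parameters and helper names are our own, not from the slides):

```python
import random, math

def particle_filter_step(particles, u, z, motion_noise=0.2, meas_noise=0.5):
    """One particle-filter update: move by control u, weight by measurement z, resample.
    particles: list of dicts with keys 'loc' and 'w' (matching the Inputs slide).
    Assumes a 1-D robot whose sensor gives a noisy reading of its location."""
    # 1. Motion update: apply the control to each particle, with some noise.
    moved = [p["loc"] + u + random.gauss(0.0, motion_noise) for p in particles]
    # 2. Measurement update: weight each particle by the likelihood of z at its new location.
    weights = [math.exp(-(z - loc) ** 2 / (2 * meas_noise ** 2)) for loc in moved]
    total = sum(weights) or 1.0          # guard against all-zero weights
    weights = [w / total for w in weights]
    # 3. Resample: draw the new particle set in proportion to the weights.
    new_locs = random.choices(moved, weights=weights, k=len(particles))
    return [{"loc": loc, "w": 1.0 / len(particles)} for loc in new_locs]

# Example: 100 particles spread over [0, 10]; the robot is commanded to move +1
# and the sensor then reads 4.2.
S = [{"loc": random.uniform(0, 10), "w": 1.0 / 100} for _ in range(100)]
S_prime = particle_filter_step(S, u=1.0, z=4.2)
```

Resampling concentrates particles in high-likelihood regions, which is what lets a modest number of particles track the belief over time.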