Presentation transcript:

Probabilistic reasoning over time

So far, we’ve mostly dealt with episodic environments
– One exception: games with multiple moves
In particular, the Bayesian networks we’ve seen so far describe static situations
– Each random variable gets a single fixed value in a single problem instance
Now we consider the problem of describing probabilistic environments that evolve over time
– Examples: robot localization, tracking, speech, …

Hidden Markov Models

At each time slice t, the state of the world is described by an unobservable state variable X_t and an observable evidence variable E_t.

Transition model: distribution over the current state given the whole past history:
P(X_t | X_0, …, X_t-1) = P(X_t | X_0:t-1)

Observation model: P(E_t | X_0:t, E_1:t-1)

[Figure: the model as a Bayesian network unrolled over time, X_0 → X_1 → X_2 → … → X_t, with each state X_i emitting an observation E_i.]

Hidden Markov Models

Markov assumption (first order)
– The current state is conditionally independent of all the other states given the state in the previous time step
– What does P(X_t | X_0:t-1) simplify to?
  P(X_t | X_0:t-1) = P(X_t | X_t-1)

Markov assumption for observations
– The evidence at time t depends only on the state at time t
– What does P(E_t | X_0:t, E_1:t-1) simplify to?
  P(E_t | X_0:t, E_1:t-1) = P(E_t | X_t)

[Figure: the same unrolled network as on the previous slide.]
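These two assumptions are what make the model tractable: simulating the process forward needs only the two local models P(X_t | X_t-1) and P(E_t | X_t). A minimal sketch (not from the slides; the toy states, observations, and probabilities are invented):

```python
import random

# Toy first-order HMM; states, observations, and all numbers are invented.
transition = {"s1": {"s1": 0.8, "s2": 0.2},   # P(X_t | X_t-1)
              "s2": {"s1": 0.4, "s2": 0.6}}
observation = {"s1": {"a": 0.9, "b": 0.1},    # P(E_t | X_t)
               "s2": {"a": 0.3, "b": 0.7}}

def sample(dist):
    """Draw one outcome from a {value: probability} dictionary."""
    r, acc = random.random(), 0.0
    for value, p in dist.items():
        acc += p
        if r < acc:
            return value
    return value  # guard against floating-point round-off

def simulate(x0, steps):
    """Roll the chain forward; each step looks only at the previous state."""
    x, trace = x0, []
    for _ in range(steps):
        x = sample(transition[x])    # Markov assumption on states
        e = sample(observation[x])   # Markov assumption on observations
        trace.append((x, e))
    return trace

print(simulate("s1", 5))
```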

Example

[Figure: a concrete instance of the model; only the node labels “state” and “evidence” survive from the original diagram.]

Example

[Figure: the same state/evidence diagram, annotated with its transition model and observation model; the numeric entries did not survive extraction.]

An alternative visualization

[The slide renders the model over two Boolean variables R and U as conditional probability tables: a transition model P(R_t | R_t-1) tabulated over R_t-1 ∈ {T, F}, and an observation model P(U_t | R_t) tabulated over R_t ∈ {T, F}. The numeric entries did not survive extraction.]
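As a sketch, the tables can be encoded as nested dictionaries. The numbers below are the canonical values of the rain/umbrella example from Russell & Norvig; that is an assumption, since the slide’s own entries were lost:

```python
# Transition model P(R_t | R_t-1); outer key = R_t-1, inner key = R_t.
P_R = {True:  {True: 0.7, False: 0.3},
       False: {True: 0.3, False: 0.7}}

# Observation model P(U_t | R_t); outer key = R_t, inner key = U_t.
P_U = {True:  {True: 0.9, False: 0.1},
       False: {True: 0.2, False: 0.8}}

# Reading the tables: probability of an umbrella sighting given rain.
print(P_U[True][True])  # 0.9
```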

Another example

States: X = {home, office, cafe}
Observations: E = {sms, facebook, …}

Slide credit: Andy White
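A hypothetical encoding of this three-state model, analogous to the tables above. Every probability here is invented for illustration (the slide gives only the sets), and "email" is a guess for the third observation, which was stripped from the transcript:

```python
# Hypothetical encoding of the three-state model; all numbers are invented.
states = ["home", "office", "cafe"]
observations = ["sms", "facebook", "email"]  # "email" is a guess for the missing item

# P(X_t | X_t-1): outer key = previous location, inner key = next location.
transition = {"home":   {"home": 0.6, "office": 0.3, "cafe": 0.1},
              "office": {"home": 0.2, "office": 0.6, "cafe": 0.2},
              "cafe":   {"home": 0.3, "office": 0.4, "cafe": 0.3}}

# P(E_t | X_t): which channel you use depends on where you are.
emission = {"home":   {"sms": 0.2, "facebook": 0.6, "email": 0.2},
            "office": {"sms": 0.1, "facebook": 0.1, "email": 0.8},
            "cafe":   {"sms": 0.5, "facebook": 0.3, "email": 0.2}}

# Sanity check: every conditional distribution sums to 1.
assert all(abs(sum(d.values()) - 1.0) < 1e-9
           for d in list(transition.values()) + list(emission.values()))
```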

The Joint Distribution

Transition model: P(X_t | X_0:t-1) = P(X_t | X_t-1)
Observation model: P(E_t | X_0:t, E_1:t-1) = P(E_t | X_t)

How do we compute the full joint P(X_0:t, E_1:t)? By the chain rule, the two Markov assumptions reduce it to a product of the local models:
P(X_0:t, E_1:t) = P(X_0) ∏ i=1..t P(X_i | X_i-1) P(E_i | X_i)
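A direct translation of this factorization into code, reusing the P_R / P_U tables sketched earlier (so the numbers remain the assumed textbook values):

```python
def joint_probability(prior, transition, observation, xs, es):
    """P(X_0:t, E_1:t) = P(x_0) * prod_i P(x_i | x_i-1) * P(e_i | x_i).

    xs = [x_0, ..., x_t] is a state sequence; es = [e_1, ..., e_t] is the
    matching evidence sequence (one element shorter than xs).
    """
    p = prior[xs[0]]
    for i in range(1, len(xs)):
        p *= transition[xs[i - 1]][xs[i]]   # P(x_i | x_i-1)
        p *= observation[xs[i]][es[i - 1]]  # P(e_i | x_i)
    return p

prior = {True: 0.5, False: 0.5}  # an assumed uniform prior P(R_0)
# P(R_0=T, R_1=T, R_2=T, U_1=T, U_2=T) = 0.5 * 0.7 * 0.9 * 0.7 * 0.9
print(joint_probability(prior, P_R, P_U, [True, True, True], [True, True]))  # ≈ 0.19845
```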

HMM Learning and Inference

Inference tasks
– Filtering: what is the distribution over the current state X_t given all the evidence so far, e_1:t?
– Smoothing: what is the distribution of some past state X_k (k < t) given the entire observation sequence e_1:t?
– Evaluation: compute the probability of a given observation sequence e_1:t
– Decoding: what is the most likely state sequence X_0:t given the observation sequence e_1:t?

Learning
– Given a training sample of sequences, learn the model parameters (transition and emission probabilities)
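Of these tasks, filtering has the simplest recursive solution, the forward algorithm: predict through the transition model, then condition on the new evidence. A sketch run on the umbrella tables above (again, the numbers are the assumed textbook values):

```python
def forward_step(belief, e, transition, observation):
    """One filtering update: from P(X_t-1 | e_1:t-1) to P(X_t | e_1:t)."""
    states = belief.keys()
    # Predict: push the belief through the transition model.
    predicted = {x: sum(belief[xp] * transition[xp][x] for xp in states)
                 for x in states}
    # Update: weight by the likelihood of the new evidence, then normalize.
    unnorm = {x: observation[x][e] * predicted[x] for x in states}
    z = sum(unnorm.values())
    return {x: p / z for x, p in unnorm.items()}

# Two consecutive umbrella sightings, starting from a uniform prior.
belief = {True: 0.5, False: 0.5}
for e in [True, True]:
    belief = forward_step(belief, e, P_R, P_U)
print(belief)  # P(R_2 = T | u_1, u_2) ≈ 0.883
```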