Probabilistic Reasoning over Time


Probabilistic Reasoning over Time CMSC 671 – Fall 2010 Class #19 – Monday, November 8 Russell and Norvig, Sections 15.1–15.2; material from Lise Getoor, Jean-Claude Latombe, and Daphne Koller

Temporal Probabilistic Agent
[Diagram: an agent interacting with its environment, perceiving through sensors and acting through actuators, over successive time steps t1, t2, t3, …]

Time and Uncertainty
- The world changes; we need to track and predict it
- Examples: diabetes management, traffic monitoring
- Basic idea: copy state and evidence variables for each time step
- Model uncertainty in change over time, incorporating new observations as they arrive
- Reminiscent of situation calculus, but for stochastic domains
- Xt: set of unobservable state variables at time t (e.g., BloodSugart, StomachContentst)
- Et: set of evidence variables at time t (e.g., MeasuredBloodSugart, PulseRatet, FoodEatent)
- Assumes discrete time steps

States and Observations
- The process of change is viewed as a series of snapshots, each describing the state of the world at a particular time
- Each time slice is represented by a set of random variables indexed by t:
  - the set of unobservable state variables Xt
  - the set of observable evidence variables Et
- The observation at time t is Et = et for some set of values et
- The notation Xa:b denotes the set of variables from Xa to Xb

Stationary Process / Markov Assumption
- Markov assumption: Xt depends on only some of the previous Xi's
- First-order Markov process: P(Xt|X0:t-1) = P(Xt|Xt-1)
- kth-order: depends on the previous k time steps
- Sensor Markov assumption: P(Et|X0:t, E0:t-1) = P(Et|Xt)
  That is, the agent's observations depend only on the actual current state of the world
- Stationary process assumption: the transition model P(Xt|Xt-1) and the sensor model P(Et|Xt) are time-invariant (i.e., they are the same for all t)
  That is, the changes in the world state are governed by laws that do not themselves change over time
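The two assumptions can be made concrete by simulating from such a model. A minimal sketch, assuming a hypothetical two-state weather process with umbrella-style observations (all state names, observation names, and numbers here are illustrative, chosen to match the example used later in these slides):

```python
import random

# Stationary model: one transition table and one sensor table, reused at
# every time step. Under the first-order Markov assumption the next state
# depends only on the current state; under the sensor Markov assumption
# each observation depends only on the current state.
transition = {"rain": {"rain": 0.7, "sun": 0.3},
              "sun":  {"rain": 0.3, "sun": 0.7}}
sensor = {"rain": {"umbrella": 0.9, "no_umbrella": 0.1},
          "sun":  {"umbrella": 0.2, "no_umbrella": 0.8}}

def sample(dist, rng):
    """Draw one outcome from a discrete distribution {outcome: prob}."""
    r, acc = rng.random(), 0.0
    for outcome, p in dist.items():
        acc += p
        if r < acc:
            return outcome
    return outcome  # guard against floating-point rounding

def simulate(steps, x0, rng):
    """Generate (states, evidence) for `steps` time slices from state x0."""
    states, evidence, x = [], [], x0
    for _ in range(steps):
        x = sample(transition[x], rng)           # Markov assumption
        states.append(x)
        evidence.append(sample(sensor[x], rng))  # sensor Markov assumption
    return states, evidence
```

Because the process is stationary, the same two dictionaries serve every time step; a non-stationary model would need a different table per t.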

Complete Joint Distribution
Given:
- Transition model: P(Xt|Xt-1)
- Sensor model: P(Et|Xt)
- Prior probability: P(X0)
Then we can specify the complete joint distribution:
  P(X0:t, E1:t) = P(X0) ∏i=1..t P(Xi|Xi-1) P(Ei|Xi)
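For a concrete state and evidence sequence, the joint is just the prior times one transition factor and one sensor factor per step. A minimal sketch, using hypothetical umbrella-world tables (the names and numbers are illustrative assumptions matching the example slides):

```python
# Illustrative model parameters (assumed, not from a library).
prior = {"rain": 0.5, "sun": 0.5}
transition = {"rain": {"rain": 0.7, "sun": 0.3},
              "sun":  {"rain": 0.3, "sun": 0.7}}
sensor = {"rain": {"umbrella": 0.9, "no_umbrella": 0.1},
          "sun":  {"umbrella": 0.2, "no_umbrella": 0.8}}

def joint_probability(states, evidence):
    """P(x0:t, e1:t) = P(x0) * prod over i of P(xi|xi-1) * P(ei|xi).

    states   = [x0, x1, ..., xt]
    evidence = [e1, ..., et]  (one observation per step after the prior)
    """
    p = prior[states[0]]
    for i in range(1, len(states)):
        p *= transition[states[i - 1]][states[i]]  # transition factor
        p *= sensor[states[i]][evidence[i - 1]]    # sensor factor
    return p
```

For example, the sequence x0 = sun, x1 = rain with e1 = umbrella has probability 0.5 × 0.3 × 0.9 = 0.135.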

Example
Network structure: Raint-1 → Raint → Raint+1, with Raint → Umbrellat at each step.

Transition model:        Sensor model:
  Rt-1  P(Rt|Rt-1)         Rt  P(Ut|Rt)
  T     0.7                T   0.9
  F     0.3                F   0.2

This should look a lot like a finite state automaton (since it is one...)

Inference Tasks
- Filtering (monitoring): P(Xt|e1,…,et). Compute the current belief state, given all evidence to date.
- Prediction: P(Xt+k|e1,…,et). Compute the probability of a future state.
- Smoothing: P(Xk|e1,…,et) for k < t. Compute the probability of a past state (hindsight).
- Most likely explanation: arg maxx1,…,xt P(x1,…,xt|e1,…,et). Given a sequence of observations, find the sequence of states that is most likely to have generated those observations.

Examples
- Filtering: What is the probability that it is raining today, given all of the umbrella observations up through today?
- Prediction: What is the probability that it will rain the day after tomorrow, given all of the umbrella observations up through today?
- Smoothing: What is the probability that it rained yesterday, given all of the umbrella observations through today?
- Most likely explanation: If the umbrella appeared the first three days but not on the fourth, what is the most likely weather sequence to produce these umbrella sightings?

Filtering
We use recursive estimation to compute P(Xt+1|e1:t+1) as a function of et+1 and P(Xt|e1:t):
  P(Xt+1|e1:t+1) = α P(et+1|Xt+1) Σxt P(Xt+1|xt) P(xt|e1:t)
This leads to a recursive definition:
  f1:t+1 = α FORWARD(f1:t, et+1)
QUIZLET: What is α?
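One FORWARD step is a one-step prediction followed by weighting with the evidence likelihood and normalizing. A minimal sketch for a discrete-state model, again assuming the illustrative umbrella-world dictionaries from the example slides:

```python
# Illustrative model parameters (assumed for this sketch).
transition = {"rain": {"rain": 0.7, "sun": 0.3},
              "sun":  {"rain": 0.3, "sun": 0.7}}
sensor = {"rain": {"umbrella": 0.9, "no_umbrella": 0.1},
          "sun":  {"umbrella": 0.2, "no_umbrella": 0.8}}

def forward(f, e):
    """f1:t+1 = alpha * P(e|X_{t+1}) * sum over x of P(X_{t+1}|x) f1:t(x)."""
    unnorm = {}
    for x1 in f:
        predict = sum(transition[x0][x1] * f[x0] for x0 in f)  # prediction
        unnorm[x1] = sensor[x1][e] * predict                   # evidence update
    alpha = 1.0 / sum(unnorm.values())  # alpha: normalization constant
    return {x: alpha * p for x, p in unnorm.items()}
```

Starting from a uniform prior and observing an umbrella, this gives a rain probability of 0.45 / (0.45 + 0.10) ≈ 0.818, which answers the quizlet as well: α is the constant that renormalizes the unnormalized posterior to sum to 1.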

Filtering Example
(Same network as before: Raint-1 → Raint → Raint+1, with Raint → Umbrellat at each step.)

Transition model:        Sensor model:
  Rt-1  P(Rt|Rt-1)         Rt  P(Ut|Rt)
  T     0.7                T   0.9
  F     0.3                F   0.2

What is the probability of rain on Day 2, given a uniform prior of rain on Day 0, U1 = true, and U2 = true?
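Carrying out the recursion by hand: Day 1 gives α⟨0.9·0.5, 0.2·0.5⟩ ≈ ⟨0.818, 0.182⟩, and Day 2 gives roughly ⟨0.883, 0.117⟩. A sketch that reproduces this, using the same illustrative dictionaries as above:

```python
# Worked end to end: uniform prior on Rain_0, then observe U1 = U2 = true.
# Model parameters are illustrative assumptions matching the tables above.
transition = {"rain": {"rain": 0.7, "sun": 0.3},
              "sun":  {"rain": 0.3, "sun": 0.7}}
sensor = {"rain": {"umbrella": 0.9, "no_umbrella": 0.1},
          "sun":  {"umbrella": 0.2, "no_umbrella": 0.8}}

def forward(f, e):
    """One filtering step: predict, weight by evidence, normalize."""
    unnorm = {x1: sensor[x1][e] * sum(transition[x0][x1] * f[x0] for x0 in f)
              for x1 in f}
    alpha = 1.0 / sum(unnorm.values())
    return {x: alpha * p for x, p in unnorm.items()}

f = {"rain": 0.5, "sun": 0.5}       # uniform prior on Day 0
for u in ["umbrella", "umbrella"]:  # U1 = true, U2 = true
    f = forward(f, u)
# Day 1 belief is about <0.818, 0.182>; Day 2 about <0.883, 0.117>
```

Note how observing a second umbrella pushes the rain probability higher than the Day 1 estimate, even though the transition model alone would have pulled it back toward 0.5.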