Inferring High-Level Behavior from Low-Level Sensors Don Patterson, Lin Liao, Dieter Fox, Henry Kautz Published in UBICOMP 2003 ICS 280

Main References
Voronoi Tracking: Location Estimation Using Sparse and Noisy Sensor Data (Liao L., Fox D., Hightower J., Kautz H., Schulz D.) – In International Conference on Intelligent Robots and Systems, 2003
Inferring High-Level Behavior from Low-Level Sensors (Patterson D., Liao L., Fox D., Kautz H.) – In UBICOMP 2003
Learning and Inferring Transportation Routines (Liao L., Fox D., Kautz H.) – In AAAI 2004

Outline Motivation Problem Definition Modeling and Inference Dynamic Bayesian Networks Particle Filtering Learning Results Conclusions

Motivation
ACTIVITY COMPASS: software which indirectly monitors your activity and offers proactive advice to aid in successfully accomplishing inferred plans.
Application areas: healthcare monitoring, automated planning, context-aware computing support.

Research Goal
To bridge the gap between sensor data and symbolic reasoning:
to allow sensor data to help interpret symbolic knowledge, and
to allow symbolic knowledge to aid sensor interpretation.

Executive Summary
GPS data collection: 3 months of 1 user's daily life
Inference engine: infers location and transportation "mode" online, in real time
Learning: transportation patterns
Results: better predictions and a conceptual understanding of routines

Outline Motivation Problem Definition Modeling and Inference Dynamic Bayesian Networks Particle Filtering Learning Results Conclusions

Tracking on a Graph
Tracking a person's location and mode of transportation using street maps and GPS sensor data.
Formally, the world is modeled as a graph G = (V, E), where:
V is a set of vertices = intersections
E is a set of directed edges = roads / foot paths
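As a hedged illustration, the street map might be represented in code as follows (a minimal Python sketch; the vertex and edge data are invented for the example, not taken from the authors' implementation):

```python
from dataclasses import dataclass

# A minimal directed-graph representation of the street map G = (V, E).
# Vertex ids stand for intersections; edges are road or footpath segments.
@dataclass(frozen=True)
class Edge:
    start: int     # intersection where the segment begins
    end: int       # intersection where the segment ends
    length: float  # segment length in meters

V = {0, 1, 2}                              # intersections
E = [Edge(0, 1, 120.0), Edge(1, 2, 85.0),  # one-way segments
     Edge(1, 0, 120.0)]                    # reverse direction of (0, 1)
```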

Example

Outline Motivation Problem Definition Modeling and Inference Dynamic Bayesian Networks Particle Filtering Learning Results Conclusions

State Space
Location L = ⟨L_s, L_p⟩: which street the user is on (L_s) and the position on that street (L_p)
Velocity V
GPS offset error O = ⟨O_x, O_y⟩
Transportation mode M ∈ {BUS, CAR, FOOT}
Full state: X = ⟨L_s, L_p, V, O_x, O_y, M⟩
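For illustration, a minimal Python sketch of this state vector (the names follow the slide's notation; the encoding itself is an assumption, not the authors' code):

```python
from dataclasses import dataclass
from enum import Enum

class Mode(Enum):
    BUS = 0
    CAR = 1
    FOOT = 2

@dataclass
class State:
    l_s: int      # L_s: id of the street (edge) the user is on
    l_p: float    # L_p: position along that street, in meters
    v: float      # V: current velocity, in m/s
    o_x: float    # O_x: GPS offset error, x component
    o_y: float    # O_y: GPS offset error, y component
    m: Mode       # M: transportation mode, one of BUS, CAR, FOOT
```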

GPS as a Sensor
GPS is not a trivial location sensor to use. It has inherent inaccuracies:
atmospherics, satellite geometry, multi-path propagation errors, signal blockages.
Using the data is even harder: resolution worse than 15 m, coordinate mismatches.

Dynamic Bayesian Networks
An extension of the Markov model.
A statistical model which handles sensor error and enormous but structured state spaces.
Probabilistic and temporal: a single framework to manage all levels of abstraction.

Model (I)

Model (II)

Model (III)

Dependencies

Inference
We want to compute the posterior density p(x_t | z_{1:t}): the distribution over the current state x_t given all GPS observations up to time t.
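For context, the generic recursive Bayes filter that particle filters approximate can be written as follows (a standard formulation, assumed here rather than taken from the slides):

```latex
% \eta is a normalizing constant; p(x_t | x_{t-1}) is the motion model
% and p(z_t | x_t) the GPS observation model.
p(x_t \mid z_{1:t}) \;=\; \eta \; p(z_t \mid x_t)
  \int p(x_t \mid x_{t-1}) \, p(x_{t-1} \mid z_{1:t-1}) \, \mathrm{d}x_{t-1}
```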

Inference: Particle Filtering
A technique for solving DBNs with approximate, stochastic (Monte Carlo) solutions.
In our case, a particle represents an instantiation of the random variables describing:
the transportation mode m_t
the location l_t (actually the edge e_t)
the velocity v_t

Particle Filtering
Step 1 (SAMPLING): Draw n samples x_{t-1} from the previous set S_{t-1} and generate n new samples x_t according to the dynamics p(x_t | x_{t-1}) (i.e. the motion model).
Step 2 (IMPORTANCE SAMPLING): Assign each sample x_t an importance weight according to the likelihood of the observation z_t: ω_t ∝ p(z_t | x_t).
Step 3 (RE-SAMPLING): Draw samples with replacement according to the distribution defined by the importance weights ω_t.
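A minimal generic sketch of these three steps in Python (motion_model and likelihood are hypothetical placeholders for the transition and observation models; this illustrates the scheme, not the authors' implementation):

```python
import random

def particle_filter_step(particles, z_t, motion_model, likelihood, n):
    # Step 1 (sampling): draw n particles from the previous (unweighted)
    # set and propagate each through the dynamics p(x_t | x_{t-1}).
    predicted = [motion_model(random.choice(particles)) for _ in range(n)]

    # Step 2 (importance sampling): weight each particle by the
    # observation likelihood p(z_t | x_t).
    weights = [likelihood(z_t, x) for x in predicted]

    # Step 3 (re-sampling): draw with replacement proportionally to the
    # weights, yielding an unweighted sample of the posterior.
    return random.choices(predicted, weights=weights, k=n)
```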

Motion Model – p(x_t | x_{t-1})
Advancing particles along the graph G:
1. Sample the transportation mode m_t from the distribution p(m_t | m_{t-1}, e_{t-1}).
2. Sample the velocity v_t from the density p(v_t | m_t) (a mixture of Gaussian densities; see picture).
3. Sample the location using the current velocity: draw the traveled distance d at random from a Gaussian density centered at v_t. If the distance implies an edge transition, select the next edge e_t with probability p(e_t | e_{t-1}, m_{t-1}); otherwise stay on the same edge, e_t = e_{t-1}.
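A sketch of this sampling scheme under simplifying assumptions (Python; the helper distributions p_mode_trans and p_next_edge are invented placeholders, edges are assumed to carry a length attribute as in the graph sketch above, and the velocity mixture is collapsed to a single Gaussian per mode):

```python
import random

def sample_motion_model(edge, pos, mode, dt,
                        p_mode_trans, p_next_edge, vel_params):
    # 1. Sample the new mode m_t ~ p(m_t | m_{t-1}, e_{t-1}).
    modes, mode_probs = p_mode_trans(mode, edge)
    m_t = random.choices(modes, weights=mode_probs, k=1)[0]

    # 2. Sample velocity v_t ~ p(v_t | m_t); a single Gaussian per mode
    # stands in for the slide's mixture of Gaussian densities.
    mu, sigma = vel_params[m_t]
    v_t = max(0.0, random.gauss(mu, sigma))

    # 3. Sample the traveled distance from a Gaussian centered at v_t * dt
    # and advance along the edge; running past its end triggers an edge
    # transition drawn from p(e_t | e_{t-1}, m_{t-1}).
    d = max(0.0, random.gauss(v_t * dt, 1.0))
    pos = pos + d
    while pos > edge.length:
        pos -= edge.length
        next_edges, edge_probs = p_next_edge(edge, mode)
        edge = random.choices(next_edges, weights=edge_probs, k=1)[0]
    return edge, pos, v_t, m_t
```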

Animation Play short video clip

Outline Motivation Problem Definition Modeling and Inference Dynamic Bayesian Networks Particle Filtering Learning Results Conclusions

Learning
We want to learn the components of the motion model from history:
p(e_t | e_{t-1}, m_{t-1}) – the transition probability on the graph, conditioned on the mode of transportation just prior to transitioning to the new edge
p(m_t | m_{t-1}, e_{t-1}) – the transportation mode transition probability; it depends on the previous mode m_{t-1} and the location of the person, described by the edge e_{t-1}
We use a Monte Carlo version of the EM algorithm.

Learning
Each iteration performs both a forward and a backward (in time) particle filtering pass.
During each pass, the algorithm counts the number of particles transiting between the different edges and modes.
To obtain probabilities for the different transitions, the counts of the forward and backward passes are normalized and then multiplied at the corresponding time slices.

Implementation Details (I)
α_t(e_t, m_t): the number of particles on edge e_t and in mode m_t at time t in the forward pass of particle filtering
β_t(e_t, m_t): the number of particles on edge e_t and in mode m_t at time t in the backward pass of particle filtering
ξ_{t-1}(e_t, e_{t-1}, m_{t-1}): probability of transiting from edge e_{t-1} to e_t at time t-1 in mode m_{t-1}
ψ_{t-1}(m_t, m_{t-1}, e_{t-1}): probability of transiting from mode m_{t-1} to m_t on edge e_{t-1} at time t-1

Implementation Details (II)
After we have ξ_{t-1} and ψ_{t-1} for all t from 2 to T, we can update the parameters as shown below.
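Presumably these are the standard EM updates from expected counts; a hedged reconstruction in that spirit (not a verbatim copy of the paper's equations (7) and (8)):

```latex
% Normalize expected transition counts, summed over all time slices:
\hat{p}(e_t \mid e_{t-1}, m_{t-1}) =
  \frac{\sum_{t=2}^{T} \xi_{t-1}(e_t, e_{t-1}, m_{t-1})}
       {\sum_{e'} \sum_{t=2}^{T} \xi_{t-1}(e', e_{t-1}, m_{t-1})}
\qquad
\hat{p}(m_t \mid m_{t-1}, e_{t-1}) =
  \frac{\sum_{t=2}^{T} \psi_{t-1}(m_t, m_{t-1}, e_{t-1})}
       {\sum_{m'} \sum_{t=2}^{T} \psi_{t-1}(m', m_{t-1}, e_{t-1})}
```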

Implementation Details (III)

E-step
1. Generate n uniformly distributed samples.
2. Perform forward particle filtering:
a) Sampling: generate n new samples from the existing ones using the current parameter estimates p(e_t | e_{t-1}, m_{t-1}) and p(m_t | m_{t-1}, e_{t-1}).
b) Re-weight each sample, re-sample, count and save α_t(e_t, m_t).
c) Move to the next time slice (t = t+1).
3. Perform backward particle filtering:
a) Sampling: generate n new samples from the existing ones using the backward parameter estimates p(e_{t-1} | e_t, m_t) and p(m_{t-1} | m_t, e_t).
b) Re-weight each sample, re-sample, count and save β_t(e_t, m_t).
c) Move to the previous time slice (t = t-1).

M-step
1. Compute ξ_{t-1}(e_t, e_{t-1}, m_{t-1}) and ψ_{t-1}(m_t, m_{t-1}, e_{t-1}) using (5) and (6), then normalize.
2. Update p(e_t | e_{t-1}, m_{t-1}) and p(m_t | m_{t-1}, e_{t-1}) using (7) and (8).
LOOP: Repeat the E-step and M-step with the updated parameters until the model converges.
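Putting the E- and M-steps together, a highly simplified sketch of the Monte Carlo EM loop (Python; forward_filter, backward_filter, expected_transition_counts and normalize_counts are hypothetical placeholders standing in for the steps above):

```python
def monte_carlo_em(observations, params, n_particles, n_iters=10):
    # params holds the current estimates of p(e_t | e_{t-1}, m_{t-1})
    # and p(m_t | m_{t-1}, e_{t-1}).
    for _ in range(n_iters):
        # E-step: particle filtering in both directions, saving the
        # per-slice particle counts alpha_t(e, m) and beta_t(e, m).
        alpha = forward_filter(observations, params, n_particles)
        beta = backward_filter(observations, params, n_particles)

        # Combine normalized forward and backward counts at matching
        # time slices to get expected transition counts xi and psi.
        xi, psi = expected_transition_counts(alpha, beta, params)

        # M-step: re-estimate the transition tables by normalizing the
        # expected counts summed over all time slices.
        params = normalize_counts(xi, psi)
    return params
```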

Outline Motivation Problem Definition Modeling and Inference Dynamic Bayesian Networks Particle Filtering Learning Results Conclusions

Dataset
Single user, 3 months of daily life.
Collected GPS position and velocity data at 2- and 10-second sample intervals.
Evaluation data was 29 "trips", 12 hours of logs, all continuous outdoor data.
Divided chronologically into 3 cross-validation groups.

Goals
Mode estimation and prediction: learning a motion model able to estimate and predict the mode of transportation at any given instant.
Location prediction: learning a motion model able to predict the location of the person into the future.

Results – Mode Estimation

Model                        Mode Prediction Accuracy
Decision Tree (supervised)   55%
Prior w/o bus info           60%
Prior w/ bus info            78%
Learned                      84%

Results – Mode Prediction
Evaluate the ability to predict transitions between transportation modes.
The table shows the accuracy in predicting a qualitative change in transportation mode within 60 seconds of the actual transition (e.g. correctly predicting that the person gets off the bus).
PRECISION: percentage of predicted transitions that actually occur.
RECALL: percentage of real transitions that were correctly predicted.
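As a small illustration, precision and recall over transition events might be computed like this (Python sketch; the 60-second matching window follows the slide, everything else is an assumption):

```python
def precision_recall(predicted, actual, window=60.0):
    # predicted, actual: lists of transition times in seconds.
    # A prediction counts as correct if a real transition occurs
    # within `window` seconds of it, and vice versa for recall.
    def matched(t, others):
        return any(abs(t - o) <= window for o in others)

    true_predictions = sum(matched(t, actual) for t in predicted)
    found_transitions = sum(matched(t, predicted) for t in actual)
    precision = true_predictions / len(predicted) if predicted else 0.0
    recall = found_transitions / len(actual) if actual else 0.0
    return precision, recall
```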

Results – Mode Prediction
Mode Transition Prediction Accuracy:

Model                Precision   Recall
Decision Tree        2%          83%
Prior w/o bus info   6%          63%
Prior w/ bus info    10%         80%
Learned              40%         80%

Results – Location Prediction

Conclusions
We developed a single integrated framework to reason about transportation plans: it is probabilistic and successfully manages systematic GPS error.
We integrate sensor data with high-level concepts such as bus stops.
We developed an unsupervised learning technique which greatly improves performance.
Our results show high predictive accuracy and interesting conceptual conclusions.

Possible Future Work
Craig's "cookie" framework may provide the low-level sensor information.
Try to formalize Craig's problem in the context of dynamic probabilistic systems.