Augmenting Physical State Prediction Through Structured Activity Inference. Nam Vo & Aaron Bobick, ICRA 2015.

Structured Activity
– Long sequence composed of multiple actions with a temporal structure (defined by a grammar).
– Sequential Interval Network (SIN): recognize the sequence of actions, predict the timing, and segment the sequence temporally.
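As a toy illustration (hypothetical, not the paper's actual grammar), such a temporal structure can be written as a set of productions over primitive actions, and enumerating the grammar yields every admissible course of actions:

    # A hypothetical table-setting grammar: each action lists its legal successors.
    grammar = {
        "START": ["get_cup"],
        "get_cup": ["place_cup"],
        "place_cup": ["get_spoon", "get_fork"],  # a branch point: two variations
        "get_spoon": ["place_spoon"],
        "get_fork": ["place_fork"],
        "place_spoon": ["END"],
        "place_fork": ["END"],
    }

    def courses(action="START", prefix=()):
        """Enumerate every action sequence the grammar admits."""
        if action == "END":
            yield prefix
            return
        for nxt in grammar[action]:
            yield from courses(nxt, prefix if nxt == "END" else prefix + (nxt,))

    for seq in courses():
        print(" -> ".join(seq))
    # get_cup -> place_cup -> get_spoon -> place_spoon
    # get_cup -> place_cup -> get_fork -> place_fork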

Problem
This paper extends SIN to predict the state (human position/movement) during the activity.
High-level idea: learn the prior distribution of the state during each action + infer which action happens when => predict the state at any moment in time.

High-Level Idea
– Extend the SIN framework.
– Learn the prior distribution of the state during each action + infer which action happens when => predict the state at any moment in time.
– Demonstration: YAI

System Pipeline
Training: SIN & the state priors of the primitive actions.
Testing: given a partially observed sequence:
– Run a dynamic system to get state estimates.
– Run SIN to get the timing posteriors of the actions.
– Run the final inference to get the state posteriors.
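A structural sketch of that pipeline in Python follows. Every function name and return value here is a placeholder of my own so the script runs end to end; none of it is the paper's actual code.

    def train_sin(training_sequences, grammar):
        """Fit the Sequential Interval Network (placeholder)."""
        return {"grammar": grammar}

    def learn_state_priors(training_sequences):
        """Per-action prior over the state as (mean, variance) (placeholder)."""
        return {"get_cup": (1.0, 1.0), "get_spoon": (4.0, 1.0)}

    def dynamic_system(observations):
        """Stand-in for the dynamic-system estimate: (value, noise variance)."""
        return observations[-1], 0.25

    def sin_timing_posterior(sin_model, observations):
        """Stand-in for SIN's posterior over which action is happening when."""
        return {"get_cup": 0.7, "get_spoon": 0.3}

    def final_inference(priors, timing, estimate):
        """Combine priors, timing posterior, and estimate (the next slides give the math)."""
        return {"timing": timing, "estimate": estimate, "priors": priors}

    # Training
    sin_model = train_sin(training_sequences=[], grammar={})
    priors = learn_state_priors(training_sequences=[])
    # Testing on a partially observed sequence of 1-D positions
    obs = [0.8, 0.9, 1.1]
    print(final_inference(priors, sin_timing_posterior(sin_model, obs), dynamic_system(obs)))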

The Graphical Model

The mapping

Inference, the Simple Case
– Assume the timings are known, i.e. all mappings between X and Y have been resolved.
– Use: posterior ~ prior * likelihood.
– F_prior on X acts as the prior; F_obv on Y acts as the likelihood.
– The posterior covers both X and Y, and it is a Gaussian.
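A minimal numeric sketch of this product-of-Gaussians step, under toy assumptions of my own (a 1-D state and made-up numbers):

    # Prior on the state (from the action model): N(mu0, v0).
    mu0, v0 = 1.0, 1.0
    # Observation of the state with noise variance r (the likelihood term).
    y, r = 1.4, 0.25

    # The product of two Gaussians is Gaussian: precisions (1/variance) add,
    # and the mean is the precision-weighted average of the two means.
    post_var = 1.0 / (1.0 / v0 + 1.0 / r)
    post_mean = post_var * (mu0 / v0 + y / r)
    print(f"posterior: N({post_mean:.2f}, {post_var:.2f})")  # N(1.32, 0.20)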

Inference
– We don't know the exact value of the timing, but we know its posterior (from SIN).
– Integrate (a weighted sum) over every possible timing.
– The posteriors of the state (X & Y) in this case are mixtures of Gaussians.
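Continuing the toy numbers: with a few hypothetical timing hypotheses weighted by the SIN posterior, the marginal state posterior is a Gaussian mixture, and its overall mean and variance follow from the standard mixture-moment formulas:

    # (weight, mean, variance) per timing hypothesis: the weight is P(timing | obs)
    # from SIN; (mean, variance) is the per-timing Gaussian posterior computed as
    # on the previous slide. All numbers are illustrative.
    hypotheses = [
        (0.5, 1.3, 0.20),
        (0.3, 2.1, 0.25),
        (0.2, 0.8, 0.30),
    ]

    mix_mean = sum(w * m for w, m, _ in hypotheses)
    mix_var = sum(w * (v + m * m) for w, m, v in hypotheses) - mix_mean**2
    print(f"mixture posterior: mean={mix_mean:.2f}, variance={mix_var:.2f}")
    # mean=1.44, variance=0.46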

TUM Kitchen Dataset
Activity: setting a table ("robotic version").
– Defined by the grammar as a sequence of 14 primitive actions.
– The subject moves back and forth to retrieve 7 objects.
Task: movement prediction & smoothing.

TUM Kitchen Dataset
Example of the learnt prior distribution for the action get-spoon: the subject moves from the table (on the right) to the kitchen (on the left) to get a spoon from the drawer.

TUM Kitchen Dataset
Prediction task: running in streaming mode and predicting the position at 7 points in the future.

TUM Kitchen Dataset
Snapshot: prediction of the timing and position.

TUM Kitchen Dataset
Smoothing task.

Toy Assembly Dataset
Activity: assembling 1 of 3 different toy models.
– There are 12 variations in the course of actions (defined by a grammar) and 40 different primitive actions (each is getting a part from 1 of 5 bins and assembling it).
Task: predict the active hand's movement.

Toy Assembly Dataset
Example of the learnt prior distribution during a particular action (getting a piece from bin 5 and assembling it).

Toy Assembly Dataset
Parsing online: prediction of the timing and state.

Toy Assembly Dataset
Parsing online: the predictions improve as more of the sequence is observed.

Conclusion
Parsing structured activity:
– Recognize the course of actions & predict the timings.
– Predict the state (position/movement).
Combine:
– the timing information,
– the prior, with respect to the action's completion stage,
– the observation, with respect to time.
Output the posterior of the state:
– w.r.t. action: X, useful for action analysis.
– w.r.t. timestep: Y, useful for future prediction or smoothing.
Questions?