Prof. Trevor Darrell Lecture 19: Tracking


Prof. Trevor Darrell (trevor@eecs.berkeley.edu), C280 Computer Vision, Lecture 19: Tracking

Tracking scenarios Follow a point Follow a template Follow a changing template Follow all the elements of a moving person, fit a model to it.

Things to consider in tracking What are the dynamics of the thing being tracked? How is it observed?

Three main issues in tracking

Simplifying Assumptions

Kalman filter graphical model (hidden states x1, x2, x3; observations y1, y2, y3) and the corresponding factorized joint probability: P(x1, x2, x3, y1, y2, y3) = P(x1) P(y1 | x1) P(x2 | x1) P(y2 | x2) P(x3 | x2) P(y3 | x3).

Tracking as induction Make a measurement starting in the 0th frame Then: assume you have an estimate at the ith frame, after the measurement step. Show that you can do prediction for the i+1th frame, and measurement for the i+1th frame.

Base case

Prediction step: given the corrected estimate P(x_i | y_0, …, y_i), predict the next state's distribution P(x_{i+1} | y_0, …, y_i).

Update step: given the prediction P(x_{i+1} | y_0, …, y_i) and the new measurement y_{i+1}, compute the corrected estimate P(x_{i+1} | y_0, …, y_{i+1}).

The Kalman Filter. Key ideas: linear models interact uniquely well with Gaussian noise: make the prior Gaussian and everything else stays Gaussian, so the calculations are easy. And Gaussians are really easy to represent: once you know the mean and covariance, you're done.

Recall the three main issues in tracking (Ignore data association for now)

The Kalman Filter [figure from http://www.cs.unc.edu/~welch/kalman/kalmanIntro.html]

The Kalman Filter in 1D. Dynamic model: x_i ~ N(d_i x_{i-1}, σ_{d_i}^2); measurement model: y_i ~ N(m_i x_i, σ_{m_i}^2). Notation: a superscript − marks the predicted mean (before incorporating the measurement), a superscript + the corrected mean (after).

The Kalman Filter

Prediction for the 1D Kalman filter: the new state is obtained by multiplying the old state by a known constant and adding zero-mean noise. Therefore the predicted mean for the new state is the constant times the mean of the old state, and the predicted variance is the old variance multiplied by the square of the constant, plus the variance of the noise.
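A minimal numeric sketch of this prediction step (the scalar model and the numbers below are illustrative, not taken from the slides):

```python
def predict_1d(mean_prev, var_prev, d, sigma_d):
    """1D Kalman prediction for x_i = d * x_{i-1} + zero-mean noise with std sigma_d."""
    mean_pred = d * mean_prev                        # constant times the old mean
    var_pred = d ** 2 * var_prev + sigma_d ** 2      # old variance scaled by d^2, plus noise variance
    return mean_pred, var_pred

# Old estimate x ~ N(2.0, 0.5^2); dynamics x_i = 1.0 * x_{i-1} + noise (std 0.1)
print(predict_1d(2.0, 0.25, d=1.0, sigma_d=0.1))     # -> (2.0, 0.26)
```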

The Kalman Filter

Measurement update for the 1D Kalman filter. Notice: if the measurement noise is small, we rely mainly on the measurement; if it is large, mainly on the prediction. The corrected variance does not depend on the actual measurement value y.
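A matching sketch of the correction step, in the same illustrative scalar notation; note in the code that the corrected variance never looks at y:

```python
def update_1d(mean_pred, var_pred, y, m, sigma_m):
    """1D Kalman correction for a measurement y = m * x + noise with std sigma_m."""
    K = var_pred * m / (m ** 2 * var_pred + sigma_m ** 2)   # scalar Kalman gain
    mean_corr = mean_pred + K * (y - m * mean_pred)          # pull the prediction toward the measurement
    var_corr = (1.0 - K * m) * var_pred                      # corrected variance: independent of y
    return mean_corr, var_corr

# Small measurement noise -> rely mainly on the measurement;
# large measurement noise -> rely mainly on the prediction.
print(update_1d(2.0, 0.26, y=2.5, m=1.0, sigma_m=0.01))
print(update_1d(2.0, 0.26, y=2.5, m=1.0, sigma_m=10.0))
```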

Kalman filter for computing an on-line average What Kalman filter parameters and initial conditions should we pick so that the optimal estimate for x at each iteration is just the average of all the observations seen so far?

Kalman filter model, initial conditions, and the resulting estimates at iterations 0, 1, 2 (table on slide).
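One standard answer (stated here as an assumption, not read off the slide's table): set d = 1 with no process noise, m = 1, and initialize the filter from the first observation. A small sketch checking that the corrected mean then equals the running average:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma_m = 1.0
ys = rng.normal(5.0, sigma_m, size=20)       # noisy observations of a constant

mean, var = ys[0], sigma_m ** 2              # initialize from the first measurement
for i, y in enumerate(ys[1:], start=2):
    # prediction is a no-op: d = 1 and zero process noise leave mean and var unchanged
    K = var / (var + sigma_m ** 2)
    mean = mean + K * (y - mean)
    var = (1.0 - K) * var
    assert np.isclose(mean, ys[:i].mean())   # Kalman estimate == average of observations so far

print(mean, ys.mean())
```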

What happens if the x dynamics are given a non-zero variance?

Kalman filter model with non-zero process noise: initial conditions and the estimates at iterations 0, 1, 2 (table on slide). With process noise, recent observations are weighted more heavily and the variance no longer shrinks toward zero.

Linear dynamic models. A linear dynamic model has the form x_i ~ N(D_i x_{i-1}, Σ_{d_i}), y_i ~ N(M_i x_i, Σ_{m_i}). This is much, much more general than it looks, and extremely powerful.

Examples of linear state space models. Drifting points: assume that the new position of the point is the old one, plus noise; D = Identity. cic.nist.gov/lipman/sciviz/images/random3.gif http://www.grunch.net/synergetics/images/random3.jpg

Constant velocity: we have u_i = u_{i-1} + Δt v_{i-1} + ε_i and v_i = v_{i-1} + ζ_i (the Greek letters denote noise terms). Here u is position and v is velocity. Stack (u, v) into a single state vector x_i = (u_i, v_i), so that x_i = D_{i-1} x_{i-1} + noise, which is the form we had above.
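A sketch of the constant-velocity model in matrix form (the time step and noise levels are made up for illustration):

```python
import numpy as np

dt = 1.0
D = np.array([[1.0, dt],       # u_i = u_{i-1} + dt * v_{i-1}
              [0.0, 1.0]])     # v_i = v_{i-1}
M = np.array([[1.0, 0.0]])     # we only observe position u

rng = np.random.default_rng(1)
x = np.array([0.0, 1.0])       # start at position 0 with velocity 1
for _ in range(5):
    x = D @ x + rng.normal(0.0, [0.0, 0.05])   # process noise, mostly on velocity
    y = M @ x + rng.normal(0.0, 1.0)           # noisy position measurement
    print("state", x.round(2), "measurement", y.round(2))
```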

Constant Velocity Model: plots of velocity vs. time, position vs. time, and measured position vs. time (lifts from Figure 17.1 of Forsyth and Ponce; the caption for that figure explains them).

Constant acceleration: we have u_i = u_{i-1} + Δt v_{i-1} + ε_i, v_i = v_{i-1} + Δt a_{i-1} + ζ_i, and a_i = a_{i-1} + ξ_i (the Greek letters denote noise terms; same notation as the last slide). Stack (u, v, a) into a single state vector, which is again the form we had above.

Constant Acceleration Model: plots of velocity vs. time and position vs. time (Figure 17.2 of Forsyth and Ponce; see the caption for that figure).

Periodic motion: assume we have a point moving on a line with a periodic movement defined by a differential equation, e.g. dp/dt = v, dv/dt = −p. This can be written as du/dt = A u, with the state defined as stacked position and velocity, u = (p, v).

Periodic motion: take a discrete approximation (e.g., forward Euler integration with step size Δt): u_i = u_{i-1} + Δt A u_{i-1} = (I + Δt A) u_{i-1}, so the dynamics matrix is D_{i-1} = I + Δt A.
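A sketch of this discretization for the simple oscillator above (Δt chosen arbitrarily):

```python
import numpy as np

dt = 0.1
A = np.array([[0.0, 1.0],      # dp/dt = v
              [-1.0, 0.0]])    # dv/dt = -p
D = np.eye(2) + dt * A         # forward Euler: u_i = (I + dt * A) u_{i-1}

u = np.array([1.0, 0.0])       # start at p = 1, v = 0
for _ in range(int(round(2 * np.pi / dt))):    # roughly one period of the motion
    u = D @ u
print(u)   # back near the starting point; forward Euler drifts slowly outward over many periods
```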

n-D Generalization to n-D is straightforward but more complex.

n-D Prediction. Generalization to n-D is straightforward but more complex. Prediction: multiply the estimate at the prior time by the forward model, x_i^- = D_i x_{i-1}^+; propagate the covariance through the model and add new noise, Σ_i^- = D_i Σ_{i-1}^+ D_i^T + Σ_{d_i}.

n-D Correction. Generalization to n-D is straightforward but more complex. Correction: update the a priori estimate with the measurement to form the a posteriori estimate.

n-D correction: find the linear filter on the innovations that minimizes the a posteriori error covariance: x_i^+ = x_i^- + K_i (y_i − M_i x_i^-), where K_i is the Kalman gain matrix. A solution is K_i = Σ_i^- M_i^T (M_i Σ_i^- M_i^T + Σ_{m_i})^{-1}, giving Σ_i^+ = (I − K_i M_i) Σ_i^-.

Kalman Gain Matrix. As the measurement becomes more reliable (Σ_{m_i} → 0), K weights the residual more heavily: K_i → M_i^{-1}. As the prior covariance approaches 0 (Σ_i^- → 0), measurements are ignored: K_i → 0.
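A compact sketch of the n-D prediction and correction equations above (generic D, M, Sigma_d, Sigma_m; the constant-velocity numbers in the example are invented):

```python
import numpy as np

def kalman_predict(x, P, D, Sigma_d):
    """Prediction: push the mean through the forward model; propagate covariance and add noise."""
    return D @ x, D @ P @ D.T + Sigma_d

def kalman_correct(x_pred, P_pred, y, M, Sigma_m):
    """Correction: update the a priori estimate with the measurement via the Kalman gain."""
    S = M @ P_pred @ M.T + Sigma_m              # innovation covariance
    K = P_pred @ M.T @ np.linalg.inv(S)         # Kalman gain matrix
    x_corr = x_pred + K @ (y - M @ x_pred)      # weight the residual (innovation)
    P_corr = (np.eye(len(x_pred)) - K @ M) @ P_pred
    return x_corr, P_corr

# Constant-velocity example: estimate position and velocity from noisy position readings.
dt = 1.0
D = np.array([[1.0, dt], [0.0, 1.0]]); Sigma_d = 0.01 * np.eye(2)
M = np.array([[1.0, 0.0]]);            Sigma_m = np.array([[1.0]])
x, P = np.zeros(2), 10.0 * np.eye(2)
for y in [0.9, 2.1, 2.8, 4.2]:
    x, P = kalman_predict(x, P, D, Sigma_d)
    x, P = kalman_correct(x, P, np.array([y]), M, Sigma_m)
print(x)   # estimated [position, velocity]
```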

Constant Velocity Model: position vs. time plot (lifts from Figure 17.1; the caption for that figure explains them).

Position vs. time plot with measurements (also from Figure 17.1).

This is Figure 17.3 of Forsyth and Ponce (position vs. time). The notation is a bit involved, but is logical: we plot the true state as open circles, measurements as x's, predicted means as *'s with three-standard-deviation bars, and corrected means as +'s with three-standard-deviation bars.

The o's give state, the x's measurements (Figure 17.3 again, position vs. time, same plotting conventions as above).

Smoothing. Idea: we don't have the best estimate of the state; what about using future measurements too? Run two filters, one moving forward, the other backward in time, and combine the state estimates. The crucial point is that we can obtain a smoothed estimate by viewing the backward filter's prediction as yet another measurement for the forward filter.
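A tiny sketch of the combination step for a scalar state, treating the backward filter's estimate as one more independent Gaussian measurement of the same state (the numbers are illustrative):

```python
def combine(mean_f, var_f, mean_b, var_b):
    """Fuse forward and backward Gaussian estimates by inverse-variance weighting."""
    var_s = 1.0 / (1.0 / var_f + 1.0 / var_b)
    mean_s = var_s * (mean_f / var_f + mean_b / var_b)
    return mean_s, var_s

# Forward filter: x ~ N(2.0, 0.5); backward filter: x ~ N(2.6, 0.8)
print(combine(2.0, 0.5, 2.6, 0.8))   # smoothed variance is smaller than either input variance
```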

Forward estimates (forward filter, using the notation as above; top left of Figure 17.5, position vs. time). The o's give state, the x's measurements; the error bars are 1 standard deviation, not 3.

Backward estimates (backward filter; top right of Figure 17.5, position vs. time). The o's give state, the x's measurements; the error bars are 1 standard deviation, not 3.

Combined forward-backward estimates (smoothed estimate; bottom of Figure 17.5, position vs. time). Notice how good the state estimates are, despite the very lively noise. The o's give state, the x's measurements; the error bars are 1 standard deviation, not 3.

2-D constant velocity example from Kevin Murphy’s Matlab toolbox [figure from http://www.ai.mit.edu/~murphyk/Software/Kalman/kalman.html]

2-D constant velocity example from Kevin Murphy's Matlab toolbox. MSE of the filtered estimate is 4.9; of the smoothed estimate, 3.2. Not only is the smoothed estimate better, but we know that it is better, as illustrated by the smaller uncertainty ellipses. Note how the smoothed ellipses are larger at the ends, because these points have seen less data. Also, note how rapidly the filtered ellipses reach their steady-state ("Riccati") values. [figure from http://www.ai.mit.edu/~murphyk/Software/Kalman/kalman.html]

Resources Kalman filter homepage http://www.cs.unc.edu/~welch/kalman/ (kalman filter demo applet) Kevin Murphy’s Matlab toolbox: http://www.ai.mit.edu/~murphyk/Software/Kalman/kalman.html

Embellishments for tracking: richer models of P(x_n | x_{n-1}), richer models of P(y_n | x_n), richer models of probability distributions.

Abrupt changes: what if the environment is sometimes unpredictable? Do people move with constant velocity? Test several models of assumed dynamics and use the best.
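A much-simplified sketch of "test several models, use the best": two scalar random-walk filters that assume different amounts of process noise run in parallel, and at each step the measurement likelihood under each model's prediction decides which estimate to trust (a stand-in for the P / PV multiple-model filters of Welch and Bishop 2001, not a reproduction of them):

```python
import numpy as np

def kf_step(mean, var, y, sigma_d, sigma_m):
    """One predict+correct step of a scalar random-walk Kalman filter.
    Also returns the likelihood of y under the predicted distribution."""
    var_pred = var + sigma_d ** 2                       # predict (d = 1)
    s2 = var_pred + sigma_m ** 2                        # predicted measurement variance
    lik = np.exp(-0.5 * (y - mean) ** 2 / s2) / np.sqrt(2 * np.pi * s2)
    K = var_pred / s2                                   # correct
    return mean + K * (y - mean), (1.0 - K) * var_pred, lik

models = {"static": 0.01, "moving": 1.0}                # assumed process-noise levels
est = {name: (0.0, 1.0) for name in models}             # (mean, variance) per model
for y in [0.1, 0.0, 0.2, 3.0, 6.1, 9.0]:                # target sits still, then moves abruptly
    liks = {}
    for name, sigma_d in models.items():
        mean, var, lik = kf_step(*est[name], y, sigma_d, sigma_m=0.5)
        est[name], liks[name] = (mean, var), lik
    best = max(liks, key=liks.get)
    print(f"y={y:4.1f}  best model: {best:7s}  estimate: {est[best][0]:.2f}")
```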

Multiple model filters: test several models of assumed dynamics. [figure from Welch and Bishop 2001]

MM estimate. Two models: Position (P), Position+Velocity (PV). [figure from Welch and Bishop 2001]

P likelihood [figure from Welch and Bishop 2001]

No lag [figure from Welch and Bishop 2001]

Smooth when still [figure from Welch and Bishop 2001]

Embellishments for tracking: richer models of P(y_n | x_n), richer models of probability distributions.

Jepson, Fleet, and El-Maraghi tracker

Wandering, Stable, and Lost appearance model. Introduce 3 competing models to explain the appearance of the tracked region: a stable model (a Gaussian with some mean and covariance), a wandering model (a 2-frame motion-tracker appearance model, used to rebuild the stable model when it gets lost), and a lost (outlier) model (uniform probability over all appearances). Use an on-line EM algorithm to fit the (changing) model parameters to the recent appearance data.

Jepson, Fleet, and El-Maraghi tracker for toy 1-pixel image model Red line: observations Blue line: true appearance state Black line: mean of the stable process

Mixing probabilities for Stable (black), Wandering (red) and Lost (green) processes

Non-toy image representation Phase of a steerable quadrature pair (G2, H2). Steered to 4 different orientations, at 2 scales.

The motion tracker. The motion prior prefers slow velocities and small accelerations. The WSL appearance model gives a likelihood for each possible new position, orientation, and scale of the tracked region. They combine that with the motion prior to find the most probable position, orientation, and scale of the tracked region in the next frame. Gives state-of-the-art tracking results.

Jepson, Fleet, and El-Maraghi tracker

Jepson, Fleet, and El-Maraghi tracker. Far right column: the stable component's mixing probability. Note its behavior at occlusions.

Show videos Play freeman#results.mpg

Embellishments for tracking: richer models of P(y_n | x_n), richer models of probability distributions; particle filter models, applied to tracking humans.

(KF) Distribution propagation: prediction from the previous time frame, noise added to that prediction, then a new measurement made at the next time frame. [Isard 1998]

Distribution propagation [Isard 1998]

Representing non-linear Distributions

Representing non-linear Distributions Unimodal parametric models fail to capture real-world densities…

Discretize by evenly sampling over the entire state space Tractable for 1-d problems like stereo, but not for high-dimensional problems.

Representing Distributions using Weighted Samples Rather than a parametric form, use a set of samples to represent a density:

Representing distributions using weighted samples: rather than a parametric form, use a set of samples to represent a density, specified by the sample positions and the probability mass at each sample. This gives us two knobs to adjust when representing a probability density by samples: the locations of the samples, and the probability weight on each sample.

Representing distributions using weighted samples, another picture [Isard 1998]

Sampled representation of a probability distribution. You can also think of this as a sum of Dirac delta functions, one at each sample location u^k and each of weight w^k: p(x) ≈ Σ_k w^k δ(x − u^k), with the weights summing to 1.

Tracking, in particle filter representation: the same graphical model as before (hidden states x1, x2, x3; observations y1, y2, y3), alternating a prediction step and an update step.

Particle filter Let’s apply this sampled probability density machinery to generalize the Kalman filtering framework. More general probability density representation than uni-modal Gaussian. Allows for general state dynamics, f(x) + noise

Sampled Prediction: given a sampled representation of P(x_{i-1} | y_0, …, y_{i-1}), run each sample through the dynamics (adding noise) to get samples of the joint P(x_i, x_{i-1} | y_0, …, y_{i-1}); then drop the x_{i-1} elements to marginalize, yielding a sampled representation of the prediction P(x_i | y_0, …, y_{i-1}).

Sampled Correction (Bayes rule), prior → posterior: reweight every sample with the likelihood of the observation given that sample, w^k ← w^k P(y_i | x_i = u^k), yielding a set of samples describing the probability distribution after the correction (update) step.
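A small sketch of this reweighting; the Gaussian likelihood and the numbers are made up for illustration:

```python
import numpy as np

def correct(samples, weights, y, likelihood):
    """Reweight every sample by the likelihood of the observation given that sample."""
    w = weights * likelihood(y, samples)
    return samples, w / w.sum()          # normalized sampled representation of the posterior

# Illustrative observation model: y = state + Gaussian noise with std 0.5
gauss = lambda y, x: np.exp(-0.5 * ((y - x) / 0.5) ** 2)
samples = np.array([0.0, 1.0, 2.0, 3.0])
weights = np.full(4, 0.25)
print(correct(samples, weights, y=2.2, likelihood=gauss)[1])   # mass shifts toward samples near 2.2
```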

Naïve PF Tracking. Start with samples from something simple (Gaussian), then repeat: Predict: run every particle through the dynamics function and add noise. Correct: take each particle from the prediction step and modify its old weight by multiplying by the new likelihood. But this doesn't work that well, because of sample impoverishment…

Sample impoverishment: 10 of the 100 particles, plotted over time, along with the true Kalman filter track and its variance.

Resample the prior. In a sampled density representation, the frequency of samples can be traded off against weight: make N draws with replacement from the original set of samples, using the weights as the probability of drawing each sample. These new, equally weighted samples are a representation of the same density.
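A sketch of this resampling operation (multinomial resampling; the example weights are invented):

```python
import numpy as np

def resample(samples, weights, rng=None):
    """Make N draws with replacement, using the weights as drawing probabilities.
    The resampled set represents the same density with uniform weights."""
    if rng is None:
        rng = np.random.default_rng(0)
    N = len(samples)
    idx = rng.choice(N, size=N, replace=True, p=weights)
    return samples[idx], np.full(N, 1.0 / N)

samples = np.array([0.0, 1.0, 2.0, 3.0])
weights = np.array([0.7, 0.2, 0.05, 0.05])
print(resample(samples, weights)[0])   # heavily weighted samples now appear many times
```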

Resampling concentrates samples

A practical particle filter with resampling
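A compact sketch of the whole predict / reweight / resample loop for a 1D random-walk state with noisy observations (all model parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
N, sigma_d, sigma_m = 500, 0.3, 0.5

# Simulate a hidden random walk and noisy observations of it.
xs, ys, x_true = [], [], 0.0
for _ in range(50):
    x_true += rng.normal(0.0, sigma_d)
    xs.append(x_true)
    ys.append(x_true + rng.normal(0.0, sigma_m))

particles = rng.normal(0.0, 1.0, N)       # start with samples from something simple (a Gaussian)
estimates = []
for y in ys:
    particles = particles + rng.normal(0.0, sigma_d, N)          # predict: dynamics + noise
    weights = np.exp(-0.5 * ((y - particles) / sigma_m) ** 2)    # correct: observation likelihood
    weights /= weights.sum()
    estimates.append(np.sum(weights * particles))                # posterior-mean estimate
    particles = particles[rng.choice(N, size=N, p=weights)]      # resample against impoverishment

print("mean abs error:", np.mean(np.abs(np.array(estimates) - np.array(xs))))
```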

Pictorial view [Isard 1998]

Animation of condensation algorithm [Isard 1998]

Applications: tracking hands, bodies, leaves. What might we expect? Reliable, robust, slow.

Contour tracking [Isard 1998]

Head tracking: pictures of the states represented by the top weighted particles, and the mean state. [Isard 1998]

Leaf tracking [Isard 1998]

Hand tracking [Isard 1998]

Go to hedvig slides

Desired operations with a probability density Expected value Marginalization Bayes rule

Computing an expectation using the sampled representation: E[f(x)] ≈ Σ_k w^k f(u^k), with the weights normalized to sum to 1.
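A tiny sketch of this weighted-average computation (the samples and weights are invented):

```python
import numpy as np

def expectation(f, samples, weights):
    """E[f(x)] under the sampled representation: weighted average of f at the sample locations."""
    w = weights / weights.sum()
    return np.sum(w * f(samples))

samples = np.array([0.0, 1.0, 2.0, 3.0])
weights = np.array([0.1, 0.4, 0.4, 0.1])
print(expectation(lambda x: x, samples, weights))        # the mean, here 1.5
print(expectation(lambda x: x ** 2, samples, weights))   # the second moment
```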

Marginalizing a sampled density If we have a sampled representation of a joint density and we wish to marginalize over one variable: we can simply ignore the corresponding components of the samples (!):
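A sketch of the same trick in code (illustrative joint samples of two variables u and v):

```python
import numpy as np

# Weighted samples of the joint density p(u, v).
joint_samples = np.array([[0.0, 5.0],
                          [1.0, 4.0],
                          [2.0, 6.0]])
weights = np.array([0.2, 0.5, 0.3])

# Marginalizing over v: keep the weights, simply ignore (drop) the v components.
u_samples = joint_samples[:, 0]
print(u_samples, weights)   # sampled representation of p(u)
```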

Marginalizing a sampled density

Sampled Bayes rule: to condition a sampled representation of p(U) on an observation V = v0, reweight each sample u^k by the likelihood, w^k ← w^k p(V = v0 | U = u^k); after normalizing, the samples represent p(U | V = v0) ∝ p(V = v0 | U) p(U).