Flexible templates, density estimation, mean shift


Flexible templates, density estimation, mean shift
CS664 Lecture 23, Tuesday 11/16/04
Some slides care of Dan Huttenlocher, Dorin Comaniciu

Administrivia
- Quiz 4 on Thursday; coverage through last week
- Quiz 5 on last Tuesday of classes (11/30)
- 1-page paper writeup due on 11/30; hand in via CMS
  - List the paper's assumptions and possible applications; what is good and bad about it?

DT as lower envelope
(figure: the distance transform of a set of points is the lower envelope of a parabola rooted at each point)
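The lower-envelope picture leads directly to a linear-time 1D squared-distance transform (the Felzenszwalb–Huttenlocher algorithm). A minimal sketch, where the function name `dt1d` and the use of a large finite sentinel instead of infinity are my own choices:

```python
import math

def dt1d(f):
    """1D squared-Euclidean distance transform of a sampled cost f:
    returns d with d[p] = min_q ( (p - q)^2 + f[q] ), computed by
    maintaining the lower envelope of the parabolas rooted at each q."""
    n = len(f)
    d = [0.0] * n
    v = [0] * n            # grid locations of parabolas in the envelope
    z = [0.0] * (n + 1)    # boundaries between adjacent parabolas
    k = 0                  # index of rightmost parabola in the envelope
    z[0], z[1] = -math.inf, math.inf
    for q in range(1, n):
        while True:
            # intersection of the parabola from q with the one from v[k]
            s = ((f[q] + q * q) - (f[v[k]] + v[k] * v[k])) / (2 * q - 2 * v[k])
            if s <= z[k]:
                k -= 1      # parabola v[k] is hidden; pop it
            else:
                break
        k += 1
        v[k] = q
        z[k] = s
        z[k + 1] = math.inf
    k = 0
    for p in range(n):      # read off the envelope left to right
        while z[k + 1] < p:
            k += 1
        d[p] = (p - v[k]) ** 2 + f[v[k]]
    return d
```

Using a large finite value (e.g. 1e10) rather than `math.inf` for "no point here" avoids inf − inf in the intersection formula.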

Robust Hausdorff distance
- Standard (max-based) distance is not robust to outliers
- Can use a rank fraction of the points, or a truncated distance
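The rank-fraction idea can be sketched as follows: replace the max over nearest-neighbor distances with a quantile, so the worst-matched points are ignored. Function name and the brute-force nearest-neighbor search are mine; a real implementation would use a distance transform or a spatial index:

```python
import math

def partial_hausdorff(A, B, frac=0.75):
    """Directed partial Hausdorff distance from point set A to B:
    the frac-quantile (instead of the max) of each A-point's distance
    to its nearest neighbour in B.  frac=1.0 gives the classical
    directed Hausdorff distance; frac < 1 discards the worst outliers."""
    nearest = sorted(min(math.dist(a, b) for b in B) for a in A)
    k = min(len(nearest) - 1, max(0, math.ceil(frac * len(nearest)) - 1))
    return nearest[k]
```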

DT-based recognition
- Better than explicit search for correspondence
  - Features can merge or split, which makes correspondence a computational nightmare
- Outlier tolerance is important
- Should take advantage of the spatial coherence of unmatched points

Flexible models
- Classical recognition doesn't handle non-rigid motions at all well
- Pictorial structures
  - Parts + springs
  - Appearance models

Model definition
- Parts are V = {v1, …, vn}, graph vertices
- Configuration L = (l1, …, ln)
  - Specifies the locations of the parts
- Appearance parameters A = (a1, …, an)
  - Model for each part
- Edge eij = (vi, vj) ∈ E for connected parts
  - Explicit dependency between part locations li, lj
- Connection parameters C = {cij | eij ∈ E}
  - Spring parameters for each pair of connected parts

Flexible Template Algorithms
- Difficulty depends on the structure of the graph
  - General case: exponential time
- Consider the special case in which parts translate with respect to a common origin
  - E.g., useful for faces
  - Parts V = {v1, …, vn}
  - Distinguished central part v1
  - Spring ci1 connecting vi to v1
  - Quadratic cost for each spring

Central Part Energy Function
- Location L = (l1, …, ln) specifies where each part is positioned in the image
- Best location: min_L Σi ( mi(li) + di(li, l1) )
- Part cost mi(li)
  - Measures the degree of mismatch of appearance ai when part vi is placed at location li
- Deformation cost di(li, l1)
  - Spring cost ci1 of part vi measured with respect to the central part v1
  - Note the deformation cost is zero for part v1 (wrt itself)

Consider Case of 2 Parts
- min_{l1,l2} ( m1(l1) + m2(l2) + ‖l2 − T2(l1)‖² )
  - Here T2(l1) transforms l1 to the ideal location wrt l2 (an offset)
- = min_{l1} ( m1(l1) + min_{l2} ( m2(l2) + ‖l2 − T2(l1)‖² ) )
- But min_x ( f(x) + ‖x − y‖² ) is a distance transform
- = min_{l1} ( m1(l1) + D_{m2}(T2(l1)) )
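The two-part decomposition can be sketched numerically. All the costs and the offset below are made-up toy values; the point is only that minimizing over the root location with a distance transform of the second part's cost gives the same answer as the full joint minimization:

```python
# hypothetical 1D appearance costs over 8 candidate locations
m1 = [5, 1, 6, 4, 7, 3, 8, 2]    # root part
m2 = [4, 9, 0, 5, 2, 8, 1, 6]    # second part
offset = 2                        # ideal: part 2 sits 2 to the right of root

def D(t):
    # distance transform of m2 evaluated at t:
    # min over l2 of ( m2[l2] + squared displacement from t )
    return min(m2[l2] + (l2 - t) ** 2 for l2 in range(len(m2)))

# best joint cost: minimize over root l1 of m1(l1) + D_{m2}(T2(l1))
best = min(m1[l1] + D(l1 + offset) for l1 in range(len(m1)))
```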

Intuition: spring cost
(figure: cost of each placement vs. location on a 1–8 axis, for root and part; the rest position of the part is 2 to the right of the root; some placements are never chosen)

Lower envelope tells you, for any root location, the best part location!
(figure: cost of part locations 6 and 7 as functions of root location, on a 1–8 axis; their lower envelope gives the best part location for each root location)

Application to Face Detection
- Five parts: eyes, tip of nose, sides of mouth
- Each part is a local image patch
  - Represented as responses to oriented filters
  - 27 filters: 3 scales and 9 orientations
  - Learn coefficients from labeled examples
- Parts translate with respect to the central part, the tip of the nose

Face Detection Results
- Runs at several frames per second
  - Compute the 27 oriented filters (scales × orientations) for each part cost mi
  - Distance transform mi for each part other than the central one (nose tip)
  - Find the maximum of the sum for the detected location

General trees via DP
- Want to minimize Σ_{vj∈V} mj(lj) + Σ_{eij∈E} dij(li, lj)
- Can express this as a function Bj(li)
  - Cost of the best location of vj given location li of its parent vi
- Recursion in terms of the children Cj of vj:
  - Bj(li) = min_{lj} ( mj(lj) + dij(li, lj) + Σ_{vc∈Cj} Bc(lj) )
  - For a leaf node there are no children, so the last term is empty
  - For the root node there is no parent, so the second term is empty
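The tree recursion can be sketched directly, here as the plain O(ns²) version on a three-part toy model (the part costs, tree, and quadratic spring with a rest offset are all invented for illustration):

```python
# toy pictorial-structures model: parts live at integer locations 0..S-1
S = 6
children = {0: [1, 2], 1: [], 2: []}     # part 0 is the root
m = {0: [3, 1, 4, 1, 5, 9],              # appearance cost m_j(l_j)
     1: [2, 6, 5, 3, 5, 8],
     2: [9, 7, 9, 3, 2, 3]}
rest = {1: 1, 2: -1}                     # rest offset of child wrt parent

def d(j, li, lj):
    # quadratic spring cost between parent at li and child j at lj
    return (lj - li - rest[j]) ** 2

def B(j, li):
    # cost of the best placement of part j (and its whole subtree),
    # given that its parent sits at location li
    return min(m[j][lj] + d(j, li, lj)
               + sum(B(c, lj) for c in children[j])
               for lj in range(S))

# root has no parent: appearance cost plus its children's B terms
best = min(m[0][l0] + sum(B(c, l0) for c in children[0])
           for l0 in range(S))
```

Replacing the inner `min` over `lj` with a distance transform of `m[j] + children's B` is exactly the step that drops the cost from O(ns²) to O(ns).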

Further optimization via DT
- This recurrence can be solved in time O(ns²) for s locations and n parts
  - Still not practical, since s is in the millions
- Couple with the distance transform method for finding the best pairwise locations in linear time
- Resulting method is O(ns)!

Example: Finding People
- Ten-part 2D model
  - Rectangular regions for each part
  - Translation, rotation, scaling of parts
  - Configurations may allow parts to overlap
(figure: image data, likely configurations, best match)

More examples

Probability: outcomes, events
- Outcome: finest-grained description of a situation (e.g., the coin-flip sequence H,T,H)
- Event: a set of outcomes (e.g., {H,T,H; T,T,H})
- Probability: a measure of (relative) event size
- Conditional probability relative to a containing event E′ is written Pr(E|E′)

Random variables
- A function from outcomes to a domain (ℤ or ℝ)
- Defined by a cumulative distribution function (CDF)
- The derivative of the distribution is the density (PDF)
(figure: outcomes such as H,T,H and T,H,H mapped into the event X=2)

Discrete vs continuous RVs
- Discrete case: easier to think about the density (called a PMF)
  - For each value of the RV, a non-negative number
  - Summing to 1
- Continuous case: often need to think about the distribution
  - The PDF (density) can be larger than 1 (e.g., the uniform density on [0, ½] is 2)
  - The integral over a range is most often what's useful

Density estimation
- Given some data drawn from an unknown distribution, how can you compute the density?
- If you know it is Gaussian, there are only 2 parameters: mean and variance
  - You can compute the mean and variance of the data!
  - These are provably good estimates
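The Gaussian case can be sketched in a few lines: draw a sample from a known normal, then recover the parameters by the maximum-likelihood estimates (sample mean and sample variance). The true parameters 5.0 and 2.0 are arbitrary:

```python
import math
import random

random.seed(0)
# sample from a Gaussian with known mean 5.0 and std 2.0
data = [random.gauss(5.0, 2.0) for _ in range(100_000)]

mean = sum(data) / len(data)                          # ML estimate of mu
var = sum((x - mean) ** 2 for x in data) / len(data)  # ML estimate of sigma^2
```

With 100k samples the estimates land within a few hundredths of the true values, which is the "provably good" behavior in action.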

Density estimation
- Parametric methods
  - ML estimators (Gaussians)
  - Robust estimators
- Non-parametric methods
  - Kernel estimators

Mean shift
- Non-parametric method to compute the nearest mode of a distribution
- Hill-climbing in terms of density
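The hill-climbing can be sketched in 1D with a Gaussian kernel: repeatedly move the current point to the kernel-weighted mean of the data, which ascends the kernel density estimate toward the nearest mode. A fixed iteration count stands in for a proper convergence test:

```python
import math

def mean_shift(x, data, h=1.0, iters=50):
    """Hill-climb from x to the nearest mode of a Gaussian KDE of `data`:
    repeatedly replace x with the kernel-weighted mean of the points."""
    for _ in range(iters):
        w = [math.exp(-0.5 * ((x - xi) / h) ** 2) for xi in data]
        x = sum(wi * xi for wi, xi in zip(w, data)) / sum(w)
    return x
```

Starting points in different basins converge to different modes, which is what makes mean shift useful for clustering and segmentation.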

Applications of mean shift
- Density modes are useful!
- Segmentation
  - Color, motion, etc.
- Tracking
- Edge-preserving smoothing
- See the papers for more examples

Image and histogram

Example

Segmentation example