
Tracking Features with Large Motion

Abstract Problem: When frame-to-frame motion is too large, the KLT feature tracker fails. Solution: Estimate the motion at the deepest pyramid level by matching the characteristic curves of the consecutive images.

Introduction Feature tracking is an important issue in many computer vision applications. To allow the tracker to handle large motion, a pyramidal implementation of the KLT feature tracker is commonly used.

We propose a method that extends the pyramidal KLT feature tracker to the case where I and J, two consecutive images in an image sequence, are taken from widely different viewpoints.

Sum of squared differences (SSD) Given a feature point on I, the goal is to find its correspondence on J by minimizing ε(d) = Σ_{x ∈ W} [ I(x) − J(x + d) ]², where I(x) and J(x) are the intensities at image point x, d is the displacement vector, and W is a small integration window centered at the feature point.
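As an illustration, the SSD criterion above can be sketched in NumPy for integer displacements (the function name, window radius, and the integer-only `d` are my own simplifications; the tracker itself works with sub-pixel displacements):

```python
import numpy as np

def ssd(I, J, p, d, w=2):
    """SSD between a (2w+1)x(2w+1) window centered at feature point
    p in image I and the window displaced by d in image J.
    p and d are (row, col) integer pairs for simplicity."""
    r, c = p
    dr, dc = d
    win_I = I[r - w:r + w + 1, c - w:c + w + 1].astype(float)
    win_J = J[r + dr - w:r + dr + w + 1,
              c + dc - w:c + dc + w + 1].astype(float)
    return float(np.sum((win_I - win_J) ** 2))
```

The correct displacement drives the SSD to zero for a pure translation, which is what the minimization exploits.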

KLT feature tracker Linearizing J(x + d) ≈ J(x) + ∇J(x)·d and setting the derivative of ε(d) to zero, we can use the linear system Z d = e to find d, where Z = Σ_{x ∈ W} ∇I(x) ∇I(x)ᵀ and e = Σ_{x ∈ W} ∇I(x) [ I(x) − J(x) ].
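A single step of this linear system can be sketched as follows (a hedged sketch: `klt_step` and its defaults are illustrative, and `np.gradient` stands in for whatever derivative filter the original implementation uses):

```python
import numpy as np

def klt_step(I, J, p, w=2):
    """One Lucas-Kanade step: solve Z d = e for the displacement d
    that minimizes the linearized SSD over the window around p.
    Z = sum of grad*grad^T, e = sum of grad*(I - J)."""
    r, c = p
    gy, gx = np.gradient(I.astype(float))   # gradients along rows, cols
    sl = (slice(r - w, r + w + 1), slice(c - w, c + w + 1))
    gxw, gyw = gx[sl].ravel(), gy[sl].ravel()
    diff = (I[sl].astype(float) - J[sl].astype(float)).ravel()
    Z = np.array([[np.sum(gxw * gxw), np.sum(gxw * gyw)],
                  [np.sum(gxw * gyw), np.sum(gyw * gyw)]])
    e = np.array([np.sum(gxw * diff), np.sum(gyw * diff)])
    return np.linalg.solve(Z, e)   # d = (dx, dy)
```

Because of the linearization, a single step is only accurate for small motion; this is exactly the assumption the pyramid is there to restore.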

Automatically selecting good features is also an important issue. Let λ1 and λ2 be the eigenvalues of Z; if min(λ1, λ2) > λth for a predefined threshold λth, then the image point is a good feature.
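The eigenvalue test can be sketched as below (the threshold value and function name are arbitrary; the point is that a large smallest eigenvalue of Z makes the system Z d = e well conditioned):

```python
import numpy as np

def is_good_feature(I, p, w=2, lam_thresh=0.01):
    """Shi-Tomasi criterion: p is a good feature when the smaller
    eigenvalue of the gradient matrix Z exceeds a threshold."""
    r, c = p
    gy, gx = np.gradient(I.astype(float))
    sl = (slice(r - w, r + w + 1), slice(c - w, c + w + 1))
    gxw, gyw = gx[sl].ravel(), gy[sl].ravel()
    Z = np.array([[np.sum(gxw * gxw), np.sum(gxw * gyw)],
                  [np.sum(gxw * gyw), np.sum(gyw * gyw)]])
    lam_min = np.linalg.eigvalsh(Z)[0]   # eigenvalues in ascending order
    return lam_min > lam_thresh
```

A corner passes the test (both eigenvalues large), while a flat region or a straight edge fails (at least one eigenvalue near zero).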

Pyramidal implementation of the KLT feature tracker Let I^0 be the original image I. I^1 is the downsampling of I^0, I^2 the downsampling of I^1, and so on up to I^Lm, where Lm is the height of the image pyramid. Similarly, we obtain the images J^0, …, J^Lm for image J. After constructing the image pyramids of I and J, we apply the pyramidal KLT feature tracker.
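A pyramid of this shape can be built by 2×2 block averaging (a simple stand-in for the low-pass filter plus subsampling used in practice; the helper name is mine):

```python
import numpy as np

def build_pyramid(I, Lm):
    """Build an image pyramid I^0 ... I^Lm, halving each level by
    2x2 block averaging."""
    pyr = [I.astype(float)]
    for _ in range(Lm):
        prev = pyr[-1]
        h, w = prev.shape[0] // 2 * 2, prev.shape[1] // 2 * 2  # even crop
        blocks = prev[:h, :w].reshape(h // 2, 2, w // 2, 2)
        pyr.append(blocks.mean(axis=(1, 3)))
    return pyr
```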

First step: compute d^Lm, the displacement vector at the deepest level. Second step: propagate the estimate one level up, d^(l−1) = s · d^l + (residual displacement computed by KLT at level l−1), where s is the downsampling factor. Repeat the second step until the final estimate d = d^0 is obtained.
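The coarse-to-fine propagation can be sketched as follows; this is a hedged sketch in which a small integer SSD search stands in for the per-level KLT refinement, and the downsampling factor s = 2 is assumed:

```python
import numpy as np

def track_pyramidal(pyr_I, pyr_J, p, search=1, w=2):
    """Coarse-to-fine propagation: estimate the displacement at the
    deepest level, then at each finer level scale it by s = 2 and
    refine with a small local search (an integer SSD search stands
    in for the KLT step here)."""
    d = np.zeros(2)
    for l in range(len(pyr_I) - 1, -1, -1):          # deepest -> finest
        I, J = pyr_I[l], pyr_J[l]
        r, c = int(round(p[0] / 2 ** l)), int(round(p[1] / 2 ** l))
        best, best_ssd = np.zeros(2), np.inf
        for dr in range(-search, search + 1):        # refine the residual
            for dc in range(-search, search + 1):
                rr, cc = r + int(d[0]) + dr, c + int(d[1]) + dc
                if (w <= rr < J.shape[0] - w and w <= cc < J.shape[1] - w
                        and w <= r < I.shape[0] - w and w <= c < I.shape[1] - w):
                    e = np.sum((I[r - w:r + w + 1, c - w:c + w + 1]
                                - J[rr - w:rr + w + 1, cc - w:cc + w + 1]) ** 2)
                    if e < best_ssd:
                        best_ssd, best = e, np.array([dr, dc], float)
        d = d + best
        if l > 0:
            d = 2 * d                                # d^(l-1) = s * d^l
    return d
```

Note how a ±1 pixel search at each of Lm + 1 levels covers a much larger motion at full resolution, which is the whole point of the pyramid.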

Accommodating very large motion There are two problems. First, in practice Lm is a small number; otherwise the image size of I^Lm will be too small to carry enough details for each feature. As a result, the tracker fails when the magnitude of d*, the true displacement vector for a feature point, exceeds the range a pyramid of height Lm can handle.

Second, the feature point dissolves when the height of the image pyramid is too large.

For those cases, our method provides a solution by computing coarse motion estimates d̂x and d̂y at the deepest pyramid level. The effect of computing d̂x and d̂y is to provide a coarse motion at the deepest level that makes the residual motion small enough to satisfy the small-motion assumption.

Characteristic curves Define C_I^x as the characteristic curve for the x-axis computed from image I, and C_I^y as the characteristic curve for the y-axis from image I.

After the four curves C_I^x, C_I^y, C_J^x, and C_J^y have been computed, we compute d̂x by matching the two x-axis characteristic curves; d̂y can be computed in the same manner.
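One plausible realization of this step, assuming the characteristic curves are per-axis intensity projections (one common choice; the function names and the SSD-based matching are my own sketch, not necessarily the slides' exact formulation):

```python
import numpy as np

def characteristic_curves(I):
    """Per-axis characteristic curves, assumed here to be intensity
    projections: column sums for the x-axis, row sums for the y-axis."""
    I = I.astype(float)
    return I.sum(axis=0), I.sum(axis=1)   # (x-curve, y-curve)

def match_curves(cI, cJ, max_shift):
    """Find the displacement d minimizing the SSD between cJ(x) and
    cI(x - d) over the overlapping part; this gives the coarse
    motion along one axis."""
    n = len(cI)
    best, best_err = 0, np.inf
    for d in range(-max_shift, max_shift + 1):
        if d >= 0:
            a, b = cJ[d:], cI[:n - d]
        else:
            a, b = cJ[:n + d], cI[-d:]
        err = np.mean((a - b) ** 2)
        if err < best_err:
            best_err, best = err, d
    return best
```

Matching two 1-D curves is far cheaper than a 2-D search, which is why the coarse motion is recovered per axis.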

Motion estimation at the deepest pyramid level The goal of the motion estimator is to find the best labeling, i.e., the assignment of a label to each element of the curve.

Notation: Ω is the domain of the curve; f_p is the displacement label assigned to element p; f_p = λo means element p is considered to be occluded; P is the ordered set of elements of Ω. A discontinuity penalty is used to penalize labelings in which neighboring elements receive discontinuous displacements. An occlusion penalty serves as a threshold that affects whether the motion estimator should assign the occlusion label λo to an element.

After finding the optimal labeling, the motion estimator computes the coarse estimate d̂x from the assigned displacements. For those elements labeled as occluded, we compute their motion estimates by interpolating the displacements of the elements in the neighborhood.

Feature tracking with pre-checking Consider a feature point x. x is declared a lost feature when one of the pre-checking conditions is satisfied; the tracker then skips it, so no computational power is wasted on features that cannot be tracked.

Results and comparisons Two image sequences are tested here. Show: 71 frames, 320 × 240 pixels. Building: 73 frames, 480 × 320 pixels.

Birchfield’s implementation

Comparison of the number of tracked features

Comparison of the running time