Feature and object tracking algorithms for video tracking
Student: Oren Shevach
Instructor: Arie Nakhmani

Overview
Given a video sequence, the goal is to track the objects in the video and overcome occlusions. The camera can be stationary or moving.
Movement between frames:
▫ Translations: movement along the x and y axes.
▫ Affine transformations: rotations, scaling.

Overview
Project goal: study and understand two tracking algorithms:
▫ KLT: feature tracking
▫ GLOMO: object learning

Kanade-Lucas-Tomasi (KLT)
A basic feature tracking algorithm.
Good feature: consider small rectangular windows all over the image; a good feature is a window that can be tracked easily through a sequence of images.
Feature movement:
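In the standard KLT formulation, the movement of a feature window between the current image I and the next image J is modeled either as a pure translation or, more generally, as an affine warp:

\[
J(\mathbf{x}) = I(\mathbf{x} - \mathbf{d}) \quad \text{(translation, } \mathbf{d} = (d_x, d_y)^\top\text{)},
\qquad
J(A\mathbf{x} + \mathbf{d}) = I(\mathbf{x}), \; A = \mathbf{1} + D \quad \text{(affine)},
\]

where D is a 2x2 deformation matrix that captures rotation and scaling.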

Kanade-Lucas-Tomasi (KLT)
Tracking goal: find the deformation and translation parameters that minimize the dissimilarity between the current image and the next image in the sequence.
To find the minimum, we set the first derivative of the dissimilarity to zero.
The next image is approximated by a Taylor expansion, assuming small movements, in terms of the image gradient vector.
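In the standard KLT notation (W is the feature window, w(x) a weighting function, and g = ∇J the image gradient vector), the dissimilarity and its small-motion linearization are:

\[
\epsilon(\mathbf{d}) = \iint_W \bigl[J(\mathbf{x} + \mathbf{d}) - I(\mathbf{x})\bigr]^2 w(\mathbf{x})\, dA,
\qquad
J(\mathbf{x} + \mathbf{d}) \approx J(\mathbf{x}) + \mathbf{g}(\mathbf{x})^\top \mathbf{d}.
\]

Setting \(\partial\epsilon/\partial\mathbf{d} = 0\) and substituting the linearization gives the linear system on the next slide.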

Kanade-Lucas-Tomasi (KLT)
We obtain a linear system to solve (spelled out below), in which:
▫ the system matrix consists of image gradients and pixel locations,
▫ the unknown vector holds the movement parameters,
▫ the right-hand side is an error vector built from the image difference.
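For the pure-translation case the system is 2x2; for the affine model, the four entries of the deformation D and the two entries of the translation d stack into a six-vector z, and the same derivation gives the 6x6 system T z = a used on the next slide (the standard Shi-Tomasi form):

\[
Z\,\mathbf{d} = \mathbf{e},
\qquad
Z = \iint_W \mathbf{g}\,\mathbf{g}^\top w \, dA,
\qquad
\mathbf{e} = \iint_W \bigl[I(\mathbf{x}) - J(\mathbf{x})\bigr]\,\mathbf{g}\, w \, dA,
\]

where T depends on the gradients and the pixel coordinates, z holds the movement parameters, and a is the error vector.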

Kanade-Lucas-Tomasi (KLT)
The solution is iterative (see the example below):
▫ Initialization of the motion parameters.
▫ Iteration step, repeated until convergence:
 - calculate the matrices T and a,
 - solve T z = a and read the motion parameters from z,
 - update the window position and deformation accordingly.
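As an illustration, a minimal pyramidal KLT tracker built with OpenCV (a sketch, not the project's code; the video file name is a placeholder):

```python
import cv2

# "input.avi" is a placeholder file name.
cap = cv2.VideoCapture("input.avi")
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

# Shi-Tomasi corners: windows whose gradient matrix Z has two large eigenvalues.
pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200, qualityLevel=0.01,
                              minDistance=7, blockSize=7)

lk_params = dict(winSize=(15, 15), maxLevel=2,  # pyramidal, iterative solution
                 criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 0.03))

while True:
    ok, frame = cap.read()
    if not ok or pts is None or len(pts) == 0:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Solve the KLT system for every window; status == 1 means the feature was tracked.
    new_pts, status, err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None, **lk_params)
    pts = new_pts[status.ravel() == 1].reshape(-1, 1, 2)
    prev_gray = gray
    print("tracked features:", len(pts))

cap.release()
```

Note that this tracker, like the one described on these slides, does not recover features lost to occlusion; they simply drop out of the tracked set.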

KLT-Results

GLOMO - Greedy Unsupervised Learning Of Multiple Objects
Tracks all the objects in the sequence as well as the background.
Algorithm output (for one object):
▫ The object
▫ Object mask
▫ Object transformation
▫ Object variance
▫ Background
▫ Background transformation
▫ Background variance

Tracking objects from images
Given a sequence of frames, in each frame the object undergoes a transformation and may be noisy. We assume there are J possible transformations.
The object parameters are tracked using EM, an iterative procedure for computing the maximum-likelihood estimate in the presence of missing or hidden data.
Object density:
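A plausible form of the object density referred to above, assuming the usual transformation-mixture Gaussian model over the J candidate transformations (the exact GLOMO notation may differ): a frame x is one of the transformations T_j applied to the object appearance μ, plus Gaussian pixel noise with variance Φ:

\[
p(\mathbf{x}) = \sum_{j=1}^{J} P(T_j)\, \mathcal{N}\!\bigl(\mathbf{x};\, T_j\boldsymbol{\mu},\, \boldsymbol{\Phi}\bigr).
\]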

Tracking objects from images
EM example (for one object with a static background):
▫ Expectation: given the current parameters, compute a weight (posterior probability) for each possible transformation.
▫ Maximization: update the parameters using these weights (a toy sketch follows).
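The following is a toy, runnable EM sketch for a single object over a discrete set of integer translations on a black background; it only illustrates the expectation/maximization structure described above (mask, prior, and variance updates are omitted), and is not the GLOMO implementation:

```python
import numpy as np

def em_translation_tracking(frames, shifts, n_iter=20, var=0.1):
    """Toy EM for one object over a discrete set of integer (dy, dx) shifts.
    Each frame is assumed to be the appearance `mu` shifted by one of the
    candidate shifts plus Gaussian noise with variance `var`. The shift prior
    and the noise variance are kept fixed for simplicity."""
    mu = frames[0].astype(float).copy()              # initial object appearance
    prior = np.full(len(shifts), 1.0 / len(shifts))
    for _ in range(n_iter):
        new_mu = np.zeros_like(mu)
        for x in frames:
            # E-step: responsibility of each candidate shift for this frame.
            log_r = np.array([np.log(prior[j])
                              - 0.5 * np.sum((x - np.roll(mu, s, axis=(0, 1))) ** 2) / var
                              for j, s in enumerate(shifts)])
            r = np.exp(log_r - log_r.max())
            r /= r.sum()
            # M-step accumulation: un-shift the frame, weighted by its responsibility.
            for j, s in enumerate(shifts):
                new_mu += r[j] * np.roll(x, (-s[0], -s[1]), axis=(0, 1))
        mu = new_mu / len(frames)                    # M-step: update the appearance
    return mu

# Toy usage: a bright square drifting one pixel per frame on a black background.
base = np.zeros((32, 32))
base[8:16, 8:16] = 1.0
frames = [np.roll(base, (t, t), axis=(0, 1)) for t in range(5)]
shifts = [(dy, dx) for dy in range(-5, 6) for dx in range(-5, 6)]
mean_image = em_translation_tracking(frames, shifts)
```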

Tracking objects from images - problems
▫ The background, the objects, and all the parameters are found together in the EM iterations.
▫ For more than one object and a moving background, the complexity is too high.
▫ The probability model does not handle object occlusion.
A more efficient approach is needed!

The Algorithm - GLOMO
▫ First find the background, and then each object separately.
▫ New probability density model: each pixel belongs either to the object, to the background, or to a uniform component covering everything else (a sketch follows below).
▫ Tracking is done on the relevant pixels only, to speed up the tracking.
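One way to write the per-pixel density just described, with Gaussian object and background components plus a uniform term for everything else (illustrative notation, not necessarily GLOMO's exact model); here f_i and b_i are the transformed object and background appearances at pixel i, and the π's are mixing weights:

\[
p(x_i) = \pi_f\, \mathcal{N}(x_i;\, f_i, \sigma_f^2)
       + \pi_b\, \mathcal{N}(x_i;\, b_i, \sigma_b^2)
       + \pi_u\, U(x_i).
\]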

The Algorithm - GLOMO
Algorithm steps:
▫ The user defines the number of objects to find and the number of EM iterations.
▫ Find the background and its transformations, assuming all the object masks are zero.
▫ Define a vector Z that marks the relevant pixels and initialize it (initially every pixel is considered relevant).
▫ For each object, find the object parameters and transformations by applying EM for that object.

The Algorithm - GLOMO
▫ After tracking an object, update the vector Z with that object's pixels, so that they are not reused when tracking the next object.

GLOMO-Results
Example: reconstruction (original ordering vs. new ordering).

GLOMO-Results
Example: moving background.

GLOMO-Results
Example: changing the number of frames (20 frames vs. 60 frames).

GLOMO-Results
Example: changing the number of frames (110 frames).

GLOMO-Results
Example: changing the number of EM iterations for 20 frames (70 iterations vs. 300 iterations).

Conclusions
▫ Both algorithms worked well on high-quality pictures with large, well-defined objects.
▫ On low-quality pictures with less defined objects, GLOMO did not recognize the objects very well, and KLT lost all the features very quickly.
▫ Both algorithms handled a moving camera and a changing background well.
▫ KLT does not recover from occlusions, while GLOMO handles them very well.

Conclusions
▫ GLOMO does not work well with only 20 frames, but works well with more than 100 frames.
▫ GLOMO cannot run in real-time systems, while KLT can.

Kanade-Lucas-Tomasi (KLT)
Finding good features:
▫ For every possible window in the current image, compute the eigenvalues of the gradient matrix Z.
▫ Find the maximum and minimum eigenvalues over all windows and set the threshold as the midpoint between them.
▫ For every window, check that the eigenvalues of Z satisfy the threshold condition, i.e. that the smaller eigenvalue exceeds the threshold (a sketch follows below).
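As an illustration, a small NumPy/OpenCV sketch that computes the smaller eigenvalue of the gradient matrix Z for every window and applies the mid-point threshold described above (a sketch, not the project's code; the image file name is a placeholder):

```python
import numpy as np
import cv2

def min_eigenvalue_map(gray, win=7):
    """Smaller eigenvalue of Z = sum_W g g^T over a win x win window around each pixel."""
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)   # g = (gx, gy): image gradients
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    # Window sums of the entries of g g^T (unnormalized box filter = sum over the window).
    zxx = cv2.boxFilter(gx * gx, -1, (win, win), normalize=False)
    zxy = cv2.boxFilter(gx * gy, -1, (win, win), normalize=False)
    zyy = cv2.boxFilter(gy * gy, -1, (win, win), normalize=False)
    # Eigenvalues of a 2x2 symmetric matrix: tr/2 +- sqrt((tr/2)^2 - det); keep the smaller one.
    half_tr = 0.5 * (zxx + zyy)
    det = zxx * zyy - zxy * zxy
    return half_tr - np.sqrt(np.maximum(half_tr ** 2 - det, 0.0))

gray = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE).astype(np.float64)
lam_min = min_eigenvalue_map(gray)
thr = 0.5 * (lam_min.min() + lam_min.max())   # mid-point threshold from the slide
good_windows = lam_min > thr                  # candidate "good features"
```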

Tracking objects from images
EM example (for one object with a static background):
▫ Expectation: find the posterior probability that each pixel is part of the object, together with the transformation weights.
▫ Maximization: update the parameters.