Detecting and Tracking Moving Objects for Video Surveillance
Isaac Cohen and Gérard Medioni, University of Southern California

Their application sounds familiar:
- Video surveillance
- Sensors with pan, tilt, and zoom
- Sensors mounted on moving airborne platforms
- Requirement for detection and tracking of moving objects and the relationships among their trajectories
- Requirement for a high-level description of the whole video sequence

Approach
- First find optical flow and perform motion compensation to stabilize the sequence.
- Find large numbers of moving regions.
- Use the residual flow field and its normal component to detect errors.
- Define an attributed graph whose nodes are the detected regions and whose edges are possible matches between two regions detected in two different frames.
- Use the graph as a dynamic template for tracking moving objects.
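A minimal sketch of how these stages might fit together per frame; every function name below is a hypothetical placeholder for a step described above, not the authors' code.

```python
# Hypothetical per-frame pipeline; each helper stands in for a stage from the slide above.
def process_frame(graph, reference, frame, frame_idx):
    theta = estimate_transform(reference, frame)          # stabilization (least squares)
    residual = residual_normal_flow(reference, frame, theta)
    regions = detect_moving_regions(residual)             # threshold + group pixels
    graph.add_frame(frame_idx, regions)                   # nodes = detected regions
    graph.link_to_previous(frame_idx)                     # edges = candidate matches
    return graph
```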

Their Idea
Detection after stabilization doesn't work well. So they integrate detection into the stabilization algorithm by locating regions of the image where residual motion occurs, using the normal component of the optical flow field.

What is "Normal Flow"?
- The optical flow equation constrains the image velocity in the direction of the local image gradient, but not the tangential velocity.
- Normal flow is the component of the image velocity along the image gradient.
- It is computed from both the spatial image gradients and the temporal gradients of the stabilized sequence.
- Its amplitude is large near moving regions and near zero in stationary regions.
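A minimal NumPy sketch of the standard normal-flow magnitude for a pair of already-stabilized grayscale frames; this illustrates the definition above, not the authors' exact implementation.

```python
import numpy as np

def normal_flow_magnitude(I0, I1, eps=1e-3):
    """|u_n| = |I_t| / ||grad I|| for two stabilized grayscale frames."""
    I0 = I0.astype(np.float64)
    I1 = I1.astype(np.float64)
    It = I1 - I0                          # temporal gradient
    gy, gx = np.gradient(I0)              # spatial gradients of the reference frame
    grad_mag = np.sqrt(gx**2 + gy**2)
    return np.abs(It) / (grad_mag + eps)  # large near moving regions, ~0 elsewhere
```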

Computation of Normal Flow
Let $\Theta_{ij}$ denote the warping of image $i$ to reference frame $j$; the stabilized image sequence is defined by $I_i(\Theta_{ij})$. Given a reference image $I_0$ and a target image $I_1$, image stabilization consists of registering the two images and computing the geometric transform $\Theta$ that warps $I_1$ so it aligns with $I_0$.
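As a sketch, the warp $\Theta$ can be represented by a 2×3 affine matrix and applied with OpenCV to build the stabilized sequence; the affine model is an assumption here, one common choice rather than necessarily the exact model used in the paper.

```python
import cv2

def stabilize_frame(I1, theta, reference_shape):
    """Warp target frame I1 onto the grid of the reference frame I0.

    theta: 2x3 affine matrix (assumed model) mapping I1 into the reference frame.
    reference_shape: (height, width) of the reference image.
    """
    h, w = reference_shape
    return cv2.warpAffine(I1, theta, (w, h), flags=cv2.INTER_LINEAR)
```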

Parameter Estimation of the Geometric Transform
- Estimate the transform parameters by minimizing a least-squares registration error.
- Detect and remove outliers through an iterative process.
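A hedged sketch of that idea: fit an affine transform to point correspondences by linear least squares, then iteratively discard correspondences with large residuals and re-fit. The correspondence source and the affine model are illustrative assumptions.

```python
import numpy as np

def fit_affine_robust(src, dst, iters=5, thresh=2.0):
    """src, dst: (N, 2) arrays of corresponding points. Returns a 2x3 affine matrix."""
    src = np.asarray(src, dtype=np.float64)
    dst = np.asarray(dst, dtype=np.float64)
    A_full = np.hstack([src, np.ones((len(src), 1))])     # rows [x, y, 1]
    keep = np.ones(len(src), dtype=bool)
    M = None
    for _ in range(iters):
        if keep.sum() < 3:                                # need >= 3 points for an affine fit
            break
        M, *_ = np.linalg.lstsq(A_full[keep], dst[keep], rcond=None)   # (3, 2)
        residuals = np.linalg.norm(A_full @ M - dst, axis=1)
        keep = residuals < thresh                         # drop outliers, then re-fit
    return M.T                                            # 2x3: [x', y'] = M.T @ [x, y, 1]
```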

Definition of Normal Flow
The warping function is integrated into the formula so that the image gradients are computed on the original image grids and not the warped ones. This simplifies the computation and allows for a more accurate estimation of the residual normal flow.
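One plausible way to write this, given the warp $\Theta$ from the previous slide: the temporal difference compares the reference frame with the warped target, while the spatial gradient stays on the original (reference) grid. This is an illustrative form consistent with the slide, not necessarily the paper's exact equation.

```latex
% Standard normal flow: u_n = -I_t / ||\nabla I||.
% With the warp \Theta folded in (illustrative form):
\[
  \bigl|u_n(\mathbf{p})\bigr| \;\approx\;
  \frac{\bigl|\,I_1\!\bigl(\Theta(\mathbf{p})\bigr) - I_0(\mathbf{p})\,\bigr|}
       {\bigl\|\nabla I_0(\mathbf{p})\bigr\|}
\]
```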

Finding Moving Objects
Given a pair of image frames, find the moving objects by thresholding the normal flow. Does this overcome the problems of other approaches?
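A minimal sketch of this step, assuming the residual normal-flow map from the previous slides; the threshold and the minimum region area are illustrative parameters.

```python
import cv2
import numpy as np

def detect_moving_regions(residual, thresh=1.0, min_area=20):
    """Threshold the residual normal flow and group pixels into moving regions."""
    mask = (residual > thresh).astype(np.uint8)
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask, connectivity=8)
    regions = []
    for k in range(1, n):                                  # label 0 is the background
        if stats[k, cv2.CC_STAT_AREA] >= min_area:
            x, y, w, h = stats[k, :4]                      # left, top, width, height
            regions.append({"bbox": (x, y, w, h),
                            "centroid": tuple(centroids[k]),
                            "mask": labels == k})
    return regions
```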

Graph Representation of Moving Objects
- Nodes are the moving regions.
- Edges are the relationships between two moving regions detected in two separate frames.
- Each new frame generates a set of regions corresponding to the moving objects.
- We want to know which moving objects in the new frame are the same as those in the old one.
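One way such an attributed graph could be held in memory; the node attributes (centroid, bounding box, mask) follow the detection sketch above and are assumptions, not a transcription of the authors' data structure.

```python
class RegionGraph:
    """Nodes are detected moving regions; edges link candidate matches across frames."""

    def __init__(self):
        self.nodes = {}   # (frame_idx, region_idx) -> attribute dict (centroid, bbox, mask, ...)
        self.edges = {}   # (earlier_node, later_node) -> matching cost

    def add_region(self, frame_idx, region_idx, attrs):
        self.nodes[(frame_idx, region_idx)] = attrs

    def add_match(self, node_a, node_b, cost):
        self.edges[(node_a, node_b)] = cost   # node_a precedes node_b in time

    def successors(self, node):
        return [b for (a, b) in self.edges if a == node]
```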

Why is this Difficult?
- Little information about the objects is known.
- Objects tend to be of small size in aerial imagery.
- Large changes in object size are possible.

Sample Detection

Detection Videos
seq1.avi, seq1_det.avi, seq6.avi, seq6_det.avi

Example Graph: What’s going on?

Attributes of a Node

Keeping Track of Moving Regions
- Among the detected regions, some small ones should be merged into a larger one.
- They cluster the detected regions in the graph, rather than in single images.
- A median shape template is used to keep track of the different moving regions.
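A hedged sketch of one reading of "median shape template": stack a region's binary masks over recent frames (aligned, e.g., on their centroids) and take a pixel-wise median. This is an interpretation of the slide, not the paper's exact procedure.

```python
import numpy as np

def median_shape_template(masks):
    """masks: list of equally sized binary arrays (a region's mask in recent frames,
    cropped and aligned on the region centroid). Returns a binary median template."""
    stack = np.stack([np.asarray(m, dtype=np.uint8) for m in masks], axis=0)
    return np.median(stack, axis=0) >= 0.5
```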

Moving objects that don’t appear in some frames can be hypothesized.

Extraction of Object Trajectories
- The graph representation changes as new frames are acquired and processed.
- The goal is to find the full trajectory of each moving object, but we don’t know where each one starts or where it ends.
- So we have to consider each node with no predecessor as a possible start and each node with no successor as a possible end.

Optimal Path
Assign each edge of the graph a cost reflecting the similarity between the connected nodes:
\[
  c_{ij} = \frac{C_{ij}}{1 + d_{ij}^2}
\]
where $C_{ij}$ is the gray-level and shape correlation between regions $i$ and $j$, and $d_{ij}$ is the distance between their centroids.
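The cost translates directly into code; the correlation term $C_{ij}$ is assumed to come from whatever gray-level/shape comparison routine the caller supplies.

```python
import numpy as np

def edge_cost(correlation, centroid_i, centroid_j):
    """c_ij = C_ij / (1 + d_ij^2), where d_ij is the distance between region centroids."""
    d = np.linalg.norm(np.asarray(centroid_i, dtype=float) - np.asarray(centroid_j, dtype=float))
    return correlation / (1.0 + d**2)
```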

Optimal Path
This formulation does not lead to the optimal solution. Instead, they define the length $l_j$ of node $j$ as the maximal length of a path starting at that node. The modified cost $C_{ij} = l_j\,c_{ij}$ is then used to find the optimal path from each node without a predecessor, using a greedy search method.
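A sketch of that greedy search, reusing the RegionGraph sketch above (an assumption): compute each node's maximal path length $l_j$ with a memoized backward pass, then from every node without a predecessor greedily follow the successor maximizing $l_j\,c_{ij}$.

```python
def path_length(graph, node, memo=None):
    """l_j: the maximal number of nodes on any path starting at `node`."""
    memo = {} if memo is None else memo
    if node not in memo:
        succ = graph.successors(node)
        memo[node] = 1 + max((path_length(graph, s, memo) for s in succ), default=0)
    return memo[node]

def extract_trajectory(graph, start):
    """Greedily follow the successor that maximizes l_j * c_ij."""
    memo = {}                     # shared cache of path lengths
    traj, node = [start], start
    while True:
        succ = graph.successors(node)
        if not succ:
            return traj
        node = max(succ, key=lambda s: path_length(graph, s, memo) * graph.edges[(node, s)])
        traj.append(node)
```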

Quantitative Evaluation
- TP = true positives (correctly detected moving objects)
- FP = false positives
- FN = false negatives (moving objects not detected)
Metrics:
- Detection Ratio: DR = TP / (TP + FN)
- False Alarm Ratio: FAR = FP / (TP + FP)
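The two metrics as a tiny helper; the guards against empty denominators are an addition, not part of the slide.

```python
def detection_ratio(tp, fn):
    """DR = TP / (TP + FN); returns 0.0 if there were no moving objects at all."""
    return tp / (tp + fn) if (tp + fn) else 0.0

def false_alarm_ratio(tp, fp):
    """FAR = FP / (TP + FP); returns 0.0 if nothing was detected."""
    return fp / (tp + fp) if (tp + fp) else 0.0
```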

Evaluation Results on 5 Shots

Tracking Demos
seq1_mos_track.avi, seq6_track.avi

Questions/Comments
- Have they solved our problem? Has anyone done better?
- No one seems to use these metrics. But a 2005 paper by Nicolescu and Medioni compares the results of four methods on synthetic sequences (and beats them, of course).
- Another recent paper, by Xiao and Shah, says they compared their results to those of Ke and Kanade, Wang and Adelson, and Ayer and Sawhney, but no numbers are given.
- Can we beat these people?