Dynamic Vision
E.G.M. Petrakis

Slide 1: Dynamic vision copes with:
– Moving or changing objects (size, structure, shape)
– Changing illumination
– Changing viewpoints
Input: a sequence of image frames
– Frame: the image at a particular instant of time
– Differences between frames are due to motion of the camera or objects, illumination changes, and changes of the objects themselves
Output: detect changes, compute the motion of the camera or objects, recognize moving objects, etc.

Slide 2: There are four possibilities:
– Stationary Camera, Stationary Objects (SCSO)
– Stationary Camera, Moving Objects (SCMO)
– Moving Camera, Stationary Objects (MCSO)
– Moving Camera, Moving Objects (MCMO)
Different techniques apply to each case:
– SCSO is simply static scene analysis: the simplest case
– MCMO is the most general and most complex case
– MCSO and MCMO arise in navigation applications
Dynamic scene analysis offers more information than static scene analysis and can therefore be easier.

Slide 3: Frame sequence: F(x, y, t)
– The intensity of pixel (x, y) at time t
– Assume t indexes the t-th frame
– The images are acquired by a camera at the origin of the 3-D coordinate system
Detect changes in F(x, y, t) between successive frames:
– At the pixel, edge, or region level
– Aggregate the changes to obtain useful information (e.g., trajectories)

Slide 4: Difference pictures: compare the pixels of two frames j and k:

    DP_jk(x, y) = 1 if |F(x, y, j) - F(x, y, k)| > τ, and 0 otherwise

– τ is a user-defined threshold
– Pixels with value 1 result from motion or from illumination changes
– Assumes that the frames are properly registered
– Thresholding is important: slow-moving objects may not be detected for a given τ
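A minimal NumPy sketch of this computation (the function name and defaults are illustrative, not from the slides):

    import numpy as np

    def difference_picture(f1, f2, tau=25):
        # Binary difference picture: 1 where the intensity change
        # between two registered frames exceeds the threshold tau.
        return (np.abs(f1.astype(int) - f2.astype(int)) > tau).astype(np.uint8)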

Slide 5: [Figures]
(a), (b): frames from a sequence with a change in illumination; (c): their difference thresholded with τ = 25
(a), (b): frames from a sequence with moved objects; (c): their difference thresholded with τ = 25

Slide 6: Size filtering: only pixels that belong to a 4- or 8-connected component larger in size than τ are retained
– Result of size filtering with τ = 10
– Removes mainly noisy regions
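One possible SciPy implementation of size filtering, assuming a binary difference picture as input (names and defaults are my own):

    import numpy as np
    from scipy import ndimage

    def size_filter(dp, min_size=10, connectivity=8):
        # Keep only pixels of a binary difference picture that belong
        # to a 4- or 8-connected component of at least min_size pixels.
        structure = np.ones((3, 3)) if connectivity == 8 else None  # None = 4-connectivity
        labels, _ = ndimage.label(dp, structure=structure)
        sizes = np.bincount(labels.ravel())
        keep = sizes >= min_size
        keep[0] = False  # label 0 is the background
        return keep[labels].astype(np.uint8)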

Slide 7: Robust change detection: the intensity statistics of corresponding regions are compared using a statistical criterion
– Super-pixels: n x m non-overlapping rectangles
– Local mask: a group of pixels in a small area around each pixel
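The slide does not spell out the statistical criterion. One common choice is a likelihood ratio over block means and variances; the sketch below assumes that criterion and 8x8 super-pixels, both of which are assumptions:

    import numpy as np

    def likelihood_ratio_change(f1, f2, block=(8, 8), tau=1.5):
        # Blockwise change detection: compare the mean/variance of
        # corresponding super-pixels with a likelihood ratio test
        # (one common criterion; the slide does not fix a specific one).
        h, w = f1.shape
        bh, bw = block
        out = np.zeros((h // bh, w // bw), dtype=np.uint8)
        for i in range(h // bh):
            for j in range(w // bw):
                r1 = f1[i*bh:(i+1)*bh, j*bw:(j+1)*bw].astype(float)
                r2 = f2[i*bh:(i+1)*bh, j*bw:(j+1)*bw].astype(float)
                m1, m2 = r1.mean(), r2.mean()
                v1, v2 = r1.var() + 1e-9, r2.var() + 1e-9  # avoid division by zero
                lam = ((v1 + v2) / 2 + ((m1 - m2) / 2) ** 2) ** 2 / (v1 * v2)
                out[i, j] = lam > tau
        return out  # one decision per super-pixel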

Slide 8: [Figures]
(a) Super-pixels; (b) mask of a local pixel area
Robust change detection with (a) super-pixels (at super-pixel resolution) and (b) pixel masks

Slide 9: Accumulative difference pictures: analyze changes over a sequence of frames
– Compare every frame against a reference frame
– Increment a per-pixel counter by 1 whenever the difference exceeds the threshold
– Detects even small or slowly moving objects
– Eliminates the effect of small misregistrations between frames
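A minimal sketch of this idea, using the first frame as the reference (that choice, and the names, are assumptions):

    import numpy as np

    def accumulative_difference(frames, tau=25):
        # Count, per pixel, how often a frame differs from the
        # reference (first) frame by more than tau.
        ref = frames[0].astype(int)
        adp = np.zeros(ref.shape, dtype=int)
        for f in frames[1:]:
            adp += np.abs(f.astype(int) - ref) > tau
        return adp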

Slide 10: Change detection using accumulative differences
(a), (b): first and last frames; (c): accumulative difference picture

Slide 11: Segmentation using motion: find the objects in SCMO and MCMO scenes
– SCMO: separate the moving objects from the stationary background
– MCMO: first remove the motion due to the camera
– Correspondence problem: the process of identifying the same object or feature in two or more frames
– A large number of candidate features requires restrictions on the number of possible matches
– Features: regions, corners, edges, ...

Slide 12: 1. Temporal and spatial gradients: compute
– dF/ds: the spatial gradient
– dF/dt: the temporal gradient
Apply a threshold to the product of their magnitudes
– Responds even to slow-moving edges
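A hedged sketch of this moving-edge detector; the particular derivative estimates (np.gradient for dF/ds, a two-frame difference for dF/dt) and the threshold value are assumptions:

    import numpy as np

    def moving_edges(f1, f2, tau=1000.0):
        # Threshold the product of the spatial gradient magnitude
        # and the temporal gradient between two frames.
        f1 = f1.astype(float)
        gy, gx = np.gradient(f1)                  # spatial derivatives
        spatial = np.hypot(gx, gy)                # |dF/ds|
        temporal = np.abs(f2.astype(float) - f1)  # |dF/dt|
        return (spatial * temporal > tau).astype(np.uint8)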

Slide 13: (a), (b): two frames of a sequence; (c): edges detected using spatio-temporal gradients

Slide 14: 2. Using difference pictures (stationary camera): difference and accumulative difference pictures identify the moving areas

Slide 15: The area in the PADP and NADP is the area covered by the moving object in the reference frame
– The PADP and NADP continue to increase in value, but their regions stop growing in size
– Use a mask of the object to determine whether a region is still growing
– Masks can be obtained from the AADP once the object has been completely displaced
– In cases of occlusion, monitor the changes within regions
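Extending the earlier sketch to signed differences gives the positive (PADP), negative (NADP), and absolute (AADP) accumulative difference pictures; the sign convention (reference minus current frame) is an assumption, since the slides do not fix one:

    import numpy as np

    def accumulative_differences(frames, tau=25):
        # Positive, negative, and absolute accumulative difference pictures
        # against the first frame as reference.
        ref = frames[0].astype(int)
        padp = np.zeros(ref.shape, dtype=int)
        nadp = np.zeros(ref.shape, dtype=int)
        aadp = np.zeros(ref.shape, dtype=int)
        for f in frames[1:]:
            d = ref - f.astype(int)
            padp += d > tau
            nadp += d < -tau
            aadp += np.abs(d) > tau
        return padp, nadp, aadp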

Slide 16: (a)-(c): frames 1, 5, and 7 of a sequence containing a moving object; (d), (e), (f): the corresponding PADP, NADP, and AADP

Slide 17: Motion correspondence: to determine the motion of objects, establish a correspondence between features in two frames
– Correspondence problem: pair a point p_i = (x_i, y_i) in the first image with a point p_j = (x_j, y_j) in the second image
– Disparity: d_ij = (x_i - x_j, y_i - y_j)
– Compute disparities using relaxation labeling
– Questions: how are points selected for matching? How are the correct matches chosen? What constraints apply?

Slide 18: Three properties guide matching:
– Discreteness: minimize expensive searching (detect points at which the intensity varies quickly in at least one direction)
– Similarity: match similar features (e.g., corners, edges, etc.)
– Consistency: match nearby points (a point cannot move just anywhere)
The correspondence problem can be cast as bipartite graph matching between the feature sets of two frames A and B: remove all but one connection for each point, as in the sketch below.
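One concrete way to realize the bipartite matching (not necessarily what the slides intend; plain Euclidean distance stands in here for the similarity term, and d_max enforces consistency):

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def match_points(pts_a, pts_b, d_max=30.0):
        # Minimum-cost bipartite assignment between the feature points
        # of frames A and B; moves larger than d_max are forbidden.
        cost = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=2)
        cost[cost > d_max] = 1e6  # effectively forbid implausible matches
        rows, cols = linear_sum_assignment(cost)
        return [(i, j) for i, j in zip(rows, cols) if cost[i, j] < 1e6]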

Slide 19: Disparity computation using relaxation labeling:
– Identify the features to be matched, e.g., corners or (in general) points i in the first frame and j in the second
– Let P_ij^0 be the initial probability of the match (i, j)
– Disparity: d_ij = (x_i - x_j, y_i - y_j) with |d_ij| < D_max (points cannot move just anywhere)
– Assign high probabilities to matches whose disparity is similar to that of points in the neighborhood (similar motion)

Slide 20: Update P_ij at every iteration. For every point i, the algorithm computes
{i, (d_ij, P_ij)^0, (d_ij, P_ij)^1, ..., (d_ij, P_ij)^n}
– n: the n-th iteration (or frame)
– Keep the correspondences with high P_ij
– Use these to initialize the next pair of frames, and so on
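A schematic sketch of such a scheme, in the spirit of Barnard and Thompson; the constants and the exact support and update rules are assumptions, not the slides' algorithm:

    import numpy as np

    def relaxation_disparities(pts1, pts2, d_max=15.0, neigh=25.0, tol=3.0, n_iter=10):
        # Each point i in frame 1 holds probabilities P_ij over candidate
        # matches j in frame 2 whose disparity is within d_max.
        pts1, pts2 = np.asarray(pts1, float), np.asarray(pts2, float)
        disp = pts2[None, :, :] - pts1[:, None, :]        # candidate disparities d_ij
        feasible = np.linalg.norm(disp, axis=2) < d_max   # points cannot move everywhere
        P = feasible / np.maximum(feasible.sum(1, keepdims=True), 1)  # initial P_ij^0
        dist1 = np.linalg.norm(pts1[:, None, :] - pts1[None, :, :], axis=2)
        neighbors = (dist1 > 0) & (dist1 < neigh)         # neighborhoods in frame 1
        for _ in range(n_iter):
            Q = np.zeros_like(P)
            for i in range(len(pts1)):
                for j in np.nonzero(feasible[i])[0]:
                    # Support for match (i, j): probability mass of neighbor
                    # candidates whose disparity is similar to d_ij.
                    similar = np.linalg.norm(disp - disp[i, j], axis=2) < tol
                    Q[i, j] = (P * similar)[neighbors[i]].sum()
            P = P * (0.3 + Q)                             # reinforce consistent matches
            P /= np.maximum(P.sum(1, keepdims=True), 1e-12)
        return P  # read off matches as the per-row argmax where P is high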

Slide 21: Disparities after 3 iterations (from Ballard and Brown, 1982)

Slide 22: Disparities after applying a relaxation labeling algorithm

Slide 23: Image flow: the velocity field in the image due to motion of the observer, of objects, or of both
– A velocity vector for each pixel
– Image flow is computed at every pixel
– SCMO: most pixels have zero velocity
Methods: pixel-based and feature-based methods compute image flow for all pixels or only for specific features (e.g., corners)
– Relaxation labeling methods
– Gradient-based methods

Slide 24: Gradient-based methods: exploit the relationship between the spatial and temporal gradients of intensity
– Assumption: intensity changes continuously and smoothly between consecutive frames
A first-order Taylor expansion of F(x + dx, y + dy, t + dt) = F(x, y, t) yields the optical flow constraint:

    f_x u_x + f_y u_y + f_t = 0

where u_x = dx/dt and u_y = dy/dt are the components of the image velocity.

Slide 25: Better estimation: in practice the error term f_x u_x + f_y u_y + f_t is not zero
– It has to be minimized, together with a smoothness term, applying the method of Lagrange multipliers:

    minimize E = ∬ [ (f_x u_x + f_y u_y + f_t)^2 + λ (|∇u_x|^2 + |∇u_y|^2) ] dx dy

– f_x, f_y, f_t: the derivatives of F with respect to x, y, and t
– Setting the derivatives of E with respect to u_x and u_y to zero gives a pair of linear equations at each pixel (from Ballard and Brown, 1982)
– The computation can be unreliable at motion boundaries (e.g., occlusion boundaries)

Slide 26: Turning this into an iterative method for solving for u_x and u_y:

    u_x^{k+1} = ū_x^k - f_x (f_x ū_x^k + f_y ū_y^k + f_t) / (λ + f_x^2 + f_y^2)
    u_y^{k+1} = ū_y^k - f_y (f_x ū_x^k + f_y ū_y^k + f_t) / (λ + f_x^2 + f_y^2)

where ū_x^k and ū_y^k are local neighborhood averages of u_x^k and u_y^k.

Slide 27: Optical flow computation for two consecutive frames (Horn and Schunck, 1980):
– k = 0
– Initialize all u_x^k = u_y^k = 0
– Repeat the update above until some error criterion is satisfied
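A compact implementation of this iteration (u, v correspond to the slides' u_x, u_y; the derivative and averaging kernels follow common practice and are assumptions, not taken from the slides):

    import numpy as np
    from scipy.ndimage import convolve

    def horn_schunck(f1, f2, lam=100.0, n_iter=100):
        # Horn-Schunck optical flow between two grayscale frames.
        f1, f2 = f1.astype(float), f2.astype(float)
        # Spatio-temporal derivatives, averaged over the two frames.
        gy1, gx1 = np.gradient(f1)
        gy2, gx2 = np.gradient(f2)
        fx, fy = (gx1 + gx2) / 2, (gy1 + gy2) / 2
        ft = f2 - f1
        # Kernel averaging the 8 neighbors of a pixel (the u-bar terms).
        avg = np.array([[1, 1, 1], [1, 0, 1], [1, 1, 1]]) / 8.0
        u = np.zeros_like(f1)  # u_x, initialized to 0
        v = np.zeros_like(f1)  # u_y, initialized to 0
        for _ in range(n_iter):
            u_bar = convolve(u, avg)
            v_bar = convolve(v, avg)
            # Common factor of the update rule on slide 26.
            p = (fx * u_bar + fy * v_bar + ft) / (lam + fx**2 + fy**2)
            u = u_bar - fx * p
            v = v_bar - fy * p
        return u, v

A fixed iteration count stands in for the slides' unspecified error criterion; one could instead stop when the mean change in (u, v) falls below a tolerance.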

Slide 28: Multi-frame optical flow: compute the optical flow for two frames and use it to initialize the optical flow of the next frame, etc.
– k = 0
– Initialize u_x, u_y by applying the previous (two-frame) algorithm
– Repeat until some error criterion is satisfied

Slide 29: (a), (b), (c): three frames of a rotating sphere; (d): the optical flow after 32 frames (from Ballard and Brown, 1982)

Slide 30: Information in optical flow (assuming a high-quality computation of optical flow):
– Areas of smooth velocity correspond to single surfaces
– Areas with large velocity gradients correspond to occlusions and boundaries
– The translational component of motion is directed toward the Focus Of Expansion (FOE): the intersection of the directions of optical flow as seen by a moving observer
– Surface structure can be derived from the derivatives of the translational component
– Angular velocity can be determined from the rotational component

Slide 31: The velocity vectors of the stationary components of a scene, as seen by a translating observer, meet at the Focus Of Expansion (FOE)
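One way to estimate the FOE from a sparse flow field is a least-squares intersection of the flow lines; this is a sketch of that idea, not necessarily the method the slides envisage:

    import numpy as np

    def estimate_foe(points, flows):
        # Find the point minimizing the squared perpendicular distance
        # to the lines through `points` along the flow directions.
        # Assumes the flow directions are not all parallel.
        A = np.zeros((2, 2))
        b = np.zeros(2)
        for x, d in zip(points, flows):
            n = d / np.linalg.norm(d)        # unit flow direction
            M = np.eye(2) - np.outer(n, n)   # projection onto the line's normal
            A += M
            b += M @ x
        return np.linalg.solve(A, b)         # FOE estimate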

Slide 32: Tracking: a feature or an object must be tracked over a sequence of frames
– Easy to solve for a single entity
– Many entities moving independently require constraints
– Path coherence: the motion of an object in a frame sequence doesn't change abruptly between frames (assumes a high sampling rate):
The location of a point will be relatively unchanged
The scalar velocity of a point will be relatively unchanged
The direction of motion will be relatively unchanged

Slide 33: Deviation function for path coherence:
– The trajectory of a point i: T_i = <P_i^1, P_i^2, ..., P_i^n>
– P_i^k: point i in the k-th frame
– In vector form: X_i = <X_i^1, X_i^2, ..., X_i^n>
– Deviation of the point in the k-th frame: d_i^k = φ(X_i^{k-1} X_i^k, X_i^k X_i^{k+1})
– Deviation for the complete trajectory: D_i = Σ_{k=2}^{n-1} d_i^k
– For m points (trajectories) in a sequence of n frames, the total deviation is D = Σ_{i=1}^{m} D_i
– Minimize D to find the correct trajectories

Slide 34: The trajectories of two points: the points in the 1st, 2nd, and 3rd frames are marked with squares, triangles, and rhombi, respectively. The change in direction and velocity must be smooth.

Slide 35: D is a function of φ; how is φ computed?
– It can be written as

    φ(P^{k-1} P^k, P^k P^{k+1}) = w_1 (1 - cos θ_k) + w_2 (1 - 2 √(s_{k-1} s_k) / (s_{k-1} + s_k))

where θ_k is the angle between the displacement vectors P^{k-1} P^k and P^k P^{k+1}, s_{k-1} and s_k are their magnitudes, and w_1, w_2 are weight terms.

Slide 36: Direction coherence: the first term
– The normalized dot product of the displacement vectors
Speed coherence: the second term
– The ratio of the geometric to the arithmetic mean of the magnitudes
Limitations: assumes the same number of features in every frame
– Objects may disappear, appear, or be occluded
– Changes of geometry and illumination lead to false correspondences
– Remedy: force the trajectories to satisfy certain local constraints
A small sketch of φ follows.
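A direct transcription of the deviation function on slide 35 (the weight values are illustrative, not taken from the slides):

    import numpy as np

    def path_deviation(p_prev, p_cur, p_next, w1=0.1, w2=0.9):
        # Path coherence: penalize abrupt changes of direction
        # (first term) and of speed (second term).
        v1 = np.asarray(p_cur, float) - np.asarray(p_prev, float)
        v2 = np.asarray(p_next, float) - np.asarray(p_cur, float)
        s1, s2 = np.linalg.norm(v1), np.linalg.norm(v2)
        cos_theta = np.dot(v1, v2) / (s1 * s2 + 1e-12)    # direction coherence
        speed = 2 * np.sqrt(s1 * s2) / (s1 + s2 + 1e-12)  # geometric/arithmetic mean
        return w1 * (1 - cos_theta) + w2 * (1 - speed)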