


Dynamic Vision – E.G.M. Petrakis (presentation transcript)

Slide 1
Dynamic vision copes with:
– Moving or changing objects (size, structure, shape)
– Changing illumination
– Changing viewpoints
Input: a sequence of image frames
– Frame: the image at a particular instant of time
– Differences between frames are due to motion of the camera or the objects, illumination changes, or changes of the objects themselves
Output: detect changes, compute the motion of the camera or the objects, recognize moving objects, etc.

Slide 2
There are four possibilities:
– Stationary Camera, Stationary Objects (SCSO)
– Stationary Camera, Moving Objects (SCMO)
– Moving Camera, Stationary Objects (MCSO)
– Moving Camera, Moving Objects (MCMO)
Different techniques apply to each case:
– SCSO is simply static scene analysis: the simplest case
– MCMO is the most general and complex case
– MCSO and MCMO arise in navigation applications
Dynamic scene analysis provides more information, so it can be easier than static scene analysis.

Slide 3
Frame sequence: F(x, y, t)
– Intensity of pixel (x, y) at time t
– Assume that t indexes the t-th frame
– The image is acquired by a camera at the origin of the 3-D coordinate system
Detect changes in F(x, y, t) between successive frames
– At the pixel, edge, or region level
– Aggregate the changes to obtain useful information (e.g., trajectories)

Slide 4
Difference pictures: compare the pixels of two frames
– DP_jk(x, y) = 1 if |F(x, y, j) − F(x, y, k)| > τ, and 0 otherwise, where τ is a user-defined threshold
– Pixels with value 1 result from motion or illumination changes
– Assumes that the frames are properly registered
– Thresholding is important: slowly moving objects may not be detected for a given τ
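
As a concrete illustration of the rule above, here is a minimal NumPy sketch (the function name, the integer cast and τ = 25 are illustrative choices, not part of the original slides):

```python
# Minimal sketch of a difference picture: 1 where the absolute intensity
# difference between two registered frames exceeds a threshold tau.
import numpy as np

def difference_picture(frame_j, frame_k, tau=25):
    diff = np.abs(frame_j.astype(np.int32) - frame_k.astype(np.int32))
    return (diff > tau).astype(np.uint8)
```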

Slide 5
(a), (b): frames from a sequence with a change in illumination; (c): their difference thresholded with τ = 25
(a), (b): frames from a sequence with moved objects; (c): their difference thresholded with τ = 25

Slide 6
Size filtering: only pixels that belong to a 4- or 8-connected component of size larger than a threshold τ are retained
– Result of size filtering with τ = 10
– Removes mainly noisy regions
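
A possible implementation of the size filter, assuming SciPy's connected-component labelling (the helper name and the 3x3 structuring element for 8-connectivity are assumptions):

```python
# Sketch of size filtering: keep only pixels of the binary difference picture
# that belong to a 4- or 8-connected component of at least min_size pixels.
import numpy as np
from scipy import ndimage

def size_filter(dp, min_size=10, connectivity=8):
    structure = np.ones((3, 3)) if connectivity == 8 else None  # None -> 4-connectivity
    labels, _ = ndimage.label(dp, structure=structure)
    sizes = np.bincount(labels.ravel())
    good = np.flatnonzero(sizes >= min_size)
    good = good[good != 0]                 # never keep the background label
    return np.where(np.isin(labels, good), dp, 0)
```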

Slide 7
Robust change detection: the intensity characteristics of corresponding regions are compared using a statistical criterion
– Super-pixels: n x m non-overlapping rectangles
– Local mask: a group of pixels in a local neighborhood of each pixel
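
The statistical criterion itself is not reproduced in the transcript; one common choice compares the means and variances of corresponding regions with a likelihood-ratio style test, as in the hedged sketch below (the block size, the exact ratio and the threshold are assumptions):

```python
# Sketch of robust change detection at super-pixel resolution: each n x m
# block of the two frames is compared through a ratio built from the block
# means and variances, and flagged as changed when the ratio exceeds tau.
import numpy as np

def robust_change(frame1, frame2, block=(8, 8), tau=10.0, eps=1e-6):
    bh, bw = block
    rows, cols = frame1.shape[0] // bh, frame1.shape[1] // bw
    out = np.zeros((rows, cols), dtype=np.uint8)        # super-pixel resolution
    for i in range(rows):
        for j in range(cols):
            r1 = frame1[i*bh:(i+1)*bh, j*bw:(j+1)*bw].astype(np.float64)
            r2 = frame2[i*bh:(i+1)*bh, j*bw:(j+1)*bw].astype(np.float64)
            m1, m2 = r1.mean(), r2.mean()
            v1, v2 = r1.var() + eps, r2.var() + eps
            ratio = ((v1 + v2) / 2 + ((m1 - m2) / 2) ** 2) ** 2 / (v1 * v2)
            out[i, j] = 1 if ratio > tau else 0
    return out
```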

Slide 8
Robust change detection with (a) super-pixels (super-pixel resolution) and (b) local pixel masks

Slide 9
Accumulative difference pictures: analyze changes over a sequence of frames
– Compare every frame with a reference frame
– Increase a difference counter by 1 wherever the difference exceeds the threshold
– Detects even small or slowly moving objects
– Eliminates the effect of small misregistrations between frames
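
A minimal sketch of the accumulative difference picture described above (taking the first frame as the reference frame is an assumption; τ matches the earlier examples):

```python
# Sketch of an accumulative difference picture: compare every frame against a
# fixed reference frame and increment a counter wherever the difference
# exceeds the threshold.
import numpy as np

def accumulative_difference(frames, tau=25):
    reference = frames[0].astype(np.int32)
    adp = np.zeros(reference.shape, dtype=np.int32)
    for frame in frames[1:]:
        adp += (np.abs(frame.astype(np.int32) - reference) > tau)
    return adp
```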

Slide 10
Change detection using accumulative differences
(a), (b): first and last frames; (c): accumulative difference picture

Slide 11
Segmentation using motion: find the objects in SCMO and MCMO scenes
– SCMO: separate the moving objects from the stationary background
– MCMO: remove the motion due to the camera
– Correspondence problem: the process of identifying the same object or feature in two or more frames
– A large number of features → restrictions must be put on the number of possible matches
– Candidate features: regions, corners, edges, …

Slide 12
1. Temporal and spatial gradients: compute
– dF/ds: the spatial gradient
– dF/dt: the temporal gradient
– Apply a threshold to their product (a sketch follows this slide)
– Responds even to slowly moving edges
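
A hedged sketch of this detector (the Sobel derivatives, the simple frame difference for dF/dt and the threshold value are assumptions, not part of the slide):

```python
# Sketch of the spatio-temporal gradient test: threshold the product of the
# spatial gradient magnitude |dF/ds| and the temporal gradient |dF/dt|.
import numpy as np
from scipy import ndimage

def moving_edges(frame_t, frame_t1, tau=1000.0):
    f = frame_t.astype(np.float64)
    spatial = np.hypot(ndimage.sobel(f, axis=1), ndimage.sobel(f, axis=0))  # |dF/ds|
    temporal = np.abs(frame_t1.astype(np.float64) - f)                      # |dF/dt|
    return (spatial * temporal > tau).astype(np.uint8)
```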

Slide 13
(a), (b): two frames of a sequence; (c): edges detected using spatio-temporal gradients

Slide 14
2. Using difference pictures (stationary camera): difference and accumulative difference pictures find the moving areas

Slide 15
The area in the PADP and NADP (positive and negative accumulative difference pictures) is the area covered by the moving object in the reference frame
– PADP and NADP continue to increase in value, but the regions stop growing in size
– Use a mask of the object to determine whether a region is still growing
– Masks can be obtained from the AADP (absolute accumulative difference picture) once the object has been completely displaced
– In cases of occlusion, monitor the changes in the regions
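
A sketch of the three accumulative difference pictures used here, under the assumption that PADP, NADP and AADP count positive, negative and absolute differences against the reference frame (the sign convention is an assumption):

```python
# Sketch of positive, negative and absolute accumulative difference pictures
# (PADP, NADP, AADP) against a fixed reference frame.
import numpy as np

def accumulative_differences(frames, tau=25):
    ref = frames[0].astype(np.int32)
    padp = np.zeros(ref.shape, np.int32)
    nadp = np.zeros(ref.shape, np.int32)
    aadp = np.zeros(ref.shape, np.int32)
    for frame in frames[1:]:
        d = ref - frame.astype(np.int32)
        padp += (d > tau)
        nadp += (d < -tau)
        aadp += (np.abs(d) > tau)
    return padp, nadp, aadp
```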

Slide 16
(a)-(c): frames 1, 5, 7, containing a moving object
(d)-(f): the corresponding PADP, NADP, AADP

Slide 17
Motion correspondence: to determine the motion of objects, establish a correspondence between features in two frames
– Correspondence problem: pair a point p_i = (x_i, y_i) in the first image with a point p_j = (x_j, y_j) in the second image
– Disparity: d_ij = (x_i − x_j, y_i − y_j)
– Compute the disparities using relaxation labeling
– Questions: how are points selected for matching? how are the correct matches chosen? what constraints apply?

Slide 18
Three properties guide matching:
– Discreteness: minimize expensive searching (detect points at which the intensity values vary quickly in at least one direction)
– Similarity: match similar features (e.g., corners, edges, etc.)
– Consistency: match nearby points (a point cannot move everywhere)
The correspondence problem can be cast as bipartite graph matching between the point sets of two frames A and B: remove all but one connection for each point

Slide 19
Disparity computation using relaxation labeling:
– Identify the features to be matched, e.g., corners or (more generally) interest points i, j
– Let P_ij^0 be the initial probability of matching point i with point j
– Disparity: d_ij = (x_i − x_j, y_i − y_j), with |d_ij| < D_max (points cannot move everywhere)
– Assign high probabilities to matches whose neighborhood contains points with a similar motion (disparity)

Slide 20
Update P_ij at every iteration. For every point i, the algorithm computes {i, (d_ij, P_ij)^0, (d_ij, P_ij)^1, …, (d_ij, P_ij)^n}
– n: the n-th iteration or frame
Use the correspondences with high P_ij
Use these for the next two frames, etc.
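
The update rule itself is not reproduced in the transcript; the sketch below follows the general relaxation scheme the slides describe, in which a match probability is reinforced when neighbouring points have a candidate match with a similar disparity. The constants A and B, the neighbourhood radius, the disparity-similarity test and the normalisation are all illustrative assumptions:

```python
# Hedged sketch of disparity computation by relaxation labeling: each
# candidate match (i -> j) keeps a probability that grows when neighbouring
# points in the first frame have likely matches with a similar disparity.
import numpy as np

def relax_disparities(pts1, pts2, d_max=15.0, radius=30.0, iters=10, A=0.3, B=3.0):
    pts1, pts2 = np.asarray(pts1, float), np.asarray(pts2, float)
    # candidate matches: for each point i in frame 1, the points j in frame 2
    # whose disparity magnitude is below D_max ("points cannot move everywhere")
    cand = [[j for j in range(len(pts2)) if np.linalg.norm(pts2[j] - p) < d_max]
            for p in pts1]
    disp = [[pts2[j] - pts1[i] for j in c] for i, c in enumerate(cand)]
    prob = [np.ones(len(c)) / max(len(c), 1) for c in cand]      # initial P_ij^0
    for _ in range(iters):
        new_prob = []
        for i, c in enumerate(cand):
            neigh = [k for k in range(len(pts1))
                     if k != i and np.linalg.norm(pts1[k] - pts1[i]) < radius]
            q = np.zeros(len(c))
            for a, d_ij in enumerate(disp[i]):
                # support from neighbours that have a likely match of similar disparity
                s = sum(prob[k][b] for k in neigh
                        for b, d_kb in enumerate(disp[k])
                        if np.linalg.norm(d_kb - d_ij) < 2.0)
                q[a] = prob[i][a] * (A + B * s)
            new_prob.append(q / q.sum() if q.sum() > 0 else q)
        prob = new_prob
    # keep, for each point, the most probable disparity (high P_ij)
    return [disp[i][int(np.argmax(p))] if len(p) else None
            for i, p in enumerate(prob)]
```

In practice the matches with the highest final P_ij would then seed the correspondence for the next pair of frames, as the slide suggests.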

Slide 21
Disparities after 3 iterations (from Ballard and Brown, 1982)

Slide 22
Disparities after applying a relaxation labeling algorithm

Slide 23
Image flow: the velocity field in the image due to motion of the observer, the objects, or both
– A velocity vector is associated with each pixel
– Image flow is computed at each pixel
– SCMO: most pixels will have zero velocity
Methods: pixel-based and feature-based methods compute image flow for all pixels or for specific features (e.g., corners)
– Relaxation labeling methods
– Gradient-based methods

Slide 24
Gradient-based methods: exploit the relationship between the spatial and temporal gradients of intensity
– Assumption: continuous and smooth changes of intensity between consecutive frames
– Derivation: brightness constancy plus a Taylor expansion (reconstructed below)
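
The Taylor expansion the slide refers to is not reproduced in the transcript; the following is a sketch of the standard derivation (brightness constancy plus a first-order expansion), using the u_x, u_y notation of the later slides:

```latex
% Brightness constancy: a point keeps its intensity as it moves by (dx, dy)
% during dt, so F(x+dx, y+dy, t+dt) = F(x, y, t). A first-order Taylor
% expansion of the left-hand side gives
F(x+dx,\, y+dy,\, t+dt) \approx F(x,y,t)
   + \frac{\partial F}{\partial x}\,dx
   + \frac{\partial F}{\partial y}\,dy
   + \frac{\partial F}{\partial t}\,dt
% Dividing by dt and writing u_x = dx/dt, u_y = dy/dt yields the optical
% flow constraint equation
f_x u_x + f_y u_y + f_t = 0
```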

Slide 25
Better estimation: the error term is not zero
– It has to be minimized: apply the method of Lagrange multipliers
– f_x, f_y, f_t: derivatives of F with respect to x, y and t
– The derivative of the squared error with respect to u_x, u_y is set to zero
– (Equations from Ballard and Brown, 1982)
– The computation can be unreliable at motion boundaries (e.g., occluded boundaries)
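
The functional being minimized is not shown in the transcript; in the Horn and Schunck formulation that the following slides iterate, it combines the constraint error with a smoothness term weighted by a multiplier λ (a reconstruction, not necessarily the exact form on the original slide):

```latex
E(u_x, u_y) = \iint \left[ (f_x u_x + f_y u_y + f_t)^2
   + \lambda \left( \lVert \nabla u_x \rVert^2 + \lVert \nabla u_y \rVert^2 \right) \right] dx\, dy
% Setting the derivatives of E with respect to u_x and u_y to zero gives the
% update equations used in the iteration of slide 27.
```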

Slide 26
Turn this into an iterative method for solving for u_x, u_y

Slide 27
Optical flow computation for two consecutive frames (Horn & Schunck, 1980)
– k = 0
– Initialize u_x^k = u_y^k = 0 at every pixel
– Repeat until some error criterion is satisfied
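
A minimal NumPy sketch of this iteration, assuming two grayscale frames and simple derivative/averaging filters (the filter choices, λ and the stopping test are assumptions; u and v stand for u_x and u_y):

```python
# Sketch of a Horn-Schunck style iteration: the flow at each pixel is replaced
# by the local average of the flow minus a correction along the intensity
# gradient, until the update becomes small.
import numpy as np
from scipy import ndimage

def horn_schunck(frame1, frame2, lam=100.0, max_iter=100, tol=1e-3):
    f1, f2 = frame1.astype(np.float64), frame2.astype(np.float64)
    fx = ndimage.sobel(f1, axis=1) / 8.0        # f_x
    fy = ndimage.sobel(f1, axis=0) / 8.0        # f_y
    ft = f2 - f1                                # f_t
    u = np.zeros_like(f1)                       # u_x^0 = 0
    v = np.zeros_like(f1)                       # u_y^0 = 0
    avg = np.array([[1, 2, 1], [2, 0, 2], [1, 2, 1]], float) / 12.0
    for _ in range(max_iter):
        u_bar, v_bar = ndimage.convolve(u, avg), ndimage.convolve(v, avg)
        corr = (fx * u_bar + fy * v_bar + ft) / (lam + fx**2 + fy**2)
        u_new, v_new = u_bar - fx * corr, v_bar - fy * corr
        done = np.max(np.abs(u_new - u)) + np.max(np.abs(v_new - v)) < tol
        u, v = u_new, v_new
        if done:
            break
    return u, v
```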

Slide 28
Multi-frame optical flow: compute the optical flow for two frames and use it to initialize the optical flow for the next frame, and so on
– k = 0
– Initialize u_x, u_y by applying the previous algorithm
– Repeat until some error criterion is satisfied

Slide 29
(a), (b), (c): three frames from a rotating sphere; (d): the optical flow after 32 frames (from Ballard and Brown, 1982)

Slide 30
Information in optical flow, assuming a high-quality computation of the flow:
– Areas of smooth velocity → single surfaces
– Areas with large velocity gradients → occlusions and boundaries
– The translational component of motion is directed toward the Focus Of Expansion (FOE): the intersection of the directions of optical flow as seen by a moving observer
– Surface structure can be derived from the derivatives of the translational component
– Angular velocity can be determined from the rotational component

Slide 31
The velocity vectors of the stationary components of a scene, as seen by a translating observer, meet at the Focus Of Expansion (FOE)

Slide 32
Tracking: a feature or an object must be tracked over a sequence of frames
– Easy to solve for single entities
– For many entities moving independently, constraints are required
– Path coherence: the motion of an object in a frame sequence does not change abruptly between frames (assumes a high sampling rate)
– The location of a point will be relatively unchanged
– The scalar velocity of a point will be relatively unchanged
– The direction of motion will be relatively unchanged

Slide 33
Deviation function for path coherence:
– The trajectory of a point i: T_i = <P_i^1, P_i^2, …, P_i^n>, where P_i^k is point i in the k-th frame
– In vector form: X_i = <X_i^1, X_i^2, …, X_i^n>
– Deviation of the point in the k-th frame: d_i^k = φ(X_i^(k-1) X_i^k, X_i^k X_i^(k+1))
– Deviation for the complete trajectory: D_i = Σ_{k=2}^{n-1} d_i^k
– For m points (trajectories) in a sequence of n frames, the total deviation is D = Σ_{i=1}^{m} D_i
– Minimize D to find the correct trajectories

Slide 34
The trajectories of 2 points: the points in the 1st, 2nd and 3rd frames are marked by squares, triangles and rhombi, respectively
The change in direction and velocity must be smooth

Slide 35
D is a function of φ; how is φ computed?
– It is described by the function shown on the slide
– φ can also be written in a weighted form with weight terms w_1, w_2 (see the sketch after slide 36)

Slide 36
Direction coherence: the first term
– Dot product of the successive displacement vectors
Speed coherence: the second term
– Ratio of the geometric to the arithmetic mean of their magnitudes
Limitations: assumes the same number of features in every frame
– Objects may disappear, appear, or occlude one another
– Changes of geometry or illumination
– These lead to false correspondences
– Force the trajectories to satisfy certain local constraints
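
A hedged sketch of the two coherence terms and the total deviation D of slides 33-36, in the spirit of the classical path coherence measure (the weights w1, w2 and the exact functional form are assumptions, since the original formula appears only as an image):

```python
# Sketch of a path coherence deviation: the first term penalises changes in
# direction (dot product of successive displacement vectors), the second
# penalises changes in speed (geometric over arithmetic mean of their
# magnitudes); D sums the deviation over all points and frames.
import numpy as np

def phi(p_prev, p_cur, p_next, w1=0.1, w2=0.9, eps=1e-9):
    d1 = np.asarray(p_cur, float) - np.asarray(p_prev, float)
    d2 = np.asarray(p_next, float) - np.asarray(p_cur, float)
    s1, s2 = np.linalg.norm(d1) + eps, np.linalg.norm(d2) + eps
    direction = 1.0 - np.dot(d1, d2) / (s1 * s2)       # direction coherence
    speed = 1.0 - 2.0 * np.sqrt(s1 * s2) / (s1 + s2)   # speed coherence
    return w1 * direction + w2 * speed

def total_deviation(trajectories):
    """Total deviation D over m trajectories, each a list of n points."""
    return sum(phi(T[k - 1], T[k], T[k + 1])
               for T in trajectories for k in range(1, len(T) - 1))
```

Minimizing this total deviation over the possible frame-to-frame assignments is what recovers the smooth trajectories illustrated on slide 34.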


