1
Motion Analysis Using Optical Flow. CIS 750 Presentation. Student: Wan Wang. Prof.: Longin Jan Latecki. Spring 2003, CIS Dept., Temple University.
2
Contents
- Brief discussion of motion analysis
- Introduction to optical flow
- Application: detecting and tracking people in complex scenes using optical flow
3
Part 1: Motion Analysis
The usual input to a motion analysis system is a temporal image sequence. Motion analysis is often connected with real-time processing.
4
Three main groups of motion analysis problems:
1. Motion detection: register any detected motion; typically a single static camera.
2. Moving-object detection and localization:
   - detection of moving objects only: motion-based segmentation methods;
   - detection of a moving object, detection of the trajectory of its motion, and prediction of its future trajectory: image object-matching techniques are often used to solve these tasks (direct matching of image data, matching of object features, matching of specific representative object points such as corners, or representing moving objects as graphs and matching these graphs); another useful method is optical flow.
3. Derivation of 3D object properties: from a set of 2D projections acquired at different time instants of the object's motion.
5
Part 2: Optical Flow
Optical flow reflects the image changes due to motion during a time interval dt, which is short enough to guarantee small inter-frame motion changes.
The immediate objective of optical flow is to determine a velocity field: a 2D representation of a (generally 3D) motion is called a motion field (velocity field), in which each point is assigned a velocity vector corresponding to the motion direction, velocity, and distance from the observer at the appropriate image location.
Optical flow computation is based on two assumptions:
- The observed brightness of any object point is constant over time.
- Nearby points in the image plane move in a similar manner (the velocity smoothness constraint).
6
Optical flow example: http://www.ai.mit.edu/people/lpk/mars/temizer_2001/Optical_Flow/index.html
7
Computation Rationale
Let us suppose we have a continuous image whose intensity is given by f(x, y, t); the intensity is now a function of time t as well as of x and y. If the point (x, y) moves to (x + dx, y + dy) at time t + dt, the following equation holds:

    f(x, y, t) = f(x + dx, y + dy, t + dt)    (1)

The Taylor expansion of the right side of equation (1) is

    f(x + dx, y + dy, t + dt) = f(x, y, t) + f_x dx + f_y dy + f_t dt + e

where f_x(x, y, t), f_y(x, y, t), and f_t(x, y, t) denote the partial derivatives of f, and e is the higher-order term of the Taylor series.
8
Assuming that e is negligible, we obtain the next equation:

    f_x dx + f_y dy + f_t dt = 0

That means

    -f_t = f_x (dx/dt) + f_y (dy/dt) = f_x u + f_y v

where (u, v) = (dx/dt, dy/dt) is the optical flow vector at (x, y).
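The constraint above can be checked numerically: for an image pattern translated by one pixel per frame, finite-difference estimates of f_x, f_y, f_t satisfy f_x u + f_y v + f_t ≈ 0. A minimal NumPy sketch (the ramp image and unit displacement are illustrative choices, not from the slides):

```python
import numpy as np

def gradients(f0, f1):
    """Finite-difference estimates of f_x, f_y, f_t between two frames."""
    fx = np.gradient(f0, axis=1)   # spatial derivative along x (columns)
    fy = np.gradient(f0, axis=0)   # spatial derivative along y (rows)
    ft = f1 - f0                   # temporal derivative with dt = 1
    return fx, fy, ft

# A smooth ramp image shifted one pixel to the right (u = 1, v = 0).
x = np.arange(32, dtype=float)
f0 = np.tile(x, (32, 1))
f1 = np.tile(x - 1.0, (32, 1))     # same pattern moved right by 1 pixel

fx, fy, ft = gradients(f0, f1)
u, v = 1.0, 0.0
# Interior pixels satisfy the constraint f_x u + f_y v + f_t = 0.
residual = fx[1:-1, 1:-1] * u + fy[1:-1, 1:-1] * v + ft[1:-1, 1:-1]
print(np.max(np.abs(residual)))    # ≈ 0
```

For this linear ramp the finite differences are exact, so the residual vanishes; on real images it is only approximately zero, which is why the smoothness assumption is needed to regularize the solution.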
9
Computation Method
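A classic computation method built on exactly the two assumptions above (brightness constancy plus velocity smoothness) is the Horn-Schunck iteration; the following is a minimal sketch of that generic scheme, not necessarily the specific method shown on the original slide:

```python
import numpy as np

def avg4(a):
    """Mean of the 4-neighbours (wraparound borders, for brevity)."""
    return 0.25 * (np.roll(a, 1, 0) + np.roll(a, -1, 0) +
                   np.roll(a, 1, 1) + np.roll(a, -1, 1))

def horn_schunck(f0, f1, alpha=1.0, n_iter=100):
    """Horn-Schunck flow: minimize the brightness-constancy residual
    plus alpha^2 times a smoothness term over the whole image."""
    fx = np.gradient(f0, axis=1)
    fy = np.gradient(f0, axis=0)
    ft = f1 - f0
    u = np.zeros_like(f0)
    v = np.zeros_like(f0)
    for _ in range(n_iter):
        ubar, vbar = avg4(u), avg4(v)   # smoothness: pull toward neighbours
        # Update derived from the Euler-Lagrange equations of the functional.
        num = fx * ubar + fy * vbar + ft
        den = alpha ** 2 + fx ** 2 + fy ** 2
        u = ubar - fx * num / den
        v = vbar - fy * num / den
    return u, v

# Ramp image moved one pixel to the right between frames.
x = np.arange(32, dtype=float)
f0 = np.tile(x, (32, 1))
f1 = np.tile(x - 1.0, (32, 1))
u, v = horn_schunck(f0, f1)
# u converges toward 1 (rightward motion); v stays near 0.
```

The smoothness term is what propagates flow estimates into regions where the local gradient alone is ambiguous (the aperture problem).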
10
Optical flow in motion analysis
Motion, as it appears in dynamic images, is usually some combination of four basic elements:
(a) Translation at constant distance from the observer: parallel motion vectors.
(b) Translation in depth relative to the observer: vectors having a common focus of expansion.
(c) Rotation at constant distance about the view axis: concentric motion vectors.
(d) Rotation of a planar object perpendicular to the view axis: vectors starting from straight line segments.
11
Optical flow in motion analysis
Mutual velocity of an observer and an object: let the mutual velocity be (u, v, w) in the directions x, y, z (z represents depth). If (x0, y0, z0) is the position of a point at time t0 = 0, then the position of the same point at time t can be determined as

    (x, y, z) = (x0 + u t, y0 + v t, z0 + w t)

From this motion model we can derive:
- FOE (focus of expansion) determination
- Distance (depth) determination
- Collision prediction
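These three quantities can be sketched numerically; the sketch assumes a unit-focal-length perspective projection (x/z, y/z), which the slide does not state explicitly. Under that model the FOE is (u/w, v/w), the depth z0 + w t reaches zero at t = -z0/w, and every image trajectory is a straight ray out of the FOE:

```python
def project(p):
    """Perspective projection with unit focal length: (x, y, z) -> (x/z, y/z)."""
    x, y, z = p
    return (x / z, y / z)

def position(p0, vel, t):
    """Position of a point moving with constant mutual velocity (u, v, w)."""
    return tuple(c0 + c1 * t for c0, c1 in zip(p0, vel))

p0 = (2.0, 1.0, 10.0)       # point at t = 0, depth z0 = 10
vel = (0.4, 0.2, -2.0)      # w < 0: the object approaches the observer

u, v, w = vel
foe = (u / w, v / w)        # focus of expansion in the image plane
t_collision = -p0[2] / w    # depth z0 + w t hits 0 at t = -z0 / w  (= 5.0)

# The vectors from the FOE to the projected point at two different
# times are parallel: the image trajectory is a ray out of the FOE.
d0 = [a - b for a, b in zip(project(position(p0, vel, 0.0)), foe)]
d1 = [a - b for a, b in zip(project(position(p0, vel, 2.0)), foe)]
cross = d0[0] * d1[1] - d0[1] * d1[0]   # ~0: collinear
```

The collision time follows directly from the depth component alone, which is why FOE-based analysis supports collision prediction without full 3D reconstruction.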
12
Part 3: An experiment in detecting and tracking people in complex scenes using optical flow (by Saitama Univ.)
13
Demand
Automatic visual surveillance systems are in strong demand for various applications. Several systems are commercially available, most of which are based on subtraction between consecutive frames, or between a current image and a stored background image. They work as expected as long as environmental conditions do not change, such as indoors. However, they fail outdoors, where there are various disturbances such as changes of lighting and movements of background objects.
15
First step: compute the optical flow
Two orientation-selective spatial Gaussian filters g and h are applied to the original image f(x, y, t): one is sensitive to vertical edges, the other to horizontal edges. By applying the two different spatial filters g and h to the input image, two constraint equations are derived; (u, v) denotes an optical flow vector, and a subscript denotes partial differentiation.
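A minimal sketch of this step, with plain difference filters standing in for the orientation-selective Gaussians g and h (the paper's exact filters are an assumption here). Each filtered image pair q yields one constraint equation q_x u + q_y v + q_t = 0 per pixel; stacking both pairs and solving by least squares recovers a global (u, v):

```python
import numpy as np

def dX(a):
    """Horizontal difference filter: responds to vertical edges
    (stand-in for the orientation-selective Gaussian g)."""
    return np.gradient(a, axis=1)

def dY(a):
    """Vertical difference filter: responds to horizontal edges
    (stand-in for the orientation-selective Gaussian h)."""
    return np.gradient(a, axis=0)

def flow_from_two_filters(f0, f1):
    """Each filtered pair contributes one constraint per pixel;
    stack both filters' equations and solve for one global (u, v)."""
    rows_A, rows_b = [], []
    for filt in (dX, dY):
        q0, q1 = filt(f0), filt(f1)
        qx, qy, qt = dX(q0), dY(q0), q1 - q0
        s = np.s_[2:-2, 2:-2]                    # drop border pixels
        rows_A.append(np.stack([qx[s].ravel(), qy[s].ravel()], axis=1))
        rows_b.append(-qt[s].ravel())
    A, b = np.concatenate(rows_A), np.concatenate(rows_b)
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v

# Synthetic pair: a smooth pattern translated by (0.2, 0.1) pixels per frame.
y, x = np.mgrid[0:64, 0:64].astype(float)
pattern = lambda x, y: np.sin(0.1 * x) + np.cos(0.15 * y)
f0 = pattern(x, y)
f1 = pattern(x - 0.2, y - 0.1)
u, v = flow_from_two_filters(f0, f1)    # u ~ 0.2, v ~ 0.1
```

Using two differently oriented filters gives two independent constraints at each pixel, which is what lets (u, v) be solved without the global smoothness term.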
17
Second step: region segmentation
Segment the flow image into uniform-flow regions in a split-and-merge fashion. First, divide the image into 16 (4 x 4) regions, calculating the mean flow vector in each region. If a region has any outlier subregions whose flow vectors differ from the mean, the region is further split into 4 (2 x 2) regions. If a region has no outlier subregion, that is, the region has a uniform flow, it is not split. The above process is repeated on each region until it becomes too small to be split.
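The split step above can be sketched as a recursion over quadrants; the tolerance, minimum region size, and the exact outlier test are assumptions, since the slide does not give them:

```python
import numpy as np

def split_uniform(flow, x0, y0, w, h, tol, min_size, out):
    """Split step: recursively quarter a region while some quadrant's
    mean flow deviates from the region mean by more than tol."""
    mean = flow[y0:y0 + h, x0:x0 + w].reshape(-1, 2).mean(axis=0)
    quads = [(x0, y0), (x0 + w // 2, y0),
             (x0, y0 + h // 2), (x0 + w // 2, y0 + h // 2)]
    sub = [flow[y:y + h // 2, x:x + w // 2].reshape(-1, 2).mean(axis=0)
           for x, y in quads]
    uniform = all(np.linalg.norm(m - mean) <= tol for m in sub)
    if uniform or w // 2 < min_size or h // 2 < min_size:
        out.append((x0, y0, w, h, tuple(mean)))   # keep as one region
    else:
        for x, y in quads:
            split_uniform(flow, x, y, w // 2, h // 2, tol, min_size, out)

def segment(flow, tol=0.5, min_size=8):
    """Start from a 4 x 4 grid, as in the slide, then split non-uniform cells."""
    H, W = flow.shape[:2]
    out = []
    for i in range(4):
        for j in range(4):
            split_uniform(flow, j * W // 4, i * H // 4, W // 4, H // 4,
                          tol, min_size, out)
    return out

# Uniform flow everywhere: exactly the initial 16 regions survive.
uniform_flow = np.ones((64, 64, 2))
n_uniform = len(segment(uniform_flow))      # 16
# Two motions with a boundary at x = 24: straddling cells get split.
mixed_flow = np.zeros((64, 64, 2))
mixed_flow[:, :24] = [1.0, 0.0]
mixed_flow[:, 24:] = [0.0, 1.0]
n_mixed = len(segment(mixed_flow))          # more regions after splitting
```

The merge half of split-and-merge (joining adjacent regions with similar means) is omitted here for brevity.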
19
Third step: predicted-path voting
We prepare a four-dimensional voting space (x, y, t, direction). For each uniform-flow region detected in the previous process, we predict the path of the region over a certain future time interval; the figure in the original slides shows the predicted path (only x-y-t is shown). We assume that the region continues to move in the direction of its mean flow vector (u, v) at its speed. We approximate each region by an ellipse whose center coincides with the region centroid. Every point inside the ellipse is given a weight according to a two-dimensional Gaussian, as shown in Fig. 3(a). This weight is voted at the predicted position (x, y) at time t in the direction of (u, v).
The voting result is compared with a threshold. If there is any region whose number of votes exceeds the threshold, that region is detected as a target.
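A minimal sketch of this voting scheme, with a circular Gaussian standing in for the region ellipse; the bin counts, sigma, and threshold are illustrative assumptions, not the paper's actual parameters:

```python
import math
import numpy as np

W, H, T, NDIR = 64, 64, 8, 8            # image size, time steps, direction bins
votes = np.zeros((T, NDIR, H, W))       # the 4D voting space (t, dir, y, x)

def direction_bin(u, v):
    """Quantize a flow direction into one of NDIR bins."""
    ang = math.atan2(v, u) % (2.0 * math.pi)
    return int(ang / (2.0 * math.pi) * NDIR) % NDIR

def vote_region(votes, cx, cy, u, v, sigma=3.0):
    """Vote a Gaussian-weighted blob along the region's predicted path,
    assuming it keeps moving with its mean flow (u, v)."""
    d = direction_bin(u, v)
    yy, xx = np.mgrid[0:votes.shape[2], 0:votes.shape[3]]
    for t in range(votes.shape[0]):
        px, py = cx + u * t, cy + v * t          # predicted centroid at time t
        votes[t, d] += np.exp(-((xx - px) ** 2 + (yy - py) ** 2)
                              / (2.0 * sigma ** 2))

# Two overlapping regions moving consistently reinforce each other's
# votes; thresholding the accumulated votes picks out the target.
vote_region(votes, cx=10, cy=30, u=2.0, v=0.0)
vote_region(votes, cx=12, cy=31, u=2.1, v=0.1)
targets = votes > 1.5
```

Because votes accumulate only where position, time, and direction all agree, sporadic flow noise (the outdoor disturbances mentioned above) rarely crosses the threshold, while a consistently moving person does.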
22
References
- M. Sonka, V. Hlavac, R. Boyle, Image Processing, Analysis, and Machine Vision.
- Detecting and tracking people in complex scenes, http://www-cv.mech.eng.osaka-u.ac.jp/research/tracking_group/iketani/research_e/node1.html