Range Imaging Through Triangulation
Obviously, this is a very slow process and not suitable for dynamic scenes. To speed things up, we can use a laser that projects a vertical line of light onto the scene. This laser rotates around its vertical axis and thereby moves the vertical line of light across the scene. Since only the horizontal positions of points vary and give us depth information, the vertical order of points is preserved. This allows us to compute the depth of each point along the vertical line without ambiguities.
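To make the geometry concrete, here is a minimal triangulation sketch, assuming a pinhole camera at the origin, a laser mounted at horizontal baseline b, and a known angle theta of the projected light plane; the function name, coordinate conventions, and parameters are illustrative assumptions, not the lecture's notation.

```python
import numpy as np

def depth_from_light_plane(x_pix, f, b, theta):
    """Depth of a point lit by a vertical laser plane (triangulation sketch).

    x_pix : horizontal pixel coordinate relative to the image center
    f     : focal length in pixels
    b     : horizontal baseline between camera and laser projector
    theta : angle of the projected light plane, measured from the baseline (x-axis)
    """
    # Camera ray in the x-z plane: X = Z * x_pix / f (pinhole model).
    # Laser ray from (b, 0):       X = b + Z * cos(theta) / sin(theta).
    # Intersecting the two rays gives the depth Z.
    denom = x_pix / f - np.cos(theta) / np.sin(theta)
    if abs(denom) < 1e-9:
        return None  # rays are (nearly) parallel; no reliable depth
    return b / denom
```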
Although this method is faster, it still requires a complete horizontal scan before the depth image is finished. Maybe we should use a pattern of many vertical lines that only needs to be shifted by the distance between neighboring lines? The disadvantage of this idea is that we could confuse points in different vertical lines, i.e., associate points with incorrect projection angles. However, we can overcome this problem by taking multiple images of the same scene with the pattern in the same position. In each picture, a different subset of lines is projected.
Then each line can be uniquely identified by its pattern of presence/absence across the images. For example, for 7 vertical lines we need a series of 3 images to do this encoding, with each line switched on or off according to the binary code of its number:

Line:     #1   #2   #3   #4   #5   #6   #7
Image a:  off  off  off  on   on   on   on
Image b:  off  on   on   off  off  on   on
Image c:  on   off  on   off  on   off  on

Obviously, with this technique we can encode up to (n – 1) lines using log2(n) images. Therefore, this method is more efficient than the single-line scanning technique.
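The decoding step can be sketched as follows, assuming (as in the table above) that image a carries the most significant bit of the code; the function name and bit order are illustrative choices, not from the lecture.

```python
def decode_line_id(lit_in_images):
    """Recover the projected-line number of a pixel from its on/off pattern.

    lit_in_images : sequence of booleans, one per captured image
                    (image a first, taken here as the most significant bit)
    Returns the line number (1 .. 2**k - 1), or None if the pixel was never lit.
    """
    line_id = 0
    for lit in lit_in_images:
        line_id = (line_id << 1) | int(lit)
    return line_id if line_id > 0 else None

# Example: a pixel lit in images a and c but not in b -> pattern on/off/on -> line #5
print(decode_line_id([True, False, True]))  # 5
```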
The Microsoft Kinect V1 System
The Microsoft Kinect V2 System
The Kinect V2 uses a time-of-flight camera to measure depth. It contains a broad IR illuminator whose light is intensity-modulated. The reflected light is projected through a lens onto an array of IR light sensors. For each sensor element, the phase shift between the reflected and the currently emitted light is computed.
The phase shift is proportional to the distance that the light traveled and can thus be used to compute depth across the received image. Note that the measurable depth range is limited by the modulation wavelength – the system cannot decide, for example, whether 0.3 or 1.3 cycles have passed. The advantage of time-of-flight cameras is that they do not need an offset between illuminator and camera or any stereo-matching algorithm.
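A minimal sketch of the phase-to-depth conversion, assuming a single modulation frequency; the 80 MHz value is only an assumed example, not a Kinect specification.

```python
import numpy as np

C = 299_792_458.0  # speed of light in m/s

def depth_from_phase(phase_shift, f_mod=80e6):
    """Depth from the phase shift of amplitude-modulated light (time of flight).

    phase_shift : measured phase difference in radians, in [0, 2*pi)
    f_mod       : modulation frequency in Hz (80 MHz is an assumed example value)

    The light travels to the scene and back, so one full modulation cycle
    corresponds to a depth of c / (2 * f_mod), the unambiguous range.
    """
    unambiguous_range = C / (2.0 * f_mod)
    return (phase_shift / (2.0 * np.pi)) * unambiguous_range

# With f_mod = 80 MHz the unambiguous range is about 1.87 m; a point at 2.5 m
# produces the same phase as one at roughly 0.63 m (the wrap-around mentioned above).
```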
Now we will talk about… Motion Analysis
Motion analysis deals with three main groups of motion-related problems:
- Motion detection
- Moving object detection and location
- Derivation of 3D object properties
Motion analysis and object tracking combine two separate but inter-related components:
- Localization and representation of the object of interest (target)
- Trajectory filtering and data association
One or the other may be more important depending on the nature of the motion application.
Differential Motion Analysis
A simple method for motion detection is the subtraction of two or more images in a given image sequence. Usually, this method results in a difference image d(i, j), in which non-zero values indicate areas with motion. For given images f1 and f2 and a threshold ε, d(i, j) can be computed as follows: d(i, j) = 0 if |f1(i, j) – f2(i, j)| ≤ ε, and 1 otherwise.
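As a sketch, the thresholded difference image can be computed as follows; eps plays the role of the threshold ε, and the default of 25 matches the examples below.

```python
import numpy as np

def difference_image(f1, f2, eps=25):
    """Binary difference image d(i, j): 1 where the gray levels differ by more than eps."""
    return (np.abs(f1.astype(int) - f2.astype(int)) > eps).astype(np.uint8)
```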
Difference Pictures
Another example of a difference picture that indicates the motion of objects (ε = 25).
Applying a size filter (size 10) to remove noise from a difference picture.
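The lecture does not specify how the size filter works; one plausible sketch uses connected-component labeling and discards regions smaller than the given size.

```python
import numpy as np
from scipy import ndimage

def size_filter(diff_img, min_size=10):
    """Remove connected regions smaller than min_size pixels from a binary difference image."""
    labels, n = ndimage.label(diff_img)
    if n == 0:
        return np.zeros_like(diff_img)
    # Size of each labeled region (labels 1..n); background (label 0) is dropped.
    sizes = np.asarray(ndimage.sum(diff_img, labels, index=range(1, n + 1)))
    keep = np.zeros(n + 1, dtype=bool)
    keep[1:] = sizes >= min_size
    return keep[labels].astype(np.uint8)
```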
The differential method can rather easily be “tricked”. Here, the indicated changes were induced by changes in the illumination instead of object or camera motion (again, ε = 25).
In order to determine the direction of motion, we can compute the cumulative difference image for a sequence f1, …, fn of more than two images:

d_cum(i, j) = Σ_{k=2..n} a_k · |f1(i, j) – fk(i, j)|

Here, f1 is used as the reference image, and the weight coefficients a_k can be used to give greater weight to more recent frames and thereby highlight the current object positions.
Cumulative Difference Image
Example: for a sequence of 4 images with weights a2 = 1, a3 = 2, a4 = 4, the resulting cumulative difference image contains values such as 7, 6, and 4: positions that differ from the reference image in every subsequent frame reach the maximum value 1 + 2 + 4 = 7, and the larger weights of later frames emphasize the object's most recent positions.
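A minimal sketch of the cumulative difference image, following the formula above; frame and weight names are illustrative.

```python
import numpy as np

def cumulative_difference(frames, weights):
    """Cumulative difference image: weighted sum of |f1 - fk| over k = 2..n.

    frames  : list of gray-scale images; frames[0] is the reference image f1
    weights : coefficients a2..an, one per subsequent frame (e.g. [1, 2, 4])
    """
    f1 = frames[0].astype(int)
    d_cum = np.zeros_like(f1)
    for a_k, f_k in zip(weights, frames[1:]):
        d_cum += a_k * np.abs(f1 - f_k.astype(int))
    return d_cum
```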
Generally speaking, while differential motion analysis is well-suited for motion detection, it is not ideal for the analysis of motion characteristics.
Optical Flow
Optical flow reflects the image changes due to motion during a time interval dt, which must be short enough to guarantee small inter-frame motion changes. The optical flow field is the velocity field that describes how the projections of three-dimensional object points move across the two-dimensional image. Optical flow computation is based on two assumptions:
- The observed brightness of any object point is constant over time.
- Nearby points in the image plane move in a similar manner (the velocity smoothness constraint).
The basic idea underlying most algorithms for optical flow computation:
- Regard the image sequence as a three-dimensional (x, y, t) space.
- Determine the x- and y-slopes of equal-brightness pixels along the t-axis.
The computation of actual 3D gradients is usually quite complex and requires substantial computational power for real-time applications.
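As a concrete illustration of this gradient-based idea, here is a minimal least-squares sketch in the style of Lucas-Kanade; the lecture does not name a specific algorithm, so the patch-based formulation and variable names are assumptions.

```python
import numpy as np

def flow_for_patch(patch_t0, patch_t1):
    """Least-squares flow estimate (u, v) for a small image patch, Lucas-Kanade style.

    Solves Ix*u + Iy*v + It = 0 over all pixels of the patch, combining the
    brightness-constancy assumption with local velocity smoothness.
    """
    I1 = patch_t0.astype(float)
    I2 = patch_t1.astype(float)
    Ix = np.gradient(I1, axis=1)   # spatial derivative in x
    Iy = np.gradient(I1, axis=0)   # spatial derivative in y
    It = I2 - I1                   # temporal derivative
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)   # one equation per pixel
    b = -It.ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v                    # displacement in pixels per frame
```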
Let us consider the two-dimensional case (one spatial dimension x and the temporal dimension t), with an object moving to the right.
Instead of using the gradient methods, one can simply determine those straight lines with a minimum of variation (standard deviation) in intensity along them. (In the original x-t figure, some areas are marked in which the flow is undefined.)
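A minimal sketch of this line-search idea for the one-spatial-dimension case; the slope sampling, the search range, and the use of lines through t = 0 are assumptions about details the slide leaves open.

```python
import numpy as np

def flow_by_line_search(xt, max_speed=3.0, n_slopes=61):
    """1-D optical flow in an x-t image: for each column x, pick the straight line
    through (x, t=0) with minimal intensity standard deviation along it.

    xt : 2-D array indexed as xt[t, x] (one spatial dimension plus time)
    Returns one velocity (pixels per frame) per column, the slope of the best line.
    """
    T, X = xt.shape
    t = np.arange(T)
    speeds = np.linspace(-max_speed, max_speed, n_slopes)
    flow = np.zeros(X)
    for x in range(X):
        best_std, best_v = np.inf, 0.0
        for v in speeds:
            cols = np.round(x + v * t).astype(int)
            if cols.min() < 0 or cols.max() >= X:
                continue            # line leaves the image; flow undefined here
            s = xt[t, cols].std()   # intensity variation along the candidate line
            if s < best_std:
                best_std, best_v = s, v
        flow[x] = best_v
    return flow
```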
Optical flow computation will be in error if the constant brightness and velocity smoothness assumptions are violated. In real imagery, their violation is quite common. Typically, the optical flow changes dramatically in highly textured regions, around moving boundaries, at depth discontinuities, etc. Resulting errors propagate across the entire optical flow solution.
Global error propagation is the biggest problem of global optical flow computation schemes, and local optical flow estimation helps overcome the difficulties. However, local flow estimation can introduce large errors in homogeneous areas, i.e., regions of constant intensity. One solution to this problem is to assign confidence values to local flow estimates and consider them when integrating local and global optical flow.
Block-Matching Motion Estimation
A simple method for deriving optical flow vectors is block matching. The current video frame is divided into a large number of squares (blocks). For each block, find that same-sized area in the following frame(s) with the greatest intensity correlation to it. The spatiotemporal offset between the original block and its best matches in the following frame(s) indicates likely motion vectors. The search can be limited by imposing a maximum velocity threshold.
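A minimal block-matching sketch; it uses the sum of squared differences as the matching cost (the slide speaks of intensity correlation, so this is a simplification), and max_disp implements the maximum-velocity constraint.

```python
import numpy as np

def block_matching(f1, f2, block=16, max_disp=8):
    """Estimate one motion vector per block by exhaustive search over a limited range."""
    H, W = f1.shape
    f1 = f1.astype(int)
    f2 = f2.astype(int)
    vectors = {}
    for y in range(0, H - block + 1, block):
        for x in range(0, W - block + 1, block):
            ref = f1[y:y + block, x:x + block]
            best, best_v = np.inf, (0, 0)
            for dy in range(-max_disp, max_disp + 1):
                for dx in range(-max_disp, max_disp + 1):
                    yy, xx = y + dy, x + dx
                    if yy < 0 or xx < 0 or yy + block > H or xx + block > W:
                        continue
                    cand = f2[yy:yy + block, xx:xx + block]
                    ssd = np.sum((ref - cand) ** 2)   # dissimilarity of candidate area
                    if ssd < best:
                        best, best_v = ssd, (dx, dy)
            vectors[(x, y)] = best_v    # motion vector for the block at (x, y)
    return vectors
```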
Feature Point Correspondence
Feature point correspondence is another method for motion field construction. Velocity vectors are determined only for corresponding feature points. Object motion parameters can be derived from the computed motion field vectors. Motion assumptions can help to localize moving objects. Frequently used assumptions, as discussed above, include:
- Maximum velocity
- Small acceleration
- Common motion
The idea is to find significant points (interest points, feature points) in all images of the sequence: points least similar to their surroundings, representing object corners, borders, or any other characteristic features in an image that can be tracked over time. Basically the same measures as for stereo matching can be used. Point detection is followed by a matching procedure, which looks for correspondences between these points over time. The main difference from stereo matching is that we can no longer simply search along an epipolar line; instead, the search area is defined by our motion assumptions. The process results in a sparse velocity field. Motion detection based on correspondence works even for relatively long interframe time intervals.
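A minimal sketch of the matching step under the maximum-velocity assumption; it matches points greedily by proximity only, ignoring the similarity measures mentioned above, so the function name and threshold are illustrative.

```python
import numpy as np

def match_feature_points(points_t0, points_t1, max_velocity=20.0):
    """Greedy correspondence of feature points between two consecutive frames.

    points_t0, points_t1 : arrays of (x, y) feature coordinates
    max_velocity         : maximum allowed displacement in pixels (maximum-velocity assumption)

    Returns a list of (i, j) index pairs; points with no candidate within the
    search radius stay unmatched, so the resulting velocity field is sparse.
    """
    matches, used = [], set()
    for i, p in enumerate(points_t0):
        d = np.linalg.norm(points_t1 - p, axis=1)   # distances to all candidates
        for j in np.argsort(d):
            if d[j] > max_velocity:
                break                                # no candidate inside search area
            if j not in used:
                used.add(j)
                matches.append((i, int(j)))
                break
    return matches
```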