Computer Vision: 3D Vision
Yang Wang, National ICT Australia
Introduction
Single camera: perspective projection, camera parameters
Two cameras: depth computation, epipolar geometry
Range sensors
Examples
Coordinate Transformation
P_B = R (P_A - t)
P_A = (x_A, y_A, z_A)^T, P_B = (x_B, y_B, z_B)^T
R: 3×3 rotation matrix; t: 3×1 translation vector
Homogeneous Coordinates
P_B = R (P_A - t) = R P_A - R t
In homogeneous coordinates X = (x, y, z, 1)^T this becomes a single linear map X_B = T X_A, where the 4×4 matrix T stacks R and -R t
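A minimal sketch of the homogeneous form above: the rotation and translation are folded into one 4×4 matrix T so that X_B = T X_A. The particular R (a 90° rotation about z) and t used here are assumed example values.

```python
import numpy as np

# Example rotation (90 deg about z) and translation; both are assumptions.
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
t = np.array([1.0, 2.0, 3.0])

# Build T = [[R, -R t], [0, 1]]: the translation is folded into the matrix.
T = np.eye(4)
T[:3, :3] = R
T[:3, 3] = -R @ t

P_A = np.array([4.0, 5.0, 6.0])
X_A = np.append(P_A, 1.0)       # homogeneous coordinates (x, y, z, 1)
X_B = T @ X_A                   # single matrix-vector product

# Agrees with the non-homogeneous form P_B = R (P_A - t)
P_B = R @ (P_A - t)
```

The benefit is composability: a chain of coordinate changes becomes a product of 4×4 matrices.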
Pinhole Camera
Pinhole perspective projection: mathematically simple and convenient
The Perspective Imaging Model
1-D case, then the general case: for a 3-D point P = (x_c, y_c, z_c) and its image p = (u, v),
u = (f / z_c) x_c, v = (f / z_c) y_c
Single Perspective Camera
Image affine coordinate system: starting from u = f x_c / z_c, v = f y_c / z_c, the pixel coordinates are
u = a (f x_c / z_c) + b (f y_c / z_c) + x_0
v = c (f y_c / z_c) + y_0
where a, b, c account for pixel scale and skew and (x_0, y_0) is the principal point
Intrinsic Parameters
U = K X_c (equality up to the scale factor z_c)
K: 3×3 intrinsic parameter matrix
Extrinsic Parameters
X_c = R (X_w - t)
X_c, X_w: 3-D coordinates of the point in the camera and world frames
R, t: extrinsic parameters (rotation and translation)
Projective Matrix
X_c = R (X_w - t), so U = K X_c = K R (X_w - t)
Projective matrix: U = M X, where M is 3×4 and X is the homogeneous world coordinate
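A sketch of the full pipeline U = K R (X_w - t) for an assumed camera: focal length 800 px, principal point (320, 240), zero skew, identity rotation, and a camera centre 2 m behind the world origin. All numeric values are illustrative assumptions.

```python
import numpy as np

# Assumed intrinsics: f = 800 px, principal point (320, 240), no skew.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                       # assumed: camera axes aligned with world
t = np.array([0.0, 0.0, -2.0])      # assumed: camera 2 m behind the origin

def project(X_w):
    """Map a 3-D world point to pixel coordinates (u, v)."""
    U = K @ R @ (X_w - t)           # homogeneous image point; scale is z_c
    return U[:2] / U[2]             # divide out the depth

u, v = project(np.array([0.5, 0.25, 2.0]))   # -> (420.0, 290.0)
```

The division by U[2] is exactly the "up to scale z_c" caveat from the intrinsic-parameter slide.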
Single Camera Calibration
U = M X. M has 12 entries but is defined only up to scale (11 degrees of freedom), and each correspondence yields 2 equations, so in general 6 pairs of (u_i, v_i) and (x_i, y_i, z_i) are required to solve for M
Single Camera Calibration
Each pair (u_i, v_i) and (x_i, y_i, z_i) contributes two homogeneous linear equations in the entries of M; stacking the equations for all pairs gives a linear system that is solved for M up to scale
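The stacking step above can be sketched as follows. The camera (K, R, t) used to synthesise the six correspondences is an assumption for the demo; the solve itself is the standard homogeneous least-squares step via SVD.

```python
import numpy as np

# Assumed camera used only to synthesise exact correspondences.
K = np.array([[700.0, 0.0, 300.0], [0.0, 700.0, 200.0], [0.0, 0.0, 1.0]])
R = np.eye(3)
t = np.array([0.0, 0.0, -10.0])
M_true = K @ np.hstack([R, (-R @ t).reshape(3, 1)])

rng = np.random.default_rng(0)
Xw = rng.uniform(1.0, 5.0, size=(6, 3))        # six 3-D calibration points
Xh = np.hstack([Xw, np.ones((6, 1))])          # homogeneous coordinates
proj = (M_true @ Xh.T).T
uv = proj[:, :2] / proj[:, 2:3]                # exact pixel measurements

# Each pair gives two homogeneous linear equations in the 12 entries of M:
#   m1.X - u (m3.X) = 0  and  m2.X - v (m3.X) = 0
rows = []
for (x, y, z), (u, v) in zip(Xw, uv):
    X = [x, y, z, 1.0]
    rows.append(X + [0.0] * 4 + [-u * c for c in X])
    rows.append([0.0] * 4 + X + [-v * c for c in X])
A = np.array(rows)                              # 12 x 12 system A m = 0

# Solution: right singular vector with the smallest singular value
# (M is recovered only up to scale).
_, _, Vt = np.linalg.svd(A)
M = Vt[-1].reshape(3, 4)

# Reprojecting the calibration points reproduces the measurements.
reproj = (M @ Xh.T).T
uv2 = reproj[:, :2] / reproj[:, 2:3]
```

In practice the image and world coordinates are normalised before building A to improve conditioning; that refinement is omitted here.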
Depth Perception from Stereo
Simple stereo system: the X-axes are collinear and the Y- and Z-axes are parallel
Disparity d = x_l - x_r is the difference in the image location of the same 3-D point; for baseline b and focal length f, the depth follows as z = f b / d
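The disparity-to-depth relation for the simple stereo setup can be sketched directly; the focal length and baseline below are assumed values.

```python
# Similar triangles in the canonical stereo rig give z = f * b / d.
f = 700.0    # focal length in pixels (assumed)
b = 0.12     # baseline in metres (assumed)

def depth(x_left, x_right):
    """Depth of a point from its horizontal image coordinates in both views."""
    d = x_left - x_right           # disparity in pixels
    if d <= 0:
        raise ValueError("disparity must be positive for a point in front of the rig")
    return f * b / d

z = depth(412.0, 370.0)            # d = 42 px -> z = 2.0 m
```

Note the inverse relation: depth error grows quadratically with distance for a fixed disparity error, which is why the later slide weighs accuracy against coverage when choosing the baseline.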
Correspondence Problem
[Figure: left and right input images and the computed depth map]
Depth Perception from Stereo
Establishing correspondences: the most difficult part of a stereo vision system is not the depth calculation but determining the correspondences used in the depth calculation
Cross correlation: for a pixel P of image I1, a selected region of I2 is searched to find the pixel that maximises the response of the cross-correlation operator
Symbolic matching and relational constraints: look for a feature in one image that matches a feature in the other; typical features are junctions, line segments, or regions
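The cross-correlation strategy can be sketched on a synthetic stereo pair: for a patch around a left-image pixel, candidate positions in the right image are scanned (here along the same row, anticipating the epipolar constraint) and the disparity maximising normalised cross-correlation wins. Image size, window size, and the simulated disparity are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
left = rng.uniform(0.0, 1.0, size=(40, 100))     # synthetic left image
true_disparity = 7
right = np.roll(left, -true_disparity, axis=1)   # right view: features shifted left

def ncc(a, b):
    """Normalised cross-correlation of two equal-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def match(row, col, win=5, max_disp=20):
    """Disparity maximising NCC for the left-image pixel (row, col)."""
    patch = left[row - win:row + win + 1, col - win:col + win + 1]
    scores = [ncc(patch, right[row - win:row + win + 1,
                               col - d - win:col - d + win + 1])
              for d in range(max_disp + 1)]
    return int(np.argmax(scores))

d = match(row=20, col=50)    # recovers the simulated disparity
```

On real images the search needs texture to succeed, which is exactly why the slide also lists feature-based symbolic matching as an alternative.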
Epipolar Geometry
Baseline, epipole, epipolar plane, epipolar line
Depth Perception from Stereo
The epipolar constraint: the 2-dimensional search for the point in one image that corresponds to a given point in a second image is reduced to a 1-dimensional search by the so-called epipolar geometry of the image pair
The plane containing the 3-D point P, the two centres of projection C1 and C2, and the two image points P1 and P2 to which P projects is called the epipolar plane
The two lines e1 and e2 resulting from the intersection of the epipolar plane with the two image planes I1 and I2 are called epipolar lines
The epipole of an image in a stereo pair is the point at which all of its epipolar lines intersect
Given the point P1 on epipolar line e1 in image I1 and the relative orientation of the cameras, the corresponding epipolar line e2 in image I2, on which the corresponding point P2 must lie, can be found
Depth Perception from Stereo
The ordering constraint: given a pair of points in the scene and their corresponding projections in each of the two images, if these points lie on a continuous surface in the scene, they will be ordered in the same way along the epipolar lines in each of the images
Error versus coverage: increasing the baseline improves depth accuracy but decreases the coverage of correspondences
General Stereo Configuration
U = K X, U' = K' X'
X, X': position of P in the left/right camera coordinates
U, U': positions of p1 and p2 in the left/right images
K, K': intrinsic parameters of the left/right camera
Fundamental Matrix
U = K X, U' = K' X', with X' = R (X - t)
R, t: rotation and translation between the cameras
Coplanarity of X, t, and X - t = R^-1 X' gives X^T [t]x R^-1 X' = 0, where [t]x denotes the cross-product matrix of t
Substituting X = K^-1 U and X' = (K')^-1 U' (up to scale):
U^T (K^-1)^T [t]x R^-1 (K')^-1 U' = 0, i.e. U^T F U' = 0
Fundamental matrix: F = (K^-1)^T [t]x R^-1 (K')^-1
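The construction of F and the epipolar constraint can be checked numerically. The two cameras below (identical intrinsics, a small rotation about y, a baseline along x) are assumptions for the demo; the check itself follows the slide's derivation exactly.

```python
import numpy as np

def skew(t):
    """Cross-product matrix [t]x such that [t]x @ v == np.cross(t, v)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

# Assumed stereo pair: identical intrinsics, 5 deg rotation, 0.3 m baseline.
K = np.array([[600.0, 0.0, 320.0], [0.0, 600.0, 240.0], [0.0, 0.0, 1.0]])
Kp = K.copy()
theta = np.deg2rad(5.0)
R = np.array([[np.cos(theta), 0.0, np.sin(theta)],
              [0.0, 1.0, 0.0],
              [-np.sin(theta), 0.0, np.cos(theta)]])
t = np.array([0.3, 0.0, 0.0])

# F = (K^-1)^T [t]x R^-1 (K')^-1, as on the slide.
F = np.linalg.inv(K).T @ skew(t) @ np.linalg.inv(R) @ np.linalg.inv(Kp)

# Project one world point into both views (X' = R (X - t)) and
# verify the epipolar constraint U^T F U' = 0.
X = np.array([0.2, -0.1, 4.0])
Xp = R @ (X - t)
U, Up = K @ X, Kp @ Xp
residual = float(U @ F @ Up)
```

The residual vanishes for any scene point, which is what makes F usable without knowing depth: it constrains corresponding pixels directly.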
Essential Matrix
Given the intrinsic parameters K and K':
(K^-1 U)^T [t]x R^-1 (K')^-1 U' = 0
With normalised coordinates V = K^-1 U, V' = (K')^-1 U':
V^T E V' = 0
Essential matrix: E = [t]x R^-1
Depth Perception from Stereo
Canonical configuration; image rectification
Range Sensors
LIDAR: light detection and ranging
RADAR: radio detection and ranging
Structured light
Time-of-Flight Camera
[Table: comparison of range-sensing technologies]
3-D Cues Available in 2-D Images
An image is a 2-D projection of the world, yet cues exist in 2-D images for interpreting the 3-D world
Interposition occurs when one object occludes another, indicating that the occluding object is closer to the viewer than the occluded object
Perspective scaling indicates that the distance to an object is inversely proportional to its image size
3-D Cues Available in 2-D Images
Texture gradient is the change of image texture along some direction in the image
Motion parallax indicates that the images of closer objects move faster than the images of distant objects
Other Phenomena
Shape from shading: smooth objects often present a highlight at points where a ray from the light source makes equal angles with the ray reflected toward the viewer, and become increasingly darker as the surface normal becomes perpendicular to the rays of illumination
By itself, shape from shading is only expected to work well in highly controlled environments
Other Phenomena
Shape from texture: whenever texture is assumed to lie on a single 3-D surface and to be uniform, the texture gradient in 2-D can be used to compute the 3-D orientation of the surface
Shape from silhouette: extracts the silhouettes of an object from multiple images with known camera orientations so that the 3-D shape of the object can be reconstructed
Other Phenomena
Depth from focus: by bringing an object into focus, the sensor obtains information on the range to that object
Motion phenomena: when a moving visual sensor pursues an object in 3-D, points on that object appear to expand in the 2-D image as the sensor closes in on the object
Boundary and virtual lines: virtual lines or curves are formed by a compelling grouping of similar points or objects along an image line or curve
Other Phenomena
Vanishing points: a 3-D line skew to the optical axis appears to vanish at a point in the 2-D image
Vanishing lines are formed by the vanishing points from different groups of lines parallel to the same plane
A horizon line is formed from the vanishing points of different groups of parallel lines on the ground plane
Using these principles, 3-D models of scenes can be built from ordinary video taken from several viewpoints in the scene
Example 1: Traffic Monitoring
Road traffic monitoring (Wang 2006)
Assumptions: the road surface is level; no camera misalignment
Traffic Monitoring
Road and camera geometry: camera height H, camera tilt and pan angles, focal length f
Traffic Monitoring
U = K R (X - t), ignoring the affine parameters
Translation: t = (0, 0, H)^T
Rotation: determined by the tilt and pan angles
Traffic Monitoring
Mapping from the ground plane to the image, and the reverse transformation
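A hedged sketch of the ground-to-image mapping: for ground points X = (x, y, 0) and a camera at height H, the model U = K R (X - t) with t = (0, 0, H)^T collapses to a 3×3 homography that can be inverted to map pixels back to road coordinates. The rotation convention chosen here (tilt about x applied after pan about z) and all numeric values are assumptions for illustration; the original system's exact parameterisation may differ.

```python
import numpy as np

# Assumed camera setup: 8 m mast, f = 1000 px, 60 deg tilt, 10 deg pan.
H, f = 8.0, 1000.0
tilt, pan = np.deg2rad(60.0), np.deg2rad(10.0)

K = np.array([[f, 0.0, 320.0], [0.0, f, 240.0], [0.0, 0.0, 1.0]])
Rx = np.array([[1.0, 0.0, 0.0],
               [0.0, np.cos(tilt), -np.sin(tilt)],
               [0.0, np.sin(tilt),  np.cos(tilt)]])
Rz = np.array([[np.cos(pan), -np.sin(pan), 0.0],
               [np.sin(pan),  np.cos(pan), 0.0],
               [0.0, 0.0, 1.0]])
R = Rx @ Rz                       # assumed convention: pan, then tilt
t = np.array([0.0, 0.0, H])

# For z = 0, K R (X - t) = K (x r1 + y r2 - R t): a homography on (x, y, 1).
Hg = K @ np.column_stack([R[:, 0], R[:, 1], -R @ t])

def ground_to_image(x, y):
    U = Hg @ np.array([x, y, 1.0])
    return U[:2] / U[2]

def image_to_ground(u, v):
    X = np.linalg.inv(Hg) @ np.array([u, v, 1.0])
    return X[:2] / X[2]

u, v = ground_to_image(3.0, 25.0)   # road point 3 m across, 25 m ahead
x, y = image_to_ground(u, v)        # round trip back to road coordinates
```

The invertibility of Hg is what makes the "reverse transformation" on the slide possible: any pixel known to lie on the road maps to a unique ground position.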
Traffic Monitoring
Camera view simulation under different camera settings: camera height, focal length, roadside/on-street placement, lane/intersection view
Example 2: Segmentation
Bi-layer segmentation (Kolmogorov et al. 2005)
Two layers: foreground and background
Task: accurately segment foreground objects with two cameras
Bi-layer Segmentation
Stereo: foreground pixels have large disparity
Color/contrast: background and foreground have distinct color distributions
Coherence: spatial and temporal
Probabilistic approach: p(label | disparity, data)
Bi-layer Segmentation
[Figure: color/contrast + coherence (left) versus stereo + coherence (right)]
Bi-layer Segmentation
Fusing stereo and color/contrast: the two cues complement each other
Application: background substitution
Example 3: Make3D
Depth from a single image: learn the relations between various parts of the image and use monocular cues to learn depth from data (Saxena et al. 2008)
Make3D
Approach: over-segment the image into superpixels, then infer the 3-D location and orientation of each superpixel
Make3D
Image properties
Local features: for a particular region, are the image features strong indicators of the 3-D depth/orientation?
Co-planarity: except in cases of occlusion, neighboring planes are more likely to be connected to each other
Co-linearity: long straight lines in the image represent straight lines in 3-D
Make3D
Local features: texture/gradient, color channels, neighbours, multiple scales
Make3D
[Figure: co-planarity and co-linearity constraints between superpixels]
Make3D
[Figure: experimental results, input images and estimated depth]