
1 3D Sensing and Reconstruction
Readings: Ch 12: 12.5-12.6; Ch 13: 13.1-13.3, 13.9.4
Topics: Perspective Geometry, Camera Model, Stereo Triangulation, 3D Reconstruction by Space Carving

2 3D Shape from X
"Shape from X" means computing 3D coordinates by one of several methods: shading, silhouette, texture, and motion (mainly research); stereo and light striping (used in practice).

3 Perspective Imaging Model: 1D
O is the center of projection (the camera lens). The real image plane lies behind O; the front image plane, which we use, lies at focal distance f in front of O along the optical axis, so the image is not inverted. A 3D object point B at depth z_c and offset x_c projects to image point x_i:

x_i / f = x_c / z_c

4 Perspective in 2D (Simplified)
Here camera coordinates equal world coordinates: the 3D object point is P = (x_c, y_c, z_c) = (x_w, y_w, z_w), and the ray through P meets the image plane at P' = (x_i, y_i, f). Similar triangles along the optical axis give:

x_i / f = x_c / z_c        y_i / f = y_c / z_c

so x_i = (f/z_c) x_c and y_i = (f/z_c) y_c.
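
The 2D projection above can be sketched in a few lines (Python is the editor's choice here; the slides give only the math, and the function name is illustrative):

```python
def project_perspective(point_c, f):
    """Project a camera-frame 3D point onto the image plane at focal
    distance f, using x_i = (f/z_c) x_c and y_i = (f/z_c) y_c."""
    xc, yc, zc = point_c
    if zc <= 0:
        raise ValueError("point must be in front of the camera")
    return (f * xc / zc, f * yc / zc)

# A point twice as far from the camera projects half as large:
print(project_perspective((2.0, 1.0, 4.0), 1.0))   # (0.5, 0.25)
```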

5 3D from Stereo
A 3D point is viewed in a left image and a right image. Disparity is the difference in image location of the same 3D point when projected under perspective to two different cameras: d = x_left - x_right.

6 Depth Perception from Stereo. Simple Model: Parallel Optic Axes
Two cameras L and R with focal length f have parallel optic axes separated by baseline b. For a 3D point P = (x, y, z), the image-plane coordinates satisfy (the y-axis is perpendicular to the page):

x_l / f = x / z        x_r / f = (x - b) / z        y_l / f = y_r / f = y / z

7 Resultant Depth Calculation
For stereo cameras with parallel optical axes, focal length f, baseline b, and corresponding image points (x_l, y_l) and (x_r, y_r) with disparity d:

z = f*b / (x_l - x_r) = f*b / d
x = x_l*z / f  or  b + x_r*z / f
y = y_l*z / f  or  y_r*z / f

This method of determining depth from disparity is called triangulation.
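
These formulas translate directly into code; a minimal sketch of the parallel-axis case (function name is mine):

```python
def triangulate_parallel(xl, yl, xr, f, b):
    """Recover (x, y, z) for parallel-axis stereo: z = f*b/d with
    disparity d = xl - xr, then x = xl*z/f and y = yl*z/f."""
    d = xl - xr                      # disparity
    if d <= 0:
        raise ValueError("non-positive disparity")
    z = f * b / d
    return (xl * z / f, yl * z / f, z)

# Point (1, 0.4, 2) seen with f = 1, b = 0.5 gives xl = 0.5, xr = 0.25:
print(triangulate_parallel(0.5, 0.2, 0.25, 1.0, 0.5))   # (1.0, 0.4, 2.0)
```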

8 Finding Correspondences
If the correspondence is correct, triangulation works VERY well. But correspondence finding is not perfectly solved. (What methods have we studied?) For some very specific applications it can be solved for those specific kinds of images, e.g. the windshield of a car.

9 Three Main Matching Methods
1. Cross correlation using small windows (dense).
2. Symbolic feature matching, usually using segments/corners (sparse).
3. Use the newer interest operators, e.g. SIFT (sparse).
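
Method 1 can be sketched as a search for the best normalized cross-correlation score along one image row (the epipolar line of a normal image pair). This is a minimal illustration, not an optimized matcher; the function name is mine:

```python
import numpy as np

def best_match_ncc(left, right, row, col, w=2):
    """Find the column in `right` whose (2w+1)x(2w+1) window best matches
    the window around (row, col) in `left`, searching along the same row.
    Uses zero-mean normalized cross correlation as the score."""
    patch = left[row - w:row + w + 1, col - w:col + w + 1].astype(float)
    patch = patch - patch.mean()
    best_col, best_score = None, -np.inf
    for c in range(w, right.shape[1] - w):
        cand = right[row - w:row + w + 1, c - w:c + w + 1].astype(float)
        cand = cand - cand.mean()
        denom = np.sqrt((patch ** 2).sum() * (cand ** 2).sum())
        score = (patch * cand).sum() / denom if denom > 0 else -1.0
        if score > best_score:
            best_score, best_col = score, c
    return best_col
```

On a right image that is the left image shifted two columns, the best match for a pixel at column 5 is found at column 3, i.e. a disparity of 2.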

10 Epipolar Geometry Constraint: 1. Normal Pair of Images
The 3D point P and the two camera centers C1 and C2 (separated by baseline b) define the epipolar plane. The epipolar plane cuts through the image planes, forming 2 epipolar lines. The match for P1 (or P2) in the other image must lie on the corresponding epipolar line.

11 Epipolar Geometry: General Case
(Figure: cameras C1 and C2 viewing point P, with image points P1, P2 and epipoles e1, e2.)

12 Constraints
1. Epipolar Constraint: matching points lie on corresponding epipolar lines.
2. Ordering Constraint: points P and Q usually appear in the same order along corresponding epipolar lines.

13 Structured Light
3D data can also be derived using a single camera and a light source that can produce stripe(s) on the 3D object.

14 Structured Light 3D Computation
With the camera at the origin (0,0,0), the light source at baseline distance b along the x axis, stripe-plane angle θ, and real image point (x', y', f), the 3D point (x, y, z) is

[x y z] = ( b / (f cot θ - x') ) [x' y' f]
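
The formula above is a one-line computation; a sketch (Python; θ is the stripe-plane angle as on the slide, and the function name is mine):

```python
import math

def structured_light_3d(xp, yp, f, b, theta):
    """[x y z] = b / (f*cot(theta) - x') * [x' y' f], per the slide's
    single-camera-plus-light-stripe triangulation."""
    scale = b / (f / math.tan(theta) - xp)
    return (scale * xp, scale * yp, scale * f)

# With theta = 45 degrees and f = b = 1, the image-center ray gives z = 1:
x, y, z = structured_light_3d(0.0, 0.0, 1.0, 1.0, math.pi / 4)
```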

15 Depth from Multiple Light Stripes
What are these objects?

16 Our (former) System: 4-Camera Light-Striping Stereo
(Components: projector, rotation table, cameras, and the 3D object.)

17 Camera Model: Recall There Are 5 Different Frames of Reference
Object, World (W), Camera (C), Real Image, and Pixel Image.

18 The Camera Model
How do we get an image point IP from a world point P? The 3x4 camera matrix C maps homogeneous world coordinates to homogeneous image coordinates:

[s IPr]   [c11 c12 c13 c14] [Px]
[s IPc] = [c21 c22 c23 c24] [Py]
[s    ]   [c31 c32 c33  1 ] [Pz]
                            [1 ]

What's in C?

19 The camera model handles the rigid body transformation from world coordinates to camera coordinates plus the perspective transformation to image coordinates.

1. CP = T R WP    (world point WP to camera point CP)
2. FP = π(f) CP   (camera point to real image point FP)

The perspective transformation is:

[s FPx]   [1 0  0   0] [CPx]
[s FPy] = [0 1  0   0] [CPy]
[s FPz]   [0 0  1   0] [CPz]
[s    ]   [0 0 1/f  0] [ 1 ]

Why is there not a scale factor here?
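
Applying a camera matrix is one matrix-vector product followed by division by s. A sketch with a toy matrix (the C below is for illustration only, not a calibrated camera, and the function name is mine):

```python
import numpy as np

def apply_camera_matrix(C, P):
    """[s*IPr, s*IPc, s] = C @ [Px, Py, Pz, 1]; divide by s to recover
    the image point (IPr, IPc)."""
    sr, sc, s = np.asarray(C, float) @ np.append(np.asarray(P, float), 1.0)
    return (sr / s, sc / s)

# Toy C: row proportional to y/z, column to x/z (pure perspective, f = 1).
C = [[0, 1, 0, 0],
     [1, 0, 0, 0],
     [0, 0, 1, 0]]
print(apply_camera_matrix(C, (2.0, 4.0, 2.0)))   # (2.0, 1.0)
```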

20 Camera Calibration
In order to work in 3D, we need to know the parameters of the particular camera setup. Solving for the camera parameters is called calibration.
Intrinsic parameters are those of the camera device itself.
Extrinsic parameters describe where the camera sits in the world.

21 Intrinsic Parameters
principal point (u0, v0)
scale factors (dx, dy)
aspect ratio distortion factor γ
focal length f
lens distortion factor κ (models radial lens distortion)

22 Extrinsic Parameters
translation parameters: t = [tx ty tz]
rotation matrix:

    [r11 r12 r13 0]
R = [r21 r22 r23 0]
    [r31 r32 r33 0]
    [ 0   0   0  1]

Are there really nine parameters?

23 Calibration Object
The idea is to snap images at different depths and get a lot of 2D-3D point correspondences.

24 The Tsai Procedure
The Tsai procedure was developed by Roger Tsai at IBM Research and is the most widely used. Several images are taken of the calibration object, yielding point correspondences at different distances. Tsai's algorithm requires n > 5 correspondences {((xi, yi, zi), (ui, vi)) | i = 1,…,n} between (real) image points and 3D points. Lots of details are in Chapter 13.
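
Tsai's full procedure has several stages beyond the scope of a slide. Purely as an illustration of the linear core of calibration, the camera-matrix entries can be solved by least squares from the correspondences. This is a generic direct-linear sketch under the slide's convention that c34 = 1, not Tsai's actual algorithm:

```python
import numpy as np

def calibrate_linear(world_pts, image_pts):
    """Solve for the 11 unknown entries of the 3x4 camera matrix
    (c34 fixed to 1) from n >= 6 correspondences ((x,y,z), (u,v)),
    stacking two linear equations per correspondence:
    u = (c11*x + c12*y + c13*z + c14) - u*(c31*x + c32*y + c33*z)."""
    A, rhs = [], []
    for (x, y, z), (u, v) in zip(world_pts, image_pts):
        A.append([x, y, z, 1, 0, 0, 0, 0, -u * x, -u * y, -u * z])
        rhs.append(u)
        A.append([0, 0, 0, 0, x, y, z, 1, -v * x, -v * y, -v * z])
        rhs.append(v)
    sol, *_ = np.linalg.lstsq(np.array(A, float), np.array(rhs, float),
                              rcond=None)
    return np.append(sol, 1.0).reshape(3, 4)
```

Given noise-free correspondences generated by a known matrix, the solver recovers that matrix exactly (up to floating point); with real, noisy data it returns the least-squares estimate.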

25 We use the camera parameters of each camera for general stereo.
(Figure: two calibrated cameras with camera matrices B and C view point P, which projects to P1 = (r1, c1) and P2 = (r2, c2); e1 and e2 are the epipoles.)

26 For a correspondence (r1,c1) in image 1 to (r2,c2) in image 2:

1. Both cameras were calibrated, so both camera matrices (B and C) are known.
2. From the two camera equations we get 4 linear equations in 3 unknowns:

r1 = (b11 - b31*r1)x + (b12 - b32*r1)y + (b13 - b33*r1)z
c1 = (b21 - b31*c1)x + (b22 - b32*c1)y + (b23 - b33*c1)z
r2 = (c11 - c31*r2)x + (c12 - c32*r2)y + (c13 - c33*r2)z
c2 = (c21 - c31*c2)x + (c22 - c32*c2)y + (c23 - c33*c2)z

A direct solution using only 3 of the equations won't give reliable results.

27 Solve by computing the closest approach of the two skew rays.
If the rays intersected perfectly in 3D, the intersection would be P. Instead, we solve for the shortest line segment V connecting the two rays and let P be its midpoint. With ray origins P1, Q1 and direction vectors u1, u2, solve for a1 and a2 such that:

V = (P1 + a1*u1) - (Q1 + a2*u2)
[(P1 + a1*u1) - (Q1 + a2*u2)] · u1 = 0
[(P1 + a1*u1) - (Q1 + a2*u2)] · u2 = 0
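
Expanding the two perpendicularity conditions gives a 2x2 linear system in a1 and a2; a sketch (names P1, u1, Q1, u2 follow the slide, the function name is mine):

```python
import numpy as np

def midpoint_of_closest_approach(P1, u1, Q1, u2):
    """Solve ((P1 + a1*u1) - (Q1 + a2*u2)) . u1 = 0 and . u2 = 0
    for a1, a2, then return the midpoint of the connecting segment."""
    P1, u1, Q1, u2 = map(np.asarray, (P1, u1, Q1, u2))
    d = Q1 - P1
    # Expanding the dot products gives: a1*(u1.u1) - a2*(u1.u2) = d.u1
    #                                   a1*(u1.u2) - a2*(u2.u2) = d.u2
    M = np.array([[u1 @ u1, -(u1 @ u2)],
                  [u1 @ u2, -(u2 @ u2)]])
    a1, a2 = np.linalg.solve(M, np.array([d @ u1, d @ u2]))
    return 0.5 * ((P1 + a1 * u1) + (Q1 + a2 * u2))

# Two rays that actually intersect at (1, 1, 1) recover that point:
print(midpoint_of_closest_approach((0, 0, 0), (1, 1, 1),
                                   (2, 0, 0), (-1, 1, 1)))
```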
