1 Direct Methods for Visual Scene Reconstruction Paper by Richard Szeliski & Sing Bing Kang Presented by Kristin Branson November 7, 2002

2 Problem Statement How can one extract information from a sequence of images without camera calibration? (Figure: a sequence of images is turned into a world model.)

3 Panoramic Mosaicing (Figure: input image sequence.)

4 Projective Depth Recovery (Figure: an image sequence 1, 2, 3 observing scene points a and b, with projective depths d_a1 and d_b1 marked.)

5 Ambiguity What does the 3-D structure look like?

6 Ambiguity What does the 3-D structure look like? (Figure: one candidate structure with projective depths d_1, d_2, d_3, d_4.)

7 Ambiguity What does the 3-D structure look like? (Figure: projective depths d_1, d_2, d_3, d_4.) The projective depth is defined up to a projective transform.

8 Ambiguity What does the 3-D structure look like?

9 Outline Image transformations. Direct methods for image registration. Mosaic construction. Projective depth recovery.

10 2D Transformations (Figure: a square on a planar scene projected onto the image plane.) How does the square look when we move the camera?

11 Types of Transformations Rigid Rigid+Scaling Affine Projective 2D Examples

12 Mathematically The 2-D planar transformation p' of point p is $p' \sim M_{2D}\,p$, where p and p' are homogeneous image coordinates and $M_{2D}$ is a 3×3 matrix defined up to scale.
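A minimal sketch of this mapping in Python; the example rigid-transform matrix is illustrative, not taken from the slides:

```python
import numpy as np

def apply_planar_transform(M, p):
    """Map a 2-D point p = (x, y) through a 3x3 planar transform M_2D."""
    ph = np.array([p[0], p[1], 1.0])   # homogeneous coordinates
    qh = M @ ph                        # p' ~ M_2D p, defined up to scale
    return qh[:2] / qh[2]              # divide out the scale factor

# Example: a rigid transform (rotation by 30 degrees, no translation).
theta = np.deg2rad(30)
M_rigid = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                    [np.sin(theta),  np.cos(theta), 0.0],
                    [0.0,            0.0,           1.0]])
print(apply_planar_transform(M_rigid, (1.0, 0.0)))
```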

13 3-D Rigid Transformations (Figure: a 3-D scene viewed by two cameras related by a rotation + translation; a scene point projects to image coordinates u and u'.) How do u and u' relate?

14 Mathematically Calculate the world coordinate p from an image coordinate u by inverting the viewing matrix M: $p \sim M^{-1} u$. Calculate the image coordinate u' from p with the second camera's viewing matrix M': $u' \sim M' p$, so $u' \sim M' M^{-1} u$.

15 Panoramas What if the optical center of the camera does not move? The images are related by a homography.
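For the pure-rotation case the homography has the standard form $H = K R K^{-1}$, with K the intrinsic matrix and R the rotation between the two views. A sketch (the focal length, principal point, and pan angle below are illustrative assumptions):

```python
import numpy as np

def rotation_homography(K, R):
    """Homography relating two views taken from the same optical center:
    u' ~ K R K^{-1} u (no translation, hence no parallax)."""
    return K @ R @ np.linalg.inv(K)

f = 500.0                                   # assumed focal length in pixels
K = np.array([[f, 0.0, 320.0],
              [0.0, f, 240.0],
              [0.0, 0.0, 1.0]])
pan = np.deg2rad(5)                         # small pan about the vertical axis
R = np.array([[ np.cos(pan), 0.0, np.sin(pan)],
              [ 0.0,         1.0, 0.0        ],
              [-np.sin(pan), 0.0, np.cos(pan)]])
H = rotation_homography(K, R)
```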

16 Direct Image Registration For a pair of images, I and I', minimize the intensity discrepancy $E(M) = \sum_i \left[ I'(M p_i) - I(p_i) \right]^2$, summed over the pixels $p_i$.
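A sketch of evaluating this discrepancy for a candidate transform, assuming grayscale float images and a 3x3 matrix M mapping pixel coordinates of I into I'; the function name is illustrative:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def ssd_cost(I, Iprime, M):
    """Sum of squared intensity differences between I and I' warped by M."""
    h, w = I.shape
    hp, wp = Iprime.shape
    ys, xs = np.mgrid[0:h, 0:w]
    ones = np.ones_like(xs, dtype=float)
    pts = np.stack([xs, ys, ones]).reshape(3, -1)        # homogeneous pixel grid
    warped = M @ pts
    xw, yw = warped[0] / warped[2], warped[1] / warped[2]
    # Sample I' at the warped positions (bilinear interpolation).
    Iw = map_coordinates(Iprime, [yw, xw], order=1, mode='nearest')
    valid = (xw >= 0) & (xw < wp) & (yw >= 0) & (yw < hp)  # warped points inside I'
    diff = Iw[valid] - I.ravel()[valid]
    return np.sum(diff ** 2)
```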

17 Nonlinear Iterative Minimization At the minimum of E, the gradient vanishes: $\partial E / \partial m = 2 \sum_i e_i \,(\partial e_i / \partial m) = 0$, where the $e_i$ are the per-pixel intensity errors and $J = \partial e / \partial m$ is the Jacobian.

18 Nonlinear Iterative Minimization At the minimum of E, the gradient vanishes. Given some estimate m of the parameters, find an increment $\Delta m$ such that $A\,\Delta m = -b$, where A is the Hessian of E and b its gradient. Update: $m \leftarrow m + \Delta m$.

19 Levenberg-Marquardt The exact Hessian is hard to calculate. Approximate it as $A \approx J^T J$, with the diagonal damped by a factor controlled by λ. Levenberg-Marquardt finds locally optimal solutions.
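A minimal sketch of one Levenberg-Marquardt step for this problem; residuals and jacobian are placeholder callables returning the error vector e and the Jacobian J for the current parameters:

```python
import numpy as np

def lm_step(residuals, jacobian, m, lam):
    """One Levenberg-Marquardt update of the parameter vector m."""
    e = residuals(m)
    J = jacobian(m)
    A = J.T @ J                                # Gauss-Newton approximation of the Hessian
    b = J.T @ e                                # gradient direction (up to a factor of 2)
    A_damped = A + lam * np.diag(np.diag(A))   # damp the diagonal by (1 + lambda)
    dm = np.linalg.solve(A_damped, -b)         # solve A dm = -b
    return m + dm
```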

20 Hierarchical Matching (Figure: image pyramids of I and I' built by subsampling. The transform M_a estimated at the coarsest level initializes the estimate M_b at the next level, then M_c, and so on down to the full-resolution estimate M.)
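A coarse-to-fine sketch of this idea, assuming a register(I, Iprime, M_init) routine (for example Levenberg-Marquardt on the SSD cost) and image dimensions divisible by 2**levels:

```python
import numpy as np

def build_pyramid(I, levels):
    """Toy pyramid: repeatedly average 2x2 blocks (assumes even dimensions)."""
    pyr = [I]
    for _ in range(levels - 1):
        I = 0.25 * (I[0::2, 0::2] + I[1::2, 0::2] + I[0::2, 1::2] + I[1::2, 1::2])
        pyr.append(I)
    return pyr

def hierarchical_register(I, Iprime, register, levels=3):
    """Estimate the transform coarse-to-fine, refining the previous level's result."""
    pyr, pyr_p = build_pyramid(I, levels), build_pyramid(Iprime, levels)
    M = np.eye(3)
    S = np.diag([2.0, 2.0, 1.0])                # coordinate scaling between levels
    for level in reversed(range(levels)):       # coarsest level first
        M = register(pyr[level], pyr_p[level], M)
        if level > 0:
            M = S @ M @ np.linalg.inv(S)        # propagate the estimate to the finer level
    return M
```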

21 Mosaic Construction How do we stitch together the registered images? One approach: Choose one frame’s coordinate system. Warp all frames into that coordinate system. Blend together overlapping sections by averaging.
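A sketch of that approach, assuming grayscale frames and homographies H_k that warp each frame into the chosen reference coordinate system (OpenCV's warpPerspective does the warping; the mosaic size is supplied by the caller):

```python
import numpy as np
import cv2

def simple_mosaic(frames, homographies, mosaic_size):
    """Warp every frame into one coordinate system and average the overlaps.

    frames: list of 2-D (grayscale) images; mosaic_size: (width, height).
    """
    w, h = mosaic_size
    acc = np.zeros((h, w), dtype=np.float32)
    weight = np.zeros((h, w), dtype=np.float32)
    for frame, H in zip(frames, homographies):
        warped = cv2.warpPerspective(frame.astype(np.float32), H, (w, h))
        mask = cv2.warpPerspective(np.ones_like(frame, dtype=np.float32), H, (w, h))
        acc += warped
        weight += mask
    return acc / np.maximum(weight, 1e-6)       # average wherever frames overlap
```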

22 Mosaic Construction Example

23

24 Environment Maps Color each face a different color. Unroll into a 2D texture map.

25 Environment Maps Expand each face.

26 Environment Maps For each face, determine the mapping M_F to world coordinates such that $u_i = M_F\,p_i$, where $u_1, \dots, u_4$ are the face's corners in the texture map and the $p_i$ are the corresponding 3-D points.
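A sketch of recovering such a face mapping from its four corner correspondences with the standard direct linear transform; here both sets of corners are treated as 2-D points (the 3-D face corners are assumed to be expressed in the plane of the face):

```python
import numpy as np

def mapping_from_corners(src, dst):
    """Direct linear transform: solve dst_i ~ M src_i for a 3x3 M (up to scale).

    src, dst: lists of four (x, y) corner correspondences.
    """
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1.0, 0.0, 0.0, 0.0, u * x, u * y, u])
        A.append([0.0, 0.0, 0.0, -x, -y, -1.0, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 3)   # right singular vector with the smallest singular value
```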

27 Environment Maps Warp each image to the coordinate system of each face. For each face, form a blended image. Paint the blended image faces into the 2D texture map. This method can be performed for arbitrary surfaces, including a tessellated sphere.

28 Cubical Environment Map

29

30 Tessellated Sphere Results

31 Blending How do we choose pixel values in the mosaic where the images overlap? Superimposing method: inconsistencies appear at frame boundaries.

32 Blending How do we choose pixel values in the mosaic where the images overlap? Weighted averaging:

33 Blending How do we choose pixel values in the mosaic where the images overlap? Weighted averaging: Overlapping images are averaged, weighted by distance from the center.

34 Blending How do we choose pixel values in the mosaic where the images overlap? Multi-resolution blending: Overlapping images are averaged, weighted by proximity to desired zoom.
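A sketch of the weighted-averaging (feathering) idea from the preceding slides: each frame contributes in proportion to how close a pixel is to that frame's center, and the weight maps are assumed to have been warped into mosaic coordinates along with the images:

```python
import numpy as np

def center_weight(shape):
    """Weight map that is largest at the image center and falls off toward the edges."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    dy = np.abs(ys - (h - 1) / 2.0) / (h / 2.0)
    dx = np.abs(xs - (w - 1) / 2.0) / (w / 2.0)
    return (1.0 - dx) * (1.0 - dy)              # "tent" weight, highest at the center

def feathered_blend(warped_images, warped_weights):
    """Blend warped frames: each pixel is a weighted average of the overlapping images."""
    num = np.zeros_like(warped_images[0], dtype=float)
    den = np.zeros_like(warped_images[0], dtype=float)
    for img, wgt in zip(warped_images, warped_weights):
        num += wgt * img
        den += wgt
    return num / np.maximum(den, 1e-6)
```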

35 Projective Depth Recovery Earlier, we saw that if two cameras are related by a 3-D rigid transformation, then $u' \sim H\,u + d\,t$.

36 Projective Depth Recovery Earlier, we saw that if two cameras are related by a 3-D rigid transformation, then $u' \sim H\,u + d\,t$, where u is the image coordinate in I, u' the image coordinate in I', H the homography (derived from the viewing matrix for I'), d the projective depth, and t the translation from O to O'.

37 Projective Depth Recovery Earlier, we saw that if two cameras are related by a 3-D rigid transformation, then $u' \sim H\,u + d\,t$, where H is the homography, d the projective depth, and the term $d\,t$ the parallax motion.
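A minimal sketch of using that relation to predict where a pixel of I lands in I'; the inputs H, t, and d are assumed to be known here:

```python
import numpy as np

def predict_point(H, t, u, d):
    """Plane-plus-parallax prediction: u' ~ H u + d t, in homogeneous coordinates."""
    uh = np.array([u[0], u[1], 1.0])
    qh = H @ uh + d * np.asarray(t, dtype=float)   # t is the 3-vector parallax term
    return qh[:2] / qh[2]
```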


39 Algorithmic Idea Choose a base frame I_0 in which to recover projective depth. Find $M_j$, $t_j$, and $d_i$ to minimize the intensity discrepancy $E = \sum_j \sum_i \left[ I_j(\hat{u}_{ij}) - I_0(u_i) \right]^2$, with $\hat{u}_{ij} \sim M_j u_i + d_i t_j$, using nonlinear iterative minimization.

40 Ambiguity

41

42 Number of Parameters How many parameters must we estimate? (8 + 3) n + p, where n is the number of images (8 parameters for each $M_j$ plus 3 for each $t_j$) and p is the number of pixels. p is large, so the depth map is represented using a tensor-product spline.

43 Splines (Figure: the depth map overlaid with a coarse grid of spline control vertices; the depth at pixel (i, j) is determined by the spline.) Let the depths at the control vertices vary. Find the depths at all points by interpolation.
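A sketch of that interpolation with a bilinear tensor-product basis (the bilinear basis is an illustrative simplification; a higher-order spline works the same way):

```python
import numpy as np

def interpolate_depth(control, image_shape):
    """Per-pixel depths from a coarse grid of spline control vertices."""
    gh, gw = control.shape
    h, w = image_shape
    ys, xs = np.mgrid[0:h, 0:w]
    gy = ys * (gh - 1) / (h - 1)                          # pixel -> control-grid coordinates
    gx = xs * (gw - 1) / (w - 1)
    y0, x0 = np.floor(gy).astype(int), np.floor(gx).astype(int)
    y1, x1 = np.minimum(y0 + 1, gh - 1), np.minimum(x0 + 1, gw - 1)
    fy, fx = gy - y0, gx - x0
    return ((1 - fy) * (1 - fx) * control[y0, x0] + (1 - fy) * fx * control[y0, x1]
            + fy * (1 - fx) * control[y1, x0] + fy * fx * control[y1, x1])

# Example: a 4x4 grid of control depths interpolated over a 64x64 image.
depths = interpolate_depth(np.random.rand(4, 4), (64, 64))
```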

44 Local Minima The high dimensionality of the search space increases the chance of finding a nonglobal optimum. One solution is to initialize the dense algorithm with the results of a feature-based algorithm.

45 Feature-Based Algorithm Detect features, for example corners, in each frame. Find between-frame feature correspondences.

46 Feature-Based Algorithm Now we have the locations $v_{ij}$ of each feature i in each frame j. Find the transformation ($M_j$, $t_j$, $d_i$) that minimizes the reprojection error through nonlinear iterative minimization.

47 Feature-Based Algorithm Now we have the locations $v_{ij}$ of each feature i in each frame j. Find ($M_j$, $t_j$, $d_i$) to minimize $E = \sum_{ij} w_{ij}\, \| v_{ij} - \hat{v}_{ij} \|^2$, where $\hat{v}_{ij} \sim M_j u_i + d_i t_j$ is the predicted location, $u_i$ is the location of feature i in the base frame, and $w_{ij}$ is an inverse variance weight.
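A sketch of evaluating that weighted cost, assuming the observed locations, predicted locations, and per-measurement standard deviations are stored in arrays of the shapes noted below:

```python
import numpy as np

def weighted_reprojection_error(v_obs, v_pred, sigma):
    """E = sum_ij w_ij ||v_ij - v_ij_hat||^2 with inverse-variance weights w_ij = 1/sigma_ij^2.

    v_obs, v_pred: arrays of shape (n_features, n_frames, 2)
    sigma:         standard deviations, shape (n_features, n_frames)
    """
    w = 1.0 / np.square(sigma)                            # inverse variance weights
    sq_err = np.sum(np.square(v_obs - v_pred), axis=-1)   # per-measurement squared error
    return np.sum(w * sq_err)
```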

48 Algorithm Initialization Simple approach: initialize the depths $d_i$ to zero. Faster approach: Fix the depths and solve for $M_j$ (a homography-only registration), then estimate $t_j$ and the $d_i$.

49 View Interpolation (Figure: two input images and a virtual camera placed between them synthesizing a novel image.)

50 View Interpolation We can approximate the Euclidean depth map from the projective depth map. From the Euclidean depth map, we can synthesize novel views of a scene.

51 View Interpolation Results (Figures: image sequence taken by moving the camera up and down; dense depth map; novel interpolated view.)

52 View Interpolation Results (Figures: input image sequence; dense depth map; novel interpolated view.)

53 Summary Direct methods can be used to register image frames without camera calibration, provided the between-frame motion is small. Once the frames are registered, mosaics can be constructed if the camera does not translate. Projective depth can be recovered from a sequence of images if the camera translates. Euclidean depth can be estimated from projective depth. Novel views can be synthesized from Euclidean depth.

54 References R. Szeliski, P. Anandan, K. Toyama, and L. Kanade. Notes from Vision for Graphics, CSE 590SS, University of Washington, Winter 2001. R. Szeliski and S. B. Kang. Direct methods for visual scene reconstruction. In IEEE Workshop on Representation of Visual Scenes, pages 26-33, June 1995. R. Szeliski. Image Mosaicing for Tele-Reality Applications. Technical Report 94/2, Digital Equipment Corporation, Cambridge Research Lab, June 1994. R. Szeliski and J. Coughlan. Spline-Based Image Registration. Technical Report 94/1, Digital Equipment Corporation, Cambridge Research Lab, April 1994. R. Szeliski and S. B. Kang. Recovering 3D shape and motion from image streams using non-linear least squares. Journal of Visual Communication and Image Representation, 5(1):10-28, 1994.

55 References R. Szeliski and H. Shum. Creating full view panoramic image mosaics and environment maps. In Proc. of SIGGRAPH, pages 251-258, 1997. H. Shum and R. Szeliski. Construction of panoramic image mosaics with global and local alignment. International Journal of Computer Vision, 36(2):101-130, 2000. D. Capel and A. Zisserman. Automated mosaicing with super-resolution zoom. In Proc. CVPR, pages 885-891, June 1998. M. Irani and P. Anandan. All about direct methods. In B. Triggs, A. Zisserman, and R. Szeliski, editors, Vision Algorithms: Theory and Practice. Springer-Verlag, 1999. R. Szeliski. Video mosaics for virtual environments. IEEE Computer Graphics and Applications, 16(2):22-30, March 1996.

