
Image-Based Rendering: An Overview

2 Photographs We have tools that acquire and tools that display photographs at a convincing quality level

3–10 [image-only slides]

11 Photographs We have had tools that acquire and tools that display photographs at a convincing quality level for almost 100 years now

12 Sergei Mikhailovich Prokudin-Gorskii. A Settler's Family, ca.

13 Sergei Mikhailovich Prokudin-Gorskii. Tea Factory in Chakva. Chinese Foreman Lau-Dzhen-Dzhau, ca.

14 Sergei Mikhailovich Prokudin-Gorskii. The Emir of Bukhara, 1911.

15 RGB in the early 1900s

16 Plenoptic function Defines all the rays –through any point in space (x, y, z) –with any orientation (θ, φ) –over all wavelengths (λ) –at any given moment in time (t)
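
In symbols, this is the standard 7D formulation (after Adelson and Bergen; the exact notation is assumed rather than taken from the slide):

    \[ P = P(x, y, z, \theta, \phi, \lambda, t) \]

i.e., the radiance arriving at position (x, y, z) from direction (θ, φ), at wavelength λ and time t.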

17 IBR summary – methods ordered by how the plenoptic function is represented, from explicit to implicit geometry:

    explicit  <--------------------------------------------------->  implicit
    geometric model                       representation of plenoptic function
    texture mapping | panoramas | view morphing | 3D image warping | ray databases

18 Lightfield – Lumigraph approach [Levoy96, Gortler96] Take all photographs you will ever need to display Model becomes database of rays Rendering becomes database querying
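
As a concrete (if simplified) picture of "rendering becomes database querying", here is a minimal sketch; the array layout, the function name, and the quadrilinear interpolation are illustrative assumptions, not the papers' exact machinery:

    import numpy as np

    def query_lightfield(L, s, t, u, v):
        # L: discrete lightfield of shape (S, T, U, V, 3) holding RGB radiance.
        # (s, t): where the ray crosses the camera plane (continuous coords).
        # (u, v): where the ray crosses the focal plane.
        coords = np.array([s, t, u, v])
        lo = np.floor(coords).astype(int)   # lower grid corner
        frac = coords - lo                  # fractional position in the cell
        out = np.zeros(3)
        # Quadrilinear interpolation: blend the 16 surrounding samples.
        for corner in range(16):
            idx, w = [], 1.0
            for d in range(4):
                bit = (corner >> d) & 1
                idx.append(int(np.clip(lo[d] + bit, 0, L.shape[d] - 1)))
                w *= frac[d] if bit else 1.0 - frac[d]
            out += w * L[tuple(idx)]
        return out

Rendering a novel view is then one such query per pixel of the desired image.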

19 Overview Introduction Lightfield – Lumigraph –definition –construction –compression

21 From 7D to 4D Static scene: t constant; λ approximated with RGB; consider only rays outside the convex hull of the objects, where radiance is constant along a ray, so the exact origin of the ray no longer matters
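
Counting dimensions: dropping t and reducing λ to RGB leaves a 5D ray space, and constancy of radiance along free-space rays removes one more dimension, giving the 4D lightfield, parameterized by the ray's intersections with two parallel planes:

    \[ L = L(s, t, u, v) \]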

22 4D Lightfield / Lumigraph

23 Discrete 4D Lightfield

24 Lightfield: set of images with COPs (centers of projection) on a regular grid

25 or: Lightfield: set of images of a point seen from various angles

26 Depth correction of rays

27 Overview Introduction Lightfield – Lumigraph –definition –construction –compression

29 Construction from dense set of photographs

30 Construction from sparse set of photographs: acquisition stage, camera positions, blue screening, space carving

31 Filling in gaps using the pull-push algorithm. Pull phase: low-resolution levels are created, gaps are shrunk. Push phase: gaps at high-resolution levels are filled in using the low-resolution levels.
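
A minimal sketch of the pull-push idea; the 2x2 box filter, the weight clamping, and the nearest-neighbor push are simplifying assumptions (the Lumigraph paper uses wider filters):

    import numpy as np

    def downsample(a):
        # Average each 2x2 block (assumes even height and width).
        return 0.25 * (a[0::2, 0::2] + a[1::2, 0::2] + a[0::2, 1::2] + a[1::2, 1::2])

    def pull_push(color, weight):
        # Fill pixels of color (H, W, 3) where weight (H, W, 1) is zero;
        # assumes H and W are powers of two.
        if color.shape[0] == 1 or color.shape[1] == 1:
            return color
        # Pull phase: weighted averaging to a coarser level shrinks the gaps.
        cw = downsample(color * weight)
        w = downsample(weight)
        coarse_color = np.where(w > 0, cw / np.maximum(w, 1e-8), 0.0)
        coarse = pull_push(coarse_color, np.minimum(4.0 * w, 1.0))
        # Push phase: fill the remaining gaps from the completed coarse level.
        up = np.repeat(np.repeat(coarse, 2, axis=0), 2, axis=1)
        return np.where(weight > 0, color, up)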

32 Overview Introduction Lightfield – Lumigraph –definition –construction –compression

34 Compression Large size uncompressed: 1.125 GB –32x32 (s, t) x 256x256 (u, v) x 6 faces x 3 B Compression –jpeg + mpeg (200:1, to 6 MB) –or vector quantization + entropy encoding
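
The uncompressed figure checks out as a quick worked product:

    \[ 32 \cdot 32 \cdot 256 \cdot 256 \cdot 6 \cdot 3\,\text{B} = 1{,}207{,}959{,}552\,\text{B} = 1.125\,\text{GB} \]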

35 Vector Quantization (VQ) Principle –codebook made of codewords –replace actual word with closest codeword Implementation –training on representative set of words to derive best codebook –compression: replacing word with index to closest codeword –decompression: retrieve indexed codeword from codebook
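
A toy implementation of exactly these three pieces (training, compression, decompression); the k-means training loop and the flat word layout are illustrative choices, not the paper's exact settings:

    import numpy as np

    def train_codebook(words, k, iters=20):
        # Derive a codebook from a representative set of words (N, D floats).
        rng = np.random.default_rng(0)
        codebook = words[rng.choice(len(words), size=k, replace=False)]
        for _ in range(iters):
            # Assign every word to its closest codeword...
            d = np.linalg.norm(words[:, None, :] - codebook[None, :, :], axis=2)
            nearest = d.argmin(axis=1)
            # ...then move each codeword to the mean of its assigned words.
            for j in range(k):
                members = words[nearest == j]
                if len(members):
                    codebook[j] = members.mean(axis=0)
        return codebook

    def compress(words, codebook):
        d = np.linalg.norm(words[:, None, :] - codebook[None, :, :], axis=2)
        return d.argmin(axis=1)      # store small indices instead of raw words

    def decompress(indices, codebook):
        return codebook[indices]     # retrieve the indexed codewords

For the lightfield, the "words" would be small tiles of ray samples, with one codebook index stored per tile.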

36 Lightfield compression using VQ

37 View morphing

38 Motivation – rendering from images. Given a left image and a right image, create intermediate images that simulate camera movement [Seitz96]

39 Previous work. Panoramas ([Chen95], etc.): the user can look in any direction at a few given locations. Image morphing ([Wolberg90], [Beier92], etc.): linearly interpolates intermediate positions of features; input: two images and correspondences; output: metamorphosis of one image into the other as a sequence of intermediate images.

40 Previous work limitations. Panoramas ([Chen95], etc.): no camera translations allowed. Image morphing ([Wolberg90], [Beier92], etc.): not shape-preserving; an image morph also morphs the object itself, whereas to simulate rendering, the object should stay rigid while the camera moves.

41 Overview Introduction Image morphing View morphing –image pre-warping –image morphing –image post-warping

47 Image morphing: 1. Correspondences; 2. Linear interpolation. [Figure: a feature point P_0 in frame 0 moves through P_k in frame k to P_n in frame n]
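
Written out, step 2 is the standard linear blend (notation assumed):

    \[ P_k = \left(1 - \frac{k}{n}\right) P_0 + \frac{k}{n} P_n \]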

48 Image morphing is not shape-preserving

49 Early IBR research: Soft Watch at the Moment of First Explosion – Salvador Dalí, 1954

50 Overview Introduction Image morphing View morphing –image pre-warping –image morphing –image post-warping

52 View morphing. A shape-preserving morph; three-step algorithm (sketched in code below): 1. Prewarp first and last images to parallel views 2. Image morph between the prewarped images 3. Postwarp to the interpolated view
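
A runnable toy of the three steps, morphing corresponding feature points rather than whole images; the homographies H0p, Hnp, Hk stand in for the prewarp/postwarp rotations (a rotation about the center of projection acts on the image plane as a homography) and are assumed inputs here:

    import numpy as np

    def apply_h(H, pts):
        # Apply a 3x3 homography to an (N, 2) array of points.
        p = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
        return p[:, :2] / p[:, 2:3]

    def view_morph_points(P0, Pn, H0p, Hnp, Hk, t):
        # H0p, Hnp: prewarping homographies to the parallel views.
        # Hk: postwarping homography for the interpolated view, t in [0, 1].
        P0p = apply_h(H0p, P0)           # 1. prewarp both point sets
        Pnp = apply_h(Hnp, Pn)
        Pkp = (1 - t) * P0p + t * Pnp    # 2. linear morph (shape-preserving
                                         #    only between parallel views)
        return apply_h(Hk, Pkp)          # 3. postwarp to the interpolated view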

53 Step 1: prewarp to parallel views. Parallel views: same image plane; image plane parallel to the segment connecting the two centers of projection. Prewarp: compute parallel views I_0p and I_np; rotate I_0 and I_n to the parallel views; prewarp the correspondences (P_0, P_n) -> (P_0p, P_np).

54 Step 2: morph parallel images. Shape-preserving. Use the prewarped correspondences; interpolate C_k from C_0 and C_n.

55 Step 3: postwarping. Postwarp the morphed image: create the intermediate view (C_k is known; interpolate the view direction and tilt), then rotate the morphed image to the intermediate view.

56 View morphing is shape-preserving

57 Overview Introduction Image morphing View morphing, more details –image pre-warping –image morphing –image post-warping

58 Step 1: prewarp to parallel views. Parallel views: use C_0C_n as x (the a_p vector); use (a_0 × b_0) × (a_n × b_n) as y (−b_p); pick a_p and b_p to resemble a_0 and b_0 as much as possible; use the same pixel size; use a wider field of view.

59 Step 1: prewarp to parallel views. Prewarping using texture mapping: create a polygon for the image plane; consider it texture-mapped with the image itself; render the "scene" from the prewarped view; if you go this route you will have to implement clipping with the COP plane; you have texture mapping already. Alternative: prewarping by reprojection of rays: look up all the rays of the prewarped view in the original view.

60 Step 1: prewarp to parallel views. Prewarping the correspondences: for every correspondence pair (P_0, P_n), project P_0 onto I_0p, computing P_0p, and project P_n onto I_np, computing P_np; the prewarped correspondence is (P_0p, P_np).

61 Step 2: morph parallel images. Image morphing: use the prewarped correspondences to compute a correspondence for all pixels in I_0p; linearly interpolate I_0p to the intermediate positions. Useful observation: corresponding pixels are on the same scanline in the prewarped views. Preventing holes: use a larger footprint (e.g. 2x2), or linearly interpolate between consecutive samples, or postprocess the morphed image, looking for background pixels and replacing them with neighboring values. Visibility artifacts: collision of samples (z-buffer on disparity); holes (morph I_np to I_kp, use additional views).

62 Step 3: postwarping. Create the intermediate view: C_k is known; the current view direction is a linear interpolation of the start and end view directions; the current up vector is a linear interpolation of the start and end up vectors. Rotate the morphed image to the intermediate view (same as prewarping).

63 Image-Based Rendering by Warping

64 Overview Introduction Depth extraction methods Reconstruction for IBRW Visibility without depth Sample selection

65 Overview Introduction –comparison to other IBR methods –3D warping equation –reconstruction

66 IBR by Warping (IBRW). Images enhanced with per-pixel depth [McMillan95].

67 IBR by Warping (IBRW). 1/w_1 is also called the generalized disparity; another notation is δ(u_1, v_1).

68 IBR by Warping (IBRW) [figure: points P_1 and P_2]

69 3D warping equations
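
The equations themselves are an image on this slide; a hedged reconstruction of McMillan's 3D warping equation in the notation used here (P_1, P_2 the 3x3 camera matrices of the reference and desired views, C_1, C_2 their centers of projection, δ the generalized disparity, and x̄ homogeneous image coordinates):

    \[ \bar{x}_2 \;\cong\; P_2^{-1} P_1 \bar{x}_1 \;+\; \delta(\bar{x}_1)\, P_2^{-1} (C_1 - C_2) \]

followed by the homogeneous divide to recover (u_2, v_2).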

70 A complete IBR method Façade system –(coarse) geometric model needed Panoramas –viewer confined to center of panorama View morphing –correspondences needed IBRW –rendering to arbitrary new views

71 DeltaSphere – depth & color acquisition device. Lars Nyland et al.

72–75 [image-only slides]

76 Reconstructing by splatting. Estimate the shape and size of the footprint of warped samples: expensive to do accurately, lower image quality if crudely approximated. Samples are z-buffered.

77 Overview Introduction Depth extraction methods Reconstruction for IBRW Visibility without depth Sample selection

78 Finding depth

79 Overview Depth from stereo Depth from structured light Depth from focus / defocus Laser rangefinders

81 Depth from stereo. Two cameras with known parameters; infer the 3D location of a point seen in both images. Subproblem, correspondences: for a point seen in the left image, find its projection in the right image.

82 Depth from stereo: déjà-vu math. The unknowns are w_1 and w_2; the system is overconstrained. The (u_2, v_2) coordinates of a point seen at (u_1, v_1) are constrained to an epipolar line.
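
A minimal triangulation sketch for this overconstrained system, solving for w_1 and w_2 by linear least squares; the ray parameterization is an assumed but standard one:

    import numpy as np

    def triangulate(c1, d1, c2, d2):
        # Rays: X = c1 + w1*d1 and X = c2 + w2*d2, with camera centers
        # c1, c2 and pixel ray directions d1, d2 (all shape (3,)).
        A = np.stack([d1, -d2], axis=1)   # 3x2 system A @ [w1, w2] = c2 - c1
        b = c2 - c1
        (w1, w2), *_ = np.linalg.lstsq(A, b, rcond=None)
        # Return the midpoint between the two closest points on the rays.
        return 0.5 * ((c1 + w1 * d1) + (c2 + w2 * d2))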

83 Epipolar line. C_1, C_2, and P_1 define a plane; P_2 will be on that plane. P_2 is also on image plane 2, so P_2 will be on the line defined by the two planes' intersection.

84 Search for correspondences on the epipolar line. Reduces the dimensionality of the search space: walk along the epipolar segment rather than searching the entire image.

85 Parallel views. The preferred stereo configuration: epipolar lines are horizontal, easy to search.

86 Parallel views. Limit the search to the epipolar segment from u_2 = u_1 (P infinitely far away) to u_2 = 0 (P close).

87 Depth precision analysis. 1/z is linear in the disparity (u_1 − u_2); better depth resolution for nearby objects; important to determine correspondences with subpixel accuracy.
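
For parallel views with baseline b and focal length f (standard result; the symbols are assumed):

    \[ z = \frac{b f}{u_1 - u_2}, \qquad \left|\frac{\partial z}{\partial (u_1 - u_2)}\right| = \frac{z^2}{b f} \]

so a fixed disparity error costs depth accuracy that grows quadratically with distance, which is why nearby objects fare better and subpixel matching matters.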

88 Overview Depth from stereo Depth from structured light Depth from focus / defocus Laser rangefinders

90 Depth from stereo problem. Correspondences are difficult to find. Structured light approach: replace one camera with a projector; project easily detectable patterns; establishing correspondences becomes a lot easier.

91 Depth from structured light. C_1 is a projector; it projects a pattern centered at (u_1, v_1). The pattern center hits the scene at P. Camera C_2 sees the pattern at (u_2, v_2), which is easy to find; the 3D location of P is determined.

93 [image-only slide]

94 Depth from structured light: challenges. Associated with using projectors: expensive, cannot be used outdoors, not portable. Difficult to identify the pattern: "I found a corner, but which corner is it?" Invasive, changes the color of the scene: one could use invisible (IR) light.

95 Overview Depth from stereo Depth from structured light Depth from focus / defocus Laser rangefinders

97 Depth of field [figure: thin lens with aperture, object, image, lens center C, focal points F and F']. Thin lenses: rays through the lens center (C) do not change direction; rays parallel to the optical axis go through the focal point (F').

98 Depth of field. For a given focal length, only objects that are at a certain depth are in focus.

99 Out of focus. When the object is at a different depth, one point projects to several locations in the image: an out-of-focus, blurred image.

100 Focusing. Move the lens to focus for the new depth. The relationship between focus and depth can be exploited to extract depth.

101 Determine z for points in focus [figure: lens geometry with image distance h_i, aperture a, focal length f, and object depth z]
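
The figure carries the derivation; the underlying relation is the thin-lens equation (taking z as the object distance and h_i as the image distance; the mapping onto the figure's symbols is assumed):

    \[ \frac{1}{z} + \frac{1}{h_i} = \frac{1}{f} \qquad\Rightarrow\qquad z = \frac{f\, h_i}{h_i - f} \]

so the lens position that brings a point into focus determines its depth.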

102 Depth from defocus. Take images of a scene with various camera parameters; by measuring the defocus variation, infer the range to objects. Does not need to find the best focusing plane for each object. Examples by Shree Nayar, Columbia U.

103 Overview Depth from stereo Depth from structured light Depth from focus / defocus Laser rangefinders

105 Laser range finders. Send a laser beam to measure the distance; like RADAR, measures time of flight.
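
The time-of-flight relation, with c the speed of light and Δt the measured round-trip time:

    \[ z = \frac{c\,\Delta t}{2} \]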

106 DeltaSphere – depth & color acquisition device. Lars Nyland et al. Courtesy 3rd Tech Inc.

107 360° x 300° panorama – this is the reflected light

108 360° x 300° panorama – this is the range data

109 Spherical range panoramas; planar re-projection. Courtesy 3rd Tech Inc.

110 Jeep – one scan. Courtesy 3rd Tech Inc.

111 Jeep – one scan. Courtesy 3rd Tech Inc.

112 Complete Jeep model. Courtesy 3rd Tech Inc.

113 [image-only slide]