Unstructured Lumigraph Rendering


1 Unstructured Lumigraph Rendering
Chris Buehler, Michael Bosse, Leonard McMillan (MIT LCS); Steven J. Gortler (Harvard University); Michael F. Cohen (Microsoft Research)

2 The Image-Based Rendering Problem
Synthesize novel views from reference images. Static scenes, fixed lighting. Flexible geometry and camera configurations.
In this paper, we're interested in solving the IBR problem: given a set of reference images, we'd like to synthesize new views of a scene. We make the assumptions that the scene is static and the lighting is fixed. Further, we'd like to be able to do this with few constraints on the position or number of input images. For example, we'd like to take a video sequence like this one and render new views from it.

3 The ULR Algorithm
Designed to work over a range of image and geometry configurations. Designed to satisfy desirable properties.
[Diagram: # of Images vs. Geometric Fidelity, with light fields (LF), view-dependent texture mapping (VDTM), and ULR plotted.]
Previous image-based rendering approaches, such as light field rendering and view-dependent texture mapping, have been designed to work with either lots of input images and low geometric fidelity, or few images and high geometric fidelity. Our algorithm, on the other hand, has been designed to work well over a range of image and geometry configurations. To achieve this flexibility, we designed it to satisfy certain desirable properties, extracted from what we felt were the best elements of previous algorithms. Not surprisingly, our algorithm borrows heavily from that previous work; to motivate which elements we borrow, I'll now describe these properties while reviewing some of it.

4 “Light Field Rendering,” SIGGRAPH ‘96
A light field is a 4D representation of the colors along all rays in a volume. Typically, rays are parameterized according to their intersection with two parallel planes. Since I'm working in two dimensions here, I can parameterize a ray by its intersections with two lines. Then, to reconstruct a desired view, I can just intersect all viewing rays with these lines and look up the corresponding colors. In practice, we have discrete light fields, which can be thought of as an array of cameras arranged in a regular grid. Rays that fall in between the cameras are interpolated from nearby cameras. Note that, in general, this interpolation depends on the position of the U plane. However, some rays are special and can be interpolated without regard for the U plane: these are the rays that are directly seen by the cameras.
[Figure: the two parameterization lines u and s, a sample ray (u0, s0), and the desired camera; desired color interpolated from the "nearest cameras".]
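To make the two-line parameterization concrete, here is a minimal 2D sketch in Python (my own illustration, not code from the paper; the line placements and camera spacing are arbitrary assumptions). It maps a ray to its (s, u) coordinates and finds the nearest reference cameras on the s line, from which the ray's color would be interpolated.

import numpy as np

def ray_to_su(origin, direction, s_y=0.0, u_y=1.0):
    """Parameterize a 2D ray by its x-intercepts with two parallel
    horizontal lines: the camera line (y = s_y) and the "U" line (y = u_y).
    Returns the (s, u) coordinates of the ray."""
    ox, oy = origin
    dx, dy = direction
    if abs(dy) < 1e-12:
        raise ValueError("ray is parallel to the parameterization lines")
    s = ox + dx * (s_y - oy) / dy
    u = ox + dx * (u_y - oy) / dy
    return s, u

def nearest_cameras(s, camera_s_positions, k=2):
    """Return indices of the k reference cameras (points on the s line)
    closest to the ray's s coordinate; the desired ray's color would be
    interpolated from these cameras."""
    d = np.abs(np.asarray(camera_s_positions) - s)
    return np.argsort(d)[:k]

# Example: cameras on a regular grid along the s line.
cams = np.linspace(-1.0, 1.0, 9)
s, u = ray_to_su(origin=(0.15, -0.5), direction=(0.1, 1.0))
print(s, u, nearest_cameras(s, cams))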

5 “Light Field Rendering,” SIGGRAPH ‘96
Desired Property #1: Epipole consistency.
A light field is a 4D representation of the radiance along all rays in a volume. Typically, rays are parameterized according to their intersection with two parallel planes; here I'll consider a simple example in the plane, where rays are represented by their intersections with two lines, and a virtual (desired) camera whose rays we want to reconstruct.

6 “The Lumigraph,” SIGGRAPH ‘96
“The Scene” u Potential Artifact Desired Camera

7 “The Lumigraph,” SIGGRAPH ‘96
“The Scene” Desired Property #2: Use of geometric proxy Desired Camera

8 “The Lumigraph,” SIGGRAPH ‘96
“The Scene” Desired Camera

9 “The Lumigraph,” SIGGRAPH ‘96
“The Scene” Desired Property #3: Unstructured input images. Rebinning: note that all images are resampled. Desired Camera

10 “The Lumigraph,” SIGGRAPH ‘96
“The Scene” Desired Property #4: Real-time implementation Desired Camera

11 View-Dependent Texture Mapping, SIGGRAPH ’96, EGRW ‘98
“The Scene” Occluded Out of view Desired Camera

12 View-Dependent Texture Mapping, SIGGRAPH ’96, EGRW ‘98
“The Scene” Desired Property #5: Continuous reconstruction Desired Camera

13 View-Dependent Texture Mapping, SIGGRAPH ’96, EGRW ‘98
“The Scene” θ1 θ3 θ2 Desired Camera

14 View-Dependent Texture Mapping, SIGGRAPH ’96, EGRW ‘98
“The Scene” θ1 θ2 θ3 Desired Property #6: Angles measured w.r.t. proxy Desired Camera

15 “The Scene” Desired Camera

16 “The Scene” Desired Property #7: Resolution sensitivity Desired Camera

17 Previous Work
Light fields and Lumigraphs: Levoy and Hanrahan, Gortler et al., Isaksen et al.
View-dependent texture mapping: Debevec et al., Wood et al.
Plenoptic modeling with hand-held cameras: Heigl et al.
Many others…
I've covered light fields, Lumigraphs, and VDTM in some detail. Of course, there are many other IBR algorithms. However, none of them quite satisfies all of the properties that I've outlined previously.

18 Unstructured Lumigraph Rendering
Epipole consistency
Use of geometric proxy
Unstructured input
Real-time implementation
Continuous reconstruction
Angles measured w.r.t. proxy
Resolution sensitivity
Unstructured Lumigraph Rendering, on the other hand, addresses all of these properties in a single algorithm.

19 Blending Fields
In order to describe our approach, I will first describe a little something we call the blending field. The desired color is a weighted combination of the reference-image colors:
color_desired = Σ_i w_i color_i
Desired Camera

20 Blending Fields
Here the weight is written explicitly as a function of the reference camera C_i:
color_desired = Σ_i w(C_i) color_i
Desired Camera
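As a small illustration of the blending equation (my own sketch, not the paper's code), the desired color at a pixel is a weighted sum over the reference cameras; the colors and weights below are placeholders, with the weights assumed to come from the penalty-based blending field described next.

import numpy as np

def blend(colors, weights):
    """color_desired = sum_i w(C_i) * color_i.
    colors: (N, 3) array giving the color each reference camera assigns to
    the desired ray; weights: (N,) blending-field weights summing to 1."""
    colors = np.asarray(colors, dtype=float)
    weights = np.asarray(weights, dtype=float)
    return weights @ colors

# Placeholder values: three reference cameras, weights from the blending field.
print(blend([[255, 0, 0], [0, 255, 0], [0, 0, 255]], [0.5, 0.3, 0.2]))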

21 Unstructured Lumigraph Rendering
Explicitly construct blending field, computed using penalties
Sample and interpolate over desired image
Render with hardware: projective texture mapping and alpha blending
You can compute blending fields for any image-based rendering algorithm. ULR is unique in that it explicitly computes and uses the blending field for rendering. For efficient rendering, we sample the blending field and interpolate over the desired image plane. We can then render the lumigraph using projective texture mapping and alpha blending. We compute the blending field based on penalties that are assigned to each camera: low penalties correspond to large weights, and vice versa.

22 Angle Penalty
penalty_ang(C_i) = θ_i
[Diagram: geometric proxy with reference cameras C1–C6 and the desired camera C_desired; θ_i is the angle at the proxy point between the ray from C_desired and the ray from C_i.]
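A minimal sketch of the angle penalty, with assumed inputs (camera centers and the proxy point hit by the desired ray): θ_i is computed as the angle at the proxy point between the direction back to the desired camera and the direction back to reference camera C_i.

import numpy as np

def penalty_ang(proxy_point, c_desired, c_i):
    """Angle (radians) at the proxy point between the rays toward the
    desired camera and toward reference camera C_i: penalty_ang(C_i) = theta_i."""
    d = np.asarray(c_desired, float) - np.asarray(proxy_point, float)
    r = np.asarray(c_i, float) - np.asarray(proxy_point, float)
    cos_t = np.dot(d, r) / (np.linalg.norm(d) * np.linalg.norm(r))
    return float(np.arccos(np.clip(cos_t, -1.0, 1.0)))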

23 Resolution Penalty
penalty_res(C_i) = max(0, dist(C_i) - dist(C_desired))
[Diagram: geometric proxy with the distances from reference camera C_i and from the desired camera C_desired to the proxy point.]
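A corresponding sketch of the resolution penalty: distances are measured from each camera center to the proxy point, and only a reference camera that is farther away than the desired camera (and therefore under-sampled relative to the desired view) is penalized.

import numpy as np

def penalty_res(proxy_point, c_desired, c_i):
    """penalty_res(C_i) = max(0, dist(C_i) - dist(C_desired)),
    with distances measured from each camera center to the proxy point."""
    p = np.asarray(proxy_point, float)
    dist_i = np.linalg.norm(np.asarray(c_i, float) - p)
    dist_des = np.linalg.norm(np.asarray(c_desired, float) - p)
    return max(0.0, dist_i - dist_des)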

24 Field-Of-View Penalty
[Plot: penalty_FOV as a function of angle.]
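The slide gives penalty_FOV only as a function of angle, so the exact form is not specified here. The sketch below assumes one plausible shape: zero penalty while the proxy point lies inside the reference camera's field of view, increasing once the angle from the camera's optical axis exceeds the half-FOV. The ramp width is an arbitrary parameter.

import numpy as np

def penalty_fov(proxy_point, c_i, optical_axis_i, half_fov_rad, ramp=0.1):
    """Assumed ramp form (not from the paper): zero inside the field of
    view, increasing linearly once the angle between the camera's optical
    axis and the direction to the proxy point exceeds the half-FOV."""
    v = np.asarray(proxy_point, float) - np.asarray(c_i, float)
    a = np.asarray(optical_axis_i, float)
    cos_t = np.dot(v, a) / (np.linalg.norm(v) * np.linalg.norm(a))
    angle = np.arccos(np.clip(cos_t, -1.0, 1.0))
    return float(max(0.0, (angle - half_fov_rad) / ramp))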

25 Total Penalty
penalty(C_i) = α penalty_ang(C_i) + β penalty_res(C_i) + γ penalty_fov(C_i)
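Combining the three terms is then a weighted sum; α, β, and γ are tuning parameters, and the values in this small sketch are arbitrary placeholders.

def penalty_total(p_ang, p_res, p_fov, alpha=1.0, beta=1.0, gamma=1.0):
    """penalty(C_i) = alpha*penalty_ang(C_i) + beta*penalty_res(C_i) + gamma*penalty_fov(C_i)."""
    return alpha * p_ang + beta * p_res + gamma * p_fov

# Example with placeholder penalty values for one camera.
print(penalty_total(p_ang=0.3, p_res=0.1, p_fov=0.0, alpha=1.0, beta=0.2, gamma=10.0))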

26 K-Nearest Continuous Blending
Only use cameras with the K smallest penalties.
Continuity: a camera's contribution drops to zero as it leaves the K-nearest set:
w(C_i) = 1 - penalty(C_i) / penalty(C_(K+1)st closest)
Partition of unity: normalize the weights:
w̃(C_i) = w(C_i) / Σ_j w(C_j)
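The weight computation at one sample location might look like the following sketch (my own, assuming the per-camera penalties have already been evaluated): cameras outside the K smallest penalties get zero weight, the (K+1)-st smallest penalty normalizes the falloff so that a camera's weight reaches zero exactly as it leaves the K-nearest set, and the surviving weights are renormalized to sum to one.

import numpy as np

def blending_weights(penalties, k):
    """Continuous k-nearest blending weights from per-camera penalties.
    w(C_i) = 1 - penalty(C_i)/penalty(C_{k+1 closest}) for the k best
    cameras, 0 otherwise; then normalized to a partition of unity."""
    penalties = np.asarray(penalties, dtype=float)
    order = np.argsort(penalties)
    thresh = max(penalties[order[k]], 1e-12)  # (k+1)-st smallest penalty
    w = np.zeros_like(penalties)
    best = order[:k]
    w[best] = 1.0 - penalties[best] / thresh
    total = w.sum()
    return w / total if total > 0 else w

# Example: 6 cameras, keep the 3 with the smallest penalties.
print(blending_weights([0.2, 0.9, 0.4, 1.5, 0.1, 0.7], k=3))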

27 Blending Field Visualization

28 Sampling Blending Fields
Epipole and grid sampling
Just epipole sampling
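A sketch of one plausible way to pick the sample locations suggested by the slide (my own illustration; the projection function and grid spacing are assumptions): a regular grid of points in the desired image plus the projections of the reference camera centers, the epipoles, clipped to the image.

import numpy as np

def sample_locations(width, height, cam_centers, project, grid_step=32):
    """Blending-field sample locations for the desired image: a regular grid
    of pixel positions plus the projected reference camera centers (epipoles)
    that fall inside the image.  `project` maps a 3D camera center to
    desired-image pixel coordinates (assumed supplied by the caller)."""
    xs = np.arange(0, width + 1, grid_step)
    ys = np.arange(0, height + 1, grid_step)
    grid = np.array([(x, y) for y in ys for x in xs], dtype=float)
    epipoles = []
    for c in cam_centers:
        x, y = project(c)
        if 0 <= x <= width and 0 <= y <= height:
            epipoles.append((x, y))
    pts = np.vstack([grid, np.array(epipoles, dtype=float).reshape(-1, 2)])
    # These points would then be triangulated and the blending weights
    # computed at each vertex, as in the algorithm on the next slide.
    return pts

# Example with a placeholder projection that just drops the z coordinate.
pts = sample_locations(640, 480, [np.array([100.0, 200.0, 5.0])],
                       project=lambda c: (c[0], c[1]), grid_step=160)
print(pts.shape)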

29 Hardware Assisted Algorithm
Sample Blending Field:
  Select blending field sample locations
  for each sample location j do
    for each camera i do
      Compute penalty(i) for sample location j
    end for
    Find K smallest penalties
    Compute blending weights for sample location j
  end for
  Triangulate sample locations
Render with Graphics Hardware:
  Clear frame buffer
  for each camera i do
    Set current texture and projection matrix
    Copy blending weights to vertices' alpha channel
    Draw triangles with non-zero alphas
  end for

30 Blending over one triangle
Epipole and grid sampling
Just epipole sampling

31 Hardware Assisted Algorithm
Sample Blending Field:
  Select blending field sample locations
  for each sample location j do
    for each camera i do
      Compute penalty(i) for sample location j
    end for
    Find K smallest penalties
    Compute blending weights for sample location j
  end for
  Triangulate sample locations
Render with Graphics Hardware:
  Clear frame buffer
  for each camera i do
    Set current texture and projection matrix
    Copy blending weights to vertices' alpha channel
    Draw triangles with non-zero alphas
  end for

32 Demo

33 Future Work
Optimal sampling of the camera blending field
More complete treatment of resolution effects in IBR
View-dependent geometry proxies
Investigation of geometry vs. images tradeoff

34 Conclusions
Unstructured Lumigraph Rendering:
unifies view-dependent texture mapping and lumigraph rendering methods
allows rendering from unorganized images
sampled camera blending field

35 Acknowledgements
Thanks to the members of the MIT Computer Graphics Group and the Microsoft Research Graphics and Computer Vision Groups.
DARPA ITO Grant F, NSF CAREER Awards, & Microsoft Research Graduate Fellowship Program.
Donations from Intel Corporation, Nvidia, and Microsoft Corporation.

