Unstructured Lumigraph Rendering

Presentation transcript:

Unstructured Lumigraph Rendering. Chris Buehler, Michael Bosse, Leonard McMillan (MIT-LCS); Steven J. Gortler (Harvard University); Michael F. Cohen (Microsoft Research)

The Image-Based Rendering Problem Synthesize novel views from reference images Static scenes, fixed lighting Flexible geometry and camera configurations In this paper, we’re interested in solving the IBR problem: given a set of reference images, we’d like to synthesize new views of a scene. We make the assumptions that the scene is static and the lighting is fixed. Further, we’d like to be able to do this with few constraints on the position or number of input images. For example, we’d like to take a video sequence like this one and render new views from it.

The ULR Algorithm. Designed to work over a range of image and geometry configurations; designed to satisfy desirable properties. [Figure: algorithms plotted by number of images vs. geometric fidelity, with light fields (LF) at one extreme, VDTM at the other, and ULR spanning the range.] Many previous algorithms solve the image-based rendering problem, but most were designed to work either with many images and low geometric fidelity (such as light field rendering) or with few images and high geometric fidelity (such as view-dependent texture mapping). Our algorithm, in contrast, is designed to work well over a range of inputs. Not surprisingly, it borrows heavily from previous work; to motivate which elements to borrow, we extracted what we felt were the most desirable properties of previous algorithms. I'll now describe those properties while reviewing some of the previous work.

“Light Field Rendering,” SIGGRAPH ‘96. [Figure: rays indexed by intersections u, s (samples u0, s0) with two parallel lines; the desired camera's color is interpolated from the “nearest cameras.”] A light field is a 4D representation of the colors along all rays in a volume. Typically, rays are parameterized by their intersections with two parallel planes. Since I'm working in two dimensions here, I can parameterize a ray by its intersections with two lines. Then, to reconstruct a desired view, I just intersect each viewing ray with these lines and look up the corresponding color. In practice, we have discrete light fields, which can be thought of as an array of cameras arranged on a regular grid. Rays that fall between the cameras are interpolated from nearby cameras. Note that, in general, this interpolation depends on the position of the U plane. However, some rays are special and can be interpolated without regard for the U plane: the rays that are seen directly by the cameras.
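To make the two-line parameterization concrete, here is a minimal Python sketch of the 2D case in the notes: a ray is indexed by the x-coordinates (s, u) where it crosses two parallel horizontal lines. The line heights S_Y and U_Y and the helper name are illustrative assumptions, not from the paper.

```python
# Hedged sketch: 2D two-line light field parameterization.

S_Y, U_Y = 0.0, 1.0  # the two parallel parameterization lines y = S_Y, y = U_Y

def ray_to_su(origin, direction):
    """Intersect a 2D ray with y = S_Y and y = U_Y; the pair (s, u)
    indexes the ray's color in the (discrete) light field."""
    ox, oy = origin
    dx, dy = direction
    if abs(dy) < 1e-12:
        raise ValueError("ray is parallel to the parameterization lines")
    s = ox + dx * (S_Y - oy) / dy  # crossing of y = S_Y
    u = ox + dx * (U_Y - oy) / dy  # crossing of y = U_Y
    return s, u

print(ray_to_su(origin=(0.2, -1.0), direction=(0.1, 1.0)))  # approx (0.3, 0.4)
```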

“Light Field Rendering,” SIGGRAPH ‘96. Desired Property #1: Epipole consistency. A light field is a 4D representation of the radiance along all rays in a volume. Typically, rays are parameterized by their intersections with two parallel planes; here I'll consider a simple example in the plane, where rays are represented by their intersections with two lines. Let's consider a virtual (desired) camera.

“The Lumigraph,” SIGGRAPH ‘96 “The Scene” u Potential Artifact Desired Camera

“The Lumigraph,” SIGGRAPH ‘96 “The Scene” Desired Property #2: Use of geometric proxy Desired Camera

“The Lumigraph,” SIGGRAPH ‘96 “The Scene” Desired Camera

“The Lumigraph,” SIGGRAPH ‘96 “The Scene” Desired Property #3: Unstructured input images. Rebinning — note: all images are resampled. Desired Camera

“The Lumigraph,” SIGGRAPH ‘96 “The Scene” Desired Property #4: Real-time implementation Desired Camera

View-Dependent Texture Mapping, SIGGRAPH ’96, EGRW ‘98 “The Scene” Occluded Out of view Desired Camera

View-Dependent Texture Mapping, SIGGRAPH ’96, EGRW ‘98 “The Scene” Desired Property #5: Continuous reconstruction Desired Camera

View-Dependent Texture Mapping, SIGGRAPH ’96, EGRW ‘98 “The Scene” θ1 θ3 θ2 Desired Camera

View-Dependent Texture Mapping, SIGGRAPH ’96, EGRW ‘98 “The Scene” θ1 θ3 Desired Property #6: Angles measured w.r.t. proxy θ2 Desired Camera

“The Scene” Desired Camera

“The Scene” Desired Property #7: Resolution sensitivity Desired Camera

Previous Work. Light fields and Lumigraphs: Levoy and Hanrahan, Gortler et al., Isaksen et al. View-dependent texture mapping: Debevec et al., Wood et al. Plenoptic modeling with hand-held cameras: Heigl et al. Many others… I've covered light fields, Lumigraphs, and VDTM in some detail. Of course, there are many other IBR algorithms; however, none of them quite satisfies all of the properties I've outlined.

Unstructured Lumigraph Rendering: Epipole consistency; Use of geometric proxy; Unstructured input; Real-time implementation; Continuous reconstruction; Angles measured w.r.t. proxy; Resolution sensitivity. Unstructured Lumigraph Rendering, on the other hand, addresses all of these properties in a single algorithm.

Blending Fields. color_desired = Σ_i w_i · color_i. Desired Camera. In order to describe our approach, I will first describe a little something we call the blending field.

Blending Fields. color_desired = Σ_i w(C_i) · color_i — the weight is now written as an explicit function of each source camera C_i. Desired Camera.
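As a tiny illustration of the reconstruction above (the function name is hypothetical, and it assumes the color each source camera sees along the desired ray is already known):

```python
# Hedged sketch: the desired color is a weighted sum of source-camera colors.
import numpy as np

def blend(colors, weights):
    """color_desired = sum_i w(C_i) * color_i. Assumes the weights already
    sum to one (the normalization described later guarantees this)."""
    colors = np.asarray(colors, dtype=float)    # (n_cameras, 3) RGB rows
    weights = np.asarray(weights, dtype=float)  # (n_cameras,)
    return weights @ colors

print(blend([[1, 0, 0], [0, 0, 1]], [0.75, 0.25]))  # -> [0.75 0.   0.25]
```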

Unstructured Lumigraph Rendering Explicitly construct blending field Computed using penalties Sample and interpolate over desired image Render with hardware Projective texture mapping and alpha blending You can compute blending fields for any image-based rendering algorithm. ULR is unique in that it explicitly computes and uses the blending field for rendering. For efficient rendering, we sample the blending field and interpolate over the desired image plane. We can then render the lumigraph using projective texture mapping and alpha blending. We compute the blending field based on penalties that are assigned to each camera. Low penalties correspond to large weight and vice versa.

Angle Penalty. penalty_ang(C_i) = θ_i, the angle at the geometric proxy between the ray toward source camera C_i and the ray toward the desired camera. [Figure: cameras C1–C6 around the proxy, with angles θ1–θ6 measured relative to C_desired.]
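For concreteness, a small Python sketch of this angle penalty, measured at a point on the geometric proxy. The function name and argument conventions (3D points as arrays) are illustrative assumptions, not the paper's code.

```python
# Hedged sketch: angle penalty measured with respect to the proxy.
import numpy as np

def penalty_ang(proxy_pt, cam_i, cam_desired):
    """Angle (radians) at the proxy point between the ray toward source
    camera C_i and the ray toward the desired camera."""
    proxy_pt = np.asarray(proxy_pt, dtype=float)
    v_i = np.asarray(cam_i, dtype=float) - proxy_pt
    v_d = np.asarray(cam_desired, dtype=float) - proxy_pt
    cos_t = v_i @ v_d / (np.linalg.norm(v_i) * np.linalg.norm(v_d))
    return np.arccos(np.clip(cos_t, -1.0, 1.0))  # clip guards rounding error
```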

Resolution Penalty. penalty_res(C_i) = max(0, dist(C_i) − dist(C_desired)), where distances are measured to the geometric proxy. [Figure: a camera C_i farther from the proxy than C_desired incurs penalty_res.]
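The resolution penalty is just the formula above with distances taken from the proxy point. A sketch under the same illustrative assumptions as the previous snippet:

```python
# Hedged sketch: resolution penalty, same conventions as penalty_ang above.
import numpy as np

def penalty_res(proxy_pt, cam_i, cam_desired):
    """Penalize source cameras farther from the proxy than the desired
    camera: their images under-sample the surface as seen in the new view."""
    proxy_pt = np.asarray(proxy_pt, dtype=float)
    dist_i = np.linalg.norm(np.asarray(cam_i, dtype=float) - proxy_pt)
    dist_d = np.linalg.norm(np.asarray(cam_desired, dtype=float) - proxy_pt)
    return max(0.0, dist_i - dist_d)
```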

Field-of-View Penalty. [Plot: penalty_FOV as a function of angle.]

Total Penalty. penalty(C_i) = α · penalty_ang(C_i) + β · penalty_res(C_i) + γ · penalty_FOV(C_i)

K-Nearest Continuous Blending. Only use cameras with the K smallest penalties. Continuity: a camera's contribution drops to zero as it leaves the K-nearest set: w̃(C_i) = 1 − penalty(C_i) / penalty(C_{K+1st closest}). Partition of unity: normalize w(C_i) = w̃(C_i) / Σ_j w̃(C_j).
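Putting the pieces together, here is a minimal sketch of these k-nearest continuous weights, assuming each camera's penalties have already been combined as penalty = α·penalty_ang + β·penalty_res + γ·penalty_FOV. The function name is hypothetical; cameras tied with the threshold get zero weight, which preserves continuity.

```python
# Hedged sketch: k-nearest continuous blending weights.
import numpy as np

def blending_weights(penalties, k):
    """Weights fall continuously to zero as a camera's penalty approaches
    the (k+1)-st smallest penalty, then are normalized to sum to one."""
    p = np.asarray(penalties, dtype=float)
    order = np.argsort(p)
    thresh = p[order[k]]                  # penalty of the (k+1)-st closest camera
    w = np.zeros_like(p)
    nearest = order[:k]
    if thresh > 0:
        w[nearest] = np.clip(1.0 - p[nearest] / thresh, 0.0, 1.0)
    else:
        w[nearest] = 1.0                  # all k nearest are perfect matches
    total = w.sum()
    return w / total if total > 0 else w  # partition of unity

print(blending_weights([0.1, 0.5, 0.2, 0.9, 0.3], k=3))  # weights for the 3 best cameras
```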

Blending Field Visualization

Sampling Blending Fields. [Figures: epipole-and-grid sampling vs. epipole-only sampling.]
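One way to realize the epipole-and-grid scheme, sketched below: sample a regular grid over the desired image plane and add the projection of each source camera center (its epipole in the desired view), where Desired Property #1 makes the reconstruction exact. The 3×4 projection matrix P, the image size, and the grid density are illustrative assumptions.

```python
# Hedged sketch: blending-field sample locations = regular grid + epipoles.
import numpy as np

def sample_locations(P, cam_centers, width, height, grid=16):
    """P: 3x4 projection matrix of the desired camera; cam_centers: list of
    3D source-camera centers. Returns (m, 2) image-plane sample points."""
    pts = [(x, y)
           for y in np.linspace(0.0, height, grid)
           for x in np.linspace(0.0, width, grid)]
    for c in cam_centers:
        h = P @ np.append(np.asarray(c, dtype=float), 1.0)  # homogeneous projection
        if abs(h[2]) > 1e-9:
            x, y = h[0] / h[2], h[1] / h[2]
            if 0.0 <= x <= width and 0.0 <= y <= height:    # epipole inside the image
                pts.append((x, y))
    return np.array(pts)
```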

Hardware-Assisted Algorithm

Sample Blending Field:
    Select blending field sample locations
    for each sample location j do
        for each camera i do
            Compute penalty(i) for sample location j
        end for
        Find K smallest penalties
        Compute blending weights for sample location j
    end for
    Triangulate sample locations

Render with Graphics Hardware:
    Clear frame buffer
    for each camera i do
        Set current texture and projection matrix
        Copy blending weights to vertices' alpha channel
        Draw triangles with non-zero alphas
    end for
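The CPU half of this loop is easy to prototype. A sketch, assuming per-sample penalty and weight functions like the ones above; scipy's Delaunay stands in for whatever triangulation the renderer uses, and the per-vertex weights would then be copied into the alpha channel for hardware blending. All names are illustrative.

```python
# Hedged sketch of the "Sample Blending Field" phase.
import numpy as np
from scipy.spatial import Delaunay

def sample_blending_field(samples, penalty_fn, weight_fn, k):
    """samples: (m, 2) image-plane points; penalty_fn(s) returns the
    per-camera penalty array at s; weight_fn(p, k) returns blending weights."""
    samples = np.asarray(samples, dtype=float)
    weights = np.array([weight_fn(penalty_fn(s), k) for s in samples])
    # Triangles whose vertex weights the GPU interpolates via alpha blending.
    tris = Delaunay(samples).simplices
    return weights, tris
```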

Blending over one triangle. [Figures: epipole-and-grid sampling vs. epipole-only sampling.]


Demo

Future Work Optimal sampling of the camera blending field More complete treatment of resolution effects in IBR View-dependent geometry proxies Investigation of geometry vs. images tradeoff

Conclusions. Unstructured Lumigraph Rendering unifies view-dependent texture mapping and lumigraph rendering methods, allows rendering from unorganized images, and introduces a sampled camera blending field.

Acknowledgements. Thanks to the members of the MIT Computer Graphics Group and the Microsoft Research Graphics and Computer Vision Groups. DARPA ITO Grant F30602-971-0283; NSF CAREER Awards 9875859 & 9703399; Microsoft Research Graduate Fellowship Program; donations from Intel Corporation, Nvidia, and Microsoft Corporation.