Introduction to Image-Based Rendering
Jian Huang, CS 594, Spring 2002
Part of this set of slides references slides used at Stanford by Prof. Pat Hanrahan and Philipp Slusallek.

What is Image-Based Rendering?
Not just using images on geometry (akin to texture mapping)
Built on the desire to bypass the manual modeling phase
Use images (of some kind) for both modeling and rendering

Types of IBR
Panoramas / image mosaics / light fields and Lumigraphs
–QuicktimeVR
–Concentric mosaics, light fields / Lumigraphs
View interpolation
Model-based methods
–Depth images
–Geometry from images

Plenoptic Function
The plenoptic function (7D) depicts light rays passing through:
–the center of a camera at any location (x, y, z)
–at any viewing angle (θ, φ)
–for every wavelength (λ)
–for any time (t)
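Written as one function (a standard formulation, not from the slide): P = P(x, y, z, θ, φ, λ, t), the radiance arriving at position (x, y, z) from direction (θ, φ), at wavelength λ and time t.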

Limiting Dimensions of Plenoptic Functions
Plenoptic modeling (5D): ignore time and wavelength
Lumigraph / light field (4D): constrain the scene (or the camera view) to a bounding box
Panorama (2D): fix the viewpoint; only the viewing direction and camera zoom can be changed
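Dropping dimensions step by step (this chain of notations is a standard summary, not from the slides): P(x, y, z, θ, φ, λ, t) → P(x, y, z, θ, φ) → L(u, v, s, t) → P(θ, φ).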

Limiting Dimensions of Plenoptic Functions
Concentric mosaics (3D): index all input image rays by three parameters: radius, rotation angle, and vertical elevation

Apple’s QuickTime VR (figure: outward-looking vs. inward-looking configurations)

Mars Pathfinder Panorama

Creating a Cylindrical Panorama (figure)
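A minimal sketch of the core warping step, assuming a pinhole image with focal length f in pixels (the function name and structure are mine, not from the slides): each input image is reprojected onto a cylinder before the strips are stitched and blended.

```python
import numpy as np

def planar_to_cylindrical(x, y, f, cx, cy):
    """Map a pixel (x, y) of a planar image (principal point (cx, cy),
    focal length f in pixels) to cylindrical coordinates (theta, h)."""
    xc, yc = x - cx, y - cy
    theta = np.arctan2(xc, f)          # angle around the cylinder axis
    h = yc / np.sqrt(xc**2 + f**2)     # height, normalized by ray length
    return theta, h
```

Stacking the warped images side by side in theta and blending the overlaps yields the cylindrical panorama.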

Commercial Products
–QuickTime VR, LivePicture, IBM (Panoramix)
–VideoBrush
–IPIX (PhotoBubbles), Be Here, etc.

Light Field and Lumigraph
Take advantage of empty space to reduce the plenoptic function to 4D: in free space, radiance is constant along a ray, so rays (rather than points plus directions) are the natural samples.

Capturing Lightfields
Need a 2D set of (2D) images. Choices:
–Camera motion: human vs. computer
–Constraints on camera motion
–Coverage and sampling uniformity
–Aliasing

Lightfield Parameterization
–Point / angle
–Two points on a sphere
–Points on two planes
–Original images and camera positions

Two-Plane Parameterization
Each ray is indexed by where it crosses the camera plane (u, v) and the focal plane (s, t) (figure labels: object, focal plane (s,t), camera plane (u,v)).
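A minimal sketch of turning a viewing ray into light-field coordinates, assuming the camera plane at z = 0 and the focal plane at z = 1 (conventions and names mine):

```python
import numpy as np

def ray_to_uvst(origin, direction):
    """Intersect a ray with the camera plane z=0 and focal plane z=1.

    Returns (u, v, s, t); assumes direction[2] != 0.
    """
    o, d = np.asarray(origin, float), np.asarray(direction, float)
    t0 = (0.0 - o[2]) / d[2]          # ray parameter at the camera plane
    t1 = (1.0 - o[2]) / d[2]          # ray parameter at the focal plane
    u, v = (o + t0 * d)[:2]
    s, t = (o + t1 * d)[:2]
    return u, v, s, t
```

Rendering a new view then amounts to evaluating this for every screen pixel and interpolating (e.g., quadrilinearly) among the nearest stored samples in (u, v, s, t).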

Reconstruction

Light Field Key Ideas:
–4D function, valid outside the convex hull
–2D slice = image: insert to create, extract to display
–Inward or outward

Lightfields
Advantages:
–Simpler computation vs. traditional CG
–Cost independent of scene complexity
–Cost independent of material properties and other optical effects
Disadvantages:
–Static geometry
–Fixed lighting
–High storage cost

Concentric Mosaics
Concentric mosaics are easy to capture and small in storage size.

Concentric Mosaics
A set of manifold mosaics constructed from slit images taken by cameras rotating on concentric circles.

Sample Images

Rendering a Novel View
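A minimal sketch of the ray lookup a novel view reduces to, assuming the rig center at the origin and rays restricted to the horizontal plane (all names mine, not from the paper):

```python
import numpy as np

def ray_to_mosaic_coords(p, d):
    """Index a planar viewing ray into (radius, rotation angle).

    p: 2D ray origin, d: 2D unit direction, rig center at the origin.
    The ray is tangent to the concentric circle whose radius equals the
    ray's distance from the center; the tangent point gives the angle.
    """
    p, d = np.asarray(p, float), np.asarray(d, float)
    s = -np.dot(p, d)           # parameter of the point closest to center
    foot = p + s * d            # foot of the perpendicular from the center
    radius = np.linalg.norm(foot)
    angle = np.arctan2(foot[1], foot[0])
    return radius, angle
```

The returned (radius, angle) select a slit in the captured mosaics; rays falling between captured radii are interpolated from the two nearest circles.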

Construction of Concentric Mosaics
Synthetic scenes:
–uniform sampling in angular direction
–square-root sampling in radial direction
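The square-root spacing can be read as an equal-area placement of the circles; a one-line sketch (my formulation, assuming n circles of maximum radius R):

```python
import numpy as np

def circle_radii(n, R):
    """Radii r_i = R * sqrt(i / n): each annulus between circles has equal area."""
    return R * np.sqrt(np.arange(1, n + 1) / n)
```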

Construction of Concentric Mosaics (2)
Real scenes (figure: two capture setups, one bulky and costly, one cheaper and easier).

Construction of Concentric Mosaics (3)
Problems with a single camera:
–Limited horizontal field of view
–Non-uniform horizontal spatial resolution
Resampling is necessary:
–bilinear is better than point sampling
The video sequence can be compressed with VQ and entropy encoding (25x); the compressed stream renders at 20 fps on a Pentium II 300.

Results

Results (2)

View Interpolation: Sprites/Impostors with Depth
–Better image warping, wider range of reuse; plain backward mapping works only with a homography
–New mapping: store a depth map; forward-map the depth map (approximate geometry); backward-map the color using the depth information

Mapping with Depth
Forward mapping:
–Holes and aliasing
(figure: source image I1 with depth d1 warped into view I2)

Mapping with Depth
Backward mapping:
–What is d? The lookup needs the depth d2 in the target view I2, which is unknown.
(figure: backward lookup from I2 into I1)

Mapping with Depth
Solution:
–Forward-map the depth
–Reconstruct approximate geometry
–Backward-map the color
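A minimal sketch of the three steps, assuming calibrated pinhole views with a shared 3x3 intrinsic matrix K and a relative pose (R, t) from source to target (all names and the crude hole-filling choice are mine):

```python
import numpy as np

def interpolate_view(color1, depth1, K, R, t):
    """Warp view 1 to the target view: forward-map depth, fill holes,
    then backward-map color. color1: HxWx3, depth1: HxW."""
    H, W = depth1.shape
    Kinv = np.linalg.inv(K)

    # 1. Forward-map depth into the target view (keep the nearest surface).
    depth2 = np.full((H, W), np.inf)
    for y in range(H):
        for x in range(W):
            X = depth1[y, x] * (Kinv @ [x, y, 1.0])  # 3D point in view 1
            q = K @ (R @ X + t)                      # project into target
            u, v = int(round(q[0] / q[2])), int(round(q[1] / q[2]))
            if 0 <= u < W and 0 <= v < H:
                depth2[v, u] = min(depth2[v, u], q[2])

    # 2. Crude hole filling: replace infinities by a neighborhood minimum.
    #    (Real systems reconstruct proper approximate geometry here.)
    for y in range(H):
        for x in range(W):
            if np.isinf(depth2[y, x]):
                nb = depth2[max(y-1, 0):y+2, max(x-1, 0):x+2]
                finite = nb[np.isfinite(nb)]
                if finite.size:
                    depth2[y, x] = finite.min()

    # 3. Backward-map color: unproject each target pixel, look up view 1.
    Rinv, color2 = R.T, np.zeros((H, W, 3))
    for y in range(H):
        for x in range(W):
            if not np.isfinite(depth2[y, x]):
                continue
            X = Rinv @ (depth2[y, x] * (Kinv @ [x, y, 1.0]) - t)
            q = K @ X
            u, v = int(round(q[0] / q[2])), int(round(q[1] / q[2]))
            if 0 <= u < W and 0 <= v < H:
                color2[y, x] = color1[v, u]
    return color2
```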

Layered Depth Images
Idea:
–Handle disocclusion
–Store invisible geometry in depth images
Data structure:
–Per pixel: a list of depth samples
–Per depth sample: RGBA, Z, and an encoded normal direction and distance
–Pack into cache lines
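A minimal sketch of the data structure (field names mine; the original packs samples into cache lines rather than using per-pixel Python lists):

```python
from dataclasses import dataclass

@dataclass
class DepthSample:
    rgba: tuple          # (r, g, b, a) color with alpha
    z: float             # depth along this pixel's ray
    normal_code: int     # encoded normal direction and distance

class LayeredDepthImage:
    """One list of depth samples per pixel, kept nearest-first."""
    def __init__(self, width, height):
        self.width, self.height = width, height
        self.pixels = [[[] for _ in range(width)] for _ in range(height)]

    def insert(self, x, y, sample):
        # Keep each pixel's samples sorted front to back.
        samples = self.pixels[y][x]
        samples.append(sample)
        samples.sort(key=lambda s: s.z)
```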

Layered Depth Images
Computation:
–Incremental warping computation
–Implicit ordering information: process in up to four quadrants
–Splat size computation: table lookup, fixed splat templates
–Clipping of LDIs

Layered Depth Images

Model-based IBR
Basic idea: sparse set of images [Debevec ’97, Pulli ’96]
Overview:
–Approximate modeling: photogrammetric modeling, triangulated depth maps
–View-dependent texture mapping: weighting, hardware-accelerated rendering
–Model-based stereo: details from stereo algorithms

Hybrid Approach Courtesy: P. Debevec

Approximate Modeling
User-assisted photogrammetry [Debevec ’97]:
–Based on structure from motion
–Uses constraints in architectural models
Approach:
–Simple block model
–Constraints reduce the degrees of freedom
–Matching based on lines
–Non-linear optimization
–Initial camera positions
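A minimal sketch of the non-linear optimization step, assuming block parameters and camera poses are stacked into one vector and each observation is an image line that a projected model edge should lie on (`project_edge` is a hypothetical helper, not a real API):

```python
import numpy as np
from scipy.optimize import least_squares

def point_line_distance(p, line):
    """Signed distance from 2D point p to a line (a, b, c) with
    a*x + b*y + c = 0 and a^2 + b^2 = 1."""
    return line[0] * p[0] + line[1] * p[1] + line[2]

def residuals(params, observations, project_edge):
    """observations: list of (edge_id, view_id, image_line).
    project_edge(params, edge_id, view_id) -> 2D endpoints of the
    projected model edge in that view (hypothetical helper)."""
    res = []
    for edge_id, view_id, line in observations:
        p0, p1 = project_edge(params, edge_id, view_id)
        res += [point_line_distance(p0, line), point_line_distance(p1, line)]
    return np.array(res)

# fit = least_squares(residuals, params0, args=(observations, project_edge))
```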

Approximate Modeling: Block Model Courtesy: P. Debevec

Approximate Modeling
Active light:
–Calibrated camera and projector
–Plane of light and triangulation
–Registration of multiple views
–Triangulation of the point cloud
(figure: projector and camera setup)
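A minimal sketch of the triangulation at the core of this setup: intersect the camera ray through a lit pixel with the known plane of light, given as (n, d) with n·X + d = 0 (names mine):

```python
import numpy as np

def triangulate(pixel, K, plane_n, plane_d):
    """Intersect the camera ray of `pixel` with the light plane.
    Camera at the origin; K is the 3x3 intrinsic matrix."""
    ray = np.linalg.inv(K) @ [pixel[0], pixel[1], 1.0]  # ray direction
    s = -plane_d / np.dot(plane_n, ray)                 # n.(s*ray) + d = 0
    return s * ray                                      # 3D surface point
```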

Approximate Modeling

Projecting Images
Technique:
–Known camera positions
–Projective texture mapping
–Shadow buffer for occlusions
–Blending between textures
–Filling in
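A minimal sketch of the projective texture coordinate computation, assuming a 3x4 camera matrix P for the photograph (in practice this runs per vertex on the GPU; names mine):

```python
import numpy as np

def texture_coords(P, X, width, height):
    """Project world point X with camera matrix P and normalize the
    resulting pixel position to [0, 1] texture coordinates."""
    q = P @ np.append(X, 1.0)        # homogeneous projection
    u, v = q[0] / q[2], q[1] / q[2]  # perspective divide
    return u / width, v / height
```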

Visibility

Projecting Images

Simple Compositing vs. Blending
Blending:
–Select the “best” image by: closeness to the viewing direction, distance to the image border, sampling density [Pulli], deletion of features (some of this computation runs in hardware)
–Smooth transitions between pixels and frames: alpha blending, soft Z-buffer, confidence

Projecting Images
Closeness to viewing direction:
–Triangulate the hemisphere: Delaunay triangulation of the stored viewing directions, or a regular triangulation with each vertex labeled with its best view
–Interpolate based on barycentric coordinates
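A minimal sketch of the barycentric blend, assuming the current viewing direction has already been located inside a triangle of stored directions projected to 2D (names mine):

```python
import numpy as np

def barycentric_weights(p, a, b, c):
    """Barycentric coordinates of 2D point p in triangle (a, b, c)."""
    M = np.array([[a[0], b[0], c[0]],
                  [a[1], b[1], c[1]],
                  [1.0,  1.0,  1.0]])
    return np.linalg.solve(M, [p[0], p[1], 1.0])

def blend_views(p, tri_dirs, tri_images):
    """Weight the three views of the enclosing triangle by the
    barycentric coordinates of the current direction."""
    w = barycentric_weights(p, *tri_dirs)
    return sum(wi * img for wi, img in zip(w, tri_images))
```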

Blending of Textures

Model-Based Stereo
Problems with conventional stereo algorithms:
–Correspondences are difficult to find
–Large disparities
–Foreshortening, projective distortions
Approach:
–Use the approximate geometry to reproject one image into the other view
–Compute the disparity of the warped image: significantly smaller disparity and foreshortening
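A minimal sketch of the idea, with the model-based reprojection left as a hypothetical helper (`warp_by_model` is not a real API) and a naive block matcher standing in for the stereo step; images are assumed grayscale float arrays:

```python
import numpy as np

def residual_disparity(key, offset, warp_by_model, patch=7, search=4):
    """Warp the key image into the offset view via the approximate
    model, then block-match the small residual disparity that remains."""
    warped = warp_by_model(key)          # hypothetical model-based warp
    H, W = offset.shape
    r, disp = patch // 2, np.zeros((H, W))
    for y in range(r, H - r):
        for x in range(r + search, W - r - search):
            ref = offset[y-r:y+r+1, x-r:x+r+1]
            costs = [np.sum((ref - warped[y-r:y+r+1,
                                          x-r+d:x+r+1+d])**2)
                     for d in range(-search, search + 1)]
            disp[y, x] = np.argmin(costs) - search
    return disp  # small corrections on top of the model geometry
```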

Model-Based Stereo

Demos