CSCE 641: Computer Graphics Image-based Rendering Jinxiang Chai.

Image-based Modeling: Challenging Scenes

Why do these scenes produce poor results?
- lack of discernible features
- occlusions
- difficulty capturing high-level structure
- illumination changes
- specular surfaces

Some Solutions
- Use priors to constrain the solution space
- Aid the modeling process with minimal user interaction
- Combine image-based modeling with other modeling approaches

Videos
- Morphable face (click here)
- Image-based tree modeling (click here)
- Video trace (click here)
- 3D modeling by ortho-images (click here)

Spectrum of IBMR

(Diagram: a continuum from images to models. Image-based rendering uses images alone (panorama, light field), through images + depth and geometry + images, to geometry + materials (kinematics, dynamics, etc.) recovered by image-based modeling; inputs range from images and user input to range scans, with panoramas needing only camera + geometry.)

Outline
- Layered depth image / Post-Rendering 3D Warping
- View-dependent texture mapping
- Light field rendering [Levoy and Hanrahan, SIGGRAPH 96]

Layered depth image [Shade et al., SIGGRAPH 98]

Layered depth image: (r,g,b,d)
- image with depths
- rays with colors and depths

Layered depth image [Shade et al., SIGGRAPH 98]

Rendering from a layered depth image:
- incremental in X and Y
- forward warping one pixel with depth

How to deal with the occlusion/visibility problem? Depth comparison.
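The depth comparison above is simply a per-pixel z-test during forward warping: a splatted sample only overwrites the framebuffer if it is nearer than what is already there. A minimal sketch (the `splat` helper and the dictionary-based buffers are illustrative, not the paper's implementation):

```python
# Minimal sketch of resolving visibility when forward-warping pixels
# with depth: keep the sample closest to the new camera (smallest depth).
def splat(framebuffer, zbuffer, x, y, color, depth):
    """Write color at (x, y) only if this sample is nearer than the stored one."""
    if depth < zbuffer.get((x, y), float("inf")):
        zbuffer[(x, y)] = depth
        framebuffer[(x, y)] = color

fb, zb = {}, {}
splat(fb, zb, 5, 5, "red", 2.0)    # background sample
splat(fb, zb, 5, 5, "blue", 1.0)   # nearer sample wins
splat(fb, zb, 5, 5, "green", 3.0)  # farther sample is discarded
```

After the three splats, pixel (5, 5) holds the nearest sample ("blue" at depth 1.0).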

How to Form LDIs

Synthetic world with known geometry and texture:
- from multiple depth images
- modified ray tracer

Real images:
- reconstruct geometry from multiple images (e.g., voxel coloring, stereo reconstruction)
- form LDIs using multiple images and reconstructed geometry

Kinect sensors:
- record both image data and depth data

Image-based Rendering Using Kinect Sensors
- Capture both video and depth data using Kinect sensors
- Use 3D warping to render an image from a novel viewpoint [e.g., Post-Rendering 3D Warping]

3D Warping

Render an image from a novel viewpoint by warping an RGBD image. The 3D warp can expose areas of the scene for which the reference frame has no information (shown here in black).
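The warp itself is unproject, transform, reproject for each pixel. A minimal sketch for one pixel, assuming a simple pinhole camera with hypothetical parameters and a purely translated novel view (the general warp also applies a rotation):

```python
def warp_pixel(u, v, depth, f, cx, cy, t):
    """Forward-warp one pixel with depth from the reference view to a
    novel view translated by t = (tx, ty, tz), same orientation.
    f: focal length in pixels; (cx, cy): principal point."""
    # Unproject to a 3D point in the reference camera frame.
    X = (u - cx) * depth / f
    Y = (v - cy) * depth / f
    Z = depth
    # Express the point in the novel camera frame (pure translation).
    Xn, Yn, Zn = X - t[0], Y - t[1], Z - t[2]
    # Reproject with the same intrinsics.
    return (f * Xn / Zn + cx, f * Yn / Zn + cy, Zn)

# A pixel at the principal point, 2 m deep, seen from a camera moved
# 0.1 m to the right, shifts left in the image (parallax).
u2, v2, z2 = warp_pixel(320, 240, 2.0, 500, 320, 240, (0.1, 0, 0))
```

Nearby pixels warp to different places depending on their depth, which is exactly what exposes the black disocclusion holes mentioned above.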

Image-based Rendering Using Kinect Sensors
- Capture both video and depth data using Kinect sensors
- Use 3D warping to render an image from a novel viewpoint [e.g., Post-Rendering 3D Warping]
- Demo: click here

3D Warping
- A single warped frame will lack information about areas occluded in its reference frame.
- Multiple reference frames can be composited to produce a more complete derived frame.

How to extend this to a surface representation?

Outline
- Layered depth image / Post-Rendering 3D Warping
- View-dependent texture mapping
- Light field rendering

View-dependent surface representation

From multiple input images:
- reconstruct the geometry
- view-dependent texture

View-dependent texture mapping [Debevec et al. 98]

View-dependent texture mapping

(Diagram: the subject's 3D proxy, a virtual camera at point D, and cameras C_0..C_3 viewing vertex V at angles θ_0..θ_3.)
- Virtual camera at point D
- Textures from camera C_i mapped onto triangle faces
- Blending weights at vertex V
- Angle θ_i is used to compute the weight values: w_i = exp(-θ_i² / 2σ²)
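The weight formula above can be computed directly per camera and normalized before blending. A minimal sketch; the `blend_weights` helper and the σ value are illustrative, not taken from the paper:

```python
import math

def blend_weights(thetas, sigma=0.3):
    """Per-camera blending weights from the angle theta_i (radians) between
    each camera's view of the vertex and the virtual view:
    w_i = exp(-theta_i^2 / (2 sigma^2)), normalized to sum to 1."""
    w = [math.exp(-(t * t) / (2 * sigma * sigma)) for t in thetas]
    s = sum(w)
    return [x / s for x in w]

# A camera aligned with the virtual view (theta = 0) dominates the blend.
w = blend_weights([0.0, 0.5, 1.0])
```

Smaller σ makes the blend more view-dependent (the best-aligned camera dominates); larger σ averages the textures more evenly.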

Videos: View-dependent Texture Mapping Demo video

Can we render an image without any geometric information?

Outline
- Layered depth image / Post-Rendering 3D Warping
- View-dependent texture mapping
- Light field rendering [Levoy and Hanrahan, SIGGRAPH 96]

Light Field Rendering

Video demo: click here

Light Field Rendering

(Diagram: light field capture and rendering, with a camera plane and an image plane.)

Plenoptic Function P(x,y,z,θ,φ,λ,t)
- Can reconstruct every possible view, at every moment, from every position, at every wavelength
- Contains every photograph, every movie, everything that anyone has ever seen! It completely captures our visual reality!
- An image is a 2D sample of the plenoptic function!

Ray

Let's not worry about time and color: 5D
- 3D position
- 2D direction

P(x,y,z,θ,φ)

No Change in Radiance

(Diagram: static object, static lighting, moving camera.)

How can we use this?

Ray Reuse
- Infinite line
- Assume radiance is constant along the ray (vacuum, non-dispersive medium): 4D
- 2D position + 2D direction

Slide by Rick Szeliski and Michael Cohen

Synthesizing novel views Assume we capture every ray in 3D space!

Light field / Lumigraph

(Diagram: the scene ("stuff") inside a convex space; outside it, in empty space, radiance along a ray is constant, so a 4D representation suffices.)

Light Field
- How to represent rays?
- How to capture rays?
- How to use captured rays for rendering?

Light field - Organization
- 2D position s
- 2D direction θ

Light field - Organization
- 2D position
- 2-plane parameterization: s, u

Light field - Organization
- 2D position
- 2-plane parameterization: (s,t) on one plane, (u,v) on the other
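A ray is mapped to (s,t,u,v) by intersecting it with the two planes. A minimal sketch, assuming (for illustration only) the st-plane at z = 0 and the uv-plane at z = 1; actual light fields choose their own plane geometry:

```python
def ray_to_stuv(origin, direction):
    """Parameterize a ray by its intersections with the plane z = 0
    (giving s, t) and the plane z = 1 (giving u, v)."""
    ox, oy, oz = origin
    dx, dy, dz = direction
    a0 = (0.0 - oz) / dz      # ray parameter at the st-plane (z = 0)
    a1 = (1.0 - oz) / dz      # ray parameter at the uv-plane (z = 1)
    s, t = ox + a0 * dx, oy + a0 * dy
    u, v = ox + a1 * dx, oy + a1 * dy
    return (s, t, u, v)

# A ray from (0, 0, -1) along (1, 0, 1) crosses z=0 at (1, 0)
# and z=1 at (2, 0), so it is stored as (s,t,u,v) = (1, 0, 2, 0).
stuv = ray_to_stuv((0, 0, -1), (1, 0, 1))
```

Any ray crossing both planes gets a unique 4-tuple, which is what lets the 4D light field be stored as a table of radiance samples.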

Light field - Organization

Hold (u,v) constant, let (s,t) vary. What do we get?

Lightfield / Lumigraph

Light field/lumigraph - Capture

Idea: move the camera carefully over the (u,v) plane with a gantry (see the Light Field Rendering paper).

Stanford multi-camera array
- 640 × 480 pixels × 30 fps × 128 cameras
- synchronized timing
- continuous streaming
- flexible arrangement
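A back-of-the-envelope check of the raw data such an array produces, assuming uncompressed 24-bit RGB (an assumption for illustration; the real system streams compressed video):

```python
# Raw bandwidth of a 128-camera array at 640x480, 30 fps,
# assuming 3 bytes per pixel (24-bit RGB, uncompressed).
pixels_per_frame = 640 * 480
bytes_per_sec = pixels_per_frame * 30 * 128 * 3
gib_per_sec = bytes_per_sec / 2**30   # on the order of 3.3 GiB/s
```

The raw rate of several gigabytes per second is why storage cost and compression matter so much for light fields.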

Light field/lumigraph - Rendering

For each output pixel, determine (s,t,u,v), then either:
- use the closest discrete RGB sample, or
- interpolate nearby values

Nearest: take the closest s and closest u, and draw it.
Blend 16 nearest: quadrilinear interpolation.

Ray interpolation
- Nearest neighbor
- Linear interpolation in S-T
- Quadrilinear interpolation
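Quadrilinear interpolation is just linear interpolation applied once per dimension over the 16 surrounding grid samples. A minimal sketch with a hypothetical nested-list sample layout:

```python
def lerp(a, b, f):
    """Linear interpolation between a and b with fraction f in [0, 1]."""
    return a + (b - a) * f

def quadrilinear(samples, fs, ft, fu, fv):
    """Blend the 16 nearest light-field samples. samples[i][j][k][l] holds
    the radiance at the (s+i, t+j, u+k, v+l) grid corners (i..l in {0,1});
    fs..fv are the fractional offsets within the 4D cell."""
    # Collapse one dimension at a time with linear interpolation.
    c = [[[lerp(samples[0][j][k][l], samples[1][j][k][l], fs)
           for l in range(2)] for k in range(2)] for j in range(2)]
    b = [[lerp(c[0][k][l], c[1][k][l], ft) for l in range(2)] for k in range(2)]
    a = [lerp(b[0][l], b[1][l], fu) for l in range(2)]
    return lerp(a[0], a[1], fv)

# With all 16 corners equal, any fractional position returns that value.
flat = [[[[5.0] * 2 for _ in range(2)] for _ in range(2)] for _ in range(2)]
val = quadrilinear(flat, 0.3, 0.6, 0.1, 0.9)
```

Dropping the last two lerp stages (interpolate only in s and t) gives the cheaper "linear interpolation in S-T" variant listed above.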

Light fields

Advantages:
- No geometry needed
- Simpler computation vs. traditional CG
- Cost independent of scene complexity
- Cost independent of material properties and other optical effects

Disadvantages:
- Static geometry
- Fixed lighting
- High storage cost