Dynamically Reparameterized Light Fields. Aaron Isaksen, Leonard McMillan (MIT), Steven Gortler (Harvard), SIGGRAPH 2000. Presented by Orion Sky Lawlor, cs497yzy, 2003/4/24.


Introduction Lightfield Acquisition Image Reconstruction Synthetic Aperture

Introduction Rendering cool pictures is hard. Rendering them in real time is even harder. (Partial) solution: image-based rendering. Acquire or pre-render many images; at display time, recombine existing images somehow. Standard sampling problems: aliasing, acquisition, storage.

Why use Image-based Rendering? Captures arbitrarily complex material/light interactions Spatially varying glossy BRDF Global, volumetric, subsurface,... Display speed independent of scene complexity Excellent for natural scenes Non-polygonal description avoids Difficulty doing sampling & LOD Cracks, watertight, manifold,...

Why not use Image-based? Must acquire images beforehand Fixed scene & lighting Often only the camera can move Predetermined sampling rate Undersampling, aliasing problems Predetermined set of views Can’t look in certain directions! Acquisition painful or expensive Must store many, many images Yet access must be quick

How do Lightfields not Work? At every point in space, take a picture (or environment map): 3D Space, 2D Images => 5D Display is just image lookup!

Why don’t Lightfields work like that? These images all contain duplicate rays, again and again 3D Space, 2D Images => 5D

How do Lightfields actually Work? We can thus get away with just one layer of cameras: 2D Cameras + 2D Images => 4D Lightfield. Reconstructed novel viewpoint. Only assumption: rays are unchanged along their path. Display means interpolating several views.
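As a concrete sketch of "display is just image lookup": store the lightfield as a 4D array indexed by camera position (s, t) and pixel (u, v), and fetch the nearest sample along a ray. The array shape and the nearest-neighbor policy here are illustrative assumptions, not the paper's actual data layout.

```python
import numpy as np

# Hypothetical 4D lightfield: camera plane (s, t) x image plane (u, v).
# Nearest-neighbor lookup of the radiance along one ray.
def lookup_ray(lightfield, s, t, u, v):
    si, ti = int(round(s)), int(round(t))
    ui, vi = int(round(u)), int(round(v))
    return lightfield[si, ti, ui, vi]

# Tiny synthetic lightfield: 2x2 cameras, 4x4 pixels each.
L = np.arange(2 * 2 * 4 * 4, dtype=float).reshape(2, 2, 4, 4)
print(lookup_ray(L, 1, 0, 2.4, 3.0))  # snaps to L[1, 0, 2, 3] = 43.0
```

Real reconstruction interpolates between neighboring samples instead of snapping to the nearest one, as the later slides describe.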

Camera Array Geometry (Illustration: Isaksen, MIT)

Introduction Lightfield Acquisition Image Reconstruction Synthetic Aperture

How do you make a Lightfield? Synthetic scene Render from different viewpoints Real scene Sample from different viewpoints In either case, need Fairly dense sampling Lots of data, compression useful Good antialiasing, over both the image plane (pixels), and camera plane (apertures)

XY Motion Control Camera Mount (Isaksen, MIT)

8 USB Digital Cameras, covers removed (Jason Chang, MIT)

Lens array (bug boxes!) on a flatbed scanner (Jason Chang, MIT)

(Lightfield: Isaksen, MIT)

Introduction Lightfield Acquisition Image Reconstruction Synthetic Aperture

Lightfield Reconstruction To build a view, just look up light along each outgoing ray: Camera Array Reconstructed novel viewpoint Need both direction and camera interpolation

Two-Plane Parameterization Parameterize any ray via its intersection with two planes: a focal plane (for ray direction) and a camera plane. May need 6 pairs of planes to capture all sides of a 3D object. (Slide: Levoy & Hanrahan, Stanford)
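A minimal sketch of the two-plane parameterization, assuming the camera plane sits at z = 0 and the focal plane at z = zf (the plane spacing and axis choice are assumptions for illustration):

```python
import numpy as np

# Intersect a ray with the camera plane z=0 (giving s, t) and the focal
# plane z=zf (giving u, v). Assumes the ray is not parallel to the planes.
def ray_to_stuv(origin, direction, zf=1.0):
    o = np.asarray(origin, dtype=float)
    d = np.asarray(direction, dtype=float)
    t0 = (0.0 - o[2]) / d[2]   # parameter at the camera plane
    t1 = (zf - o[2]) / d[2]    # parameter at the focal plane
    s, t = (o + t0 * d)[:2]
    u, v = (o + t1 * d)[:2]
    return float(s), float(t), float(u), float(v)

# A ray starting behind the camera plane, heading straight down +z:
print(ray_to_stuv([0.5, 0.25, -1.0], [0.0, 0.0, 1.0]))  # (0.5, 0.25, 0.5, 0.25)
```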

Camera and Direction Interpolation (Slide: Levoy & Hanrahan, Stanford)
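Camera-and-direction interpolation amounts to quadrilinear interpolation: bilinear over the camera plane times bilinear over the focal plane, blending the 16 nearest lightfield samples. A sketch, assuming a dense 4D array L[s, t, u, v]:

```python
import numpy as np

# Quadrilinear interpolation in a 4D lightfield: weight each of the 16
# surrounding samples by the product of its per-axis linear weights.
def quadrilinear(L, s, t, u, v):
    coords = (s, t, u, v)
    # Clamp so idx+1 stays in bounds at the upper edge.
    idx = [min(int(np.floor(x)), n - 2) for x, n in zip(coords, L.shape)]
    frac = [x - i for x, i in zip(coords, idx)]
    val = 0.0
    for corner in range(16):
        b = [(corner >> k) & 1 for k in range(4)]
        w = 1.0
        for f, bi in zip(frac, b):
            w *= f if bi else 1.0 - f
        val += w * L[tuple(i + bi for i, bi in zip(idx, b))]
    return val

# Sanity check on a linear field: interpolation must reproduce it exactly.
L = np.fromfunction(lambda s, t, u, v: s + t + u + v, (2, 2, 2, 2))
print(quadrilinear(L, 0.5, 0.5, 0.5, 0.5))  # 2.0
```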

Mapping camera views to screen Can map camera view to new viewpoint using texture mapping (since everything’s linear) (Figure: Isaksen, MIT) New Camera Old Camera Focal Plane
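Because the camera-to-screen mapping is projective (linear in homogeneous coordinates), each source view maps to the new viewpoint through a 3x3 homography, which is exactly what texture-mapping hardware evaluates per pixel. A sketch; the matrix H below is a made-up pure-translation example, not one derived from the paper's geometry:

```python
import numpy as np

# Apply a 3x3 homography H to a source pixel (x, y): lift to homogeneous
# coordinates, multiply, and divide out the projective scale.
def apply_homography(H, x, y):
    p = H @ np.array([x, y, 1.0])
    return float(p[0] / p[2]), float(p[1] / p[2])

H = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 3.0],
              [0.0, 0.0, 1.0]])   # translation-only homography
print(apply_homography(H, 1.0, 1.0))  # (3.0, 4.0)
```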

Lightfield Reconstruction (again) To build a view, just look up light along each outgoing ray: Camera Array Reconstructed novel viewpoint Reconstruction done via graphics hardware & laws of perspective

Related: Lenticular Display Replace cameras with directional emitters, like many little lenses: Reconstructed novel viewpoint Reconstruction done in free space & laws of optics Lens array Image Optional Blockers (Isaksen)

Related: Holography A Hologram is just a sampling plane with directional emission: Reconstructed novel viewpoint Reconstruction done in free space & coherent optics Holographic film Interference patterns on film act like little diffraction gratings, and give directional emission. Reference Beam (Hanrahan)

Introduction Lightfield Acquisition Image Reconstruction Synthetic Aperture

Camera Aperture & Focus Non-pinhole cameras accept rays from a range of locations: Stuff’s in focus here Stuff’s blurry out here Lens One pixel on CCD or film

Camera Aperture Can vary effective lens size by changing physical aperture ("hole"). On a camera, this is the f-stop. Small aperture: not much blurring, long depth of field. Big aperture: lots of depth blurring, short depth of field.
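The f-stop relation is simple arithmetic: the f-number is the focal length divided by the aperture diameter, so a smaller opening means a larger f-number and a longer depth of field.

```python
# f-number N = focal length / aperture diameter.
def f_number(focal_length_mm, aperture_mm):
    return focal_length_mm / aperture_mm

print(f_number(50, 25))    # 2.0 -> f/2, wide open, shallow focus
print(f_number(50, 12.5))  # 4.0 -> f/4, stopped down, deeper focus
```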

Synthetic Aperture Can build a larger aperture in postprocessing, by combining smaller apertures. Big Assembled Aperture. Note: you can assemble a big aperture out of small ones, but not split a small aperture from a big one: it's easy to blur, but not to un-blur. Same depth blurring as with a real aperture!
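At its simplest, assembling a big aperture from small ones is just averaging: summing the (already registered) images from several small-aperture cameras approximates one exposure through a single large aperture spanning all of them. A sketch; a real reconstruction first reprojects each view onto a common focal plane before averaging.

```python
import numpy as np

# Average a stack of registered camera images to synthesize one
# large-aperture exposure.
def synthetic_aperture(images):
    return np.mean(images, axis=0)

cams = [np.full((2, 2), v) for v in (1.0, 2.0, 3.0)]
print(synthetic_aperture(cams)[0, 0])  # 2.0: mean of the three exposures
```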

Synthetic Aperture Example Vary reconstructed camera's aperture size: a larger synthetic aperture means a shorter "depth of field", i.e. a shorter range of focused depths. (Illustration: Isaksen, MIT)

Camera Focal Distance Can vary real focal distance by changing the camera's physical optics. Far / Near

Synthetic Aperture Focus With a synthetic aperture, can vary focus by varying direction. Synthetic Far / Synthetic Near. Note: this only works exactly in the limit of small source apertures, but works OK for finite apertures.

Synthetic Aperture Focus: Aliasing Aliasing artifacts can be caused by focal plane mismatch. Synthetic Far / Synthetic Near. Point sampling along this plane causes aliasing artifacts. Blurring along this plane due to source focal length.

Variable Focal Plane Example Vary reconstructed camera’s focal length: just a matter of changing the directions before aperture assembly. (Illustration: Isaksen, MIT)
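"Changing the directions before aperture assembly" can be sketched as shift-and-add refocusing: shift each camera's image in proportion to that camera's offset, then average. This is a simplification of the paper's reparameterization; the offsets and the focal parameter `alpha` below are illustrative assumptions.

```python
import numpy as np

# Shift each view by alpha * (camera offset), then average. Varying
# alpha moves the synthetic focal plane: points whose parallax matches
# the shift realign and come into focus; everything else blurs.
def refocus(images, offsets, alpha):
    shifted = []
    for img, (dx, dy) in zip(images, offsets):
        sx, sy = int(round(alpha * dx)), int(round(alpha * dy))
        shifted.append(np.roll(np.roll(img, sy, axis=0), sx, axis=1))
    return np.mean(shifted, axis=0)

# Two views of one scene point, seen one pixel apart due to parallax:
a = np.zeros((5, 5)); a[2, 2] = 1.0
b = np.zeros((5, 5)); b[2, 3] = 1.0
img = refocus([a, b], [(0, 0), (-1, 0)], alpha=1.0)
print(img[2, 2])  # 1.0: the point realigns, i.e. comes into sharp focus
```

With alpha = 0 the same call would leave the two impulses misaligned, averaging each down to 0.5: the out-of-focus blur.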

Advantages of Synthetic Aperture: Can simulate a huge aperture Impractical with a conventional camera Can even tilt focal plane Impossible with conventional optics! (Illustration: Isaksen, MIT)

Conclusions Lightfields are a unique way to represent the world Supports arbitrary light transport Equivalent to holograms & lenticular displays Isaksen et al.’s synthetic aperture technique allows lightfields to be refocused Opportunity to extract more information from lightfields