Surface Light Fields for 3D Photography. Daniel Wood, Daniel Azuma, Ken Aldinger, Brian Curless, Tom Duchamp, David Salesin, Werner Stuetzle.


Surface Light Fields for 3D Photography. Daniel Wood, Daniel Azuma, Ken Aldinger, Brian Curless, Tom Duchamp, David Salesin, Werner Stuetzle

3D Photography. Goals: rendering and editing. Inputs: photographs and geometry. Requirements: estimation and compression.

View-dependent texture mapping Debevec et al. 1996, 1998 Pulli et al. 1997


Two-plane light field. Levoy and Hanrahan 1996; Gortler et al. 1996.
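A two-plane light field stores each ray by where it pierces two parallel reference planes. A minimal sketch of that (u, v, s, t) parameterization (function name and plane placement are illustrative, not from the slides):

```python
import numpy as np

def two_plane_coords(origin, direction, z_uv=0.0, z_st=1.0):
    """Parameterize a ray by its intersections with two parallel planes.

    (u, v) is the hit point on the plane z = z_uv and (s, t) the hit
    point on z = z_st: the classic light-field / lumigraph ray
    coordinates. Assumes the ray is not parallel to the planes.
    """
    o = np.asarray(origin, dtype=float)
    d = np.asarray(direction, dtype=float)
    u, v = (o + (z_uv - o[2]) / d[2] * d)[:2]
    s, t = (o + (z_st - o[2]) / d[2] * d)[:2]
    return u, v, s, t
```

Radiance along rays is then stored and interpolated in this 4D (u, v, s, t) grid.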

Surface light fields. Walter et al. 1997; Miller et al. 1998; Nishino et al. 1999.

Lumisphere-valued “texture” maps Lumisphere

Overview Data acquisition Estimation and compression Rendering Editing


Scan and reconstruct geometry. Reconstructed geometry; range scans (only a few shown...)

Take photographs. Camera positions; photographs.

Register photographs to geometry Geometry Photographs

Register photographs to geometry. User-selected correspondences (rays).

Parameterizing the geometry Base mesh Scanned geometry Map

Sample base mesh faces Base meshDetailed geometry

Assembling data lumispheres Data lumisphere
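A data lumisphere is the set of (viewing direction, color) pairs a surface point accumulates from every registered photograph that sees it. A sketch of that assembly step, with image projection and visibility abstracted into the `cameras` input (a hypothetical structure, not the paper's actual data layout):

```python
import numpy as np

def assemble_data_lumisphere(point, cameras):
    """Gather (viewing direction, color) samples for one surface point.

    `cameras` is a list of (camera_position, rgb) pairs: the color each
    registered photograph observed at this point. In the real pipeline
    the color comes from projecting the point into the image and
    checking visibility; here that is assumed already done.
    """
    p = np.asarray(point, dtype=float)
    samples = []
    for cam_pos, rgb in cameras:
        d = np.asarray(cam_pos, dtype=float) - p
        d /= np.linalg.norm(d)  # unit direction from point toward camera
        samples.append((d, np.asarray(rgb, dtype=float)))
    return samples
```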

Overview Data acquisition Estimation and compression Rendering Editing

Pointwise fairing Faired lumisphere Data lumisphere
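Pointwise fairing fits a smooth, complete lumisphere to each point's sparse, noisy directional samples. The sketch below substitutes simple Gaussian-style kernel smoothing on the sphere for the paper's penalized least-squares fairing; the kernel and `sigma` are illustrative:

```python
import numpy as np

def fair_lumisphere(samples, directions, sigma=0.3):
    """Produce a dense lumisphere from sparse (direction, rgb) samples.

    For each output direction, blend the samples with weights that peak
    when sample and output directions align -- a kernel-smoothing
    stand-in for the paper's fairing step.
    """
    out = np.zeros((len(directions), 3))
    for i, w_dir in enumerate(directions):
        total, acc = 0.0, np.zeros(3)
        for d, rgb in samples:
            w = np.exp((np.dot(w_dir, d) - 1.0) / sigma**2)
            total += w
            acc += w * rgb
        out[i] = acc / max(total, 1e-12)
    return out
```

The real fairing also extrapolates plausibly into unobserved directions; this sketch only interpolates.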

Pointwise fairing results Input photograph Pointwise faired (177 MB)

Pointwise fairing Many input data lumispheres Many faired lumispheres

Compression Small set of prototypes

Compression / Estimation. Small set of prototypes; many input data lumispheres.

Reflected reparameterization
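Reflected reparameterization indexes each lumisphere by the viewing direction mirrored about the surface normal, so that specular lobes from different surface points line up and compress better. The mirroring itself is the standard reflection formula:

```python
import numpy as np

def reflect(view_dir, normal):
    """Mirror a unit viewing direction about the (unit) surface normal.

    Indexing lumispheres by this reflected direction aligns specular
    highlights across surface points, which is what makes the
    reparameterized lumispheres easier to compress.
    """
    v = np.asarray(view_dir, dtype=float)
    n = np.asarray(normal, dtype=float)
    return 2.0 * np.dot(n, v) * n - v
```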

Before After

Median removal + reflected reparameterization. Median (“diffuse”) + median-removed (“specular”).

Median removal. Median values; specular; result.
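Median removal splits each lumisphere into a direction-independent per-channel median (the “diffuse” part) and a median-removed residual (the “specular” part); the two sum back to the original. A sketch over a lumisphere stored as an N-directions-by-3 array:

```python
import numpy as np

def split_median(lumisphere):
    """Split a lumisphere (N directions x 3 channels) into a median
    'diffuse' color and a 'specular' residual that varies with
    direction. median + residual reconstructs the input exactly."""
    median = np.median(lumisphere, axis=0)  # one RGB value per point
    residual = lumisphere - median          # direction-dependent part
    return median, residual
```

Only the residual then needs the expensive directional encoding; the median compresses as an ordinary texture map.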

Function quantization Codebook of lumispheres Input data lumisphere

Lloyd iteration Input data lumispheres

Lloyd iteration Codeword

Lloyd iteration Perturb codewords to create larger codebook

Lloyd iteration Form clusters around each codeword

Lloyd iteration Optimize codewords based on clusters

Lloyd iteration Create new clusters

Function quantization results Input photograph Function quantized (1010 codewords, 2.6 MB)

Principal function analysis Subspace of lumispheres Input data lumisphere Prototype lumisphere

Principal function analysis Approximating subspace Prototype lumisphere

Principal function analysis
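Principal function analysis approximates the whole collection of lumispheres by a prototype plus a low-dimensional linear subspace. The sketch below uses plain PCA via SVD as a stand-in; the paper's PFA additionally accounts for each data lumisphere being only partially observed:

```python
import numpy as np

def principal_function_analysis(lumispheres, dim):
    """PCA-style approximation of flattened lumispheres (rows of X) by
    a prototype (the mean) plus a `dim`-dimensional subspace."""
    X = np.asarray(lumispheres, dtype=float)
    prototype = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - prototype, full_matrices=False)
    basis = Vt[:dim]                          # principal directions
    weights = (X - prototype) @ basis.T       # per-lumisphere coords
    approx = prototype + weights @ basis      # low-rank reconstruction
    return prototype, basis, weights, approx
```

Storage then reduces to the prototype, a small basis, and per-point weight maps, which is where the roughly 2.5 MB figures on the results slides come from.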

Principal function analysis results Input photograph PFA compressed (2.5 MB)

Compression comparison Pointwise fairing (177 MB) Function quantization (2.6 MB) Principal function analysis (2.5 MB)

Comparison with 2-plane light field (uncompressed) Pointwise-faired surface light field (177 MB) Uncompressed lumigraph / light field (177 MB)

Comparison with 2-plane light field (compressed) Compressed (PFA) surface light field (2.5 MB) Vector-quantized lumigraph / light field (8.1 MB)

Overview Data acquisition Estimation and compression Rendering Editing

View-dependent level-of-detail

Render texture domain and coordinates in false color

Evaluate surface light field

Interactive renderer screen capture

Overview Data acquisition Estimation and compression Rendering Editing

Lumisphere filtering. Original surface light field; glossier coat.

Lumisphere filtering

Rotating the environment. Original surface light field; rotated environment.

Deformation. Original; deformed.

Deformation

Summary
1. Estimation and compression: function quantization; principal function analysis.
2. Rendering: from the compressed representation; with view-dependent level-of-detail.
3. Editing: lumisphere filtering; geometric deformations and transformations.

Future work: better geometry-to-image registration; more complex surfaces (mirrored, refractive, fuzzy…) under more complex illumination; deriving geometry from images; combining FQ and PFA.

Acknowledgements. Marc Levoy and Pat Hanrahan (thanks for the use of the Stanford spherical gantry); Michael Cohen and Richard Szeliski; National Science Foundation.

The end

Geometry (fish)
Reconstruction: 129,000 faces
Memory for reconstruction: 2.5 MB
Base mesh: 199 faces
Re-mesh (4x subdivided): 51,000 faces
Memory for re-mesh: 1 MB
Memory with view-dependence: 7.5 MB

Light field data and representation (fish)
Time to acquire: 1 hour
Input images: 661
Raw data size: ~500 MB
Lumisphere representation: three-times-subdivided octahedron
Lumisphere size: 258 directions
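The 258-direction figure is consistent with an octahedron subdivided three times, since each 1-to-4 triangle subdivision adds one new vertex per edge. A quick check:

```python
def subdivided_octahedron_counts(levels):
    """Vertex/edge/face counts of an octahedron after `levels` rounds
    of 1-to-4 triangle subdivision: each round adds one vertex per
    edge, doubles-plus-splits the edges, and quadruples the faces."""
    v, e, f = 6, 12, 8
    for _ in range(levels):
        v, e, f = v + e, 2 * e + 3 * f, 4 * f
    return v, e, f
```

Three levels give 258 vertices, i.e. 258 stored directions per lumisphere.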

Lumigraph (fish)
Lumigraph images: 400x400
Lumigraph viewpoints per slab: 8x8
Number of slabs: 6
Lumigraph size (w/o geometry): 184 MB
Lumigraph VQ dimension: 2x2x2x2x3
Lumigraph VQ codewords:
Compressed size (w/o geometry): 8.1 MB

Compression (fish)
Pointwise faired: memory = 177 MB, RMS error = 9
FQ (2000 codewords): memory = 3.4 MB, RMS error = 23
PFA (dimension 3): memory = 2.5 MB, RMS error = 24
PFA (dimension 5): memory = 2.9 MB, RMS error = ?

Pre-processing times (fish)
Compute times on a ~450 MHz P-III:
Range scanning: 3 hours
Geometry registration: 2 hours
Image-to-geometry alignment: 6 hours
MAPS (sub-optimal): 5 hours
Assembling data lumispheres: 24 hours
Pointwise fairing: 30 minutes
FQ codebook construction (10%): 30 hours
FQ encoding: 4 hours
PFA “codebook” construction (0.1%): 20 hours
PFA encoding: 2 hours

Breakdown and rendering (fish)
For PFA dimension 3:
Direction mesh: 11 KB
Normal maps: 680 KB
Median maps: 680 KB
Index maps: 455 KB
Weight maps: 680 KB
Codebook: 3 KB
Geometry w/o view dependence: <1 MB
Geometry w/ view dependence: 7.5 MB
Rendering platform: 550 MHz P-III, Linux, Mesa
Rendering performance: 6-7 fps (typical)

Construct codebook using Lloyd iteration
Iterate until convergence:
1. Assign all data lumispheres to the closest codeword, forming clusters.
2. Compute a new codeword for each cluster by “cluster-wise” fairing.
Then split all codewords and start over.
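The two steps above are a Lloyd (k-means-style) iteration over flattened lumispheres. In this sketch a plain cluster mean stands in for the paper's “cluster-wise” fairing, and the codeword-splitting outer loop is omitted:

```python
import numpy as np

def lloyd_quantize(data, codebook, iters=20):
    """Refine a codebook by Lloyd iteration over flattened lumispheres
    (rows of `data`). Returns the updated codebook and the final
    cluster assignment of each data row."""
    data = np.asarray(data, dtype=float)
    codebook = np.asarray(codebook, dtype=float).copy()
    for _ in range(iters):
        # 1. assign every data lumisphere to its nearest codeword
        d2 = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        assign = d2.argmin(axis=1)
        # 2. recompute each codeword from its cluster (mean, here)
        for k in range(len(codebook)):
            members = data[assign == k]
            if len(members):
                codebook[k] = members.mean(axis=0)
    return codebook, assign
```

After convergence, each codeword would be perturbed into two and the loop restarted, growing the codebook toward its target size.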

Data extrapolation. Photograph; surface light field.

Comparison with 2-plane light field (uncompressed) Pointwise-faired surface light field (177 MB) Uncompressed 2-plane light field (177 MB)

Comparison with 2-plane light field (compressed) Principal function analysis surface light field (2.5 MB) Vector-quantized 2-plane light field (8.1 MB)

Details Input photograph Pointwise fairing (177 MB) Function quantization (3.4 MB) Principal function analysis (2.5 MB)