Acquiring the Reflectance Field of a Human Face Paul Debevec, Tim Hawkins, Chris Tchou, Haarm-Pieter Duiker, Westley Sarokin, Mark Sagar


Acquiring the Reflectance Field of a Human Face
Paul Debevec, Tim Hawkins, Chris Tchou, Haarm-Pieter Duiker, Westley Sarokin, Mark Sagar
SIGGRAPH 2000
Presented by Michelle Brooks

Goals
– Create realistic renderings of human faces
– Extrapolate a complete reflectance field from the acquired data, allowing the face to be rendered from novel viewpoints
– Capture models of the face that can be rendered realistically under any illumination, from any angle, and with any sort of expression

Challenges
– Complex and individual shape of the face
– Subtle, spatially varying reflectance properties of skin (and the lack of a method for capturing these properties)
– Complex deformation of the face during movement

Traditional Method
– Texture mapping onto a geometric model of a face
– Problem: fails to look realistic under changes in lighting, viewpoint, and expression

Recent Methods
– Skin reflectance has been modeled using Monte Carlo simulation
– In the early 1990s, Hanrahan and Krueger developed a parameterized model for reflection from layered surfaces due to subsurface scattering, using human skin as a model

And now…
– Reflectometry
– Reflectance Field
– Non-Local Reflectance Field
Then…
– Re-illuminating Faces
– Changing the Viewpoint
– Rendering

Reflectometry
– The measurement of how materials reflect light: specifically, how they transform incident illumination into radiant illumination
– Measured as the four-dimensional bidirectional reflectance distribution function (BRDF) of the material
– BRDFs are commonly represented as parameterized functions known as reflectance models
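For reference, the standard definition of the 4D BRDF (a reminder using the usual spherical parameterization; the slide itself does not spell it out):

```latex
% Reflected radiance per unit incident irradiance, as a function of the
% incoming direction (theta_i, phi_i) and outgoing direction (theta_r, phi_r):
f_r(\theta_i, \phi_i; \theta_r, \phi_r) =
    \frac{\mathrm{d}L_r(\theta_r, \phi_r)}
         {L_i(\theta_i, \phi_i)\,\cos\theta_i\,\mathrm{d}\omega_i}
```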

Reflectance Field
– The light field, plenoptic function, and lumigraph all describe the presence of light within space:
P = P(x, y, z, θ, φ)

Reflectance Field
– When the viewer moves within unoccluded space, the light field can be described by a 4D function over a surface A:
P′ = P′(u, v, θ, φ)
– A light field parameterized in this form induces a 5D light field in the space outside of A:
P(x, y, z, θ, φ) = P′(u, v, θ, φ)

Reflectance Field
– The radiant light field from A under every possible incident field of illumination
– An 8-dimensional reflectance field function:
R = R(R_i; R_r) = R(u_i, v_i, θ_i, φ_i; u_r, v_r, θ_r, φ_r)
where R_i(u_i, v_i, θ_i, φ_i) is the incident light field arriving at A and R_r(u_r, v_r, θ_r, φ_r) is the radiant light field leaving A

Non-Local Reflectance Fields
– Assume the incident illumination field originates far away from A, so that:
R_i(u_i, v_i, θ_i, φ_i) = R_i(u′_i, v′_i, θ_i, φ_i) for all (u_i, v_i, u′_i, v′_i)
– The non-local reflectance field can then be represented as:
R′ = R′(θ_i, φ_i; u_r, v_r, θ_r, φ_r)

Non-Local Reflectance Fields

Re-Illuminating Faces
Goal: to capture models of faces that can be rendered realistically under any illumination, from any angle, and with any expression
– Acquire the data (light field)
– Transform each facial pixel location into a reflectance function
– Render the face from the original viewpoints under any novel form of illumination

Light Stage

– The light is spun around the vertical (φ) axis continuously at 25 rpm
– The light is lowered along the θ axis by 180/32 degrees per revolution of φ
– Cameras capture frames continuously at 30 frames/sec, which yields 64 divisions of φ and 32 divisions of θ (a 64 × 32 grid of lighting directions) in approximately 1 minute
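A quick arithmetic check of these capture parameters (a sketch that only re-derives the slide's own numbers):

```python
# Re-derive the light stage capture timing from the numbers above.
revolutions = 32                          # one revolution of phi per theta step
rpm = 25                                  # spin rate of the light
spin_seconds = revolutions / rpm * 60     # 32/25 min, about 77 s of spinning

directions = 64 * 32                      # phi x theta illumination directions
fps = 30                                  # camera frame rate
frame_seconds = directions / fps          # 2048/30, about 68 s of video

print(spin_seconds, frame_seconds)        # both roughly one minute, as stated
```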

Constructing Reflectance Functions
– For each pixel location (x, y) in each camera, that location on the face is observed under 64 × 32 directions of illumination (θ, φ)
– For each pixel, a slice of the reflectance field (a reflectance function) R_xy(θ, φ) is formed, corresponding to the ray through the pixel

Reflectance Functions Cont.
– If we let the pixel value of (x, y) in the image with illumination direction (θ, φ) be L_(θ,φ)(x, y), then:
R_xy(θ, φ) = L_(θ,φ)(x, y)
– Figure: a mosaic of the reflectance functions for a particular viewpoint
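A minimal sketch of this construction (not the authors' code; it assumes the captured frames have been stacked into a NumPy array indexed by lighting direction):

```python
import numpy as np

# Assumed layout: `frames` has shape (64, 32, H, W, 3), one RGB image
# per lighting direction (phi, theta), as captured by the light stage.
def reflectance_function(frames, x, y):
    """Return R_xy(theta, phi) for pixel (x, y): the 64 x 32 grid of RGB
    values the pixel takes on as the light covers the sphere of directions."""
    # Fixing the pixel and varying the lighting direction gives exactly
    # R_xy(theta, phi) = L_(theta,phi)(x, y).
    return frames[:, :, y, x, :]          # shape (64, 32, 3)
```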

Novel Form of Illumination
R_xy(θ, φ) represents how much light is reflected toward the camera by pixel (x, y) as a result of illumination from direction (θ, φ).

Novel Form of Illumination cont.
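Relighting then reduces to a weighted sum at each pixel: the reflectance function is multiplied by the light arriving from each direction in a novel illumination map and summed. A minimal sketch, assuming the `frames` stack from the previous snippet, an environment map `env` resampled to the same 64 × 32 grid, and a sin θ solid-angle weight (an assumption about the discretization, not stated on the slide):

```python
import numpy as np

def relight(frames, env):
    """frames: (64, 32, H, W, 3) light stage capture;
    env: (64, 32, 3) novel illumination sampled at the same directions.
    Returns an (H, W, 3) image of the face under the novel illumination."""
    n_theta = frames.shape[1]
    theta = (np.arange(n_theta) + 0.5) * np.pi / n_theta
    solid_angle = np.sin(theta)[None, :, None]     # weight per theta row
    # Per-pixel weighted sum over all 64 x 32 lighting directions.
    return np.einsum('ptc,pthwc->hwc', env * solid_angle, frames)
```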

– Gains efficiency
– No aliasing
Also…
– Clothing and background changes

Clothing and Background

Changing the Viewpoint
– We want to extrapolate complete reflectance fields from the reflectance field slices acquired earlier
– This allows us to render the face from arbitrary viewpoints and under arbitrary illumination

Changing the Viewpoint
– To render the face from a novel viewpoint, we must resynthesize the reflectance functions to appear as they would from the new viewpoint
– This is accomplished using a skin reflectance model, which guides the shifting and scaling of measured reflectance function values as the viewpoint changes

Changing the Viewpoint
– The resynthesis technique requires that the captured reflectance functions be decomposed into specular and diffuse (subsurface) components
– Each reflectance function is then resynthesized for the new viewpoint
– Lastly, the entire face is rendered using the resynthesized reflectance functions

Skin Reflectance
Two components:
– specular
– non-Lambertian diffuse (subsurface)

Skin Reflectance
Using RGB unit vectors to represent chromaticities, the diffuse chromaticity is: (written on board)

Separating Specular and Subsurface Components
– Each pixel's reflectance function is separated using a color space analysis technique
– A reflectance function RGB value R_xy(θ, φ) can be written as a linear combination of its diffuse color d, its specular color s, and an error component
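A hedged sketch of that decomposition (an illustration under the stated model, not the paper's exact procedure): with known chromaticities d and s, the two coefficients follow from a least-squares projection onto span{d, s}, and whatever remains is the error component:

```python
import numpy as np

# Model: R is approximately alpha_d * d + alpha_s * s + e, where d is the
# diffuse chromaticity, s the specular chromaticity (the incident light
# color), and e the residual lying outside span{d, s}.
def separate(R, d, s):
    """R: (..., 3) reflectance function samples; d, s: unit RGB vectors.
    Returns per-sample coefficients (alpha_d, alpha_s) and residual e."""
    M = np.stack([d, s], axis=1)          # 3x2 basis matrix [d | s]
    coeffs = R @ np.linalg.pinv(M).T      # least-squares coefficients (..., 2)
    e = R - coeffs @ M.T                  # error component of each sample
    return coeffs[..., 0], coeffs[..., 1], e
```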

Specular and Subsurface Components
– The analysis assumes the specular and diffuse colors are known
– Specular: the same color as the incident light
– The diffuse color changes from pixel to pixel, as well as within each reflectance function

Finally…
– The final separated diffuse component is used to compute the surface normal n
– It also yields the diffuse albedo ρ_d and the total specular energy ρ_s
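One way such a fit could look is a standard Lambertian least-squares estimate over the lighting directions (an assumed formulation for illustration; the paper's exact fitting step may differ, and shadowed directions would be excluded in practice):

```python
import numpy as np

def fit_normal_albedo(I, L):
    """I: (K,) separated diffuse intensities for one pixel;
    L: (K, 3) unit lighting directions for the same samples.
    Fits I_k = rho_d * dot(n, L_k) and returns (n, rho_d)."""
    g, *_ = np.linalg.lstsq(L, I, rcond=None)   # solves L @ g = I in lsq sense
    rho_d = np.linalg.norm(g)                   # diffuse albedo is |g|
    n = g / max(rho_d, 1e-12)                   # unit surface normal
    return n, rho_d
```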

Transforming Reflectance Functions
– To synthesize a reflectance function from a novel viewpoint, the diffuse and specular components are synthesized separately
– A shadow map is also created when synthesizing a new specular reflectance function, to prevent a specular lobe from appearing in shadowed directions

Rendering

Rendering

And Finally…
– Movie on the Light Stage
– Demonstration