
Acquiring the Reflectance Field of a Human Face. Paul Debevec, Tim Hawkins, Chris Tchou, Haarm-Pieter Duiker, Westley Sarokin, Mark Sagar.


1 Acquiring the Reflectance Field of a Human Face. Paul Debevec, Tim Hawkins, Chris Tchou, Haarm-Pieter Duiker, Westley Sarokin, Mark Sagar. SIGGRAPH 2000. Michelle Brooks.

2 Goals: To create realistic renderings of human faces. To extrapolate a complete reflectance field from the acquired data, which allows the rendering of the face from novel viewpoints. To capture models of the face that can be rendered realistically under any illumination, from any angle, and with any sort of expression.

3 Challenges: The complex and individual shape of the face. The subtle and spatially varying reflectance properties of the skin (and a lack of methods for capturing these properties). The complex deformation of the face during movement.

4 Traditional Method: Texture mapping onto a geometric model of a face. Problem: this fails to look realistic under changes of lighting, viewpoint, and expression.

5 Recent Methods: Skin reflectance has been modeled using Monte Carlo simulation. In the early 1990s, Hanrahan and Krueger developed a parameterized model for reflection from layered surfaces due to subsurface scattering, using human skin as a model.

6 And now… Reflectometry. Reflectance Field. Non-Local Reflectance Field. Then… Re-illuminating Faces. Changing the Viewpoint. Rendering.

7 Reflectometry: The measurement of how materials reflect light, specifically how materials transform incident illumination into radiant illumination. The four-dimensional bi-directional reflectance distribution function (BRDF) of the material is measured. BRDFs are commonly represented as parameterized functions known as reflectance models.
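
To make the idea of a parameterized reflectance model concrete, here is a minimal sketch of a 4D BRDF f(ω_i, ω_o) as a Lambertian term plus a Phong-style specular lobe. This is an illustrative stand-in, not the model used in the paper; the parameter names rho_d, rho_s, and shininess are assumptions.

```python
import numpy as np

def brdf_phong(w_i, w_o, n, rho_d=0.5, rho_s=0.3, shininess=20.0):
    """Evaluate a simple parameterized reflectance model (Lambertian +
    Phong lobe) as an example of a 4D BRDF f(w_i, w_o).
    w_i, w_o: unit vectors toward the light and the viewer.
    n: unit surface normal. All parameters here are illustrative."""
    diffuse = rho_d / np.pi
    # Mirror reflection of the incident direction about the normal.
    r = 2.0 * np.dot(n, w_i) * n - w_i
    spec = rho_s * max(np.dot(r, w_o), 0.0) ** shininess
    return diffuse + spec
```

Fixing rho_d, rho_s, and shininess reduces the full 4D measurement problem to estimating three numbers, which is exactly the appeal of reflectance models.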

8 Reflectance Field: The light field, plenoptic function, and lumigraph all describe the presence of light within space. P = P(x, y, z, θ, φ)

9 Reflectance Field: When the user is moving within unoccluded space, the light field can be described by a 4D function P' = P'(u, v, θ, φ) parameterized over a surface A. A light field parameterized in this form induces a 5D light field in the space outside of A: P(x, y, z, θ, φ) = P'(u, v, θ, φ).

10 Reflectance Field: The radiant light field from A under every possible incident field of illumination is the 8-dimensional reflectance field function: R = R(R_i; R_r) = R(u_i, v_i, θ_i, φ_i; u_r, v_r, θ_r, φ_r), where R_i(u_i, v_i, θ_i, φ_i) is the incident light field arriving at A and R_r(u_r, v_r, θ_r, φ_r) is the radiant light field leaving A.

11 Non-Local Reflectance Fields: The incident illumination field originates far away from A, so that R_i(u_i, v_i, θ_i, φ_i) = R_i(u'_i, v'_i, θ_i, φ_i) for all (u_i, v_i, u'_i, v'_i). The non-local reflectance field can then be represented as R' = R'(θ_i, φ_i; u_r, v_r, θ_r, φ_r).

12 Non-Local Reflectance Fields

13 Re-Illuminating Faces. Goal: to capture models of faces that can be rendered realistically under any illumination, from any angle, and with any expression. Steps: acquire the data (light field); transform each facial pixel location into a reflectance function; render the face from the original viewpoints under any novel form of illumination.

14 Light Stage

15 The lights are spun around the θ axis continuously at 25 rpm. The lights are lowered along the φ axis by 180/32 degrees per revolution of θ. Cameras capture frames continuously at 30 frames/sec, which yields 64 divisions of θ and 32 divisions of φ (a 64 × 32 image of lighting directions) in approximately 1 minute.
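
The spiral capture above implies a simple mapping from a frame's index in the video stream to the lighting-direction bin it samples. The sketch below assumes frames arrive in strict spiral order (64 azimuth steps per revolution, one φ step per revolution); the indexing convention is illustrative, not the authors' exact code.

```python
import numpy as np

N_THETA, N_PHI = 64, 32  # divisions of theta and phi from the slide

def frame_to_direction(frame_index):
    """Map a frame index from the continuous capture to the lighting
    direction it samples, assuming spiral order: theta advances every
    frame, phi advances once per full revolution of theta."""
    i_theta = frame_index % N_THETA
    i_phi = frame_index // N_THETA
    theta = 2.0 * np.pi * i_theta / N_THETA   # azimuth in [0, 2*pi)
    phi = np.pi * (i_phi + 0.5) / N_PHI       # inclination in (0, pi)
    return theta, phi
```

With 64 × 32 = 2048 directions at 30 frames/sec, the full capture takes roughly 68 seconds, consistent with the slide's "approximately 1 minute".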

16 Constructing Reflectance Functions: For each pixel location (x, y) in each camera, that location on the face is illuminated from 64 × 32 directions of θ and φ. For each pixel, a slice of the reflectance field (the reflectance function) R_xy(θ, φ), corresponding to the ray through the pixel, is formed.

17 Reflectance Functions Cont. If we let the pixel value of (x, y) in the image with illumination direction (θ, φ) be represented as L_(θ,φ)(x, y), then R_xy(θ, φ) = L_(θ,φ)(x, y). Figure: a mosaic of the reflectance functions for a particular viewpoint.
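
The definition R_xy(θ, φ) = L_(θ,φ)(x, y) is just a reindexing of the captured image stack: gathering one pixel across all 2048 frames yields that pixel's 32 × 64 reflectance function. A minimal sketch, assuming the frame stack is in the spiral capture order described earlier:

```python
import numpy as np

def reflectance_functions(frames):
    """Rearrange a stack of captured images L_(theta,phi)(x, y) into
    per-pixel reflectance functions R_xy(theta, phi).
    frames: array of shape (64*32, H, W), in spiral capture order
            (theta varying fastest, then phi).
    Returns an array of shape (H, W, 32, 64): for each pixel (x, y),
    a 32 x 64 image over (phi, theta)."""
    n_theta, n_phi = 64, 32
    _, h, w = frames.shape
    r = frames.reshape(n_phi, n_theta, h, w)  # split spiral into (phi, theta)
    return np.transpose(r, (2, 3, 0, 1))      # -> (H, W, phi, theta)
```

No pixel values change; the operation only regroups the data so each pixel's slice of the reflectance field is contiguous.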

18 Novel Form of Illumination: R_xy(θ, φ) represents how much light is reflected toward the camera by pixel (x, y) as a result of illumination from direction (θ, φ).
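
Because illumination adds linearly, relighting a pixel under a novel illumination map amounts to a weighted sum of its reflectance function against the new incident light. The sketch below assumes a standard sin(φ) solid-angle weight per direction bin; the exact weighting in the authors' implementation may differ.

```python
import numpy as np

def relight(R, env):
    """Render an image under a novel illumination map.
    R:   reflectance functions, shape (H, W, 32, 64) over (phi, theta).
    env: novel incident illumination, shape (32, 64), one intensity
         per lighting direction.
    Each output pixel sums R_xy(theta, phi) times the incident light,
    weighted by the solid angle sin(phi) * dtheta * dphi that each
    (theta, phi) bin covers (assumed weighting)."""
    n_phi, n_theta = env.shape
    phi = np.pi * (np.arange(n_phi) + 0.5) / n_phi
    d_a = np.sin(phi) * (2 * np.pi / n_theta) * (np.pi / n_phi)
    weighted = env * d_a[:, None]             # (32, 64)
    return np.tensordot(R, weighted, axes=([2, 3], [0, 1]))
```

A uniform environment map then integrates the solid-angle weights over the whole sphere, which is a quick sanity check on the weighting.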

19 Novel Form of Illumination cont.

20 Gains efficiency. No aliasing. Also… clothing and background changes.

21 Clothing and Background

22 Changing the Viewpoint: We want to extrapolate complete reflectance fields from the reflectance field slices acquired earlier. This allows us to render the face from arbitrary viewpoints and also under arbitrary illumination.

23 Changing the Viewpoint: In order to render the face from a novel viewpoint, we must resynthesize the reflectance functions to appear as they would from the new viewpoint. This is accomplished using a skin reflectance model, which guides the shifting and scaling of measured reflectance function values as the viewpoint changes.

24 Changing the Viewpoint: The resynthesis technique requires that the captured reflectance functions be decomposed into specular and diffuse (subsurface) components. Then each reflectance function is resynthesized for the new viewpoint. Lastly, the entire face is rendered using the resynthesized reflectance functions.

25 Skin Reflectance: Two components: specular, and non-Lambertian diffuse (subsurface).

26 Skin Reflectance: Using RGB unit vectors to represent chromaticities, the diffuse chromaticity is: (written on board)

27 Separating Specular and Subsurface Components: Each pixel's reflectance function is separated using a color space analysis technique. For a reflectance-function RGB value R_xy(θ, φ), R can be written as a linear combination of its diffuse color d, its specular color s, and an error component.
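
The decomposition R = α_d·d + α_s·s + error can be written as a small linear solve once the diffuse chromaticity d and specular (light-source) chromaticity s are fixed. A minimal least-squares sketch of this color space analysis, with all names illustrative:

```python
import numpy as np

def separate(rgb, d, s):
    """Decompose one reflectance-function RGB sample into diffuse and
    specular components by color space analysis: solve
        rgb = alpha_d * d + alpha_s * s + error
    where d and s are unit chromaticity vectors (diffuse color and
    light-source color). Illustrative least-squares version of the
    slide's decomposition, not the authors' exact procedure."""
    A = np.column_stack([d, s])                       # 3x2 chromaticity basis
    coeffs, _, _, _ = np.linalg.lstsq(A, rgb, rcond=None)
    alpha_d, alpha_s = coeffs
    err = rgb - A @ coeffs
    return alpha_d * d, alpha_s * s, err
```

The residual `err` is the part of the color that neither chromaticity can explain, matching the error component in the slide's linear combination.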

28 Specular and Subsurface Components: The analysis assumes the specular and diffuse colors are known. The specular color is the same as that of the incident light. The diffuse color changes from pixel to pixel, as well as within each reflectance function.

29 Finally… The final separated diffuse component is used to compute the surface normal n, as well as the diffuse albedo ρ_d and the total specular energy ρ_s.
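
One standard way to recover a normal and albedo from a separated diffuse component is a photometric-stereo-style fit: over the lit directions, assume I_k ≈ ρ_d·(n · ω_k) and solve for the scaled normal in least squares. This is a sketch of the general technique; the authors' exact fit may differ.

```python
import numpy as np

def fit_normal_and_albedo(intensities, directions):
    """Estimate the surface normal n and diffuse albedo rho_d from a
    pixel's separated diffuse component, assuming a Lambertian model
    I_k = rho_d * dot(n, w_k) over the lit directions (a standard
    photometric-stereo-style solve, used here for illustration).
    intensities: (K,) diffuse values; directions: (K, 3) unit vectors."""
    lit = intensities > 0                     # ignore shadowed directions
    g, _, _, _ = np.linalg.lstsq(directions[lit], intensities[lit],
                                 rcond=None)
    rho_d = np.linalg.norm(g)                 # |g| is the albedo
    return g / rho_d, rho_d                   # g/|g| is the unit normal
```

At least three non-coplanar lit directions are needed for the solve to be well posed; the light stage's 2048 directions give a heavily overdetermined system.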

30 Transforming Reflectance Functions: To synthesize a reflectance function from a novel viewpoint, the diffuse and specular components are synthesized separately. A shadow map is also created when synthesizing a new specular reflectance function, to prevent a specular lobe from appearing in shadowed directions.

31 Rendering

32 Rendering

33 And Finally… Movie of the Light Stage. Demonstration.

