1
Polarization-based Inverse Rendering from Single View
Daisuke Miyazaki, Robby T. Tan, Kenji Hara, Katsushi Ikeuchi. Good afternoon. My name is Daisuke Miyazaki from The University of Tokyo, Japan. I would like to give a talk titled "Polarization-based Inverse Rendering from Single View".
2
Modeling cultural assets
Geometrical / Photometrical / Environmental — an integrated framework for obtaining 3 types of information. For creating a virtual-reality model of cultural assets, we generally have to obtain three types of information: geometrical information, photometrical information, and environmental information. Today, I will talk about a method for modeling these three types of information in one integrated framework. This method requires only inexpensive equipment, so it can be applied to various types of applications, including the preservation of cultural assets. The method estimates the shape of the object, surface reflection parameters such as surface roughness, and the illumination distribution.
3
Related work (Geometry / Photometry / Environment):
- Tominaga et al. 2000: geometry and photometry
- Zheng et al. 1991; Nayar et al. 1996: geometry, photometry, and environment (except specular reflection parameters)
- Sato et al. 1999; Ramamoorthi et al. 2001; Nishino et al. 2001; Hara et al. 2002: photometry and environment
- Proposed method: all three
Many systems have been proposed to obtain these three types of information with fewer constraints. Tominaga proposed a method to estimate geometrical and photometrical information by a shape-from-shading approach. Zheng and Nayar estimated all three types of information except for the specular reflection parameters. Sato, Ramamoorthi, Nishino, and Hara estimated photometrical and environmental information. Today, I will talk about a method to estimate geometrical, photometrical, and environmental information in one integrated framework.
4
Outline 1. Reflection components separation
2. Shape from polarization using diffuse light
3. Light source estimation from intensity peak
4. Reflection parameters estimation by least-squares minimization: minimize over (Ks, σ) the error Σ (rendered image − real image)²
(click) First, we separate the input image into two component images: the diffuse reflection component and the specular reflection component. (click) Next, by analyzing the polarization state of the diffusely reflected light, we estimate the shape of the object. (click) Then, the direction of the light sources is estimated from the position of the intensity peak in the specular component image. (click) Finally, we compute the surface reflection parameters by solving the least-squares problem.
5
1. Reflection components separation
6
Dichromatic reflection model
Specularly reflected light Diffusely reflected light Incident light Surface normal Air Object (click) The light which reflects directly at the object surface is called specularly reflected light, or surface reflection light. (click) Some of the light penetrates into the object, randomly reflects at pigments inside the object, and finally transmits out into the air. This kind of reflection is called diffuse reflection, or body reflection. The characteristics of these two kinds of reflection are different. Thus, we have to separate the input image into the image of specular reflection component and the image of diffuse reflection component.
7
Reflection components separation
[Tan2002] Input → Diffuse + Specular. The dichromatic reflection model indicates that we can separate the input image into two component images: the diffuse reflection image and the specular reflection image. We use Tan's method for this purpose; the next speaker will explain the details of this method. This slide shows the separation result for a green hemisphere. The input image is separated into a diffuse component image and a specular component image. The diffuse component image is used to estimate the shape of the object. The specular component image is used to estimate the illumination distribution and the reflection parameters.
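Tan's separation algorithm itself is left to the next speaker, but the core idea of the dichromatic model — the specular lobe adds an illumination-colored offset on top of the body color — can be illustrated with a much cruder stand-in. The sketch below is not Tan's method; it assumes white illumination and a diffuse color with one near-zero channel, so the per-pixel channel minimum approximates the specular offset.

```python
import numpy as np

def separate_crude(rgb):
    """Crude per-pixel separation for a dielectric under white
    illumination: the specular lobe adds equally to all channels,
    so the per-pixel channel minimum approximates the specular
    offset when the diffuse color has a near-zero channel.
    rgb is an (H, W, 3) float array."""
    specular = rgb.min(axis=2, keepdims=True)   # (H, W, 1) offset
    diffuse = rgb - specular                    # remove the offset
    return diffuse, np.repeat(specular, 3, axis=2)

# toy pixel: saturated green diffuse plus a white specular offset
img = np.array([[[0.1, 0.8, 0.1]]])
d, s = separate_crude(img)
```

Real separation methods such as Tan's use chromaticity analysis and work under far weaker assumptions; this sketch only conveys why the two components are separable at all.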
8
2. Shape from polarization
9
Related work (Object / Reflection / Views):
- Koshikawa 1979: Opaque, Specular, 1 view
- Wolff 1990: Opaque, Diffuse, 2 views
- Rahmann et al. 2001: Opaque, Diffuse, 2~5 views
- Miyazaki et al. 2002: Transparent, Specular, 2 views
- Proposed method: Opaque, Diffuse, 1 view
Polarization is known to be useful for estimating the shape of an object. Koshikawa and Wolff proposed methods to measure the shape of an object, but those two methods recover the shape only in restricted cases. Rahmann proposed a method to obtain the shape of an opaque object by analyzing the polarization of the diffusely reflected light. We previously proposed a method to obtain the shape of a transparent object by observing the specularly reflected light. Those methods need to observe an object from two or more directions. The method in this presentation observes the diffusely reflected light from only one direction and obtains the surface shape of an opaque object.
10
Polarization Specularly reflected light Diffusely reflected light
Incident light / Air / Object. According to the dichromatic reflection model, there are two kinds of light caused by an object surface: (click) specularly reflected light and (click) diffusely reflected light. Suppose we illuminate the object with unpolarized light. (click) Then the specularly reflected light will be partially polarized; note that the transmitted light will also be partially polarized. (click) Some of the incident light penetrates into the object, reflects randomly at the pigments inside the object, and becomes depolarized. Such unpolarized light is again partially polarized when it emerges into the air. The polarization state of this diffusely reflected light depends on the surface normal, so we analyze it to obtain the surface normal of the object.
11
Surface normal: camera, polarizer, zenith angle θ, azimuth angle φ.
We put the camera above the object and set a linear polarizer in front of the camera. The surface normal can be represented in a polar coordinate system: zenith angle θ and azimuth angle φ. Setting the north pole of the Gaussian sphere to the viewing direction, the zenith angle is the angle between the surface normal and the viewing direction, and the azimuth angle is the orientation of the plane containing the surface normal and the viewing direction. I will explain how to calculate the azimuth angle φ in the next two slides, and then how to calculate the zenith angle θ in the two slides after that.
12
Azimuth angle φ and intensity difference
[Plot: intensity (0-255) vs. rotation angle of the polarizer (0-360°), oscillating between Imin and Imax.] The intensity observed by the camera changes sinusoidally as the polarizer rotates. (click) We denote the minimum intensity as Imin and the maximum intensity as Imax. (click) The angle at which Imax is observed is the azimuth angle φ. Since a linear polarizer has a cycle of 180 degrees, two candidates for the azimuth angle appear: one correct and one wrong. We have to resolve this ambiguity to obtain the true azimuth angle.
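The sinusoid described above has the form I(ν) = (Imax + Imin)/2 + (Imax − Imin)/2 · cos 2(ν − φ), so Imax, Imin, and φ can be recovered from images taken at several polarizer angles. The sketch below fits the equivalent linear form a + b·cos 2ν + c·sin 2ν by least squares; the sampling angles are illustrative assumptions, not the paper's acquisition protocol.

```python
import numpy as np

def fit_polarizer_sinusoid(angles_deg, intensities):
    """Fit I(v) = a + b*cos(2v) + c*sin(2v) by linear least squares.
    Returns (Imax, Imin, phi_deg); phi is where Imax is observed,
    defined modulo 180 degrees (hence the two azimuth candidates)."""
    v = np.radians(angles_deg)
    A = np.stack([np.ones_like(v), np.cos(2 * v), np.sin(2 * v)], axis=1)
    a, b, c = np.linalg.lstsq(A, np.asarray(intensities), rcond=None)[0]
    amp = np.hypot(b, c)                       # (Imax - Imin) / 2
    phi = np.degrees(np.arctan2(c, b) / 2) % 180
    return a + amp, a - amp, phi

# synthetic data: Imax = 200, Imin = 100, phi = 30 degrees
angs = np.arange(0, 180, 15)
I = 150.0 + 50.0 * np.cos(2 * np.radians(angs - 30.0))
Imax, Imin, phi = fit_polarizer_sinusoid(angs, I)
```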
13
Determination of azimuth angle
Propagate φ from the occluding boundary to the inner part of the object region (assumption: smooth surface). The disambiguation process for the azimuth angle φ is shown in this slide. We assume the object has a closed, smooth surface, so we know the surface normal on the occluding boundary of the object. By propagating the determination of the azimuth angle from the occluding boundary to the inner part of the object region, we can determine the azimuth angle at every point on the object surface. This process cannot be applied to a surface with a perfectly concave part, that is, a region that is concave in all directions. [Ikeuchi & Horn 1981] Cannot be applied to "dimples" (perfectly concave regions).
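The propagation idea can be sketched as a breadth-first sweep that, for each pixel, picks whichever of the two azimuth candidates is closer to an already-resolved neighbor. The 4-connectivity and greedy rule below are assumptions for illustration; the slides do not specify the exact propagation scheme.

```python
import numpy as np
from collections import deque

def propagate_azimuth(candidate, known, mask):
    """Resolve the 180-degree azimuth ambiguity by breadth-first
    propagation from pixels already fixed at the occluding boundary
    (NaN in `known` means unresolved). `candidate` holds one of the
    two candidates in degrees modulo 180; `mask` marks object pixels.
    A smooth surface is assumed, so the candidate closer to an
    already-resolved neighbour is chosen."""
    phi = known.copy()
    q = deque(zip(*np.nonzero(~np.isnan(known))))
    while q:
        y, x = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < phi.shape[0] and 0 <= nx < phi.shape[1]
                    and mask[ny, nx] and np.isnan(phi[ny, nx])):
                cands = (candidate[ny, nx], candidate[ny, nx] + 180)
                diffs = [abs((c - phi[y, x] + 180) % 360 - 180) for c in cands]
                phi[ny, nx] = cands[int(np.argmin(diffs))] % 360
                q.append((ny, nx))
    return phi

# toy 1x3 strip: leftmost pixel fixed from the occluding boundary
known = np.array([[0.0, np.nan, np.nan]])
cand = np.array([[0.0, 170.0, 10.0]])    # candidates modulo 180
phi = propagate_azimuth(cand, known, np.ones((1, 3), bool))
```

As the slide warns, a rule like this fails inside a perfectly concave "dimple", where no smoothly-propagated neighbor constrains the choice correctly.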
14
Zenith angle θ and DOP ρ. [Plot: degree of polarization ρ (0-1) vs. zenith angle θ (0-90°), shown as the red curve.]
Next, we calculate the zenith angle. The DOP, degree of polarization, represents how strongly the light has been polarized: the DOP is 1 for perfectly polarized light and 0 for unpolarized light. (click) The DOP is calculated from the intensities observed by the polarization camera. (click) The relation between the zenith angle θ and the DOP ρ is given by this equation and shown by the red curve. (click) We calculate the zenith angle from the DOP with this equation.
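The slide's equation is not reproduced in the transcript. What follows is the standard relation between the degree of polarization of diffusely reflected light and the zenith angle for a dielectric of refractive index n (n = 1.5 below is an assumed, typical value), together with the numerical inversion the talk describes; whether the authors used exactly this form is an assumption.

```python
import numpy as np

def diffuse_dop(theta, n=1.5):
    """Degree of polarization of diffusely reflected light as a
    function of zenith angle theta (radians), for refractive index n.
    The curve rises monotonically from 0 at theta = 0."""
    s2 = np.sin(theta) ** 2
    num = (n - 1.0 / n) ** 2 * s2
    den = (2 + 2 * n ** 2 - (n + 1.0 / n) ** 2 * s2
           + 4 * np.cos(theta) * np.sqrt(n ** 2 - s2))
    return num / den

def zenith_from_dop(rho, n=1.5):
    """Invert the monotonic rho(theta) curve numerically on [0, 90 deg]."""
    thetas = np.linspace(0.0, np.pi / 2, 10001)
    return np.interp(rho, diffuse_dop(thetas, n), thetas)

# round trip: zenith angle -> DOP -> zenith angle
rho = diffuse_dop(np.radians(40.0))
theta = zenith_from_dop(rho)
```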
15
Modification. Definition of DOP: ρ = (Imax − Imin)/(Imax + Imin). Modified DOP: ρ = (Imax − Imin)/(Imax + Imin − u), where u is a modification factor that raises the DOP. Assumption: the object is a closed smooth surface and u is constant over it. (click) As I said in the previous slide, this equation represents the relation between the zenith angle and the DOP. However, that relation holds for a smooth surface; for a rough surface the DOP is lowered. So we modify the input data to raise the DOP. (click) The definition of the DOP is expressed by the first equation, and we modify it as shown. (click) We subtract a certain value u from the denominator of the DOP. From the surface normals on the occluding boundary, we can estimate the value of u. We also assume that this estimated u applies not only on the occluding boundary but at every point on the object surface.
16
Surface normal (θ, φ)
With the method explained in the previous slides, we obtain the surface normal of the object. The right picture is the needle diagram of the estimated surface normals of the hemisphere object.
17
Relaxation method [Ikeuchi 1984]
Minimize ∬ ((H_x − p)² + (H_y − q)²) dx dy, where H is the height and (p, q) is the gradient computed from the estimated surface normal; iteratively update H. The surface normals are integrated into a height profile by the relaxation method. (click) This method calculates the height by minimizing this functional. (click) Here, the height is denoted H and the gradients p and q; the gradients are calculated from the estimated surface normals. (click) The solution of this minimization problem leads to an iterative update equation, and by applying it repeatedly we obtain the height of the object surface.
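The relaxation step above can be sketched as Gauss-Seidel sweeps on the discrete least-squares functional: each pixel is set to the value that best agrees with its neighbors and the measured gradients. The forward-difference discretization and naive boundary handling below are assumptions for illustration, not Ikeuchi's exact update equation.

```python
import numpy as np

def integrate_normals(p, q, iters=3000):
    """Recover a height map H from gradients p = H[i,j+1]-H[i,j] and
    q = H[i+1,j]-H[i,j] by Gauss-Seidel relaxation on the discrete
    least-squares functional. H is determined only up to an additive
    constant, so the mean is subtracted at the end."""
    h, w = p.shape
    H = np.zeros((h, w))
    for _ in range(iters):
        for i in range(h):
            for j in range(w):
                num, cnt = 0.0, 0
                if j + 1 < w:    # forward term (H[i,j+1]-H[i,j]-p[i,j])^2
                    num += H[i, j + 1] - p[i, j]; cnt += 1
                if j > 0:        # backward term uses p at the left pixel
                    num += H[i, j - 1] + p[i, j - 1]; cnt += 1
                if i + 1 < h:
                    num += H[i + 1, j] - q[i, j]; cnt += 1
                if i > 0:
                    num += H[i - 1, j] + q[i - 1, j]; cnt += 1
                H[i, j] = num / cnt
    return H - H.mean()

# check: gradients of a smooth quadratic surface reintegrate to it
y, x = np.mgrid[0:5, 0:5].astype(float)
H_true = 0.1 * x ** 2 + 0.05 * y ** 2 + 0.2 * x * y
p = np.zeros((5, 5)); q = np.zeros((5, 5))
p[:, :-1] = np.diff(H_true, axis=1)
q[:-1, :] = np.diff(H_true, axis=0)
H = integrate_normals(p, q)
```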
18
3. Illumination estimation
Next, I will talk about the illumination estimation.
19
Illumination sphere: a light source is represented in polar coordinates (θ, φ), e.g., L1 = (θ1, φ1), L2 = (θ2, φ2), L3 = (θ3, φ3). Suppose the object is at the origin of a unit sphere. (click) Then a directional light can be represented as a point on the sphere; that is, the direction of the light source is expressed in polar coordinates θ (click, wait) and φ (click). Each light source is expressed by these two variables, θ and φ. (click, wait) We estimate the direction of each directional light.
20
Illumination estimation
Detect the position of the intensity peak and determine the light-source orientation from it: 1. Project to (θ, φ)-space. 2. Threshold. 3. Detect the intensity peak. From the specular component image and the surface shape of the object, we can estimate the directions of the light sources. First, we project the intensity of the specular component image onto the (θ, φ)-space, which represents the direction of the light source: the intensity at each surface point is projected along the perfect mirror direction of the viewing direction with respect to the surface normal. Then we detect the specular regions by thresholding the projected image. Finally, we determine the direction of the light source from the intensity peak in each region. Here, we assume that the object does not have deep concavities.
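The per-pixel projection step amounts to mirroring the viewing direction about the surface normal, L = 2(N·V)N − V, and converting L to the sphere's (θ, φ) coordinates. A minimal sketch (the coordinate convention — z as the camera axis — is an assumption):

```python
import numpy as np

def light_direction(normal, view):
    """Mirror the viewing direction about the surface normal: a pixel
    inside a specular highlight implies a light source along
    L = 2 (N.V) N - V  (all unit vectors). Returns L and its
    polar coordinates (zenith theta, azimuth phi) in degrees,
    taking +z as the camera axis."""
    n = normal / np.linalg.norm(normal)
    v = view / np.linalg.norm(view)
    L = 2.0 * np.dot(n, v) * n - v
    theta = np.degrees(np.arccos(np.clip(L[2], -1.0, 1.0)))
    phi = np.degrees(np.arctan2(L[1], L[0])) % 360
    return L, theta, phi

# normal tilted 22.5 degrees from the camera axis -> light at 45 degrees
n = np.array([np.sin(np.radians(22.5)), 0.0, np.cos(np.radians(22.5))])
L, theta, phi = light_direction(n, np.array([0.0, 0.0, 1.0]))
```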
21
4. Reflection parameters estimation
Next, I will talk about the reflection parameters estimation.
22
Torrance-Sparrow reflection model
Incident light, surface normal, view, bisector; angles α, θi, θr; diffuse reflection + specular reflection. Unknowns: diffuse reflection scale Kd, specular reflection scale Ks, surface roughness σ. Known: θi, θr, α. For the diffuse reflection we use the Lambertian model, and for the specular reflection we use the Torrance-Sparrow model. The observed intensity is the sum of (click) the diffuse reflection intensity and the specular reflection intensity. (click) The angles are calculated from the estimated surface normal and illumination direction. (click) The unknown parameters are the diffuse reflection scale Kd, the specular reflection scale Ks, and the surface roughness σ. Since we have already separated the input image into diffuse and specular component images, we can easily calculate the value of Kd, so we only have to estimate Ks and σ from the specular component image.
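A common simplified form of this Lambertian plus Torrance-Sparrow model writes the intensity as I = Kd·cos θi + (Ks / cos θr)·exp(−α² / 2σ²); whether the authors use exactly this simplification is an assumption, but it is the usual choice in this line of work. A minimal sketch:

```python
import numpy as np

def reflection_intensity(theta_i, theta_r, alpha, Kd, Ks, sigma):
    """Lambertian + simplified Torrance-Sparrow intensity.
    theta_i: angle between light and normal; theta_r: between view
    and normal; alpha: between bisector and normal (all radians).
    The Gaussian lobe exp(-alpha^2 / 2 sigma^2) models microfacet
    roughness sigma."""
    diffuse = Kd * np.cos(theta_i)
    specular = (Ks / np.cos(theta_r)) * np.exp(-alpha ** 2 / (2 * sigma ** 2))
    return diffuse + specular

# at the specular peak (all angles zero) the two lobes simply add
I_peak = reflection_intensity(0.0, 0.0, 0.0, Kd=0.5, Ks=2.0, sigma=0.1)
```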
23
Reflection parameters estimation
Solve the following least-squares problem by the steepest-descent method: minimize over (Ks, σ) the error Σ (rendered image − real image)². The specular reflection scale Ks and the surface roughness σ are estimated by solving this least-squares problem. We minimize the error between the real image and the rendered image: the real image is the specular component image, and the rendered image is computed from the values of Ks and σ estimated at each iteration. The minimization is carried out by the steepest-descent method.
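The steepest-descent fit can be sketched for the simplified specular lobe I = (Ks / cos θr)·exp(−α² / 2σ²): differentiate the summed squared error with respect to Ks and σ and step downhill. The step size, iteration count, and synthetic data below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def fit_specular(alpha, theta_r, I_spec, Ks=1.0, sigma=0.3,
                 lr=1e-3, iters=30000):
    """Fit Ks and sigma of the specular lobe
    I = (Ks / cos(theta_r)) * exp(-alpha^2 / (2 sigma^2))
    to observed intensities by steepest descent on the summed
    squared error (a sketch; the step size is hand-tuned)."""
    for _ in range(iters):
        g = np.exp(-alpha ** 2 / (2 * sigma ** 2)) / np.cos(theta_r)
        r = Ks * g - I_spec                         # residuals
        dKs = 2 * np.sum(r * g)                     # d(error)/dKs
        dsig = 2 * np.sum(r * Ks * g * alpha ** 2 / sigma ** 3)
        Ks -= lr * dKs
        sigma -= lr * dsig
    return Ks, sigma

# noiseless synthetic data with true Ks = 2.0, sigma = 0.2
alpha = np.linspace(0.0, 0.6, 20)
theta_r = np.full(20, 0.3)
I_spec = (2.0 / np.cos(theta_r)) * np.exp(-alpha ** 2 / (2 * 0.2 ** 2))
Ks, sigma = fit_specular(alpha, theta_r, I_spec)
```

In practice a line search or a normalized step makes the descent less sensitive to the scale difference between the Ks and σ gradients.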
24
Experimental result Here is the experimental result.
25
Input: intensity I, azimuth angle φ, DOP ρ
These images represent the input data: we obtain the intensity, azimuth angle, and DOP over the object surface. These data are of a green hemisphere object.
26
Result of shape estimation
Then, we apply our method to those input data, and obtain the shape of the object. Here is the result of a hemisphere object.
27
Result of illumination estimation
After estimating the shape of the object, we estimate the illumination distribution. The estimated illumination distribution is shown on the right, and the actual illumination distribution on the left.
28
Rendered image under different illumination & view
Rendering result. The upper left image is the input intensity image. The lower left image is a synthesized image under the estimated illumination distribution and reflection parameters. The right image is a synthesized image under a different illumination distribution and viewing direction.
29
Result for another object
Estimated shape / Input / Synthesized image / Rendered image under different illumination & view. Here is another result, for a yellowish pear object. The upper left image is the input intensity image. The lower left image is a synthesized image under the estimated illumination distribution and reflection parameters. The upper right image is the estimated shape of the object. The lower right image is a synthesized image under a different illumination distribution and viewing direction. The result is not very good yet, so we must improve the accuracy of the measurement system.
30
Conclusions: estimated geometrical, photometrical, and environmental information in one integrated framework — shape from polarization, surface reflection parameters from iterative computation, illumination from the intensity peak. We proposed a method to estimate geometrical, photometrical, and environmental information from a single view with an inexpensive measurement system. The shape of the object is obtained by analyzing the polarization of the diffusely reflected light. The reflection parameters are estimated by iterative computation, and the directions of the light sources are determined from the position of the intensity peak in the specular component image.
31
Application to digital archiving project
Multiple views; modeling a statue in a room; IBR with surface normals and reflection parameters; photorealistic preservation. One of our future works in the digital archiving project is to model a statue installed in a room. By taking data from multiple directions, an image-based rendering technique that estimates surface normals and reflection parameters can preserve the appearance of cultural assets realistically.
32
Fin And that’s the end of my talk. Thank you very much.
33
Daisuke Miyazaki 2003, Creative Commons Attribution 4.0 International License. D. Miyazaki, R. T. Tan, K. Hara, K. Ikeuchi, "Polarization-based Inverse Rendering from Single View," in Proceedings of International Symposium on the CREST Digital Archiving Project, pp. 51-65, Tokyo, Japan, 2003.