Self-Calibration and Neural Network Implementation of Photometric Stereo. Yuji IWAHORI, Yumi WATANABE, Robert J. WOODHAM and Akira IWATA.

Presentation transcript:

1 Self-Calibration and Neural Network Implementation of Photometric Stereo Yuji IWAHORI, Yumi WATANABE, Robert J. WOODHAM and Akira IWATA

2 Background Neural network approach to Shape-from-Shading by Sejnowski et al. (1987). Real-time implementation of photometric stereo using a LUT (lookup table) by Woodham (1994).

3 Background Neural network based photometric stereo and extensions by Iwahori et al. (since 1995). Required conditions: the calibration sphere and the test object have the same reflectance property under the same imaging conditions (imaged under multiple light sources from different directions).

4 Proposed Approach Neural network implementation of photometric stereo for a rotational object with a non-uniform reflectance factor. No separate calibration object is required; instead, self-calibration is performed using a controlled rotation of the target object itself.

5 Three Light Source Photometry Let $E_i$ ($i = 1, 2, 3$) be the image intensity under light source $i$ and let $R_i(\mathbf{n})$ be the reflectance map for the unit surface normal vector $\mathbf{n}$ at each point, so that $E_i = r\,R_i(\mathbf{n})$ with reflectance factor $r$. Eliminating the effect of the reflectance factor gives $$\tilde{E}_i = \frac{E_i}{E_1 + E_2 + E_3} = \frac{R_i(\mathbf{n})}{R_1(\mathbf{n}) + R_2(\mathbf{n}) + R_3(\mathbf{n})}, \quad i = 1, 2, 3.$$
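The cancellation of the reflectance factor can be sketched in a few lines. This is a minimal illustration, assuming a Lambertian-style model $E_i = r\,R_i(\mathbf{n})$ and normalization by the sum of the three intensities; the function name is mine, not from the slides.

```python
import numpy as np

def normalize_intensities(E1, E2, E3, eps=1e-8):
    """Divide each measured intensity E_i = r * R_i(n) by the sum of
    all three. The per-pixel reflectance factor r cancels, so the
    result depends only on the surface normal n."""
    s = E1 + E2 + E3
    return E1 / (s + eps), E2 / (s + eps), E3 / (s + eps)

# Scaling all three intensities by the same reflectance factor r
# leaves the normalized triple unchanged.
E = np.array([0.2, 0.5, 0.3])
r = 0.7
a = normalize_intensities(*E)
b = normalize_intensities(*(r * E))
print(np.allclose(a, b))  # True
```

The same idea works with any normalizer that scales linearly in $r$; the sum is just the simplest choice.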

6 Observation System [Figure: turntable setup with camera, light source, and object; axes x, y, -z.] The target object is observed through a full 360 degrees of rotation under three separate illumination conditions.

7 Occluding Boundary The stereographic projection $(f, g)$ representation is used at the occluding boundary. In stereographic projection, points on an occluding boundary lie on the circle $f^2 + g^2 = 4$. Except at such points, the surface gradient parameters $(f, g)$ are given from the gradient $(p, q)$ by $$f = \frac{2p}{1 + \sqrt{1 + p^2 + q^2}}, \qquad g = \frac{2q}{1 + \sqrt{1 + p^2 + q^2}}.$$
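The advantage of the stereographic $(f, g)$ representation is that it stays finite where the ordinary gradient $(p, q)$ blows up. A small sketch of the standard conversion (the function name is mine):

```python
import math

def pq_to_fg(p, q):
    """Stereographic projection of the surface gradient (p, q) to
    (f, g). Unlike (p, q), which diverges at an occluding boundary,
    (f, g) stays finite there and approaches the circle
    f**2 + g**2 = 4."""
    d = 1.0 + math.sqrt(1.0 + p * p + q * q)
    return 2.0 * p / d, 2.0 * q / d

# A very steep surface patch: (f, g) lands near the boundary circle.
f, g = pq_to_fg(1e6, 0.0)
print(round(f * f + g * g, 3))  # 4.0
```

A frontal patch ($p = q = 0$) maps to the origin of the $(f, g)$ plane, so the whole visible hemisphere of normals fits inside the circle of radius 2.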

8 Extraction of Feature Points Geometric information: at an occluding boundary, the surface normal is perpendicular both to the tangent to the occluding contour itself and to the viewing direction. [Figure: image plane with occluding boundary, surface normal, and viewing direction; axes x, y, z.]
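Since the boundary normal is perpendicular to both the contour tangent and the viewing direction, it can be obtained as their cross product. A minimal sketch, assuming the camera looks along $-z$ (the helper name is mine):

```python
import numpy as np

def boundary_normal(tangent, view=(0.0, 0.0, -1.0)):
    """At an occluding boundary the surface normal is perpendicular
    to both the contour tangent and the viewing direction, so it is
    the (normalized) cross product of the two."""
    n = np.cross(np.asarray(tangent, float), np.asarray(view, float))
    return n / np.linalg.norm(n)

# A vertical contour tangent with the camera looking down -z gives
# a horizontal normal (sign depends on the contour orientation).
n = boundary_normal((0.0, 1.0, 0.0))
print(n)  # [-1.  0.  0.]
```

The sign ambiguity is resolved in practice by choosing the normal that points away from the object interior.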

9 Extraction of Feature Points The current gradient $(f, g)$ of a tracked feature point is determined on the Gaussian sphere from the rotation angle, the radius of the unit Gaussian sphere ($= 1$), and the horizontal distance to the rotation axis at each occluding boundary point. [Figure: feature points on the unit Gaussian sphere in the $(f, g)$ plane.]
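The underlying geometry is that a feature point's normal, known where the point crosses the occluding boundary, simply rotates with the object. A sketch under the assumption that the rotation axis is the image y-axis (the function name is mine):

```python
import math

def rotate_normal(n, theta):
    """Rotate a unit surface normal about the vertical rotation
    axis (y) by angle theta. As the object turns on the turntable,
    the known boundary normal of a tracked feature point rotates
    with it, giving its normal at every rotation angle."""
    nx, ny, nz = n
    c, s = math.cos(theta), math.sin(theta)
    return (c * nx + s * nz, ny, -s * nx + c * nz)

# A boundary normal pointing along +x faces the camera (-z) after
# a quarter turn.
n90 = rotate_normal((1.0, 0.0, 0.0), math.pi / 2)
```

The rotated normal can then be converted to $(f, g)$ by stereographic projection, giving a gradient label for the point at every frame.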

10 Use of Photometric Constraint The image irradiance of a tracked feature point should increase gradually from the left occluding boundary to the rotation axis, and then decrease gradually toward the right occluding boundary. [Plot: image intensity vs. rotation angle, increasing up to the rotation axis, then decreasing.]

11 Use of Photometric Constraint [Plots: image intensity (vertical axis) vs. rotation angle (horizontal axis) for all points on an occluding boundary; the rotation axis is marked. The curves satisfy the photometric constraint.]

12 Use of Photometric Constraint Examples of plots for points which do not satisfy the photometric constraint. [Plots: image intensity (vertical axis) vs. rotation angle (horizontal axis); the rotation axis is marked.]
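The constraint described above (intensity rising up to the rotation axis, then falling) can be checked with a simple monotonicity test. This is a sketch of my own, with a tolerance parameter for noise that the slides do not mention:

```python
def satisfies_photometric_constraint(intensities, axis_index, tol=0.0):
    """Check whether a tracked feature point's intensity profile
    increases from the left occluding boundary up to the rotation
    axis (at axis_index) and then decreases toward the right
    occluding boundary. `tol` allows small noise-induced dips."""
    left = intensities[:axis_index + 1]
    right = intensities[axis_index:]
    rising = all(b >= a - tol for a, b in zip(left, left[1:]))
    falling = all(b <= a + tol for a, b in zip(right, right[1:]))
    return rising and falling

print(satisfies_photometric_constraint([0.1, 0.4, 0.8, 0.5, 0.2], 2))  # True
print(satisfies_photometric_constraint([0.1, 0.8, 0.4, 0.6, 0.2], 2))  # False
```

Points that fail the test are rejected as unreliable before training data extraction.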

13 Extraction of Training Data Set Points which happen to share the same value of $(f, g)$ are sorted, and the median value is selected as one unique point. For each $(f, g)$ value, the representative feature point, together with the relation of its image irradiances, is selected for the training data set for NN learning. [Figure: feature points in the $(f, g)$ plane.]

14 Extraction of Training Data Set [Plot: image intensity (vertical axis) vs. rotation angle (horizontal axis); points satisfying the photometric constraint yield a unique combination at each angle; the rotation axis is marked.]
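The selection step can be sketched as a group-by-median over the candidate points. This is my own illustration; in particular, taking the median by total intensity is an assumption, since the slides do not say which quantity the median is taken over:

```python
from collections import defaultdict

def select_training_data(samples):
    """Group candidate feature points by their (f, g) value and
    keep, for each group, the point whose intensity triple has the
    median total intensity: one unique representative per (f, g)."""
    groups = defaultdict(list)
    for fg, E in samples:
        groups[fg].append(E)
    training = {}
    for fg, Es in groups.items():
        Es.sort(key=sum)
        training[fg] = Es[len(Es) // 2]  # median by total intensity
    return training

# Three candidates share (f, g) = (0.5, 0.1); the median one is kept.
samples = [((0.5, 0.1), (0.2, 0.3, 0.4)),
           ((0.5, 0.1), (0.1, 0.2, 0.3)),
           ((0.5, 0.1), (0.3, 0.4, 0.5)),
           ((0.0, 0.0), (0.4, 0.4, 0.4))]
training = select_training_data(samples)
print(training[(0.5, 0.1)])  # (0.2, 0.3, 0.4)
```

The result is a training set with exactly one intensity triple per observed $(f, g)$ value.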

15 Neural Network Learning [Diagram: RBF neural network architecture with inputs $n_1, n_2, n_3$, radial basis units with weights $w(i,j)$, and outputs $E_1', E_2', E_3'$.]

16 What this RBF NN does. Once learning is complete, the mapping that has been learned is represented by the weights connecting the units of the RBF neural network. The resulting network generalizes in that it predicts a surface normal for any input combination of image intensities. Thus, the resulting network can be used to estimate the surface shape of the target object.
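The interpolating behavior of an RBF network can be illustrated in a few lines. This is a minimal sketch (one Gaussian unit per training sample, output weights solved by least squares), not the authors' exact architecture or training procedure; it maps intensity triples to $(f, g)$ gradients:

```python
import numpy as np

def rbf_fit(X, Y, sigma=0.3):
    """Fit a toy RBF network: one Gaussian unit centered on each
    training sample, with linear output weights solved by least
    squares. X holds intensity triples, Y the target (f, g)."""
    Phi = np.exp(-np.sum((X[:, None, :] - X[None, :, :]) ** 2, -1)
                 / (2 * sigma ** 2))
    W = np.linalg.lstsq(Phi, Y, rcond=None)[0]
    return lambda x: np.exp(-np.sum((x - X) ** 2, -1)
                            / (2 * sigma ** 2)) @ W

# Toy training set: three intensity triples with known gradients.
X = np.array([[0.2, 0.5, 0.3], [0.4, 0.3, 0.3], [0.3, 0.3, 0.4]])
Y = np.array([[0.5, 0.1], [-0.2, 0.3], [0.0, -0.4]])
predict = rbf_fit(X, Y)
print(np.allclose(predict(X[0]), Y[0], atol=1e-6))  # True
```

The network reproduces its training points exactly and interpolates smoothly between them, which is the generalization property the slide refers to.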

17 Experiment [Images: test object under light source 1, at rotation angles 0, 90, and ±180 degrees.]

18 Obtained Data Set as Needle Diagram

19 Feature Points Mapped onto $(f, g)$ Space

20 Results [Images: aspect and slope of the recovered surface.]

21 Another Example of Input Images

22 Results [Images: aspect and slope of the recovered surface.]

23 Conclusion Neural network based photometric stereo using self-calibration was proposed. No calibration object is required to obtain the shape of the target object; geometric and photometric constraints are used instead. An empirical implementation has been demonstrated. Detecting and correcting for cast shadows remains as future work.