
Machine Vision Laboratory The University of the West of England Bristol, UK 2nd September 2010 ICEBT Photometric Stereo in 3D Biometrics Gary A. Atkinson PSG College of Technology Coimbatore, India

Overview 1. What is photometric stereo? 2. Why photometric stereo? 3. The basic method 4. Advanced methods 5. Applications

1. What is Photometric Stereo? A method of estimating surface geometry using multiple light source directions Captured images

1. What is Photometric Stereo? A method of estimating surface geometry using multiple light source directions Captured images Surface normals

1. What is Photometric Stereo? A method of estimating surface geometry using multiple light source directions Depth map Surface normals

Overview 1. What is photometric stereo? 2. Why photometric stereo? 3. The basic method 4. Advanced methods 5. Applications

2. Why Photometric Stereo? Why 3D? Why PS in particular?

2. Why Photometric Stereo? Why 3D? Why PS in particular? 1. Robust face recognition 2. Ambient illumination independence 3. Facilitates pose correction 4. Non-contact fingerprint analysis Richer dataset in 3D

2. Why Photometric Stereo? Why 3D? Why PS in particular? 1. Robust face recognition 2. Ambient illumination independence 3. Facilitates pose, expression correction 4. Non-contact fingerprint analysis Same day, same camera, same expression, same pose, same background, even the same shirt; different illumination gives a completely different image.

2. Why Photometric Stereo? Why 3D? Why PS in particular? 1. Robust face recognition 2. Ambient illumination independence 3. Facilitates pose, expression correction 4. Non-contact fingerprint analysis

2. Why Photometric Stereo? Why 3D? Why PS in particular? 1. Captures reflectance data for 2D matching 2. Compares well to other methods (*depending on correspondence density) Key: SFS = shape-from-shading, GS = geometric stereo, LT = laser triangulation, PPT = projected pattern triangulation, PS = photometric stereo

2. Why Photometric Stereo? – Alternatives Shape-from-Shading Estimate surface normals from shading patterns. More to follow...

2. Why Photometric Stereo? – Alternatives Geometric stereo

2. Why Photometric Stereo? – Alternatives Geometric stereo [Li Wei and Eung-Joo Lee, JDCTA 2010] [Wang et al., JVLSI 2007]

2. Why Photometric Stereo? – Alternatives Laser triangulation

2. Why Photometric Stereo? – Alternatives Laser triangulation General case

2. Why Photometric Stereo? – Alternatives Laser triangulation

2. Why Photometric Stereo? – Alternatives Projected Pattern Triangulation

2. Why Photometric Stereo? – Alternatives Projected Pattern Triangulation [Capture rig diagram: pattern projector, flash, colour camera and two greyscale cameras arranged around the target]

2. Why Photometric Stereo? – Alternatives Projected Pattern Triangulation

Overview 1. What is photometric stereo? 2. Why photometric stereo? 3. The basic method 4. Advanced methods 5. Applications

3. The Basic Method – Assumptions Notation: I = radiance (intensity), N = unit normal vector, L = unit source vector, θ = angle between N and L. Assumptions: 1. No cast/self-shadows or specularities 2. Greyscale/linear imaging 3. Distant and uniform light sources 4. Orthographic projection 5. Static surface 6. Lambertian reflectance [Argyriou and Petrou]

3. The Basic Method – Preliminary notes Equation of a plane: z = z₀ + p(x − x₀) + q(y − y₀), where p = ∂z/∂x and q = ∂z/∂y. Surface normal vector to plane: (−p, −q, 1). Rescaled (unit-length) surface normal, typically written n̂ = (−p, −q, 1)/√(p² + q² + 1).
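To make the notation concrete, here is a minimal sketch (Python/NumPy; the function names are illustrative and the (−p, −q, 1) convention above is assumed) for converting between the gradients (p, q) and the surface normal:

```python
import numpy as np

def gradient_to_normal(p, q):
    """Unit surface normal from the gradients p = dz/dx, q = dz/dy."""
    n = np.array([-p, -q, 1.0])
    return n / np.linalg.norm(n)

def normal_to_gradient(n):
    """Recover (p, q) from any normal with a positive z component."""
    nx, ny, nz = n
    return -nx / nz, -ny / nz
```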

3. The Basic Method – Reflectance Equation Consider the imaging of an object with one light source.

3. The Basic Method – Reflectance Equation Lambertian reflection: E = ρ L cos θ, where E = emittance, ρ = albedo, L = irradiance. Take the scalar product of the unit light source vector and the unit normal vector to obtain cos θ: cos θ = N̂ · L̂.
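A minimal sketch of this forward model (Python/NumPy; the clamp to zero for self-shadowed orientations is an added assumption for robustness, not part of the cosine law itself):

```python
import numpy as np

def lambertian_intensity(albedo, normal, light):
    """Predicted intensity I = albedo * cos(theta), with cos(theta) = N.L."""
    n = np.asarray(normal, dtype=float)
    l = np.asarray(light, dtype=float)
    cos_theta = np.dot(n / np.linalg.norm(n), l / np.linalg.norm(l))
    return albedo * max(0.0, float(cos_theta))  # clamp: self-shadowed points emit nothing
```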

3. The Basic Method – Shape-from-Shading Perform substitution with Lambert's law. If ρ is re-defined (to absorb constant factors) and the camera response is linear, then I = ρ N̂ · L̂ = ρ (1 + p pₛ + q qₛ)/(√(1 + p² + q²) √(1 + pₛ² + qₛ²)). Known: pₛ, qₛ, I. Unknown: p, q, ρ. One equation, three unknowns → insoluble.

3. The Basic Method – Shape-from-Shading For a given intensity measurement, the values of p and q are confined to one of the lines (circles here) in the graph. [Plot: curves of constant I/ρ in the (p, q) plane; arrow shows increasing I/ρ.]

3. The Basic Method – Shape-from-Shading This is the result for a different light source direction. For a given intensity measurement, the values of p and q are confined to one of the lines (circles here) in the graph. [Plot: curves of increasing I/ρ; shadow boundary marked.]

3. The Basic Method – Shape-from-Shading Representation using conics Possible cone of normal vectors forming a conic section.

3. The Basic Method – Shape-from-Shading Representation on unit sphere We can overcome the cone ambiguity using several light source directions: Photometric Stereo

3. The Basic Method – Two Sources Two sources: normal confined to two points, or fully constrained if the albedo is known.

3. The Basic Method – Three Sources Three sources: fully constrained.

3. The Basic Method – Matrix Representation (using slightly different symbols to aid clarity) With the unit surface normal n̂ and the unit light source vectors L̂₁, L̂₂, L̂₃ stacked as the rows of a 3×3 matrix L, the three measured intensities form i = (I₁, I₂, I₃)ᵀ = ρ L n̂.

3. The Basic Method – Matrix Representation Recall that, based on the surface gradients, n̂ = (−p, −q, 1)/√(p² + q² + 1). Substituting and re-arranging these equations gives us ρ n̂ = L⁻¹ i, so ρ = |L⁻¹ i| and n̂ = (L⁻¹ i)/ρ. That is, the surface normals/gradient and albedo have been determined.

3. The Basic Method – Linear Algebra Note If the three light source vectors are co-planar, then the light source matrix becomes singular, i.e. it cannot be inverted, and the equations are insoluble.
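As a rough illustration (a sketch under assumed shapes and threshold, not the implementation behind the example results that follow), the three-source solution and the singularity check can be written in a few lines of Python/NumPy:

```python
import numpy as np

def photometric_stereo(images, lights, cond_limit=1e6):
    """Basic three-source photometric stereo.

    images: array of shape (3, H, W) - intensities under each source
    lights: array of shape (3, 3)    - unit light source vectors, one per row
    Returns (albedo of shape (H, W), normals of shape (H, W, 3)).
    """
    L = np.asarray(lights, dtype=float)
    if np.linalg.cond(L) > cond_limit:          # (near) co-planar sources -> (near) singular
        raise ValueError("Light source vectors are (nearly) co-planar; cannot invert.")
    H, W = np.asarray(images).shape[1:]
    I = np.asarray(images, dtype=float).reshape(3, -1)   # (3, H*W)
    b = np.linalg.solve(L, I)                   # scaled normals rho * n, shape (3, H*W)
    albedo = np.linalg.norm(b, axis=0)          # rho = |L^-1 i|
    normals = np.where(albedo > 0, b / np.maximum(albedo, 1e-12), 0.0)
    return albedo.reshape(H, W), normals.T.reshape(H, W, 3)
```

Calling photometric_stereo on three registered images and their unit source vectors returns the albedo map and per-pixel unit normals described above.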

3. The Basic Method – Example results Surface normals Albedo map More on applications of this later... What about the depth?

3. The Basic Method – Surface Integration Recall the relation between surface normal and gradient: n̂ ∝ (−p, −q, 1), so p and q can be recovered from the components of the estimated normal. Memory jogger from calculus: z(x, y) = z(x₀, y₀) + ∫ (p dx + q dy) along a path from (x₀, y₀) to (x, y). So the height can be determined via an integral (or summation in the discrete world of computer vision). This can be written in several ways.
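A minimal sketch of the naive summation (Python/NumPy, assuming the (−p, −q, 1) convention above): it walks down the first column and then across each row, which is exactly the kind of path-dependent scheme criticised on the next slide, not a robust integrator.

```python
import numpy as np

def integrate_normals_naive(normals):
    """Naive height map from an (H, W, 3) normal field via cumulative sums of p and q.
    Integrates down the first column, then across each row - path dependent!"""
    nz = np.clip(normals[..., 2], 1e-6, None)
    p = -normals[..., 0] / nz                 # dz/dx (along columns)
    q = -normals[..., 1] / nz                 # dz/dy (along rows)
    z = np.zeros(p.shape)
    z[1:, 0] = np.cumsum(q[1:, 0])            # walk down the first column
    z[:, 1:] = z[:, [0]] + np.cumsum(p[:, 1:], axis=1)   # then across each row
    return z
```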

3. The Basic Method – Surface Integration So we can generate a height map by summing up individual normal components. Simple, right? WRONG!!! The naive summation: depends on the path taken; is affected by noise; cannot handle discontinuities; suffers from non-integrable regions (there's some overlap here). Further details are beyond the scope of this talk.

3. The Basic Method – Surface Integration

Overview 1. What is photometric stereo? 2. Why photometric stereo? 3. The basic method 4. Advanced methods 5. Applications

4. Advanced Methods Recall the assumptions of standard PS: 1. No cast/self-shadows or specularities 2. Greyscale imaging 3. Distant and uniform light sources 4. Orthographic projection 5. Static surface 6. Lambertian reflectance The two highlighted assumptions (no shadows/specularities and Lambertian reflectance, addressed next) are the most problematic, and the most studied.

4. Advanced Methods – Shadows and Specularities Recall that i = ρ L n̂. Suppose we have a fourth light source. Apply the above equation for each combination of three light sources → four normal estimates. Where the error in the estimates exceeds the camera noise, an anomaly is present → ignore one light source there. [Coleman and Jain-esque]
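A per-pixel sketch of this idea (Python/NumPy; the noise threshold and the rule for discarding the anomalous measurement are illustrative assumptions, not the exact Coleman and Jain procedure):

```python
import numpy as np
from itertools import combinations

def four_source_normal(i, L, noise=0.01):
    """Per-pixel normal from 4 sources: i has shape (4,), L has shape (4, 3).

    Solve every 3-source subset; if the estimates disagree by more than the
    noise level, one measurement is anomalous (shadow or specularity), so keep
    only the most consistent triple.
    """
    estimates = []
    for idx in combinations(range(4), 3):
        b = np.linalg.solve(L[list(idx)], i[list(idx)])    # rho * n for this triple
        estimates.append(b / max(np.linalg.norm(b), 1e-12))
    estimates = np.array(estimates)
    mean = estimates.mean(axis=0)
    spread = np.linalg.norm(estimates - mean, axis=1)
    if spread.max() < noise:                  # all triples agree: average them
        return mean / np.linalg.norm(mean)
    best = estimates[spread.argmin()]         # otherwise drop the corrupted measurement
    return best / np.linalg.norm(best)
```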

4. Advanced Methods – Shadows and Specularities / Lambert's Law Due to the linear dependence of the light source vectors, we know that a₁L̂₁ + a₂L̂₂ + a₃L̂₃ + a₄L̂₄ = 0 for some combination of constants a₁, a₂, a₃ and a₄. Multiply by the albedo and take the scalar product with the surface normal: a₁I₁ + a₂I₂ + a₃I₃ + a₄I₄ = 0. Where this is satisfied (subject to the confines of camera noise) the pixel is not specular, not in shadow and follows Lambert's law. [Barsky and Petrou]
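A sketch of that test for one pixel (Python/NumPy; obtaining the coefficients a from the null space of Lᵀ and the relative tolerance standing in for the camera-noise bound are assumptions of this sketch):

```python
import numpy as np

def lambertian_consistent(i, L, tol=0.02):
    """Barsky/Petrou-style check: i has shape (4,), L has shape (4, 3).

    Since four source vectors in 3D are linearly dependent, a1*L1 + ... + a4*L4 = 0
    for some a; for a shadow- and highlight-free Lambertian pixel the same
    combination of intensities must also (nearly) vanish."""
    _, _, vt = np.linalg.svd(L.T)    # last right-singular vector spans the null space of L^T
    a = vt[-1]
    residual = abs(float(np.dot(a, i)))
    return residual < tol * np.linalg.norm(i)
```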

4. Advanced Methods – Lambert's Law / Gauges Attempt to overcome lighting non-uniformities and non-Lambertian behaviour using gauges. Image the scene in the presence of a “gauge” object of known geometry and uniform albedo. Seek a mapping between normals on the object and normals on the gauge. [Leitão et al.]

Overview 1. What is photometric stereo? 2. Why photometric stereo? 3. The basic method 4. Advanced methods 5. Applications

5. Applications Face recognition, weapons detection, archaeology, skin analysis, fingerprint recognition

5. Applications – Face recognition Recognition rates (on 61 subjects): 2D photograph: 91.2%; surface normals: 97.5%. [Capture setup: computer monitor] [Schindler]

5. Applications – Fingerprint recognition Demo [Sun et al.]

Conclusion 1. What is photometric stereo? Shape / normal estimation from multiple lights 2. Why photometric stereo? Cheap, high resolution, efficient 3. The basic method – covered 4. Advanced methods – introduced 5. Applications Faces, weapons, fingerprints

Thank You Questions? 2nd September 2010 Photometric Stereo in 3D Biometrics