Image cues Color (texture) Shading Shadows Specular highlights


Image cues
- Color (texture)
- Shading
- Shadows
- Specular highlights
- Silhouette

Image cues
- Shading [reconstructs normals]: shape from shading (SFS), photometric stereo
- Specular highlights [ignore, filtered / parametric BRDF]
- Texture [reconstructs 3D]: stereo (relates two views)
- Silhouette [reconstructs 3D]: shape from silhouette
- [Focus]

Geometry from shading
Shading reveals 3D shape (geometry).
Shape from shading [Horn]:
- one image, known light direction, known BRDF (unit albedo)
- ill-posed: needs additional constraints (integrability, ...)
Photometric stereo [Silver 80, Woodham 81]:
- several images under different lights, unknown Lambertian BRDF
- known or unknown lights
- reconstruct normals, then integrate the surface

Shading
Lambertian reflectance: I = rho (n . l), with albedo rho, unit normal n and light direction l.
Fixing the light and the albedo, reflectance can be expressed as a function of the normal alone.
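As a minimal sketch of the Lambertian model above (the function name and array shapes are my own choices, not from the slides; intensities are clamped at zero for surfaces facing away from the light):

```python
import numpy as np

def lambertian_shading(normals, light, albedo=1.0):
    """Lambertian image intensity I = albedo * max(n . l, 0).

    normals: (H, W, 3) unit surface normals
    light:   (3,) unit vector toward the light source
    """
    shading = np.einsum("hwc,c->hw", normals, light)
    return albedo * np.clip(shading, 0.0, None)

# Patches facing the light are fully lit; patches facing away are dark.
n = np.zeros((2, 2, 3)); n[..., 2] = 1.0          # all normals = +z
I_front = lambertian_shading(n, np.array([0.0, 0.0, 1.0]))
I_back = lambertian_shading(n, np.array([0.0, 0.0, -1.0]))
```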

Surface parametrization
Depth: z = f(x, y). Surface orientation in gradient space: p = df/dx, q = df/dy; the (unnormalized) surface normal is (p, q, -1).
[Figure: surface, tangent plane, normal vector]

Lambertian reflectance map
Local surface orientations that produce equal intensity form conic-section (iso-brightness) contours in gradient space.
[Examples: light at ps = 0, qs = 0 and at ps = -2, qs = -1]
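A small sketch of Horn's gradient-space reflectance map for a Lambertian surface, with light direction given by its gradient-space coordinates (ps, qs); the function name and the clamping of self-shadowed orientations are my own choices:

```python
import numpy as np

def reflectance_map(p, q, ps, qs):
    """Lambertian reflectance map R(p, q) in gradient space.

    Surface normal ~ (p, q, -1), light direction ~ (ps, qs, -1);
    R is the cosine between them, clamped at 0 (self-shadow).
    """
    num = 1.0 + p * ps + q * qs
    den = np.sqrt(1.0 + p**2 + q**2) * np.sqrt(1.0 + ps**2 + qs**2)
    return np.clip(num, 0.0, None) / den
```

The maximum R = 1 occurs at (p, q) = (ps, qs), i.e. where the surface faces the light; level sets of R are the conic contours mentioned above.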

Photometric stereo
The radiance of one pixel constrains the normal to a curve in gradient space.
- One image = one light direction: a curve of possible normals.
- Two images = two light directions: two candidate normals.
- A third image disambiguates between the two.
Normal = intersection of the three curves.

Photometric stereo [Birkbeck]
Given: n >= 3 images with different known light directions (distant light).
Assume: Lambertian object, orthographic camera; ignore shadows and interreflections.
One image gives one equation per pixel: I_i = l_i . (rho n); n images stack into a linear system for g = rho n.
Recover: albedo = magnitude |g|, normal = g normalized.
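One way to sketch the recovery step above as a least-squares solve over all pixels at once (array shapes and the function name are my choices; shadows are ignored, as assumed on the slide):

```python
import numpy as np

def photometric_stereo(images, lights):
    """Recover albedo and normals from n >= 3 images with known
    distant lights (Lambertian model, shadows ignored).

    images: (n, H, W) intensities
    lights: (n, 3) unit light directions
    """
    n, H, W = images.shape
    I = images.reshape(n, -1)                        # (n, H*W)
    # Solve lights @ G = I in the least-squares sense; G = albedo * normal.
    G, *_ = np.linalg.lstsq(lights, I, rcond=None)   # (3, H*W)
    albedo = np.linalg.norm(G, axis=0)
    normals = G / np.maximum(albedo, 1e-12)          # normalize, avoid /0
    return albedo.reshape(H, W), normals.T.reshape(H, W, 3)

# Synthetic check: a flat patch (normal +z, albedo 0.5) under 3 lights.
L = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])
L = L / np.linalg.norm(L, axis=1, keepdims=True)
n_true = np.array([0.0, 0.0, 1.0])
imgs = np.stack([(0.5 * L[i] @ n_true) * np.ones((4, 4)) for i in range(3)])
alb, nrm = photometric_stereo(imgs, L)
```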

Depth from normals (1) [D. Kriegman]
Integrate the normals (gradients p, q) across the image.
Simple approach: integrate along a curve, e.g. along the first row and then down each column:
z(x, y) = z(0, 0) + integral of p along the row + integral of q along the column.

Depth from normals (2)
Integrating along a closed curve might not return to the starting depth because of noise, so the depth is not unique.
Impose integrability: a normal map that produces a unique depth map is called integrable; enforced by dp/dy = dq/dx.
[Escher's impossible figures illustrate non-integrability.]

Impose integrability [Horn, Robot Vision 1986]
Solve f(x, y) from p, q by minimizing the cost functional
E(f) = integral of (f_x - p)^2 + (f_y - q)^2.
- Iterative update using the calculus of variations; integrability is naturally satisfied.
- f(x, y) can be discrete or represented in terms of basis functions.
- Example: Fourier basis (DFT) gives a closed-form solution [Frankot, Chellappa: A method for enforcing integrability in shape from shading algorithms, PAMI 1988].
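The Fourier closed-form solution can be sketched as follows: a minimal Frankot-Chellappa projection, assuming periodic boundaries (an assumption of the DFT, not stated on the slide); names are mine:

```python
import numpy as np

def frankot_chellappa(p, q):
    """Integrable surface z(x, y) from gradients p = dz/dx, q = dz/dy
    by projection onto the Fourier basis (Frankot & Chellappa)."""
    H, W = p.shape
    wx = np.fft.fftfreq(W) * 2 * np.pi            # frequencies along x (columns)
    wy = np.fft.fftfreq(H) * 2 * np.pi            # frequencies along y (rows)
    u, v = np.meshgrid(wx, wy)
    P, Q = np.fft.fft2(p), np.fft.fft2(q)
    denom = u**2 + v**2
    denom[0, 0] = 1.0                             # avoid division by zero at DC
    Z = (-1j * u * P - 1j * v * Q) / denom
    Z[0, 0] = 0.0                                 # unknown constant of integration
    return np.real(np.fft.ifft2(Z))

# Check on an analytically known periodic surface.
H = W = 32
X, Y = np.meshgrid(np.arange(W), np.arange(H))
z = np.cos(2 * np.pi * X / W) + np.cos(2 * np.pi * Y / H)
p = -2 * np.pi / W * np.sin(2 * np.pi * X / W)
q = -2 * np.pi / H * np.sin(2 * np.pi * Y / H)
zr = frankot_chellappa(p, q)
```

Depth is recovered only up to an additive constant (the DC term is set to zero), so comparisons should be made after removing the mean.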

Example: integrability [Neil Birkbeck]
[Figure: input images under different lights, recovered normals, integrated depth; original vs. reconstructed surface]

Image cues: Shading, Stereo, Specularities
Color (texture), Shading, Shadows, Specular highlights, Silhouette.
Readings: see links on the web page. Books: Szeliski 2.2 and Ch. 12; Forsyth Ch. 4-5 (lab related; .pdf on the web site).

Upcoming
- Lab 4 due Apr 1.
- Exam 2: in class Apr 5, same format as Exam 1; calculator and 4 sheets of your notes allowed.
- Project presentations: in class Apr 10 and 12. Present the motivation, related literature and libraries, and your progress to date; prepare a 5-10 min presentation per person.
- Project report: hand in at the end of classes, or before the demo. (With an earlier hand-in I may have time to comment so you can polish it for a final hand-in.)
- Project demos: demo your project in the lab; schedule to be discussed/decided in class.

All images
Unknown lights and normals: is it possible to reconstruct the surface and the light positions?
What is the set of images of an object under all possible lighting conditions? [Debevec et al.]

Space of all images
Problem: Lambertian object, single view (orthographic camera), different distant-illumination conditions.
1. 3D subspace [Moses 93] [Nayar, Murase 96] [Shashua 97] (+ convex object, no shadows)
2. Illumination cone: convex cone [Belhumeur and Kriegman CVPR 1996]
3. Spherical harmonic representation: linear combination of harmonic images (practical 9D basis) [Ramamoorthi and Hanrahan SIGGRAPH 01] [Basri and Jacobs PAMI 2003]

3D illumination subspace
Lambertian reflection at one image point x: I(x) = rho(x) n(x) . l.
Whole image, with the image as a vector: I = B l, where row x of B is rho(x) n(x)^T.
The set of images of a Lambertian scene with no shadowing is a subset of a 3D subspace L = span(B). [Moses 93] [Nayar, Murase 96] [Shashua 97]
[Figure: the same scene under lights l1 ... l4; all images lie in the subspace spanned by the three columns of B.]

Reconstructing the basis
Any three images without shadows span L. Represent L by an orthogonal basis B.
How to extract B from images? PCA.

Shadows
Without shadows: e.g. images with all pixels illuminated. With shadows, single light source:
- L_i: intersection of L with an orthant i of R^n, corresponding to a cell S_i of light source directions for which the same pixels are in shadow and the same pixels are illuminated.
- P(L_i): projection of L_i that sets all its negative components to 0 (a convex cone).
The set of images of an object produced by a single light source is the union of the P(L_i).

Shadows and multiple images
By superposition, the image illuminated by two light sources l1 and l2 lies on the line between the images produced by l1 and l2 alone.
The set of images of an object produced by an arbitrary number of lights is the convex hull of the union of the P(L_i): the illumination cone C.

Illumination cone
The set of images of any Lambertian object under all lighting conditions is a convex cone in image space.
[Belhumeur, Kriegman: What is the set of images of an object under all possible lighting conditions?, IJCV 98]

Do ambiguities exist?
Can two different objects produce the same illumination cone? YES: the "bas-relief" ambiguity.
Convex object: B spans L. For any A in GL(3), B* = BA also spans L, and
I = B* S* = (B A)(A^-1 S) = B S,
so B lit with S and B* lit with S* produce the same images. When doing PCA, the resulting basis is generally not albedo x normal.

GBR transformation
Surface integrability [Belhumeur et al.: The bas-relief ambiguity, IJCV 99]: if the true B is integrable, the transformed B* = BA is integrable only when A is a generalized bas-relief (GBR) transformation
G = [[1, 0, 0], [0, 1, 0], [mu, nu, lambda]], lambda != 0.

Uncalibrated photometric stereo
Without knowing the light source positions, we can recover shape only up to a GBR ambiguity:
1. From n input images, compute B* (SVD).
2. Find A such that B* A is close to integrable.
3. Integrate the normals to find depth.
Comments:
- GBR preserves shadows [Kriegman, Belhumeur 2001].
- If the albedo is known (or constant), the ambiguity reduces to a binary subgroup [Belhumeur et al. 99].
- Interreflections resolve the ambiguity [Kriegman CVPR05].

Spherical harmonic representation
Theory: with infinitely many light directions, the space of images is infinite-dimensional [illumination cone, Belhumeur and Kriegman 96].
Practice (empirical): a few basis images are enough [Hallinan 94, Epstein 95].
Simplification: convex objects (no shadows, no interreflections).
[Ramamoorthi and Hanrahan: Analytic PCA construction for theoretical analysis of lighting variability in images of a Lambertian object, SIGGRAPH 01]
[Basri and Jacobs: Lambertian reflectance and linear subspaces, PAMI 2003]

Basis approximation

Spherical harmonics basis
- Analog on the sphere of the Fourier basis on the line or circle.
- Angular portion of the solution of the Laplace equation in spherical coordinates.
- Orthonormal basis for the set of all functions on the surface of the sphere:
Y_lm(theta, phi) = (normalization factor) x (associated Legendre function P_l^m(cos theta)) x (Fourier basis e^{i m phi}).
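The practical 9D basis used later involves only the l <= 2 real harmonics, which reduce to low-order polynomials in the Cartesian components of the normal; they can be evaluated directly. The constants below follow the standard real-SH normalization (the function name and ordering are my choices):

```python
import numpy as np

def sh9(n):
    """First nine real spherical harmonics (l <= 2) at unit vectors
    n of shape (..., 3); order: Y00, Y1-1, Y10, Y11, Y2-2 ... Y22."""
    x, y, z = n[..., 0], n[..., 1], n[..., 2]
    return np.stack([
        0.282095 * np.ones_like(x),        # Y_0^0
        0.488603 * y,                      # Y_1^-1
        0.488603 * z,                      # Y_1^0
        0.488603 * x,                      # Y_1^1
        1.092548 * x * y,                  # Y_2^-2
        1.092548 * y * z,                  # Y_2^-1
        0.315392 * (3 * z**2 - 1),         # Y_2^0
        1.092548 * x * z,                  # Y_2^1
        0.546274 * (x**2 - y**2),          # Y_2^2
    ], axis=-1)

# Monte Carlo orthonormality check over the unit sphere.
rng = np.random.default_rng(3)
v = rng.standard_normal((200000, 3))
v /= np.linalg.norm(v, axis=1, keepdims=True)
Y = sh9(v)
Gram = 4 * np.pi * (Y.T @ Y) / len(v)      # should approximate the identity
```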

Illustration of SH
[Figure: the first spherical harmonics for l = 0, 1, 2 and m = -l ... l, in x, y, z space and in polar coordinates; positive and negative lobes; odd and even components.]

Example of approximation
Efficient rendering: known shape, complex (compressed) illumination.
[Figure: exact image vs. 9-term approximation]
Not good for high-frequency (sharp) effects (specularities)!
[Ramamoorthi and Hanrahan: An efficient representation for irradiance environment maps, SIGGRAPH 01]

Relation between SH and PCA [Ramamoorthi PAMI 2002]
Prediction: 3 basis images capture 91% of the variance; 5 capture 97%.
Empirical: 3 basis images capture 90% of the variance; 5 capture 94%.
[Figure: per-component variance fractions]

Summary: Image cues
- Color (texture)
- Shading
- Shadows
- Specular highlights
- Silhouette

Properties of SH
Function decomposition: a piecewise continuous function f on the surface of the sphere can be written as
f = sum over l, m of f_lm Y_lm.
Rotational convolution on the sphere, Funk-Hecke theorem: for k a circularly symmetric, bounded, integrable function on [-1, 1],
(k * f)_lm = sqrt(4 pi / (2l + 1)) k_l f_lm.

Reflectance as convolution
One light: Lambertian reflectance is the light convolved with the Lambertian (half-cosine) kernel k(u) = max(u, 0).
Integrated light, SH representation: with light coefficients l_lm and kernel coefficients k_l, the convolution theorem gives the reflectance coefficients
r_lm = sqrt(4 pi / (2l + 1)) k_l l_lm.

Convolution kernel
Lambertian kernel k(u) = max(u, 0). Asymptotic behavior of k_l for large l: the even coefficients decay like l^-2, and the odd coefficients vanish for l > 1.
A second-order approximation (l <= 2) accounts for over 99% of the variability: k acts like a low-pass filter. [Basri & Jacobs 01] [Ramamoorthi & Hanrahan 01]
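The kernel coefficients can be checked numerically. For a circularly symmetric kernel, k_l = sqrt(pi (2l + 1)) times the integral of u P_l(u) over [0, 1]; the slide's claim that orders l <= 2 capture over 99% of the energy then follows from Parseval (the total energy of max(u, 0) over the sphere is 2 pi / 3). This is a sketch under that convention; function name is mine.

```python
import numpy as np
from numpy.polynomial import legendre

def lambert_coeff(l, num=200001):
    """SH coefficient k_l of the Lambertian kernel k(u) = max(u, 0):
    k_l = sqrt(pi*(2l+1)) * integral_0^1 u P_l(u) du  (trapezoid rule)."""
    u = np.linspace(0.0, 1.0, num)
    Pl = legendre.legval(u, [0] * l + [1])    # Legendre polynomial P_l
    vals = u * Pl
    integral = np.sum(vals[:-1] + vals[1:]) * (u[1] - u[0]) / 2.0
    return np.sqrt(np.pi * (2 * l + 1)) * integral

k = np.array([lambert_coeff(l) for l in range(10)])
total_energy = 2 * np.pi / 3                  # integral of k(u)^2 over the sphere
low_order_fraction = (k[:3] ** 2).sum() / total_energy
```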

From reflectance to images
Unit sphere -> general shape: rearrange the normals of the sphere. The reflectance computed on a sphere gives the intensity at any image point with the same normal.

Shape from shading
Given: one image of an object illuminated by a distant light source.
Assume: Lambertian object with known or constant albedo (usually assumed 1), orthographic camera, known light direction; ignore shadows and interreflections.
Recover: normals (surface gradients (p, q) in gradient space).
Lambertian reflectance depends only on n(p, q): the radiance of one pixel constrains the normal to a curve, so the problem is ILL-POSED.

Variational SFS
Image info: shading. Recovers: integrated normals.
Defined by Horn and others in the 70s as a variational formulation with regularization.
Shown to be ill-posed [Brooks 92] (e.g. the convex/concave ambiguity). Classical solution: add regularization and integrability constraints.
Most published algorithms are non-convergent [Durou and Maitre 96].

Examples of results
[Figure: synthetic images reconstructed with Tsai and Shah's method (1994) and Pentland's method (1994)]

Well-posed SFS
[Prados ICCV03, ECCV04] reformulated SFS as a well-posed problem.
Lambertian reflectance; orthographic camera (surface point (x, f(x))) or perspective camera.
The resulting Hamilton-Jacobi equations have no smooth solutions and require boundary conditions.

Well-posed SFS (2)
Hamilton-Jacobi equations: no smooth solutions; require boundary conditions.
Solutions:
- Imposing smooth solutions is not practical because of image noise.
- Compute viscosity solutions [Lions et al. 93] (smooth almost everywhere); these still require boundary conditions.
E. Prados: a general framework characterizing viscosity solutions (based on Dirichlet boundary conditions), efficient numerical schemes for orthographic and perspective cameras, and a proof that SFS is well-posed for a finite light source [Prados ECCV04].

Shading: summary
Space of all images (Lambertian object, distant illumination, one orthographic view):
1. 3D subspace (+ convex objects, no shadows)
2. Illumination cone: convex cone
3. Spherical harmonic representation: linear combination of harmonic images (practical 9D basis)
Reconstruction:
- Shape from shading: single light source, one image, unit albedo, known light; ill-posed, needs additional constraints.
- Photometric stereo: multiple images / one view, arbitrary albedo, known lights.
- Uncalibrated photometric stereo: unknown lights; GBR ambiguity, a family of solutions.

Extension to multiple views
Problem: PS/SFS from one view gives an incomplete object.
Solution: extend to multiple views (rotating object, varying light).
Problem: the pixel correspondence is no longer known.
Solution: iterative estimation of normals/light and shape, with an initial surface from SFM or the visual hull.
1. SFM: Kriegman et al. ICCV05; Zhang, Seitz et al. ICCV03.
2. Visual hull: Cipolla, Vogiatzis ICCV05, CVPR06.
[Figure: input images, initial surface, refined surface]

Multiview PS + SFM points [Kriegman et al. ICCV05] [Zhang, Seitz et al. ICCV03]
1. SFM from corresponding points: cameras and an initial surface (Tomasi-Kanade).
2. Iterate:
- factorize the intensity matrix into light and normals (up to a GBR ambiguity);
- integrate the normals;
- correct the GBR using the SFM points (constrain the surface to pass close to the points).
[Figure: images, initial surface, integrated surface, rendered final surface]

Multiview PS + frontier points [Cipolla, Vogiatzis ICCV05, CVPR06]
1. Initial surface: the visual hull (convex envelope of the object).
2. Initial light positions from frontier points: the plane through such a point and the camera centers is tangent to the object, so the normal is known there.
3. Alternate photometric normals / surface (mesh):
- photometric normals v vs. surface normals n from the mesh;
- the mesh handles occlusions and correspondence between the images.

Multiview PS + frontier points

Stereo [Birkbeck]
Assumptions: two images, Lambertian reflectance, textured surfaces.
Image info: texture. Recover: per-pixel depth.
Approach: triangulation of corresponding points;
- corresponding points recovered by correlation of small patches around each point;
- calibrated cameras: search along epipolar lines.

Rectified images

Disparity
For rectified cameras with focal length f and baseline B (camera centers at (0, 0) and (B, 0)), a scene point at depth Z projects to xl and xr with disparity d = xl - xr, and
Z = f B / d.
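The inverse relation between disparity and depth can be sketched in one line (units are my assumption: f and d in pixels, B in meters gives Z in meters):

```python
import numpy as np

def depth_from_disparity(d, f, B):
    """Depth from disparity for rectified cameras: Z = f * B / d.

    d: disparity (pixels, scalar or array), f: focal length (pixels),
    B: baseline (meters). Larger disparity means a closer point.
    """
    return f * B / np.asarray(d, dtype=float)

# A 5-pixel disparity with f = 500 px and B = 0.1 m puts the point ~10 m away.
Z = depth_from_disparity(5.0, 500.0, 0.1)
```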

Correlation scores
With respect to the first image (calibrated cameras): a pixel in I1 is matched to pixels in I2 by comparing small planar patches.
1. Patch plane parallel to the image planes, no illumination variation: sum of absolute/squared differences.
2. Compensate for illumination change: normalized cross-correlation.
3. Arbitrary plane: warped patches.
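The illumination-compensating score (case 2) is normalized cross-correlation; a minimal sketch (function name mine), which is invariant to affine intensity changes I -> a*I + b with a > 0:

```python
import numpy as np

def ncc(patch1, patch2, eps=1e-8):
    """Normalized cross-correlation of two equally sized patches in [-1, 1];
    invariant to affine illumination change a*I + b (a > 0)."""
    a = patch1 - patch1.mean()
    b = patch2 - patch2.mean()
    return float(np.sum(a * b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps))

# A patch matched against a brightened/contrast-stretched copy of itself
# still scores ~1; against its negative, ~-1.
rng = np.random.default_rng(4)
p1 = rng.standard_normal((7, 7))
score_same = ncc(p1, 2.0 * p1 + 3.0)
score_opposite = ncc(p1, -p1 + 1.0)
```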

Specular surfaces
Image info: shading + specular highlights. The reflectance equation requires the BRDF and the light position.
Approaches:
- Filter out specular highlights (bright, appear at sharp angles).
- Parametric reflectance models.
- Non-parametric reflectance map (discretization of the BRDF).
- Account for general reflectance via Helmholtz reciprocity [Magda et al. ICCV01, IJCV03].

Shape and materials by example [Hertzmann, Seitz CVPR 2003, PAMI 2005]
Reconstructs objects with a general BRDF and no illumination information.
Idea: a reference object of the same material but with known geometry (a sphere) is inserted into the scene.
[Figure: reference images, multiple materials, results]

Summary of image cues
- Stereo: textured Lambertian reflectance; constant light [SAD] or varying light [NCC]. + Reconstructs texture, depth discontinuities, the complete object. - Needs texture; occlusions.
- Shading (SFS): uniform Lambertian reflectance; constant light. + Works on uniform material. - Not robust; needs the light pose.
- Shading (PS): uniform/textured Lambertian reflectance; varying light. + Uniform or varying albedo. - Does not reconstruct depth discontinuities; one view.