Lecture 2: Image cues. Shading, Stereo, Specularities. Dana Cobzas, PIMS Postdoc

Image cues: color (texture), shading, shadows, specular highlights, silhouette.

Image cues
Shading [reconstructs normals; the focus of this lecture]: shape from shading (SFS), photometric stereo.
Specular highlights [usually ignored or filtered; otherwise a parametric BRDF].
Texture [reconstructs 3D]: stereo (relates two views).
Silhouette [reconstructs 3D]: shape from silhouette.

Geometry from shading: shading reveals 3D shape.
Shape from shading [Horn]: one image, known light direction, known BRDF (unit albedo); ill-posed, needs additional constraints (integrability, ...).
Photometric stereo [Silver 80, Woodham 81]: several images under different lights, unknown Lambertian albedo; (1) known lights, (2) unknown lights. Reconstruct the normals, then integrate the surface.

Shading: Lambertian reflectance I = rho (n . l), with albedo rho, surface normal n and light direction l. Fixing the light and the albedo, the reflectance can be expressed as a function of the normal only.
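
As a minimal illustration of this model (a sketch; the function name is ours, not from the slides):

```python
import numpy as np

def lambertian_intensity(normal, light_dir, albedo=1.0):
    """Lambertian shading: I = albedo * max(n . l, 0).

    normal and light_dir are 3-vectors; both are normalized here,
    so the result depends only on orientation and albedo."""
    n = np.asarray(normal, float)
    l = np.asarray(light_dir, float)
    n, l = n / np.linalg.norm(n), l / np.linalg.norm(l)
    return albedo * max(float(n @ l), 0.0)

# Example: a surface tilted 45 degrees away from a frontal light
print(lambertian_intensity([0, 1, 1], [0, 0, 1]))  # ~0.707
```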

Surface parametrization: depth z = f(x, y). Surface orientation: tangent plane, normal vector, gradient-space representation (p, q, -1).
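
Written out, the gradient-space quantities used on this slide are (standard convention, stated here for reference):

\[
p = \frac{\partial f}{\partial x}, \qquad q = \frac{\partial f}{\partial y}, \qquad
\mathbf{n} \propto (p,\, q,\, -1), \qquad
\hat{\mathbf{n}} = \frac{(p,\, q,\, -1)}{\sqrt{p^2 + q^2 + 1}}.
\]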

Lambertian reflectance map (examples: p_s = 0, q_s = 0 and p_s = -2, q_s = -1). The local surface orientations that produce the same intensity form conic-section contours in gradient space.
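
For reference, the standard Lambertian reflectance map, with the light direction written in gradient space as (p_s, q_s), is

\[
R(p, q) = \frac{1 + p\,p_s + q\,q_s}{\sqrt{1 + p^2 + q^2}\,\sqrt{1 + p_s^2 + q_s^2}},
\]

and the iso-brightness curves R(p, q) = c are the conic sections mentioned above.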

Specular reflectance.
Photometric stereo: one image = one light direction, and the radiance of one pixel constrains the normal to a curve. Two images = two light directions, leaving two candidate normals; a third image disambiguates between them, so the normal is the intersection of three curves.

Photometric stereo [Birkbeck]
Given: n >= 3 images with different known light directions (light at infinity).
Assume: Lambertian object, orthographic camera; ignore shadows and interreflections.
One image corresponds to one light direction; n images to n light directions.
Recover: albedo = magnitude of the recovered vector, normal = its normalized direction.
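
A minimal least-squares sketch of this recipe under the stated assumptions (illustrative names, no shadow handling):

```python
import numpy as np

def photometric_stereo(I, L):
    """Classic calibrated Lambertian photometric stereo.

    I : (n, h, w) stack of n >= 3 images under distant lights
    L : (n, 3) unit light directions
    Returns per-pixel albedo (h, w) and unit normals (h, w, 3).

    Solves I = L @ (albedo * n) per pixel in the least-squares sense;
    shadows and interreflections are ignored, as assumed on the slide."""
    n, h, w = I.shape
    b, *_ = np.linalg.lstsq(L, I.reshape(n, -1), rcond=None)  # (3, h*w)
    albedo = np.linalg.norm(b, axis=0)                        # magnitude
    normals = (b / np.maximum(albedo, 1e-8)).T.reshape(h, w, 3)
    return albedo.reshape(h, w), normals
```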

Depth from normals (1) [D. Kriegman]: integrate the normals (gradients p, q) across the image. Simple approach: integrate along a curve from a starting point, e.g. (1) fix the depth at the origin, (2) integrate p along the first row, (3) integrate q down each column.
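
A toy numpy sketch of this path-integration scheme (exactly the non-unique, noise-sensitive approach the next slide warns about):

```python
import numpy as np

def integrate_naive(p, q):
    """Naive depth from gradients: integrate p along the first row,
    then q down each column.

    p, q : (h, w) arrays with p = df/dx (columns), q = df/dy (rows).
    The result depends on the integration path when p, q are noisy."""
    h, w = p.shape
    f = np.zeros((h, w))
    f[0, 1:] = np.cumsum(p[0, 1:])                       # along the first row
    f[1:, :] = f[0, :] + np.cumsum(q[1:, :], axis=0)     # down each column
    return f
```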

Depth from normals (2): integrating along a closed curve might not return to the starting depth because of noise, so the depth is not unique (it depends on the path). Impose integrability: a normal map that produces a unique depth map is called integrable, enforced by the condition p_y = q_x (equivalently f_xy = f_yx). [Escher's impossible figures illustrate non-integrability.]

Impose integrability [Horn, Robot Vision 1986]: solve for f(x, y) from p, q by minimizing the cost functional integral of (f_x - p)^2 + (f_y - q)^2; iterative update using the calculus of variations; integrability is then naturally satisfied. f(x, y) can be discrete or represented in terms of basis functions. Example: a Fourier basis (DFT) gives a closed-form solution [Frankot, Chellappa: A method for enforcing integrability in shape from shading algorithms, PAMI 1988].
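
A compact FFT sketch of the Fourier-basis closed-form solution in the Frankot-Chellappa style (assumes periodic boundary handling; a practical implementation would pad or window the gradients):

```python
import numpy as np

def frankot_chellappa(p, q):
    """Enforce integrability in the Fourier domain.

    p, q : (h, w) estimated gradients df/dx, df/dy.
    Returns the integrable depth whose gradients are closest (in least
    squares) to p, q. The DC component is arbitrary and set to 0."""
    h, w = p.shape
    wx = 2 * np.pi * np.fft.fftfreq(w)          # frequencies along x (columns)
    wy = 2 * np.pi * np.fft.fftfreq(h)          # frequencies along y (rows)
    WX, WY = np.meshgrid(wx, wy)
    P, Q = np.fft.fft2(p), np.fft.fft2(q)
    denom = WX**2 + WY**2
    denom[0, 0] = 1.0                           # avoid division by zero at DC
    F = (-1j * WX * P - 1j * WY * Q) / denom
    F[0, 0] = 0.0                               # unknown constant offset
    return np.real(np.fft.ifft2(F))
```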

Example: integrability [Neil Birkbeck]. (Figure: images under different lights, recovered normals, integrated depth, original vs. reconstructed surface.)

All images. With unknown lights and normals: is it possible to reconstruct the surface and the light positions? What is the set of images of an object under all possible lighting conditions? [Debevec et al]

Space of all images. Problem setting: Lambertian object, single view, orthographic camera, different (distant) illumination conditions.
1. 3D subspace [Moses 93][Nayar, Murase 96][Shashua 97]: a 3D subspace.
2. Illumination cone [Belhumeur and Kriegman CVPR 1996]: a convex cone.
3. Spherical harmonic representation [Ramamoorthi and Hanrahan SIGGRAPH 01][Basri and Jacobs PAMI 2003], assuming convex objects (no shadows): a linear combination of harmonic images (practical 9D basis).

3D illumination subspace. Lambertian reflection at one image point x: I(x) = b(x)^T l, where b(x) is the albedo-scaled normal. Whole image (as a vector): I = B l, stacking the rows b(x)^T into B; several images stack as [I_1 ... I_k] = B [l_1 ... l_k] = B L. The set of images of a Lambertian surface with no shadowing is a subset of a 3D subspace [Moses 93][Nayar, Murase 96][Shashua 97].

Reconstructing the basis with PCA: any three images without shadows span L; L is represented by an orthogonal basis B. How to extract B from images?

Shadows, single light source. (Figure: example images with all pixels illuminated vs. with shadows; cells S_0, ..., S_5.) L_i is the intersection of L with an orthant i of R^n, corresponding to a cell S_i of light source directions for which the same pixels are in shadow and the same pixels are illuminated. P(L_i) is the projection of L_i that sets all its negative components to 0 (a convex cone). The set of images of an object produced by a single light source is the union of the P(L_i).

Shadows, multiple lights. The image illuminated by two light sources l_1, l_2 lies on the line between the images produced by l_1 and by l_2 alone. The set of images of an object produced by an arbitrary number of lights is the convex hull of this union: the illumination cone C.

Illumination cone: the set of images of any Lambertian object under all lighting conditions is a convex cone in image space. [Belhumeur, Kriegman: What is the set of images of an object under all possible illumination conditions?, IJCV 98]

Do ambiguities exist? Can two different objects produce the same illumination cone? YES (convex object). If B spans L, then for any A in GL(3), B* = BA also spans L, and I = B* S* = (BA)(A^{-1} S) = B S: B lit with S and B* lit with S* give the same image. When doing PCA, the resulting basis is generally not normal times albedo.

GBR transformation. Surface integrability: if the true B is integrable, the transformed B* = BA is integrable only when A is a generalized bas-relief (GBR) transformation. [Belhumeur et al: The bas-relief ambiguity, IJCV 99]

Uncalibrated photometric stereo: without knowing the light source positions, we can recover shape only up to a GBR ambiguity. Algorithm (sketch of step 1 below): (1) from the n input images compute B* by SVD; (2) find A such that B* A is close to integrable; (3) integrate the normals to find depth. Comments: GBR preserves shadows [Kriegman, Belhumeur 2001]; if the albedo is known (or constant) the ambiguity reduces to a binary subgroup [Belhumeur et al 99]; interreflections resolve the ambiguity [Kriegman CVPR05].
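
A minimal sketch of step 1 of the recipe above, a rank-3 factorization of the image stack (illustrative names; resolving the GL(3)/GBR ambiguity in steps 2-3 is not shown):

```python
import numpy as np

def uncalibrated_ps_factorization(I):
    """Rank-3 factorization of the intensity matrix via SVD.

    I : (n, npix) matrix, one image per row; shadows ignored.
    Returns B_star (npix, 3) pseudo-normals and S_star (3, n) pseudo-lights
    with I.T ~= B_star @ S_star. These are determined only up to an
    invertible 3x3 matrix A (and, after imposing integrability, up to a
    GBR transformation)."""
    U, s, Vt = np.linalg.svd(I.T, full_matrices=False)
    B_star = U[:, :3] * np.sqrt(s[:3])               # pseudo surface (normal * albedo)
    S_star = np.sqrt(s[:3])[:, None] * Vt[:3, :]     # pseudo light matrix
    return B_star, S_star
```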

Spherical harmonic representation. Theory: an infinite number of light directions makes the space of images infinite dimensional [Illumination cone, Belhumeur and Kriegman 96]. Practice (empirical): a few basis images are enough [Hallinan 94, Epstein 95]; an image is approximated as a weighted sum of a few harmonic basis images. [Ramamoorthi and Hanrahan: Analytic PCA construction for theoretical analysis of lighting variability in images of a Lambertian object, SIGGRAPH 01][Basri and Jacobs: Lambertian reflectance and linear subspaces, PAMI 2003] Simplification: convex objects (no shadows, no interreflections).

Basis approximation

Spherical harmonics basis: the analog on the sphere of the Fourier basis on the line or circle; the angular portion of the solution of Laplace's equation in spherical coordinates; an orthonormal basis for the functions on the surface of the sphere. Each Y_lm(theta, phi) = N_lm P_lm(cos theta) e^{i m phi} is built from a normalization factor, an associated Legendre function, and a Fourier factor.
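
For concreteness, a minimal numpy sketch of the first nine real spherical harmonics (l = 0, 1, 2) using the commonly quoted constants (the function name is illustrative):

```python
import numpy as np

def sh9_basis(n):
    """First 9 real spherical harmonics evaluated at unit directions
    n of shape (..., 3). Returns shape (..., 9), ordered
    [Y00, Y1-1, Y10, Y11, Y2-2, Y2-1, Y20, Y21, Y22]."""
    x, y, z = n[..., 0], n[..., 1], n[..., 2]
    return np.stack([
        0.282095 * np.ones_like(x),      # Y00
        0.488603 * y,                    # Y1-1
        0.488603 * z,                    # Y10
        0.488603 * x,                    # Y11
        1.092548 * x * y,                # Y2-2
        1.092548 * y * z,                # Y2-1
        0.315392 * (3 * z**2 - 1),       # Y20
        1.092548 * x * z,                # Y21
        0.546274 * (x**2 - y**2),        # Y22
    ], axis=-1)
```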

Illustration of the spherical harmonics (odd and even components, positive and negative lobes), shown both in x, y, z space coordinates and in polar coordinates.

Properties of SH. Function decomposition: any piecewise continuous function f on the surface of the sphere can be written as a sum of harmonics, f = sum over l, m of f_lm Y_lm, where the coefficients f_lm are the projections of f onto Y_lm. Rotational convolution on the sphere, Funk-Hecke theorem: for k a circularly symmetric, bounded, integrable function on [-1, 1], convolving with k multiplies each order-l coefficient by a constant that depends only on k and l.

Reflectance as convolution. Lambertian reflectance for one light is max(cos theta, 0), the Lambertian (half-cosine) kernel. For integrated (environment) light, the reflectance is the convolution of the lighting with the Lambertian kernel; in the SH representation the convolution theorem makes this a per-order product: each lighting coefficient L_lm is multiplied by a factor that depends only on the kernel and on l.

Convolution kernel: the Lambertian kernel [Basri & Jacobs 01][Ramamoorthi & Hanrahan 01]. A second-order approximation accounts for 99% of the variability; k acts like a low-pass filter (the coefficients k_l fall off quickly; the slide shows their asymptotic behavior for large l).

From reflectance to images: unit sphere to general shape. The reflectance is computed on the sphere of normals; an image point with normal n takes the reflectance value stored at n (the sphere values are rearranged onto the shape).

Example of approximation: exact image vs. 9-term approximation. Not good for high-frequency (sharp) effects such as specularities! [Ramamoorthi and Hanrahan: An efficient representation for irradiance environment maps, SIGGRAPH 01] Efficient rendering: known shape, complex illumination (compressed). A sketch of the 9-term irradiance evaluation follows.
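
A minimal sketch of evaluating irradiance from the 9-term lighting approximation, using the Lambertian kernel weights (pi, 2*pi/3, pi/4) from the cited Ramamoorthi-Hanrahan work; it reuses sh9_basis from the sketch a few slides back, and the names are illustrative:

```python
import numpy as np

# Lambertian kernel weights for l = 0, 1, 2 (one per basis function)
A_HAT = np.array([np.pi] + [2 * np.pi / 3] * 3 + [np.pi / 4] * 5)

def irradiance_sh9(L_coeffs, normals):
    """Irradiance from 9 SH lighting coefficients.

    L_coeffs : (9,) SH coefficients of the distant environment lighting,
               in the same order as sh9_basis.
    normals  : (..., 3) unit surface normals.
    E(n) = sum_lm A_hat_l * L_lm * Y_lm(n); the Lambertian pixel value
    is then (albedo / pi) * E(n)."""
    Y = sh9_basis(normals)                 # (..., 9)
    return Y @ (A_HAT * L_coeffs)
```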

Relation between SH and PCA [Ramamoorthi PAMI 2002]: the leading modes carry 42%, 33%, 16%, 4% and 2% of the variance. Prediction: 3 basis images capture 91% of the variance, 5 capture 97%. Empirical: 3 basis images capture 90%, 5 capture 94%.

Shape from shading.
Given: one image of an object illuminated by a distant light source.
Assume: Lambertian object with known or constant albedo (usually taken as 1), orthographic camera, known light direction; ignore shadows and interreflections.
Recover: normals.
In gradient space the Lambertian reflectance depends only on the normal, i.e. on (p, q); the radiance of one pixel constrains the normal to a curve, so the problem is ILL-POSED.

Variational SFS: minimize an energy combining a shading (image) term, an integrability term on the integrated normals, and a regularization term; this recovers the normals/depth. Defined by Horn and others in the 70s; variational formulation; shown to be ill-posed [Brooks 92] (e.g. the convex/concave ambiguity); the classical remedy is to add regularization and integrability constraints; most published algorithms are non-convergent [Durou and Maitre 96].

Examples of results on synthetic images: Pentland's method (1994), Tsai and Shah's method (1994).

Well-posed SFS. [Prados ICCV03, ECCV04] reformulated SFS as a well-posed problem, for Lambertian reflectance with both orthographic cameras (surface point (x, f(x))) and perspective cameras. The resulting Hamilton-Jacobi equations have no smooth solutions and require boundary conditions.
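
For the simplest orthographic case (unit albedo, frontal light l = (0, 0, 1)), the image irradiance equation reduces to an eikonal equation, which makes the Hamilton-Jacobi structure referred to above explicit:

\[
I(x) = \frac{1}{\sqrt{1 + |\nabla f(x)|^2}}
\quad\Longleftrightarrow\quad
|\nabla f(x)| = \sqrt{\frac{1}{I(x)^2} - 1}.
\]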

Well-posed SFS (2). Hamilton-Jacobi equations: no smooth solutions, and boundary conditions are required. Solutions: (1) impose smooth solutions, which is not practical because of image noise; (2) compute viscosity solutions [Lions et al. 93] (smooth almost everywhere), which still require boundary conditions. E. Prados: a general framework characterizing viscosity solutions (based on Dirichlet boundary conditions), efficient numerical schemes for orthographic and perspective cameras, and a proof that SFS is well-posed for a finite (nearby) light source [Prados ECCV04].

Shading: summary.
Space of all images (Lambertian object, distant illumination, one orthographic view, single light source): (1) 3D subspace; (2) illumination cone: a convex cone; (3) spherical harmonic representation (+ convex objects): a linear combination of harmonic images (practical 9D basis).
Reconstruction: (1) shape from shading: one image, unit albedo, known light; ill-posed, needs additional constraints. (2) Photometric stereo: multiple images from one view, arbitrary albedo, known lights. (3) Uncalibrated photometric stereo: unknown lights; GBR ambiguity, a family of solutions.

Extension to multiple views. Problem: PS/SFS from one view gives an incomplete object. Solution: extend to multiple views (rotating object, light variation). Problem: we no longer know the pixel correspondences. Solution: iterative estimation of normals/light and shape, starting from an initial surface obtained from SFM or from the visual hull. (Figure: input images, initial surface, refined surface.) 1. SFM-based: Kriegman et al ICCV05; Zhang, Seitz et al. ICCV 03. 2. Visual-hull-based: Cipolla, Vogiatzis ICCV05, CVPR06.

Multiview PS + SFM points [Kriegman et al ICCV05][Zhang, Seitz et al. ICCV 03]. 1. SFM from corresponding points gives the cameras and an initial surface (Tomasi-Kanade). 2. Iterate: factorize the intensity matrix into light and normals (up to a GBR ambiguity); integrate the normals; correct the GBR using the SFM points (constrain the surface to pass close to them). (Figure: images, initial surface, integrated surface, rendered result, final surface.)

Multiview PS + frontier points [Cipolla, Vogiatzis ICCV05, CVPR06]. 1. Initial surface from the visual hull, the envelope of the object given by its silhouettes. 2. Initial light positions from frontier points: the plane through such a point and the camera centers is tangent to the object, so the normal there is known. 3. Alternate between photometric normals and the surface (mesh): the mesh provides surface normals, occlusions and image correspondences, and is then updated to agree with the photometric normals.

Multiview PS + frontier points (results).

Stereo [Birkbeck]. Assumptions: two images, Lambertian reflectance, textured surfaces. Image info: texture. Recover: per-pixel depth. Approach: triangulation of corresponding points; correspondences are recovered by correlating small patches around each point; with calibrated cameras the search is along epipolar lines.

Rectified images

Disparity. (Figure: rectified stereo geometry with camera centers at (0, 0) and (B, 0), image plane at Z = f, image points x_l and x_r, disparity d.)
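
From the quantities in the figure, the standard disparity-depth relation for rectified cameras with baseline B and focal length f is

\[
d = x_l - x_r = \frac{fB}{Z}, \qquad Z = \frac{fB}{d}.
\]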

Correlation scores. Point: with calibrated cameras, a pixel in I_1 is matched against pixels in I_2 along its epipolar line. Small planar patch (defined with respect to the first image): 1. plane parallel to the image planes, no illumination variation; 2. compensate for illumination change; 3. arbitrary plane.
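
A toy sketch of NCC-based window matching along a rectified scanline (names and window handling are illustrative; real implementations add subpixel refinement, left-right consistency checks, etc.):

```python
import numpy as np

def ncc(patch_a, patch_b):
    """Normalized cross-correlation of two equally sized patches
    (mean-subtracted, hence robust to affine illumination changes)."""
    a = patch_a - patch_a.mean()
    b = patch_b - patch_b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def disparity_ncc(I1, I2, y, x, half=3, max_disp=64):
    """Winner-take-all matching: compare a (2*half+1)^2 window around
    (y, x) in I1 with windows shifted left by d in I2 and return the
    best-scoring disparity d."""
    ref = I1[y - half:y + half + 1, x - half:x + half + 1]
    best_d, best_score = 0, -np.inf
    for d in range(0, min(max_disp, x - half) + 1):
        cand = I2[y - half:y + half + 1, x - d - half:x - d + half + 1]
        score = ncc(ref, cand)
        if score > best_score:
            best_d, best_score = d, score
    return best_d
```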

Specular surfaces. The reflectance equation requires the BRDF and the light position; the image information used is shading plus specular highlights. Approaches: 1. filter out specular highlights (bright, appear at sharp angles); 2. parametric reflectance (a sketch follows); 3. non-parametric reflectance map (discretization of the BRDF); 4. account for general reflectance via Helmholtz reciprocity [Magda et al ICCV 01, IJCV03].
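
As an illustration of option 2 (parametric reflectance), a minimal Blinn-Phong evaluation; this particular model and its parameters are our example, the slides do not commit to a specific parametric BRDF:

```python
import numpy as np

def blinn_phong(n, l, v, kd=0.7, ks=0.3, shininess=32.0):
    """Blinn-Phong shading for unit normal n, light direction l and
    view direction v: a diffuse term plus a specular lobe around the
    half-vector, with illustrative coefficients kd, ks, shininess."""
    n, l, v = (np.asarray(a, float) for a in (n, l, v))
    h = (l + v) / np.linalg.norm(l + v)            # half-vector
    diffuse = kd * max(float(n @ l), 0.0)
    specular = ks * max(float(n @ h), 0.0) ** shininess
    return diffuse + specular
```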

Shape and materials by example [Hertzmann, Seitz CVPR 2003, PAMI 2005]: reconstructs objects with a general BRDF and no illumination information. Idea: a reference object made of the same material but with known geometry (a sphere) is inserted into the scene. (Figure: reference images, multiple materials, results.)

Summary of image cues (reflectance, light, pros, cons).
Stereo: textured Lambertian reflectance; constant light [SAD] or varying light [NCC]. Pros: recovers texture, recovers depth discontinuities, complete object. Cons: needs texture, occlusions.
Shading (SFS): uniform Lambertian reflectance, constant light. Pros: works on uniform material. Cons: not robust, needs the light pose.
Shading (PS): uniform/textured Lambertian reflectance, varying light. Pros: handles uniform or varying albedo. Cons: does not reconstruct depth discontinuities, one view.