What can be Known about the Radiometric Response from Images? Michael Grossberg and Shree Nayar, CAVE Lab, Columbia University. ECCV Conference, May 2002, Copenhagen, Denmark. Partially funded by an NSF ITR Award.

Radiometric Response Function. Scene radiance R produces image plane irradiance I, and the camera maps irradiance to a gray-level response u in [0, 255]. Response function: f(I) = u. Inverse response function: g, with g(u) = I. [Figure: response curve, gray-level u versus irradiance I.]
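A minimal numeric sketch of these definitions, assuming a gamma-curve response with normalized irradiance and gray-levels (the gamma value 2.2 and the normalization are illustrative, not from the slides):

```python
import numpy as np

# Illustrative parametric model: gamma response f(I) = I**(1/2.2), mapping
# normalized irradiance I in [0, 1] to a normalized gray-level u in [0, 1].
# Real cameras quantize u to 0..255.
GAMMA = 2.2

def f(I):
    """Response function f(I) = u: irradiance to gray-level."""
    return np.power(I, 1.0 / GAMMA)

def g(u):
    """Inverse response function g(u) = I, so g(f(I)) = I."""
    return np.power(u, GAMMA)

I = np.linspace(0.0, 1.0, 11)
assert np.allclose(g(f(I)), I)   # g inverts f on the whole range
```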

Radiometric Calibration is Critical for Photometric Applications. Example: photometric stereo. Lighting changes and 3D structure changes alter scene radiance; scene radiance changes alter image irradiance; and the radiometric response f turns image irradiance into image brightness. Without the inverse response g, brightness cannot be mapped back to irradiance, so 3D structure cannot be recovered.

Response Recovery from Images. What is measured: images at different exposures (A, B, C, D). What is needed: correspondence of gray-levels between images (gray-level u_A in image A corresponds to u_B in image B). What is recovered: the inverse radiometric response g and the exposure ratios k_1, k_2, k_3. Recovery algorithms: S. Mann and R. Picard, 1995; P. E. Debevec and J. Malik, 1997; T. Mitsunaga and S. K. Nayar, 1999; S. Mann, 2001; Y. Tsin, V. Ramesh, and T. Kanade, 2001.

How is Radiometric Calibration Done? Existing recovery algorithms take images at different exposures, use geometric correspondences to find corresponding gray-levels, and recover the inverse response g and exposure ratios k_1, k_2, k_3. We eliminate the need for geometric correspondences, in both static and dynamic scenes. We also find all ambiguities in recovery, and the assumptions that break them.

Constraint Equations. Placing a filter (or changing exposure) scales irradiance, so the darker image A and brighter image B satisfy the constraint on irradiance I_B = k I_A. Constraint on g: g(u_B) = k g(u_A). The Brightness Transfer Function T relates the gray-levels of the two images: u_B = T(u_A). Constraint on g in terms of T: g(T(u_A)) = k g(u_A).
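The constraint can be checked numerically; a sketch assuming a gamma-curve inverse response and exposure ratio k = 2 (both values illustrative):

```python
import numpy as np

GAMMA = 2.2
f = lambda I: np.clip(I, 0.0, 1.0) ** (1.0 / GAMMA)   # response (assumed gamma)
g = lambda u: u ** GAMMA                              # inverse response

k = 2.0   # exposure ratio: the brighter image receives k times the irradiance

def T(u):
    """Brightness transfer function: u_B = T(u_A) = f(k * g(u_A))."""
    return f(k * g(u))

# The constraint g(T(u)) = k g(u) holds wherever k*g(u) is unsaturated:
u = np.linspace(0.0, 0.7, 8)
assert np.allclose(g(T(u)), k * g(u))
```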

How Does the Constraint Apply? When the exposure ratio k is known, the constraint g(T(u)) = k g(u) makes the curve self-similar.

Self-Similar Ambiguity: Can We Recover g? Choose anything monotone on [T^{-1}(1), 1] and copy it downward, scaled by 1/k, 1/k^2, 1/k^3, ..., and the constraint is still satisfied. Conclusions: the constraint gives no information in [T^{-1}(1), 1]; with known k, the only ambiguity is this self-similar ambiguity; regularity assumptions break the ambiguity.
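The "choose anything and copy it downward" construction can be sketched as code. The gamma-2.2 true response, k = 2, and the cubic curve chosen on the top segment are all illustrative assumptions:

```python
import numpy as np

# Self-similar construction sketch: build a second inverse response that
# satisfies the same constraint g(T(u)) = k g(u) as the true one.
GAMMA, k = 2.2, 2.0
g_true = lambda u: u ** GAMMA
f = lambda I: I ** (1.0 / GAMMA)
T = lambda u: f(k * g_true(u))    # brightness transfer function
u_star = f(1.0 / k)               # T^{-1}(1), since T(u_star) = 1

# Choose *anything* monotone on [u_star, 1] with values 1/k at u_star, 1 at 1:
top = lambda u: (1/k) + (1 - 1/k) * ((u - u_star) / (1 - u_star)) ** 3

def g_alt(u):
    """Copy the chosen top segment downward, scaled by 1/k per interval (u > 0)."""
    scale = 1.0
    while u < u_star:
        u = T(u)        # map u up into [u_star, 1]
        scale /= k
    return scale * top(u)

# g_alt satisfies the constraint exactly, yet differs from g_true:
for u in (0.3, 0.5, 0.7):
    assert abs(g_alt(T(u)) - k * g_alt(u)) < 1e-12
assert abs(g_alt(0.7) - g_true(0.7)) > 0.01
```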

Exponential Ambiguity: Can We Recover g and k? For any γ, T(u) = g^{-1}(k g(u)) = (g^γ)^{-1}(k^γ g^γ(u)), so we cannot disambiguate (g^γ, k^γ) from (g, k) using T. Example: the linear brightness transfer function T(M) = 2M is produced by γ = 1/3 with k = 2^{1/3}, γ = 1/2 with k = 2^{1/2}, γ = 1 with k = 2, γ = 2 with k = 2^2, and γ = 3 with k = 2^3.
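A quick numerical check of the ambiguity, using power-law inverse responses g(u) = u^a (illustrative) so that T works out linear, as in the slide's T(M) = 2M example:

```python
import numpy as np

def make_T(a, k):
    """Brightness transfer function T(u) = g^{-1}(k g(u)) for g(u) = u**a."""
    g = lambda u: u ** a
    g_inv = lambda I: I ** (1.0 / a)
    return lambda u: g_inv(k * g(u))

# Distinct (g, k) pairs, all yielding the same linear T(u) = 2u:
T1 = make_T(a=1.0, k=2.0)          # gamma = 1,   k = 2
T2 = make_T(a=2.0, k=2.0 ** 2)     # gamma = 2,   k = 2^2
T3 = make_T(a=0.5, k=2.0 ** 0.5)   # gamma = 1/2, k = 2^(1/2)

u = np.linspace(0.01, 0.4, 10)
assert np.allclose(T1(u), 2 * u)
assert np.allclose(T2(u), 2 * u)
assert np.allclose(T3(u), 2 * u)
```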

Ambiguity with Multiple Images. Multiple exposures do not help: if the inverse response g with ratios k_1, k_2 explains the data, so does g^γ with ratios k_1^γ, k_2^γ, since k_1^γ k_2^γ = (k_1 k_2)^γ. Conclusions: we cannot recover both k and g without making assumptions on g; the exponential and self-similar ambiguities are the complete ambiguities of recovery.

Direct Recovery of Exposure Ratios. Example: g(u) = 2u/(1+u), with brightness transfer functions T_1(u) = g^{-1}(2 g(u)), T_2(u) = g^{-1}(4 g(u)), T_3(u) = g^{-1}(8 g(u)), plotted as brightness in image B against brightness in image A. The exposure ratio is the slope of T at the origin: k = T'(0) when g'(0) ≠ 0, so k_1 = 2 = T_1'(0), k_2 = 4 = T_2'(0), k_3 = 8 = T_3'(0).
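This can be verified numerically; the sketch below uses the slide's g(u) = 2u/(1+u) and estimates T'(0) by a finite difference:

```python
# The slide's example inverse response g(u) = 2u/(1+u); its inverse is
# g_inv(I) = I/(2-I), so T(u) = g_inv(k*g(u)) = k*u / (1 + u - k*u).
g = lambda u: 2.0 * u / (1.0 + u)
g_inv = lambda I: I / (2.0 - I)
T = lambda u, k: g_inv(k * g(u))

eps = 1e-6
for k in (2.0, 4.0, 8.0):
    slope_at_0 = (T(eps, k) - T(0.0, k)) / eps   # numerical T'(0)
    assert abs(slope_at_0 - k) < 1e-3            # k = T'(0) since g'(0) != 0
```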

Obtaining the Brightness Transfer Function (S. Mann, 2001). Registered static images at different exposures give a 2D gray-level histogram (gray-level in image A versus gray-level in image B); regression on this histogram yields the Brightness Transfer Function. Scenes must be static.
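A sketch of this pipeline on synthetic data (the gamma response, exposure ratio, and irradiance distribution are illustrative assumptions). With registered pixels, the joint (u_A, u_B) samples trace the curve u_B = T(u_A); for a gamma response T is a line, so a least-squares fit recovers its slope:

```python
import numpy as np

rng = np.random.default_rng(1)

GAMMA, k = 2.2, 2.0                     # illustrative response and ratio
f = lambda I: np.clip(I, 0.0, 1.0) ** (1.0 / GAMMA)

I = rng.uniform(0.0, 0.5, size=50_000)  # scene irradiance
A, B = f(I), f(k * I)                   # registered: A[i], B[i] share a pixel

# Each 2D-histogram entry (u_A, u_B) lies on u_B = T(u_A); for this gamma
# response T is linear, so least squares recovers its slope k**(1/GAMMA).
slope = np.sum(A * B) / np.sum(A * A)
assert abs(slope - k ** (1.0 / GAMMA)) < 1e-6
```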

Brightness Transfer Function via Histogram Specification. From unregistered images at different exposures, compute the brightness histograms of image A and image B; histogram specification then yields the Brightness Transfer Function without registration. Scenes may have motion.

How does Histogram Specification Work? Histogram equalization maps gray-levels in image A to cumulative area, which acts as a fake irradiance; the inverse equalization of image B maps cumulative area back to gray-levels in image B. The composition, histogram specification from A to B, equals the Brightness Transfer Function.

Why Does Histogram Specification Equal the Brightness Transfer Function? The two images show the same scene, and T is monotonic, so the area (number of pixels) with intensity ≤ u_A in image A equals the area with intensity ≤ u_B in image B. Matching these cumulative areas therefore pairs u_A with u_B = T(u_A).
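A sketch of histogram specification recovering T without registration, on synthetic data (the gamma response, k = 2, and uniform irradiance are illustrative assumptions); note the brighter image's pixels are shuffled, destroying geometric correspondence:

```python
import numpy as np

rng = np.random.default_rng(0)

GAMMA, k = 2.2, 2.0                       # illustrative response and ratio
f = lambda I: np.clip(I, 0.0, 1.0) ** (1.0 / GAMMA)

I = rng.uniform(0.0, 0.5, size=100_000)   # one scene's irradiance values
A = f(I)                                  # darker exposure
B = rng.permutation(f(k * I))             # brighter exposure, pixels shuffled:
                                          # no geometric correspondence survives

A_sorted, B_sorted = np.sort(A), np.sort(B)

def T_est(u):
    """Histogram specification: T(u) = H_B^{-1}(H_A(u)) via empirical CDFs."""
    p = np.searchsorted(A_sorted, u) / A_sorted.size          # H_A(u)
    return B_sorted[min(int(p * B_sorted.size), B_sorted.size - 1)]

# For this setup the true BTF is linear: T(u) = 2**(1/GAMMA) * u.
for u in (0.3, 0.5, 0.7):
    assert abs(T_est(u) - 2 ** (1 / GAMMA) * u) < 0.02
```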

Results: Object Motion. [Figure: recovered inverse radiometric response curves for the red, green, and blue channels (response versus irradiance), compared against Macbeth chart data.]

Results: Camera Motion. [Figure: recovered inverse radiometric response curves for the red, green, and blue channels, compared against Macbeth chart data.]

Results: Object and Camera Motion. [Figure: recovered inverse radiometric response curves for the red, green, and blue channels, compared against Macbeth chart data.]

Conclusions: What can be Known about the Inverse Response g from Images? A1: Recovery of g from T faces the self-similar ambiguity when the exposure ratio k is known, and additionally the exponential ambiguity when k is unknown; assumptions on g and k are needed to recover g. A2: In theory, we can recover the exposure ratio directly from the Brightness Transfer Function T. A3: The geometric correspondence step is eliminated, allowing recovery in dynamic scenes.