
1 Practical Scene Illuminant Estimation via Flash/No-Flash Pairs
Cheng Lu and Mark S. Drew, Simon Fraser University {clu,

2 Flash/No-flash Imagery – a Brief History
- diCarlo, Xiao & Wandell, CIC 2001: combine flash/no-flash images to produce a pure-flash image. Use a dim-3 FDM plus knowledge of the flash SPD and sensor curves to estimate surface reflectance, and hence the most likely ambient illuminant.
- Raskar et al., Non-Realistic Rendering 2004: fill in night-time imagery with daytime image info; copy edges from a cloned image region into the edge-map of the target background, then re-integrate.
- Blake et al., Poisson Image Editing, Siggraph 2004.
- Szeliski et al., Siggraph 2004: transfer lower-noise information from the flash image to the higher-noise ambient-light image; find a shadow mask, copy edges inside the shadow from the flash image into the ambient image, re-integrate.
- Drew, Lu & Finlayson, Removing Shadows using Flash/Noflash Image Edges, ICME 2006.

3 This paper: Estimate the Ambient Illuminant using Flash/No-flash Pairs
Like the diCarlo & Wandell approach, but replace knowledge of the camera sensor curves with a camera-RGB-based calibration using the difference of the with-flash and no-flash images.
How?
- Spectral sharpening
- Subtract "both" minus "no-flash" to obtain the pure-flash image
- Take logs
- Project the difference of flash minus ambient into geometric-mean chromaticity color space
- Calibrate so as to obtain the illuminant chromaticity.

4 What's the point? We can estimate the scene (ambient) illuminant without knowing:
- the flash SPD
- the camera sensors
- the surface reflectances.

5 Why estimate the illuminant? White balance, plus many computer vision applications, e.g. intrinsic images free of illumination.
What's good about this method?
- Simple
- Fast

6 The set-up: two images, one under ambient lighting and another under ambient plus flash. Under ambient: image "A". Under both: image "B".

7 The Key: the Pure-Flash Image
- The ambient light from "A" is also in "B".
- Therefore, if we subtract the two we have "F", the pure-flash image: F = B − A.
Under flash only: image "F".
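The subtraction above can be sketched in a few lines; this is a minimal illustration, assuming the two shots are registered, taken with identical exposure settings, and stored as linear-RGB float arrays (function and variable names here are illustrative, not from the paper):

```python
import numpy as np

def pure_flash(ambient, both):
    """Recover the pure-flash image F = B - A.

    ambient (image A) and both (image B) are float arrays of linear-RGB
    values with the same shape (H, W, 3). Small negative differences are
    sensor noise, so they are clamped to zero.
    """
    return np.clip(both - ambient, 0.0, None)

# toy example: a flat ambient image plus added flash light
A = np.full((2, 2, 3), 0.2)
B = np.full((2, 2, 3), 0.5)
F = pure_flash(A, B)   # every pixel of F is the flash contribution, 0.3
```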

8 Incidentally, note that there are now extra shadows from the flash (since it is offset from the lens). Image "F": the scene as imaged under flash light only.

9 A simple image formation model will guide us. Assumptions: 1. Lambertian surface, 2. narrow-band sensors, 3. Planckian light.
1. Lambertian surface:
  ρ_k = σ ∫ E(λ) S(λ) Q_k(λ) dλ,  k = R, G, B
where E is the illuminant SPD, S the surface reflectance, Q_k the sensor curves, and the shading is σ = normal · effective light-direction.

10 2. Narrow-band sensors: Q_k(λ) is exactly a single-spike sensor, Q_k(λ) = q_k δ(λ − λ_k), so that
  ρ_k = σ E(λ_k) S(λ_k) q_k.

11 3. Planckian light (in Wien's approximation):
  E(λ, T) = I c₁ λ⁻⁵ exp(−c₂ / (λT))
Gives
  ρ_k = σ I c₁ λ_k⁻⁵ exp(−c₂ / (λ_k T)) S(λ_k) q_k.
But we can violate assumptions 1, 2, 3 and still succeed.

12 Now take logs, to pull the multiplications apart:
  log ρ_k = log(σI) + log w_k + log s_k − e_k / T
where σI is intensity and shading, w_k = c₁ λ_k⁻⁵ q_k is a camera-dependent vector, s_k = S(λ_k) is the surface term, e_k = c₂ / λ_k, and T is the color-temperature of the light.

13 We'd like to remove the intensity/shading term, so form the geometric-mean chromaticity:
  χ_k = ρ_k / (ρ_R ρ_G ρ_B)^(1/3)
In logs:
  log χ_k = log(w_k / w_M) + log(s_k / s_M) − (e_k − e_M) / T
where subscript M denotes the geometric mean over the three channels. The camera-dependent vector, the surface term, and the color-temperature of the light remain; σI has cancelled.
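Forming the log geometric-mean chromaticity is a one-liner in practice; a minimal sketch (illustrative names, with a small epsilon assumed to guard against zero pixels):

```python
import numpy as np

def log_geomean_chromaticity(rgb, eps=1e-6):
    """Log geometric-mean chromaticity: log(rho_k / (rho_R*rho_G*rho_B)^(1/3)).

    rgb: (..., 3) array of positive linear-RGB values.
    Subtracting the mean of the logs divides by the geometric mean, which
    cancels the per-pixel intensity/shading factor sigma*I.
    """
    log_rgb = np.log(np.maximum(rgb, eps))
    return log_rgb - log_rgb.mean(axis=-1, keepdims=True)

chi = log_geomean_chromaticity(np.array([0.4, 0.2, 0.1]))
```

By construction the three components sum to zero per pixel, and scaling a pixel by any constant (shading or intensity change) leaves the result unchanged.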

14 The point: as the temperature (light color) changes, each surface moves along a straight line in this space. But we have the "A" and "F" images: subtract them, and use the same chromaticity trick. Only the illumination term is left!

15 Log-difference Geometric-Mean Chromaticity:
  log χ_k^A − log χ_k^F = −(1/T_A − 1/T_F)(e_k − e_M)
So the log-log difference delivers the inverse-temperature difference. Calibrate for 1/T_A − 1/T_F, then in a new scene obtain T_A!
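The whole estimation step then amounts to averaging the log-chromaticity difference over sampled pixels and matching it against calibrated cluster centres. A hedged sketch, assuming a per-camera calibration table measured offline; all names (`illuminant_estimate`, the `calib` dict, the example vectors) are made up for illustration and are not the paper's code or data:

```python
import numpy as np

def log_chroma(rgb, eps=1e-6):
    """Log geometric-mean chromaticity of (N, 3) linear-RGB samples."""
    lr = np.log(np.maximum(rgb, eps))
    return lr - lr.mean(axis=-1, keepdims=True)

def illuminant_estimate(ambient_px, flash_px, calib):
    """Classify the ambient illuminant from flash/no-flash pixel pairs.

    ambient_px, flash_px: (N, 3) samples of images A and F.
    calib: dict mapping illuminant name -> calibrated mean log-chromaticity
    difference vector for this camera.
    """
    # Per-pixel difference of log chromaticities: surface and camera terms
    # cancel, leaving -(1/T_A - 1/T_F)(e_k - e_M); average over samples.
    d = (log_chroma(ambient_px) - log_chroma(flash_px)).mean(axis=0)
    # Nearest calibrated cluster centre = estimated scene illuminant.
    return min(calib, key=lambda name: np.linalg.norm(d - calib[name]))
```

In use, `calib` would hold one mean difference vector per training illuminant (cf. the clusters on the following slides), and a new scene is labelled by its closest cluster.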

16 What does this look like? Moved to 2D: color-matching functions in geometric-mean chromaticity (9 Planckians, Macbeth ColorChecker, spike sensors, xenon flash SPD).

17 Sony DXC930 sensors, daylights + F2, actual xenon flash SPD: the "reference locus".
How to proceed:
- Sharpen
- Find the closest cluster

18 Effect of sharpening (Kodak DCS420): without sharpening, poor clusters; with sharpening, better clusters.
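Spectral sharpening itself is just a fixed 3×3 linear transform applied to camera RGB before the chromaticity step. A minimal sketch; the matrix below is a made-up near-identity placeholder, since the real transform is camera-specific and found offline (e.g. by data-based sharpening):

```python
import numpy as np

def sharpen(rgb, T):
    """Apply a spectral-sharpening transform to (N, 3) camera RGB.

    T is a fixed 3x3 matrix mapping camera RGB to a more narrow-band
    ("sharpened") sensor basis, which tightens the illuminant clusters.
    """
    return rgb @ T.T

# placeholder transform for illustration only, NOT a calibrated matrix
T = np.array([[ 1.2, -0.1, -0.1],
              [-0.1,  1.2, -0.1],
              [-0.1, -0.1,  1.2]])
sharp = sharpen(np.array([[0.4, 0.3, 0.2]]), T)
```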

19 Test: can we determine the illuminant? 102 illuminants, Sony camera, Munsell patches; 102 illuminants, Sony camera, Macbeth patches. Calibrate on Munsell, estimate the illuminant for Macbeth: nearly 100% correctly identified.

20 Application: White Balance. 4 calibration illuminants, HP camera, Macbeth chart (each cluster has 24 dots). Test image under CWF (no flash); under CWF + xenon (with flash).
- Sharpen
- Sample the image at 24 locations spread evenly over the image
- Use the same ("daylight") color balance for training and for testing.

21 The test image overlaps best with the CWF cluster, so use the white patch of the Macbeth chart under CWF for white balance:
- "Auto" balance: wrong.
- "Fluor" balance: correct.
- Our color balance: much closer.

22 Thanks! To the Natural Sciences and Engineering Research Council of Canada.