Multispectral Image Invariant to Illumination Colour, Strength, and Shading Mark S. Drew and Amin Yazdani Salekdeh School of Computing Science, Simon Fraser University, Vancouver, BC, Canada {mark/ayazdani}@cs.sfu.ca

Table of Contents
- Introduction
- RGB Illumination Invariant
- Multispectral Image Formation
- Synthetic Multispectral Images
- Measured Multispectral Images
- Conclusion

New  Introduction Invariant Images – RGB: Information from one pixel, with calibration Information from all pixels – use entropy New  Multispectral data: Information from one pixel without calibration, but knowledge of narrowband sensors peak wavelengths

RGB Illumination Invariant
Removing Shadows from Images, ECCV 2002. Graham Finlayson, Steven Hordley, and Mark Drew.

RGB: an example with delta-function sensitivities.
[Figure: narrow-band (delta-function) sensitivities, and log-opponent chromaticities for 6 surfaces (B, W, R, Y, G, P) under 9 lights.]

RGB: deriving the illuminant invariant.
[Figure: log-opponent chromaticities for 6 surfaces under 9 lights; rotating the chromaticities yields an axis that is invariant to illuminant colour.]

RGB: an example with real camera data.
[Figure: normalized sensitivities of a SONY DXC-930 video camera, and log-opponent chromaticities for 6 surfaces under 9 different lights.]

RGB: deriving the invariant with real camera data.
[Figure: rotating the log-opponent chromaticities again yields an invariant axis, but it is now only approximately illuminant-invariant (hopefully good enough).]

Multispectral Image Formation
Illumination: motivate using theoretical assumptions, then test in practice. Planck's law in Wien's approximation, for an illuminant of temperature T and overall intensity I:
  E(λ, T) ≈ I c₁ λ^(−5) e^(−c₂/(λT))
Lambertian surface with reflectance S(λ); shading is σ, intensity is I. Narrowband sensors q_k(λ), k = 1..31, with q_k(λ) = δ(λ − λ_k), giving sensor responses
  R_k = σ I c₁ λ_k^(−5) e^(−c₂/(λ_k T)) S(λ_k) q_k
Specular reflection: its colour is the same as the colour of the light (dielectric surfaces).
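As a concrete reading of the formation model above, the responses can be simulated directly. This is only an illustrative sketch: the radiation constants are standard, but the wavelength grid and reflectance curve below are made up, not values from the talk.

```python
import numpy as np

# Sketch of the formation model with delta-function sensors:
#   R_k = sigma * I * c1 * lambda_k^-5 * exp(-c2/(lambda_k*T)) * S(lambda_k),
# so each channel k samples a single wavelength lambda_k.
C1, C2 = 3.7418e-16, 1.4388e-2            # radiation constants (SI units)
lam = np.linspace(400e-9, 700e-9, 31)     # 31 narrowband peaks, in metres
S = 0.5 + 0.4 * np.sin(lam * 2e7)         # made-up surface reflectance

def response(T, sigma=1.0, intensity=1.0):
    """31-channel response under a Planckian light of temperature T."""
    return sigma * intensity * C1 * lam**-5.0 * np.exp(-C2 / (lam * T)) * S

R_red, R_blue = response(2800.0), response(10500.0)  # the two lights used later
```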

Multispectral Image Formation …
To equalize confidence in the 31 channels, use a geometric-mean chromaticity:
  χ_k = R_k / R_M,  with R_M = (∏_{j=1..31} R_j)^(1/31)
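A minimal check of that normalization (toy numbers, not data from the talk): because every channel is divided by the same geometric mean, a common shading/intensity scale factor cancels exactly.

```python
import numpy as np

def geo_mean_chromaticity(R):
    # chi_k = R_k / (prod_j R_j)^(1/31), computed stably via logs
    return R / np.exp(np.log(R).mean())

R = np.linspace(1.0, 4.0, 31)   # one pixel's 31 responses (toy data)
# Scaling all channels by sigma*I = 5 leaves the chromaticity unchanged:
assert np.allclose(geo_mean_chromaticity(R), geo_mean_chromaticity(5.0 * R))
```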

Multispectral Image Formation …
Substituting the formation model into the chromaticity:
  χ_k = (s_k / s_M) e^((e_k − e_M)/T)
with s_k = c₁ λ_k^(−5) S(λ_k) q_k (surface- and sensor-dependent) and e_k = −c₂/λ_k (illumination-dependent, through T). So take a log to linearize in (1/T)!

Multispectral Image Formation …
Logarithm:
  log χ_k = log(s_k/s_M) + (e_k − e_M)(1/T)
Here (e_k − e_M) is known because, in the special case of multispectral data, we *know* the λ_k! The only sensor unknown is q_k (the spectral-channel gains).

Multispectral Image Formation …
If we could identify at least one specularity, we could recover log q_k?? Nope: no pixel is free enough of surface colour S. So (without a calibration) we won't get log q_k; instead it becomes the origin of the invariant space.
Note: the effect of light intensity and shading is already removed: 31-D → 30-D. Now let's remove lighting colour too: we know the 31-vector (e_k − e_M) = (−c₂/λ_k + c₂/λ_M). Projecting ⊥ to (e_k − e_M) removes the effect of the light, 1/T: 30-D → 29-D.

Algorithm:
1. Form the 31-D geometric-mean chromaticity χ_k.
2. Take the log.
3. Project ⊥ to (e_k − e_M) using the projector P_e.

Algorithm: what's different from RGB?
For RGB we have to get the "lighting-change direction" (e_k − e_M) either from calibration or from internal evidence (entropy) in the image. For multispectral data, we simply know (e_k − e_M)!

Synthetic Multispectral Images
First, consider synthetic images, for understanding. Surfaces: 3 spheres, with reflectances from the Macbeth ColorChecker. Camera: Kodak DSC 420, with 31 sensor gains q_k(λ). Carry everything out in 31-D, but display as the camera would see it.

Synthetic Images
[Figure: shading field, and renderings under light 1 (blue Planckian light, T = 10500 K) and light 2 (red Planckian light, T = 2800 K).]

Synthetic Images
[Figure: original images (not invariant) versus the spectral invariant.]

Measured Multispectral Images
[Figure: scene under D75 and under D48 illuminants; invariant #1 and invariant #2.]

Measured Multispectral Images
[Figure: after invariant processing; panels show in-shadow and in-light regions.]


Conclusion
A novel method for producing an illumination-invariant multispectral image. Successful in removing the effects of illuminant strength, colour, and shading. Next: removing shadows from remote-sensing data.

Thanks! Funding: Natural Sciences and Engineering Research Council of Canada.