ECCV 2002 Removing Shadows From Images. G. D. Finlayson¹, S. D. Hordley¹ & M. S. Drew². ¹School of Information Systems, University of East Anglia, UK; ²School of Computer Science, Simon Fraser University, Canada

ECCV 2002 Overview
Introduction
Shadow Free Grey-scale Images - Illuminant invariance at a pixel
Shadow Free Colour Images - Removing shadow edges using illumination invariance - Re-integrating edge maps
Results and Future Work

ECCV 2002 The Aim: Shadow Removal We would like to go from a colour image with shadows, to the same colour image, but without the shadows.

ECCV 2002 Why Shadow Removal?
For Computer Vision - improved object tracking, segmentation, etc.
For Image Enhancement - creating a more pleasing image
For Scene Re-lighting - to change, for example, the lighting direction

ECCV 2002 What is a shadow? A shadow is a local change in illumination intensity and (often) illumination colour. (Figure: a region lit by sunlight and sky-light next to a region lit by sky-light only.)

ECCV 2002 Removing Shadows So, if we can factor out the illumination locally (at a pixel), it should follow that we remove the shadows. So, can we factor out illumination locally? That is, can we derive an illumination-invariant colour representation at a single image pixel? Yes, provided that our camera and illumination satisfy certain restrictions ….

ECCV 2002 Conditions for Illumination Invariance
(1) If sensors can be represented as delta functions (they respond only at a single wavelength),
(2) and illumination is restricted to the Planckian locus,
(3) then we can find a 1-D co-ordinate, a function of image chromaticities, which is invariant to illuminant colour and intensity;
(4) this gives us a grey-scale representation of our original image, but without the shadows (it takes us a third of the way to the goal of this talk!)

ECCV 2002 Image Formation Camera responses depend on 3 factors: light (E), surface (S), and sensor (R, G, B)
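For concreteness, the formation model behind this slide can be written as the usual Lambertian integral over wavelength; this is a sketch of the standard form (the shading factor σ is simply folded into overall intensity in the slides):

```latex
% Response of sensor k (k = R, G, B) to light E, surface S and sensitivity Q_k:
\rho_k \;=\; \sigma \int E(\lambda)\, S(\lambda)\, Q_k(\lambda)\, d\lambda
```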

ECCV 2002 Using Delta Function Sensitivities Delta functions "select" single wavelengths: $Q_k(\lambda) = q_k\, \delta(\lambda - \lambda_k)$, so each sensor samples the light and the surface at a single wavelength $\lambda_k$. (Figure: the R(λ), G(λ), B(λ) sensitivity curves idealised as delta functions.)

ECCV 2002 Characterising Typical Illuminants Most typical illuminants lie on, or close to, the Planckian locus (the red line in the figure) So, let’s represent illuminants by their equivalent Planckian black-body illuminants...

ECCV 2002 Planckian Black-body Radiators Planck's equation for a black-body radiator is $E(\lambda, T) = I\, c_1\, \lambda^{-5} \left(e^{c_2/(\lambda T)} - 1\right)^{-1}$. Here I controls the overall intensity of light, T is the temperature, and $c_1$, $c_2$ are constants. But, for typical illuminants, $c_2 \gg \lambda T$. So, Planck's eqn. is approximated as $E(\lambda, T) \approx I\, c_1\, \lambda^{-5}\, e^{-c_2/(\lambda T)}$.

ECCV 2002 How good is this approximation? (Figure: the approximation compared with the exact Planckian spectrum at 2500 Kelvin and 5500 Kelvin.)

ECCV 2002 Back to the image formation equation For delta-function sensors and Planckian illumination we have
$\rho_k = \sigma\, \underbrace{I\, c_1\, \lambda_k^{-5}\, e^{-c_2/(\lambda_k T)}}_{\text{Light}}\; \underbrace{S(\lambda_k)}_{\text{Surface}}\; q_k$
Or, taking the log of both sides...

ECCV 2002 Summarising for the three sensors Taking logs, each sensor response can be written
$\log \rho_k = a + s_k + b_k / T, \qquad k \in \{R, G, B\}$
where $a = \log(\sigma I c_1)$ is a constant independent of sensor, $s_k = \log\!\big(\lambda_k^{-5} S(\lambda_k)\, q_k\big)$ is a variable dependent only on reflectance (subscript s denotes dependence on reflectance), and $b_k / T$ with $b_k = -c_2/\lambda_k$ is a variable dependent on the illuminant. T is temperature.

ECCV 2002 Factoring out the illumination First, let's calculate log-opponent chromaticities:
$\chi_1 \equiv \log(\rho_R/\rho_G) = (s_R - s_G) + (b_R - b_G)/T, \qquad \chi_2 \equiv \log(\rho_B/\rho_G) = (s_B - s_G) + (b_B - b_G)/T$
(the intensity term $a$ cancels). Then, with some algebra, we have:
$(b_B - b_G)\,\chi_1 - (b_R - b_G)\,\chi_2 = (b_B - b_G)(s_R - s_G) - (b_R - b_G)(s_B - s_G)$
in which T no longer appears. That is: there exists a weighted difference of log-opponent chromaticities that depends only on surface reflectance.
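As a concrete (and hedged) illustration of computing this invariant for a whole image: the weighted difference above is equivalent to projecting each pixel's $(\chi_1, \chi_2)$ point onto a fixed direction. The sketch below assumes that direction is already known as an angle theta obtained by calibration; the function and parameter names are illustrative, not the authors' code.

```python
import numpy as np

def invariant_greyscale(rgb, theta):
    """Project log-opponent chromaticities onto the illuminant-invariant axis.

    rgb   : float array, shape (H, W, 3), linear camera RGB.
    theta : angle (radians) of the invariant axis in the
            (log(R/G), log(B/G)) plane, found by calibration.
    """
    eps = 1e-6                                                # guard against log(0)
    chi1 = np.log((rgb[..., 0] + eps) / (rgb[..., 1] + eps))  # log(R/G)
    chi2 = np.log((rgb[..., 2] + eps) / (rgb[..., 1] + eps))  # log(B/G)
    # Weighted combination of log-opponent chromaticities that is (ideally)
    # independent of illuminant colour and intensity:
    return np.cos(theta) * chi1 + np.sin(theta) * chi2
```

The scalar output is the 1-D greyscale invariant: a shadow boundary changes only the illuminant term, so ideally it disappears in this representation.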

ECCV 2002 An example - delta function sensitivities (Figure: narrow-band, delta-function sensitivities, and the resulting log-opponent chromaticities for 6 surfaces (B, W, R, Y, G, P) under 9 lights.)

ECCV 2002 Deriving the Illuminant Invariant Rotate the chromaticities so that the direction of illuminant variation lies along one axis; the orthogonal axis is then invariant to illuminant colour. (Figure: log-opponent chromaticities for 6 surfaces under 9 lights, before and after rotation, with the invariant axis marked.)

ECCV 2002 A real example with real camera data (Figures: normalized sensitivities of a SONY DXC-930 video camera; log-opponent chromaticities for 6 surfaces under 9 different lights.)

ECCV 2002 Deriving the invariant Rotate the chromaticities as before. The invariant axis is now only approximately illuminant invariant (but hopefully good enough). (Figure: log-opponent chromaticities for 6 surfaces under 9 different lights after rotation.)
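One plausible way to estimate that rotation from calibration images, sketched under the assumption that we already have log-opponent chromaticities for a few surfaces imaged under several lights; the paper's own calibration procedure may differ in detail:

```python
import numpy as np

def invariant_angle_from_calibration(chroma):
    """Estimate the invariant-axis angle from calibration chromaticities.

    chroma : array, shape (num_surfaces, num_lights, 2); each row holds
             (log(R/G), log(B/G)) for one surface under one light.
    Returns the angle (radians) of the axis orthogonal to the average
    direction along which chromaticities move as the light changes.
    """
    directions = []
    for surface in chroma:
        centred = surface - surface.mean(axis=0)
        # principal direction of variation for this surface = illuminant direction
        _, _, vt = np.linalg.svd(centred, full_matrices=False)
        d = vt[0] if vt[0, 0] >= 0 else -vt[0]   # fix sign so directions average sensibly
        directions.append(d)
    mean_dir = np.mean(directions, axis=0)
    mean_dir /= np.linalg.norm(mean_dir)
    # the invariant axis is perpendicular to the illuminant-variation direction
    invariant_axis = np.array([-mean_dir[1], mean_dir[0]])
    return float(np.arctan2(invariant_axis[1], invariant_axis[0]))
```

With real (non delta-function) sensors the surfaces' lines are only roughly parallel, which is why the resulting axis is only approximately invariant.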

ECCV 2002 Some Examples

ECCV 2002 A Summary So Far With certain restrictions, from a 3-band colour image we can derive a 1-d grey-scale image which is: - illuminant invariant - and so, shadow free

ECCV 2002 What’s left to do? To complete our goal we would like to go back to a 3-band colour image, without shadows. We will look next at how the invariant representation can help us to do this...

ECCV 2002 Looking at edge information Consider an edge map of the colour image... And an edge map of the 1-d invariant image... These are approximately the same, except that the invariant edge map has no shadow edges

ECCV 2002 Removing Shadow Edges From these two edge maps we can remove shadow edges thus: Edges $= \nabla I_{\text{orig}} \,\&\, \nabla I_{\text{inv}}$ (valid edges are in the original image, and in the invariant image)

ECCV 2002 Using Shadow Edges So, now we have the edge map of the image we would like to obtain (the edge map of the original image with shadow edges set to zero). So, can we go from this edge information back to the image we want? (Can we re-integrate the edge information?)

ECCV 2002 Re-integrating Edge Information Of course, re-integrating a single edge map will give us a grey-scale image. So, we must apply the procedure to each band of the colour image separately. (Diagram: the Red, Green and Blue channels of the original colour image; edge maps of the channels; shadow edges removed; re-integrated channels.)

ECCV 2002 Re-Integrating Edge Information The re-integration problem has been studied by a number of researchers:
- Horn
- Blake et al.
- Weiss ICCV ’01 (least-squares)
- Land et al. (Retinex)
The aim is typically to derive a reflectance image from an image in which illumination and reflectance are confounded.

ECCV 2002 Weiss’ Method Weiss used a sequence of time-varying images of a fixed scene to determine the reflectance edges of the scene. His method works by determining, from the image sequence, edges which correspond to a change in reflectance (Weiss’ definition of a reflectance edge is an edge which persists throughout the sequence). Given reflectance edges, Weiss re-integrates the information to derive a reflectance image. In our case, we can borrow Weiss’ re-integration procedure to recover our shadow-free image.

ECCV 2002 Re-integrating Edge Information Let $I_j(x,y)$ represent the log of a single band of a colour image, and let $\nabla_x$ and $\nabla_y$ be the derivative operators in the x and y directions. We first calculate the thresholded gradients
$T(\nabla_x I_j), \qquad T(\nabla_y I_j)$
where $T$ is the operator that sets shadow edges to zero. This summarises the process of detecting and removing shadow edges.
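A minimal sketch of the operator T, assuming the shadow edges have already been located as a boolean mask (building that mask is the detection step described later in the talk):

```python
import numpy as np

def threshold_shadow_gradients(grad_x, grad_y, shadow_edge_mask):
    """The operator T: zero the gradients at pixels flagged as shadow edges.

    grad_x, grad_y   : x/y derivatives of one log colour band, shape (H, W).
    shadow_edge_mask : boolean array, True where a shadow edge was detected.
    """
    tx = np.where(shadow_edge_mask, 0.0, grad_x)
    ty = np.where(shadow_edge_mask, 0.0, grad_y)
    return tx, ty
```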

ECCV 2002 Re-integrating Edge Information To recover the shadow-free image we want to invert this equation. To do this, we first form the Poisson equation
$\nabla^2 I'_j = \nabla_x\, T(\nabla_x I_j) + \nabla_y\, T(\nabla_y I_j)$
We solve this (subject to Neumann boundary conditions) as follows:

ECCV 2002 Re-integrating Edge Information We solve by applying the inverse Laplacian:
$I'_j = \nabla^{-2}\big[\nabla_x\, T(\nabla_x I_j) + \nabla_y\, T(\nabla_y I_j)\big]$
Note: the inverse operator itself involves no threshold. Applying this process to each of the three channels recovers a log image without shadows.
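As an illustration of the inverse-Laplacian step, here is one common way to solve the Poisson equation under Neumann boundary conditions, using a discrete cosine transform. The paper does not prescribe a particular solver, so treat this as a sketch rather than the authors' implementation:

```python
import numpy as np
from scipy.fft import dctn, idctn

def poisson_reintegrate(tx, ty):
    """Recover one log band (up to an additive constant) from thresholded gradients.

    tx, ty : thresholded x/y derivatives of the log band, shape (H, W), computed
             as forward differences with the last column/row set to zero.
    Solves  laplacian(u) = d/dx tx + d/dy ty  under Neumann boundary conditions,
    i.e. applies a discrete inverse Laplacian via the cosine transform.
    """
    H, W = tx.shape
    # divergence of the thresholded gradient field (adjoint of forward differences)
    div = np.zeros((H, W))
    div[:, 0] += tx[:, 0]
    div[:, 1:] += tx[:, 1:] - tx[:, :-1]
    div[0, :] += ty[0, :]
    div[1:, :] += ty[1:, :] - ty[:-1, :]
    # solve in the DCT domain, where the Neumann Laplacian is diagonal
    f_hat = dctn(div, type=2, norm='ortho')
    yy, xx = np.meshgrid(np.arange(H), np.arange(W), indexing='ij')
    denom = (2.0 * np.cos(np.pi * xx / W) - 2.0) + (2.0 * np.cos(np.pi * yy / H) - 2.0)
    denom[0, 0] = 1.0                # avoid dividing by zero at the DC term
    u_hat = f_hat / denom
    u_hat[0, 0] = 0.0                # the free additive constant, fixed later per band
    return idctn(u_hat, type=2, norm='ortho')
```

Exponentiating the recovered log bands, after fixing the per-band additive constant as discussed in the Remarks slide, gives the shadow-free colour image.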

ECCV 2002 A Summary of Re-integration
1. $I_{\text{orig}}$ = original colour image, $I_{\text{inv}}$ = invariant image
2. For j = 1, 2, 3: $I^j_{\text{orig}}$ = jth band of $I_{\text{orig}}$
3. Remove shadow edges: Edges $= \nabla I^j_{\text{orig}} \,\&\, \nabla I_{\text{inv}}$
4. Differentiate the thresholded edge map
5. Re-integrate the image
6. Repeat from step 2 for the next band

ECCV 2002 Some Remarks The re-integration step is unique up to an additive constant (a multiplicative constant in linear image space). Fixing this constant amounts to applying a correction for illumination colour to the image; thus we choose suitable constants to correct for the prevailing scene illuminant. In practice, the method relies upon having an effective thresholding step T, that is, on effectively locating the shadow edges. As we will see, our shadow edge detection is not yet perfect.

ECCV 2002 Shadow Edge Detection The shadow edge detection consists of the following steps:
1. Edge detect (e.g. Canny or SUSAN) a smoothed version of the original image (by channel) and of the invariant image
2. Threshold to keep strong edges in both images
3. Shadow Edge = Edge in Original & NOT in Invariant
4. Apply a suitable morphological filter to thicken the edges resulting from step 3
This typically identifies the shadow edges plus some false edges.
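A rough sketch of these steps, using simple gradient-magnitude edges in place of Canny or SUSAN; the thresholds and the amount of thickening are illustrative values, not those used in the paper:

```python
import numpy as np
from scipy import ndimage

def detect_shadow_edges(rgb, invariant, t_orig=0.05, t_inv=0.05, thicken=3):
    """Shadow edge = strong edge in the original image AND NOT in the invariant image.

    rgb       : (H, W, 3) colour image; invariant : (H, W) 1-D invariant image.
    t_orig, t_inv : gradient-magnitude thresholds (illustrative values only).
    thicken   : number of dilation iterations used to fatten the mask.
    """
    def edge_strength(img):
        img = ndimage.gaussian_filter(img, sigma=1.0)        # smooth first
        gx = ndimage.sobel(img, axis=1)
        gy = ndimage.sobel(img, axis=0)
        return np.hypot(gx, gy)

    # strongest per-channel edge response in the original image
    orig_edges = np.max([edge_strength(rgb[..., c]) for c in range(3)], axis=0)
    inv_edges = edge_strength(invariant)

    shadow = (orig_edges > t_orig) & ~(inv_edges > t_inv)    # in original, not in invariant
    return ndimage.binary_dilation(shadow, iterations=thicken)
```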

ECCV 2002 An Example (Figure: original image, invariant image, detected shadow edges, and the result with the shadow removed.)

ECCV 2002 A Second Example (Figure: original image, invariant image, detected shadow edges, and the result with the shadow removed.)

ECCV 2002 More Examples (Figure: original image, invariant image, detected shadow edges, and the result with the shadow removed.)

ECCV 2002 More Examples (Figure: further original images, invariant images, detected shadow edges, and results with the shadows removed.)

ECCV 2002 A Summary We have presented a method for removing shadows from images The method uses an illuminant invariant 1-d image representation to identify shadow edges From the shadow free edge map we re-integrate to recover a shadow free colour image Initial results are encouraging: we are able to remove shadows, even when shadow edge definition is not perfect

ECCV 2002 Future Work We are currently investigating ways to more reliably identify shadow edges… or to derive a re-integration which is more robust to errors (Retinex?). Currently, deriving the illuminant invariant image requires some knowledge of the capture device’s characteristics - we show in the paper how to determine these characteristics empirically, and we are working on making this process more robust.

ECCV 2002 Acknowledgements The authors would like to thank Hewlett-Packard Incorporated for their support of this work.