The Role of Bright Pixels in Illumination Estimation
Hamid Reza Vaezi Joze, Mark S. Drew, Graham D. Finlayson, Petra Aurora Troncoso Rey
School of Computing Science, Simon Fraser University; School of Computing Sciences, The University of East Anglia
November 2012

Outline
Motivation
Related research
Extending the white-patch hypothesis
The effect of bright pixels on well-known methods
The bright-pixels framework
Further experiments
Conclusion

Motivation
The White-Patch method
One of the first colour constancy methods
Estimates the illuminant colour as the maximum response in each of the three channels
Few researchers or commercial cameras use it now
Recent research reconsiders the white patch
A local mean calculation as a preprocessing step can significantly improve it [Choudhury & Medioni (CRICV09)] [Funt & Li (CIC2010)]
Analytically, the geometric mean of bright (specular) pixels is the optimal estimate of the illuminant under the dichromatic model [Drew et al. (CPCV12)]
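As a concrete point of reference, a minimal White-Patch estimator simply takes the per-channel maximum over the image. The sketch below is illustrative only; the function and variable names are not the authors' code.

```python
import numpy as np

def white_patch_estimate(img):
    """Estimate the illuminant colour as the per-channel maximum response.

    img: H x W x 3 float array of linear RGB values.
    Returns a unit-norm RGB estimate of the illuminant.
    """
    e = img.reshape(-1, 3).max(axis=0)   # max of R, G, B over all pixels
    return e / np.linalg.norm(e)         # normalise; only chromaticity matters
```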

Bright Pixels
Example sources of bright pixels: a light source, highlights, a white surface, or just a bright surface.

Previous Research
White Patch
Local mean calculation as a preprocessing step for White Patch
Using specular reflection
Under the Neutral Interface Reflection assumption, the specular reflection colour is the same as the illumination colour
It usually appears in the bright areas of the image
Illumination estimation methods:
Intersection of dichromatic planes [Tominaga and Wandell (JOSA89)]
Intersection of the lines generated by the chromaticity values of the pixels of each surface in the CIE chromaticity diagram [Lee (JOSA86)]
Extensions of Lee's algorithm that constrain the colours of the illumination
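For reference, the dichromatic reflection model behind these specularity-based methods writes each pixel as a body (diffuse) term plus an interface (specular) term; under Neutral Interface Reflection the interface term has the illuminant colour. A standard statement (my notation, not copied from the slides):

```latex
% Dichromatic reflection model at pixel x, channel k = R, G, B:
f_k(\mathbf{x}) \;=\; m_b(\mathbf{x})\, b_k \;+\; m_s(\mathbf{x})\, e_k
% m_b, m_s : geometry-dependent shading factors
% b_k      : body (surface) colour,   e_k : illuminant colour
% Under Neutral Interface Reflection the specular term is proportional to e,
% so the pixels of one surface lie in the plane spanned by b and e
% (a "dichromatic plane"), and two such planes intersect along e.
```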

Grey-based Illumination Estimation
Grey-world: the average reflectance in the scene is achromatic
Shades-of-grey: Minkowski p-norm
Grey-edge: the average of the reflectance differences in a scene is achromatic
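These grey-based methods are all instances of one Minkowski-norm family; a common way to write it (my transcription, not taken verbatim from the slides):

```latex
% Shades-of-grey family: the p-norm of the (optionally blurred and
% derivative-filtered) channel c is proportional to the illuminant colour e_c.
\left( \int \left| \frac{\partial^{\,n} f_{c,\sigma}(\mathbf{x})}{\partial \mathbf{x}^{n}} \right|^{p} d\mathbf{x} \right)^{1/p} = k\, e_c
% n = 0, p = 1        : Grey-world   (average of the image is achromatic)
% n = 0, p -> infinity: White-Patch  (maximum response)
% n = 0, general p    : Shades-of-grey (Minkowski p-norm)
% n = 1 or 2          : Grey-edge    (average of derivatives is achromatic)
% sigma denotes optional Gaussian pre-smoothing.
```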

Extending the White-Patch Hypothesis
Extend the white-patch hypothesis: an image almost always includes at least one of a white patch, specularities, or a light source
Instead of the single maximum channel response used by the White-Patch method, consider the gamut of bright pixels, i.e. the brightest pixels in the image
Remove clipped pixels (those exceeding 90% of the dynamic range)
Define bright pixels as the top T% of luminance, measured by R+G+B
What is the probability of having an image without strong highlights, a light source, or a white surface in the real world?
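A minimal sketch of this bright-pixel selection, assuming linear RGB input; the 90% clipping threshold and the R+G+B luminance proxy follow the slide, everything else (names, defaults) is an illustrative choice.

```python
import numpy as np

def bright_pixels(img, top_percent=5.0, clip_fraction=0.9, max_value=1.0):
    """Select the top-T% brightest non-clipped pixels.

    img: H x W x 3 float array of linear RGB values in [0, max_value].
    Returns an N x 3 array of the selected pixels.
    """
    pix = img.reshape(-1, 3)
    # Remove clipped pixels: any channel above 90% of the dynamic range.
    keep = (pix < clip_fraction * max_value).all(axis=1)
    pix = pix[keep]
    # Brightness proxy used in the slides: R + G + B.
    lum = pix.sum(axis=1)
    thresh = np.percentile(lum, 100.0 - top_percent)
    return pix[lum >= thresh]
```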

Simple Experiment
Test whether the actual illuminant colour falls inside the 2D gamut of the top 5% brightest pixels
SFU Laboratory dataset: 88.16%
ColorChecker: 74.47%
GreyBall: 66.02%
[Example images on the slide show a white-surface case, a specularity case, and a failure case.]
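One way to run this test is to build the convex hull of the bright pixels' 2D chromaticities and check whether the measured illuminant chromaticity lies inside it. A sketch using scipy (a hypothetical helper, not the authors' code), reusing the bright_pixels() sketch above:

```python
import numpy as np
from scipy.spatial import Delaunay

def illuminant_in_bright_gamut(bright_rgb, illuminant_rgb):
    """Check whether the illuminant chromaticity falls inside the 2D gamut
    (convex hull) of the bright pixels' chromaticities.

    bright_rgb: N x 3 array of bright pixels.
    illuminant_rgb: length-3 measured illuminant colour.
    """
    # r-g chromaticities: (R, G) / (R + G + B)
    chrom = bright_rgb[:, :2] / bright_rgb.sum(axis=1, keepdims=True)
    illum = np.asarray(illuminant_rgb[:2]) / np.sum(illuminant_rgb)
    # Point-in-convex-hull test via a Delaunay triangulation of the points.
    return Delaunay(chrom).find_simplex(illum) >= 0
```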

The Effect of Bright Pixels on Grey-based Methods
ColorChecker dataset
Experiment on the effect of bright pixels: run each grey-based method on only the top 20% brightest pixels in each image, and compare to using all image pixels
Using only one fifth of the pixels, performance is better than or equal
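A sketch of that comparison for a single image, reusing the hypothetical bright_pixels() helper above; shades_of_grey() here is a plain Minkowski-norm estimator, not the authors' implementation.

```python
import numpy as np

def shades_of_grey(pixels, p=2):
    """Minkowski p-norm illuminant estimate over an N x 3 pixel array."""
    e = np.power(np.power(pixels, p).mean(axis=0), 1.0 / p)
    return e / np.linalg.norm(e)

# Compare estimates from all pixels vs. only the top-20% brightest pixels.
# `img` is an H x W x 3 linear RGB image; bright_pixels() is the sketch above.
# est_all    = shades_of_grey(img.reshape(-1, 3), p=2)
# est_bright = shades_of_grey(bright_pixels(img, top_percent=20.0), p=2)
```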

The Effect of Bright Pixels on the Gamut Mapping Method
White-patch gamut and canonical white-patch gamut introduced in [Vaezi Joze & Drew (ICIP12)]
The white-patch gamut is the gamut of the top 5% brightest pixels in an image
Adding new constraints based on the white-patch gamut to the standard Gamut Mapping constraints outperforms the Gamut Mapping method and its extensions
[The slide compares the canonical gamut with the white-patch canonical gamut.]
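A minimal sketch of computing such a white-patch gamut as a 2D chromaticity convex hull; the input is a set of bright pixels such as the top-5% selection from the earlier sketch, and the names are illustrative.

```python
import numpy as np
from scipy.spatial import ConvexHull

def white_patch_gamut(bright_rgb):
    """2D chromaticity gamut (convex hull) of a set of bright pixels.

    bright_rgb: N x 3 array, e.g. the top-5% brightest pixels selected
    with the bright_pixels() sketch above.
    Returns the hull vertices in r-g chromaticity space.
    """
    chrom = bright_rgb[:, :2] / bright_rgb.sum(axis=1, keepdims=True)
    hull = ConvexHull(chrom)
    return chrom[hull.vertices]   # hull vertices, in counter-clockwise order
```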

The Bright-Pixels Framework
If the bright pixels correspond to highlights, a white surface, or a light source, their colour approximates the colour of the illuminant
Try the mean, median, geometric mean, and Minkowski p-norm (p = 2, p = 4) of the top T% brightest pixels
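The candidate statistics are simple to state; a sketch over a bright-pixel array (names illustrative; the p-norm case is the shades_of_grey() sketch above applied to the same subset):

```python
import numpy as np

def bright_pixel_estimates(bright_rgb, p=2):
    """Candidate illuminant estimates from an N x 3 array of bright pixels."""
    norm = lambda v: v / np.linalg.norm(v)
    return {
        "mean":    norm(bright_rgb.mean(axis=0)),
        "median":  norm(np.median(bright_rgb, axis=0)),
        # Geometric mean computed in log space for numerical stability.
        "geomean": norm(np.exp(np.log(bright_rgb + 1e-12).mean(axis=0))),
        "p-norm":  norm(np.power(np.power(bright_rgb, p).mean(axis=0), 1.0 / p)),
    }
```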

The Bright-Pixels Framework
A local mean calculation as preprocessing can help:
Resizing to 64 × 64 pixels by bicubic interpolation
Median filtering
Gaussian blur filtering
It does not help much on these images (ColorChecker dataset)
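A sketch of the three local-mean variants using scipy.ndimage; the kernel sizes and sigma are illustrative choices, and the spline-based zoom is only an approximation of the bicubic resizing named on the slide.

```python
import numpy as np
from scipy import ndimage

def local_mean(img, mode="gaussian"):
    """Local-mean style preprocessing before bright-pixel selection.

    img: H x W x 3 float array.
    """
    if mode == "resize":
        # Downsample to 64 x 64 with cubic spline interpolation.
        zoom = (64.0 / img.shape[0], 64.0 / img.shape[1], 1.0)
        return ndimage.zoom(img, zoom, order=3)
    if mode == "median":
        return ndimage.median_filter(img, size=(5, 5, 1))
    if mode == "gaussian":
        return ndimage.gaussian_filter(img, sigma=(2.0, 2.0, 0.0))
    return img  # "no" preprocessing
```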

Datasets
1. SFU Laboratory [Barnard & Funt (CRA02)]: 321 images under 11 different measured illuminants
2. Reprocessed version of ColorChecker [Gehler et al. (CVPR08)]: 568 images, both indoor and outdoor
3. GreyBall [Ciurea & Funt (CIC03)]: images extracted from video recorded under a wide variety of imaging conditions
4. HDR dataset [Funt et al. (2010)]: 105 HDR images

The Bright-Pixels Method
1. Remove clipped pixels
2. Apply a local mean {none, median, Gaussian, bicubic resize}
3. Select the top T% brightest pixels, threshold T = {0.5%, 1%, 2%, 5%, 10%}
4. Estimate the illuminant by the shades-of-grey equation, p = {1, 2, 4, 8}
5. If the estimated illuminant is not in the gamut of possible illuminants, fall back to grey-edge
(A sketch of the full pipeline follows this list.)
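A minimal end-to-end sketch of these five steps, assuming the helpers sketched earlier (local_mean, bright_pixels, shades_of_grey, plus an optional in-gamut test and grey-edge fallback); the thresholds and fallback rule follow the slide, everything else is illustrative.

```python
def bright_pixels_estimate(img, top_percent=2.0, p=2, blur="gaussian",
                           in_gamut=None, grey_edge_fallback=None):
    """Bright-pixels illuminant estimation pipeline (sketch, not the authors' code).

    in_gamut: optional test estimate -> bool against the possible-illuminant gamut.
    grey_edge_fallback: optional function img -> estimate used when the test fails.
    """
    smoothed = local_mean(img, mode=blur)            # step 2 (local mean)
    bright = bright_pixels(smoothed, top_percent)    # steps 1 and 3 (clip removal folded in)
    estimate = shades_of_grey(bright, p=p)           # step 4 (p-norm of bright pixels)
    if in_gamut is not None and not in_gamut(estimate) and grey_edge_fallback:
        estimate = grey_edge_fallback(img)           # step 5 (grey-edge fallback)
    return estimate
```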

Further Experiments
Comparison with well-known colour constancy methods

Optimal Parameters

Dataset                | p | T    | blurring
SFU Laboratory Dataset | 2 | 0.5% | no
ColorChecker Dataset   | 2 | 2%   | Gaussian
GreyBall Dataset       | 2 | 1%   | no
HDR Dataset            | 8 | 1%   | Gaussian

Gaussian blurring for high-resolution images, and no blurring for lower-resolution images
Even a 0.5% threshold is enough for in-laboratory images; for real images the threshold should be 1-2%

Conclusion
Based on the current datasets in the field, we saw that the simple idea of using the p-norm of bright pixels, after a local mean preprocessing step, can perform surprisingly competitively with complex methods.
Either the probability of encountering a real-world image without strong highlights, a light source, or a white surface is not overwhelmingly great, or the current colour constancy datasets are conceivably not good indicators of performance on possible real-world images.

Questions? Thank you.