A Hybrid Strategy For Illuminant Estimation Targeting Hard Images
Roshanak Zakizadeh (1), Michael Brown (2), Graham Finlayson (1)
(1) University of East Anglia, (2) National University of Singapore
Color & Photometry in Computer Vision Workshop, ICCV 2015, Santiago, Chile

Illuminant Estimation (images taken from the NASA webpage)
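Context for the slides that follow: once the scene illuminant has been estimated, white balance is typically applied as a per-channel gain (the diagonal, von Kries-style model). The slides do not include code; this is a minimal sketch under that standard model, and the function name apply_white_balance is ours, chosen for illustration.

import numpy as np

def apply_white_balance(img, illuminant):
    # Divide each channel by the (normalised) estimated illuminant so that a
    # neutral surface lit by that illuminant maps back to grey.
    ill = np.asarray(illuminant, dtype=float)
    ill = ill / ill.sum()                  # normalise the estimate
    corrected = img.astype(float) / (3.0 * ill)
    return np.clip(corrected, 0.0, None)   # keep values non-negative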

Motivation for this work

White balance algorithms

Motivation for this work
White balance algorithms: White Patch (1971), Grey World (1980), Gamut (1990), Corrected Moment (2014), CNN methods (2015).

Motivation for this work
White balance algorithms: White Patch (1971), Grey World (1980), Gamut (1990), Corrected Moment (2014), CNN methods (2015).
Performance evaluation of algorithms

Motivation for this work
White balance algorithms: White Patch (1971), Grey World (1980), Gamut (1990), Corrected Moment (2014), CNN methods (2015).
Performance evaluation of algorithms: Mean error (Barnard et al., TIP 2002), Median error (Hordley & Finlayson, JOSA A 2006), Trimean error (Gijsenij et al., JOSA A 2009; Gijsenij et al., TIP 2011). These statistics summarize a method's performance across an entire dataset.
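The mean, median and trimean above are summary statistics of the per-image recovery (angular) error between an estimated illuminant and the ground-truth illuminant. A minimal sketch of how these are commonly computed; the function names are ours, not from the cited papers.

import numpy as np

def angular_error(est, gt):
    # Angle, in degrees, between the estimated and ground-truth illuminant vectors.
    est = np.asarray(est, dtype=float)
    gt = np.asarray(gt, dtype=float)
    cos = np.dot(est, gt) / (np.linalg.norm(est) * np.linalg.norm(gt))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def summarize_errors(errors):
    # Mean, median and trimean of the per-image angular errors over a dataset.
    e = np.asarray(errors, dtype=float)
    q1, q2, q3 = np.percentile(e, [25, 50, 75])
    return {"mean": float(e.mean()),
            "median": float(q2),
            "trimean": float((q1 + 2 * q2 + q3) / 4.0)}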

Motivation for this work: example from Cheng et al., CVPR 2015.

Are these the same hard images? Our motivation....

Are these the same hard images? Our motivation.... analyze the set of images algorithms perform poorly on.

Are these the same hard images? Our motivation.... analyze the set of images algorithms perform poorly on. Do algorithms perform poorly on the same images?

Are these the same hard images? Our motivation.... analyze the set of images algorithms perform poorly on. Do algorithms perform poorly on the same images? If so, then these are “hard”.

Are these the same hard images? Our motivation.... analyze the set of images algorithms perform poorly on. Do algorithms perform poorly on the same images? If so, then these are “hard”. If “hard” images exist, can we identify them?

Are these the same hard images? Our motivation.... analyze the set of images algorithms perform poorly on. Do algorithms perform poorly on the same images? If so, then these are “hard”. If “hard” images exist, can we identify them? If we can identify them, can we tailor illumination estimation algorithms for them?

Illuminant Estimation Algorithms

Statistical-based algorithms: White-patch/MaxRGB (Land, 1977); Gray-world (Buchsbaum, 1980); Shades of gray (Finlayson & Trezzi, 2004); Edge-based color constancy (van de Weijer, 2007); Pixel brightness (Cheng, 2014). These are fast, simple algorithms that can be used on board the camera.

Learning-based algorithms: Gamut mapping (Forsyth, 1990); Natural image statistics (Gijsenij & Gevers, 2007); Bayesian method (Gehler, 2008); General gamut mapping (Gijsenij, 2009); Spatial correlation (Chakrabarti, 2012); Exemplar-based (Vaezi Joze, 2013); Corrected moment (Finlayson, 2014); CNNs (Bianco, 2015); CNNs (Barron, 2015); and others. They give better performance in many cases, but are more complicated and slower, so they are better used as an offline solution.
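Several of the statistical methods above are instances of a single Minkowski-norm statistic (in the framework of van de Weijer et al.): Gray-world is the p = 1 case, White-patch/MaxRGB is the large-p limit, and Shades of gray uses an intermediate p. A minimal sketch under that formulation, assuming a floating-point RGB image; the function name and the default p are illustrative.

import numpy as np

def shades_of_gray(img, p=6):
    # Per-channel Minkowski p-norm of the pixel values as the illuminant estimate.
    # p = 1 reduces to Gray-world; a very large p approaches White-patch (max RGB).
    pixels = img.reshape(-1, 3).astype(float)
    est = np.power(np.mean(np.power(pixels, p), axis=0), 1.0 / p)
    return est / np.linalg.norm(est)       # unit-length RGB illuminant estimate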

Performance of combinations of algorithms on the Gehler-Shi dataset

Statistical-based algorithms: S1 Shades of gray; S2 Gray-world; edge-based color constancy: S3 1st-order grey edge, S4 2nd-order grey edge; S5 White-patch/MaxRGB.

Learning-based algorithms: L1 Exemplar-based; L2 Natural image statistics; general gamut mapping: L3 edge-based, L4 pixel-based, L5 intersection-based; L6 Bayesian method; L7 Spatial correlation.
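S3 and S4 above (the 1st- and 2nd-order grey edge methods) apply the same kind of Minkowski pooling to Gaussian image derivatives rather than to raw pixel values. A rough sketch assuming SciPy is available; the parameter defaults (p, sigma) are illustrative and not necessarily the settings used in the experiments.

import numpy as np
from scipy.ndimage import gaussian_filter

def grey_edge(img, order=1, p=6, sigma=2.0):
    # Per-channel Minkowski p-norm of the n-th order Gaussian derivative magnitude.
    est = np.zeros(3)
    for c in range(3):
        channel = img[..., c].astype(float)
        dy = gaussian_filter(channel, sigma, order=(order, 0))   # derivative along rows
        dx = gaussian_filter(channel, sigma, order=(0, order))   # derivative along columns
        mag = np.sqrt(dx ** 2 + dy ** 2)
        est[c] = np.power(np.mean(np.power(mag, p)), 1.0 / p)
    return est / np.linalg.norm(est)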

Performance of combinations of algorithms on 482 images of the Gehler-Shi* dataset

Combinations with most 'hard' images (methods: failed images, time per image):
S2S3S5L3L7: 84 failed, 1.5 min
S2S3S5L3L6: 80 failed, 9.8 min
S2S3S5L3L4: 78 failed, 1.8 min
S2S3S4S5L3: 73 failed, 1 min
S1S2S3S4S…
S1S2S4S5L4: 64 failed, 1.2 min

Combinations with least 'hard' images:
L1L2L4L6L…, S4L1L2L5L…, S1S4L1L2L…, S2L1L2L6L…, S2S4L1L2L…, S2S4L1L6L…

*Gehler et al., CVPR 2008; Shi & Funt, SFU 2010.

Performance of combinations of algorithms on 482 images of the Gehler-Shi dataset

[Figure: a grid in which each column represents an image in the dataset and each row one of five methods (S1, S4, L1, L2, L…). A cell is marked where the method's error is above the threshold (the method fails) and unmarked where it is below the threshold (the method succeeds). An image for which all 5 methods succeed is an easy image; an image for which 3 or more methods fail is a hard image.]

Performance of combinations of algorithms on 482 images of the Gehler-Shi dataset

Combinations with most 'hard' images (methods: failed images, time per image):
S2S3S5L3L7: 84 failed, 1.5 min
S2S3S5L3L6: 80 failed, 9.8 min
S2S3S5L3L4: 78 failed, 1.8 min
S2S3S4S5L3: 73 failed, 1 min
S1S2S3S4S… (the statistical-based methods)
S1S2S4S5L4: 64 failed, 1.2 min

Combinations with least 'hard' images:
L1L2L4L6L…, S4L1L2L5L…, S1S4L1L2L…, S2L1L2L6L…, S2S4L1L2L…, S2S4L1L6L…

If 3 or more algorithms from this combination fail, the image is labelled as hard.
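A minimal sketch of the labelling rule above, assuming a matrix of per-image angular errors for the five statistical methods and two thresholds t1 and t2 (the threshold values themselves are not reproduced here).

import numpy as np

def label_images(errors, t1, t2):
    # errors: array of shape (5, n_images), angular error of S1..S5 on each image.
    # 'easy' if all five errors are below t1, 'hard' if at least three are above t2.
    labels = []
    for per_image in errors.T:
        if np.all(per_image < t1):
            labels.append("easy")
        elif np.sum(per_image > t2) >= 3:
            labels.append("hard")
        else:
            labels.append(None)            # neither clearly easy nor clearly hard
    return labels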

Performance of statistical-based algorithms on hard images

Performance of learning-based algorithms on hard images: Exemplar-based method (Vaezi Joze, PAMI 2014)

Hybrid method for targeting hard images: feature selection

Centroid of the five estimated illuminants; estimated illuminant with the median angle from the centroid estimate.

Hybrid method for targeting hard images: feature selection

Feature | Overall accuracy | Hard-image accuracy | Easy-image accuracy
Centroid (mean) | 93.6% | 85% | 96.6%
Median from centroid | 86.7% | 68.3% | 94.3%
Standard deviation (std) | 82% | 42.3% | 95.9%
Centroid + std | 89.7% | 68.1% | 95.9%
Median + std | 85.4% | 59.2% | 94.7%
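A sketch of how the candidate features in the table could be computed from the five statistical estimates: the centroid (mean) of the normalised estimates, the estimate whose angle from that centroid is the median, and the per-channel standard deviation. This is our reading of the slides, not the authors' code.

import numpy as np

def candidate_features(estimates):
    # estimates: array of shape (5, 3), one RGB illuminant estimate per statistical method.
    est = np.asarray(estimates, dtype=float)
    est = est / np.linalg.norm(est, axis=1, keepdims=True)       # unit-length estimates
    centroid = est.mean(axis=0)
    centroid = centroid / np.linalg.norm(centroid)
    angles = np.degrees(np.arccos(np.clip(est @ centroid, -1.0, 1.0)))
    median_estimate = est[np.argsort(angles)[len(angles) // 2]]  # estimate at the median angle
    return centroid, median_estimate, est.std(axis=0)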

Hybrid method for targeting hard images: overall procedure

Training phase:
- Training images: illuminant estimation with the simple statistical algorithms {S1, S2, S3, S4, S5}.
- Calculate errors against the ground-truth illuminants and label each training image: easy if all 5 errors are below a threshold (t1), hard if at least 3 errors are above a threshold (t2).
- Feature extraction: the mean (centroid) of the 5 estimated illuminants.
- Learn an SVM classifier (hard vs. easy) from these features, giving the classification model.

Test phase:
- Test image: illuminant estimation with the same simple statistical algorithms.
- Feature extraction (the centroid), then classify the image as hard or easy with the classification model.
- Easy image: simple on-board camera algorithms can white-balance this image; simply average the estimations or apply a correction matrix.
- Hard image: a learning-based method (e.g. exemplar-based) is required, possibly as an off-line solution.
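Putting the two phases together, a minimal end-to-end sketch using scikit-learn's SVC as the hard/easy classifier. The estimator callables passed in stand for whatever implementations of S1-S5 and of the learning-based method are available; the function names here are placeholders, not the authors' code.

import numpy as np
from sklearn.svm import SVC

def train_hard_image_classifier(features, labels):
    # features: (n_images, 3) centroid features; labels: 1 for 'hard', 0 for 'easy'.
    clf = SVC(kernel="rbf")
    clf.fit(features, labels)
    return clf

def hybrid_estimate(image, clf, statistical_methods, learning_based_method):
    # statistical_methods: the five fast estimators (S1..S5), each mapping image -> RGB estimate.
    # learning_based_method: a slower estimator (e.g. exemplar-based), used only on hard images.
    estimates = np.array([m(image) for m in statistical_methods], dtype=float)
    estimates = estimates / np.linalg.norm(estimates, axis=1, keepdims=True)
    centroid = estimates.mean(axis=0)
    centroid = centroid / np.linalg.norm(centroid)
    if clf.predict(centroid.reshape(1, -1))[0] == 1:   # predicted hard
        return learning_based_method(image)            # off-line, learning-based estimate
    return centroid                                    # easy: average of the fast estimates

The slow learning-based estimator is only invoked for images predicted to be hard, which is where the time saving reported on the next slides comes from; on easy images the centroid of the fast estimates (or a corrected version of it) is used directly.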

Results of applying our hybrid method

Median angular errors and run times on the Gehler-Shi dataset:

Image set | S1 (SoG) | S2 (GW) | S3 (1st GE) | S4 (2nd GE) | S5 (WP) | L1 | Proposed (average) | Proposed (corrected*)
All | 4.37° | 7.04° | 4.81° | 4.73° | 6.46° | 2.4° | |
Easy | 3.5° | 6.9° | 4.26° | 4.7° | | 2.1° | 3.42° | 2.4°
Hard | 6° | 7.04° | 6.1° | 4.8° | 12.9° | 2.91° | |
Time (per image) | 3.4 s | 1.8 s | 6.8 s | 8 s | 1.85 s | 1.96 min | 21.9 s + 1.96 min per hard image
Time (total) | 18.5 min | 9.8 min | 36.9 min | 43.5 min | 10 min | 10.7 h | 4.5 h

The median errors of the proposed hybrid framework, which treats hard and easy images differently. For comparison we show the errors of the fast statistical algorithms (S1 to S5), as well as the time complexity of the exemplar-based method (L1). *Corrected: Finlayson, ICCV 2014.

Application

Removing 'false hard' images from the Gehler-Shi dataset

Thanks