Structure-measure: A New Way to Evaluate Foreground Maps


Structure-measure: A New Way to Evaluate Foreground Maps. Deng-Ping Fan, Media Computing Lab, College of Computer and Control Engineering, Nankai University (dengpingfan@mail.nankai.edu.cn). ICCV 2017 (Spotlight), 2017/10/11.

Contents: limitations of current measures; motivation; our new S-measure.

What is a foreground map? A model takes an input image and produces a foreground map (FM).

Goal: our goal is to evaluate the similarity between a foreground map (FM) and the ground truth (GT).

Pixel-wise measures (AP, AUC, Fbw [1]). The current popular measures, including AP and AUC, are pixel-wise: they ignore structural similarity, so they assign the same score to two clearly different foreground maps, which contradicts common sense. (a) GT (b) FM1 (c) FM2. [1] How to Evaluate Foreground Maps?, IEEE CVPR 2014, R. Margolin et al.

Motivation. Region level: structure consistency of object parts. Object level: objects should be uniformly distributed and contrast sharply with the background. We propose to evaluate structural similarity at both the region level and the object level; our measure rewards structurally consistent object parts and uniformly distributed objects that contrast sharply with the background.

Region-Level. We divide the image into four parts and use the well-known SSIM metric to evaluate the structural similarity of each region:

S_region = Σ_{j=1}^{4} w_j · ssim_j

[Image quality assessment: from error visibility to structural similarity, IEEE TIP 2004, Z. Wang, A. C. Bovik et al.]
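The region-level score can be sketched in a few lines of Python. This is a simplified illustration, not the authors' released code: it splits at the image center with equal weights w_j = 0.25, whereas the paper splits at the GT centroid and weights each part by its share of the GT foreground.

```python
import numpy as np

C1, C2 = 0.01 ** 2, 0.03 ** 2  # standard SSIM stabilizing constants

def ssim(x, y):
    """Global SSIM between two same-sized float maps (single window)."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + C1) * (2 * cov + C2)) / \
           ((mx * mx + my * my + C1) * (vx + vy + C2))

def s_region(fm, gt):
    """Average per-quadrant SSIM between the foreground map and the GT.
    Center split with equal weights is a simplification of the paper's
    centroid split with area-based weights w_j."""
    h, w = gt.shape
    blocks = [(slice(0, h // 2), slice(0, w // 2)),
              (slice(0, h // 2), slice(w // 2, w)),
              (slice(h // 2, h), slice(0, w // 2)),
              (slice(h // 2, h), slice(w // 2, w))]
    return sum(0.25 * ssim(fm[b], gt[b]) for b in blocks)
```

A perfect prediction (fm identical to gt) scores 1, and structurally dissimilar maps score lower.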

Object-Level: foreground. At the object level we evaluate foreground and background similarity separately. The foreground of the ground truth and the corresponding part of the predicted map are compared holistically, with a contrast term and a uniformity term:

D_FG = (x̄_FG² + ȳ_FG²) / (2 · x̄_FG · ȳ_FG) + λ · σ_{x_FG} / x̄_FG,   O_FG = 1 / D_FG

The first term measures the contrast between the FM and GT foregrounds; the second penalizes a non-uniformly distributed predicted foreground.
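The foreground term above can be sketched as follows. This is a minimal illustration of the slide's formula, not the released code; the weight `lam=0.5` is an assumed default, and x̄_FG / σ_{x_FG} are the mean and standard deviation of the FM inside the GT foreground (ȳ_FG is 1 for a binary GT).

```python
import numpy as np

def o_fg(fm, gt, lam=0.5):
    """Object-level foreground score:
    D_FG = (x̄² + ȳ²)/(2·x̄·ȳ) + lam·σ_x/x̄,  O_FG = 1/D_FG."""
    mask = gt > 0.5                     # GT foreground region
    x = fm[mask]                        # predicted values inside it
    x_mean, y_mean = x.mean(), gt[mask].mean()  # y_mean = 1 for binary GT
    eps = 1e-8                          # guards against division by zero
    contrast = (x_mean ** 2 + y_mean ** 2) / (2 * x_mean * y_mean + eps)
    uniform = lam * x.std() / (x_mean + eps)
    return 1.0 / (contrast + uniform)
```

A perfect binary prediction yields O_FG = 1; a dimmer or patchier foreground lowers the score.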

Framework. The proposed framework combines the region-level and object-level similarity:

S_object = u · O_FG + v · O_BG

S = α · S_region + (1 − α) · S_object
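The final combination is a simple weighted average, sketched below. Following the paper's convention, the foreground weight u is taken as the GT foreground ratio μ with v = 1 − μ, and α = 0.5 is the default trade-off; both are stated here as assumptions of this sketch.

```python
def s_measure(s_region_score, o_fg_score, o_bg_score, mu, alpha=0.5):
    """Combine the pieces:
    S_object = mu * O_FG + (1 - mu) * O_BG
    S        = alpha * S_region + (1 - alpha) * S_object
    mu is the GT foreground area ratio; alpha balances region vs. object."""
    s_object = mu * o_fg_score + (1 - mu) * o_bg_score
    return alpha * s_region_score + (1 - alpha) * s_object
```

If every component score is 1 (a perfect map), S is 1 regardless of μ and α.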

Ranking example. A realistic example: each row shows the rankings produced by the AP measure, the AUC measure, and our measure. Because our measure accounts for the important structural similarity, its rankings are preferable in real applications.

Meta-Measure 1: agree with the application (saliency cut). To test the quality of our measure, we use four meta-measures, which evaluate the quality of an evaluation measure. A good evaluation measure's ranking should be consistent with the application's ranking.

Meta-Measure 2: prefer a good result over a generic result. (a) Image (b) GT (c) FM1 (d) Generic. Meta-Measure 3: a wrong ground truth should decrease the score. (a) Image (b) FM (c) GT (d) Wrong GT.

Results. Results on the ASD dataset: (a) meta-measure 1, (b) meta-measure 2, (c) meta-measure 3; plus results on four other popular datasets. In every experiment, our measure outperforms the current measures by a large margin.

Meta-Measure 4: agree with the human ranking. The final meta-measure requires the evaluation measure's ranking to agree with human rankings. Agreement in favor of our measure: ours vs. Fbw: 63.69%; ours vs. AP: 72.11%; ours vs. AUC: 73.56%.

Ranking Results

Conclusion. Simple and fast (about 5.3 ms per map); new insight (structure-wise evaluation); unified evaluation of both binary and non-binary maps.

Thanks! Free C++/Matlab code: http://dpfan.net/smeasure/. Thank you for your attention.