Structure-measure: A New Way to Evaluate Foreground Maps



Presentation on theme: "Structure-measure: A New Way to Evaluate Foreground Maps"— Presentation transcript:

1 Structure-measure: A New Way to Evaluate Foreground Maps
Deng-Ping Fan, Media Computing Lab, College of Computer and Control Engineering, Nankai University. I'm Deng-Ping Fan. ICCV 2017 (Spotlight), /10/11

2 Content
Limitations of current measures; Motivation; Our new S-measure

3 What is a foreground map?
Models take an image and produce a foreground map (FM).

5 Goal
Similarity? Foreground map (FM)
Our goal is to evaluate the similarity between the foreground map and the ground truth.

6 Pixel-wise measures (AP, AUC, Fbw [1])
Existing methods rely on pixel-wise measures and ignore the important structure similarity, improperly assigning the same score to these two different foreground maps; this contradicts common sense. (a) GT (b) FM1 (c) FM2
[1] How to Evaluate Foreground Maps, IEEE CVPR 2014, R. Margolin et al.

7 Motivation
Region: structure consistency of object parts. Object: uniformly distributed; contrasts sharply with the background.
We propose to evaluate structure similarity at both the region level and the object level. Our measure prefers structural consistency of object parts, and uniformly distributed objects that contrast sharply with the background.

12 Region-Level
$S_{region} = \sum_{j=1}^{4} w_j \cdot ssim_j$
We divide the image into four parts and use the famous SSIM metric to evaluate the structure similarity of each region. Image quality assessment: from error visibility to structural similarity, IEEE TIP 2004, Z. Wang, A. C. Bovik et al.
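To make this step concrete, here is a minimal Python/NumPy sketch (the authors release C++/Matlab code; this is not it). It assumes the four blocks are obtained by splitting both maps at the centroid of the GT foreground, that each block is weighted by its area fraction, and that SSIM is computed once per block from global block statistics; the names `s_region` and `ssim_block` are ours.

```python
import numpy as np

def ssim_block(x, y, eps=1e-8):
    # Single-window SSIM over a whole block (global statistics), a
    # simplification of the sliding-window SSIM of Wang et al., TIP 2004.
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()          # covariance of the block pair
    num = 4.0 * mx * my * cxy
    den = (mx ** 2 + my ** 2) * (vx + vy)
    if num == 0.0 and den == 0.0:               # two constant, equal blocks
        return 1.0
    return num / (den + eps)

def s_region(fm, gt):
    # Split both maps into 4 blocks at the centroid of the GT foreground
    # and combine the per-block SSIM scores, weighted by block area.
    h, w = gt.shape
    ys, xs = np.nonzero(gt > 0.5)
    cy = int(ys.mean()) if ys.size else h // 2  # fall back to image center
    cx = int(xs.mean()) if xs.size else w // 2
    score = 0.0
    for rs in (slice(0, cy), slice(cy, h)):
        for cs in (slice(0, cx), slice(cx, w)):
            fm_b, gt_b = fm[rs, cs], gt[rs, cs]
            if gt_b.size == 0:                  # centroid on the border
                continue
            score += (gt_b.size / (h * w)) * ssim_block(fm_b, gt_b)
    return score
```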

18 Object-Level: foreground
GT foreground, FM
$D_{FG} = \frac{\bar{x}_{FG}^2 + \bar{y}_{FG}^2}{2\,\bar{x}_{FG}\,\bar{y}_{FG}} + \lambda \cdot \frac{\sigma_{x_{FG}}}{\bar{x}_{FG}}$ (first term: contrast; second term: uniformity), with $O_{FG} = \frac{1}{D_{FG}}$
At the object level, we evaluate the foreground and the background similarity separately. The foreground parts of the ground truth and of the corresponding predicted map are compared holistically, considering both a contrast term and a uniformity term.
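A sketch of this object-level term under the same caveats (our Python approximation, not the released code). The statistics are taken over the GT foreground region; `lam` is the λ weight of the uniformity term and its default of 0.5 is our assumption. Computing the background term on the complemented maps is also an assumption about the symmetric treatment.

```python
import numpy as np

def o_fg(fm, gt, lam=0.5):
    # Object-level foreground dissimilarity D_FG = contrast + lam * uniformity,
    # with O_FG = 1 / D_FG.
    eps = 1e-8
    fg = gt > 0.5
    if not fg.any():
        return 0.0
    x = fm[fg].mean()            # mean prediction inside the GT foreground
    y = gt[fg].mean()            # 1.0 for a binary ground truth
    sigma_x = fm[fg].std()       # dispersion of the prediction in the FG
    contrast = (x ** 2 + y ** 2) / (2.0 * x * y + eps)
    uniform = lam * sigma_x / (x + eps)
    return 1.0 / (contrast + uniform + eps)

def o_bg(fm, gt, lam=0.5):
    # Background term: the same comparison on the complemented maps.
    return o_fg(1.0 - fm, 1.0 - gt, lam)
```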

24 Framework
$S_{region}$; $S_{object} = u \cdot O_{FG} + v \cdot O_{BG}$; $S = \alpha \cdot S_{region} + (1-\alpha) \cdot S_{object}$
Here is our proposed framework, combining the region-level and object-level similarities.
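Putting the pieces together, building on the two sketches above: a hedged version of the final combination. Taking α = 0.5 and setting u to the GT foreground area fraction (with v = 1 - u) are our assumptions, not values confirmed by the slides.

```python
import numpy as np

def s_measure(fm, gt, alpha=0.5):
    # S = alpha * S_region + (1 - alpha) * S_object. The FG/BG weights of the
    # object term are u and v = 1 - u; we take u as the fraction of the image
    # covered by the GT foreground (an assumption on our part).
    u = float((gt > 0.5).mean())
    s_obj = u * o_fg(fm, gt) + (1.0 - u) * o_bg(fm, gt)
    return alpha * s_region(fm, gt) + (1.0 - alpha) * s_obj

# Example: a noisy prediction of a square object scores close to 1,
# while a structureless random map scores much lower.
rng = np.random.default_rng(0)
gt = np.zeros((64, 64)); gt[16:48, 16:48] = 1.0
good = np.clip(gt * 0.9 + 0.05 * rng.random((64, 64)), 0.0, 1.0)
bad = rng.random((64, 64))
print(s_measure(good, gt), s_measure(bad, gt))
```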

33 Ranking example
Here is a realistic example: each row shows the ranking produced by the AP measure, the AUC measure, and our measure. Our measure takes the important structure similarity into account, yielding rankings that are preferable in real applications.

40 Meta-Measure 1: Agree with the application (Saliency Cut)
To test the quality of our measure, we use four meta-measures, which evaluate the quality of an evaluation measure itself. The first: a good evaluation measure should rank results consistently with the ranking induced by the application.

41 Meta-Measure 2: Prefer a good result over a generic result
(a) Image (b) GT (c) FM1 (d) Generic
42 Meta-Measure 3: A wrong ground truth should decrease the score
(a) Image (b) FM (c) GT (d) Wrong GT
The second meta-measure is that the measure should prefer a good result over a generic one. The third is that the evaluation measure should decrease the score when the wrong ground truth is used.

43 Results
Results on the ASD dataset: (a) Meta-measure 1 (b) Meta-measure 2 (c) Meta-measure 3. Results on other popular datasets. Our measure is better than current measures.
We ran the experiments on the ASD dataset and on four other datasets; the results show that our measure outperforms the others by a large margin.

46 Meta-Measure 4: Agree with the human ranking
The final meta-measure is that the evaluation measure's ranking should agree with the human ranking. Ours vs. Fbw: 63.69%; ours vs. AP: %; ours vs. AUC: %.

50 Ranking Results

51 Conclusion
Simple (5.3 ms per map). New insights (structure-wise). Unified evaluation of both binary and non-binary maps.

54 Thanks! http://dpfan.net/smeasure/ (free C++/Matlab code)
Thank you for your attention.

