
1 Machine Learning for Adaptive Bilateral Filtering
I. Frosio¹, K. Egiazarian¹,², and K. Pulli¹,³
¹NVIDIA, USA  ²Tampere University of Technology, Finland  ³Light, USA

2 Motivation (denoising)

void denoise(float *img) {
    ...
    for (int y = 0; y < ys; y++) {
        for (int x = 0; x < xs; x++) {
            img[y*xs + x] = ...;   /* filter each pixel */
        }
    }
    ...
}

3 Motivation (Gaussian filter) [Figure: true signal t(x) and noisy data d(x)]

4 Motivation (bilateral filter) C. Tomasi and R. Manduchi, "Bilateral filtering for gray and color images," ICCV, 1998. [Figure: t(x) and d(x)]

5 Motivation (choice of the parameters) [Figure: filtered d(x) for different values of σ_d and σ_r]

6 Motivation (use intuition?)
σ_d scales with resolution; σ_r scales with the grey-level dynamics. This suggests an automatic design of the parameter values [BF, Tomasi and Manduchi, ICCV, 1998]:
- From the image noise std σ_n: σ_d ∈ [1.5, 2.1], independently of σ_n; σ_r = k·σ_n [BF, Zhang and Gunturk, TIP, 2008].
- From the local signal variance σ_s²(x,y): σ_d ∈ [1.5, 2.1], independently of σ_n; σ_r(x,y) = σ_n²/(0.3·σ_s(x,y)) [ABF, Qi et al., AMR, 2013].

7 Motivation (use machine learning!) [Figure: σ_d and σ_r]

8 Framework (adaptive denoising) 3 features → 6 unknowns → σ_d(x,y), σ_r(x,y)

9 Framework (learning)
- Training images {t_j}, j = 1..N
- Noise model (AWGN)
- Local image features f_{x,y}
- Image quality model (PSNR)
- Adaptive bilateral filter

10 Entropy features: Shannon's entropy of i(x,y)

            Flat      Gradient   Texture   Edges
Entropy     0.0 bit   6.0 bits   1.0 bit   5.6 bits

11 Entropy features

                                   Flat      Gradient   Texture   Edges
e_i, entropy of i(x,y):            0.0 bit   6.0 bits   1.0 bit   5.6 bits
e_g, entropy of ||grad i(x,y)||:   0.0 bit   0.0 bit    1.2 bit   5.5 bits

12 Entropy features [Figure: maps of the e_i and e_g features]

13 Framework (complete)
Training: training images {t_j}, j = 1..N; noise model (AWGN); local image features f_{x,y}; image quality model (PSNR); adaptive bilateral filter.
Filtering: noisy image → local image features f_{x,y} → logistic functions → σ_d(x,y), σ_r(x,y) → EABF → filtered image.

14 Results - PSNR

Parameter choices per method:
            BF        BF [Zhang]   ABF [Qi]        EABF
σ_d(x,y)    optimal   1.8          1.8             our
σ_r(x,y)    optimal   2σ_n         σ_n²/(0.3σ_s)   our

PSNR (dB) on six test images:

Test image 1
σ_n     BF      BF [Zhang]   ABF [Qi]   EABF
 5      36.13   36.06        36.00      36.27
10      31.45   31.40        31.44      31.10
20      27.11   27.09        27.36      26.40
30      25.07   25.00        25.32      24.68
40      23.94   23.69        24.11      23.76

Test image 2
 5      36.29   36.04        35.92      36.40
10      32.53   32.17        32.03      32.81
20      28.96   28.48        28.75      29.51
30      26.96   26.31        26.98      27.62
40      25.63   24.72        25.68      26.17

Test image 3
 5      36.50   36.07        35.93      36.54
10      32.64   32.23        32.15      32.78
20      29.32   28.81        29.17      29.65
30      27.71   26.84        27.68      28.01
40      26.66   25.33        26.53      26.82

Test image 4
 5      37.69   37.50        37.20      37.81
10      34.00   33.76        33.66      34.37
20      30.31   29.64        30.17      31.11
30      28.20   27.12        28.20      29.24
40      26.86   25.46        26.83      27.78

Test image 5
 5      38.17   37.86        37.64      38.45
10      34.64   34.09        34.18      35.17
20      31.20   30.02        31.05      32.08
30      29.37   27.78        29.24      30.21
40      28.20   25.97        27.83      28.77

Test image 6
 5      37.81   37.74        37.37      37.86
10      34.75   34.31        34.22      34.98
20      31.27   30.40        31.22      31.80
30      29.14   27.85        29.30      29.75
40      27.79   25.95        27.71      28.30

15 Results - PSNR

Average over the six test images:
σ_n     BF      BF [Zhang]   ABF [Qi]   EABF
 5      37.10   36.88        36.68      37.22
10      33.33   32.99        32.95      33.53
20      29.69   29.07        29.62      30.09
30      27.74   26.82        27.79      28.25
40      26.51   25.19        26.44      26.93

Average over σ_n = 5…40:
        30.88   30.19        30.70      31.21

EABF gain: +1.01 dB over BF [Zhang], +0.51 dB over ABF [Qi].

16 Results – Image quality [Figure: Ground truth; Noisy, 20.11 dB; BF [Zhang], 30.02 dB; ABF [Qi], 31.05 dB; EABF, 32.08 dB]

17 Machine learning vs. intuition: σ_d(x,y), σ_r(x,y), at σ_n = 20

            BF [Zhang et al.]   ABF [Qi et al.]   EABF
σ_d(x,y)    1.8                 1.8               [0.6, 2.6]
σ_r(x,y)    2σ_n                [71, 85]          [20, 110]

18 Machine learning vs. intuition: σ_d = σ_d(σ_n) [Figure: learned σ_d as a function of σ_n, in a flat area and in an edge area]

19 Machine learning vs. intuition: σ_r = σ_r(σ_n) [Figure: learned σ_r as a function of σ_n, in a flat area and in an edge area]

20 Conclusion Learning optimal parameter-modulation strategies through machine learning is feasible. The learned modulation strategies are complicated… but effective: the PSNR increases.

21 Conclusion Training images {t_j}, j = 1..N; noise model (AWGN); local image features f_{x,y}; image quality model (PSNR); adaptive bilateral filter.

22 Conclusion Your training images; noise model (AWGN); local image features f_{x,y}; image quality model (PSNR); adaptive bilateral filter.

23 Conclusion Your training images; your noise model; local image features f_{x,y}; image quality model (PSNR); adaptive bilateral filter.

24 Conclusion Your training images; your noise model; your local features f_{x,y}; image quality model (PSNR); adaptive bilateral filter.

25 Conclusion Your training images; your noise model; your local features f_{x,y}; your image quality model Q; adaptive bilateral filter.

26 Conclusion Your training images; your noise model; your local features f_{x,y}; your image quality model Q; a different adaptive filter.

27 Conclusion A general FRAMEWORK based on MACHINE LEARNING for the development of ADAPTIVE FILTERS

