Secrets of GrabCut and Kernel K-means

Meng Tang¹  Ismail Ben Ayed²  Dmitrii Marin¹  Yuri Boykov¹
¹University of Western Ontario, Canada  ²University of Quebec, Canada

GrabCut as Color Clustering
- Basic K-means (e.g. Chan-Vese): variance criterion.
- Probabilistic K-means (e.g. Zhu & Yuille, GrabCut [1]): log-likelihoods with parametric probability models in 3D color space; model parameters θ are often jointly optimized; θ are descriptive models (histogram or GMM).
- Kernel K-means [4] (ours): the kernel gives an implicit feature representation; example kernel k: normalized Gaussian with fixed small width.

Model-fitting vs Kernel K-means
- GMMs fit more complex data, but such descriptive probability models enlarge the solution space: descriptive GMMs over-fit the data and are sensitive to local minima.
- 1D intensity-histogram illustration: (a) initialization, (b) K-means, (c) elliptic K-means, (d) histogram fitting, (e) GMM: local minimum, (f) GMM: initialized from ground truth, (g) K-modes (mean-shift), (h) our kernel K-means.
- Solutions A and B are equally good when using histograms; GMMs suffer from local minima. In short: model fitting has local-minima and binning issues, while kernel K-means gives non-linear separation.

Our Energy Formulation & Optimization
- Energy: a kernel K-means pairwise clustering term plus MRF smoothness regularization.
- Bound optimization for E(S): at iteration t, a unary bound A_t(S) of the kernel K-means term [4] satisfies E(S) ≤ A_t(S) with A_t(S_t) = E(S_t); minimizing the bound with graph cuts gives E(S_{t+1}) ≤ A_t(S_{t+1}) ≤ A_t(S_t) = E(S_t).

Secrets of Kernel K-means: the Bias of Optimal Gini Clustering
- Entropy criterion: H(S), the entropy of intensities in S. Gini criterion: G(S), the Gini impurity of intensities in S.
- Optimal Gini clustering is biased to small & dense clusters (see Breiman [3] for histograms); the same bias was observed for average association in [2]. We extend Breiman's analysis [3] to the continuous case; common adaptive kernels (e.g. KNN) eliminate such bias (see paper).

Experiments: GrabCut vs our KKM
- Low dimension (RGB only): GrabCut vs KKM vs aKKM, initialized from a box and from ground truth; smoothness maps for examples A, B, C.
- Figure 1: robustness to the regularization weight. Figure 2: sample results on the GrabCut dataset. Figure 3: average error on the GrabCut dataset.
- High dimension, RGBD (depth map): GrabCut vs our aKKM. Figure 4: scalability to higher dimension on RGB+wD.
- High-dimensional feature RGBXYM (location XY, motion M): (a) video frames, (b) optical flow, (c) our aKKM (RGBXY), (d) our aKKM (RGBXYM).

"World Map" of Segmentation
- pKM, probabilistic K-means (ML model fitting): GMM fitting, histogram fitting, geometric fitting, gamma fitting, Gibbs fitting.
- kKM, kernel K-means (average distortion AD or average association AA): basic K-means, Gaussian kernel K-means, spectral clustering, Normalized Cuts, Average Cut, K-modes (mean-shift), weak kernel clustering (unary); a p.s.d. kernel corresponds to a Hilbertian distortion.
- A fixed Gaussian kernel suffers from the Gini bias; an adaptive kernel yields good clustering.

References
[1] C. Rother, V. Kolmogorov, and A. Blake. GrabCut: Interactive foreground extraction using iterated graph cuts. In SIGGRAPH, 2004.
[2] J. Shi and J. Malik. Normalized cuts and image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 22, 2000.
[3] L. Breiman. Technical note: Some properties of splitting criteria. Machine Learning, 24(1):41-47, 1996.
[4] I. Dhillon, Y. Guan, and B. Kulis. Kernel k-means, spectral clustering and normalized cuts. In KDD, 2004.
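The kernel K-means clustering at the core of the poster can be sketched in a few lines. This is a minimal, unregularized version (no MRF smoothness term and no graph-cut bound optimization, which the full method adds) using the fixed-width Gaussian kernel mentioned above; the function names and the toy RGB data are illustrative, not from the paper.

```python
import numpy as np

def gaussian_kernel(X, sigma=0.3):
    # k(x, y) = exp(-||x - y||^2 / (2 sigma^2)): Gaussian kernel of fixed width
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def kernel_kmeans(K, labels, n_iters=50):
    # Block-coordinate descent on the kernel k-means objective: each point
    # moves to the cluster whose mean in the implicit feature space is closest,
    #   d(i, c) = K[i,i] - 2/|S_c| * sum_{j in S_c} K[i,j]
    #                    + 1/|S_c|^2 * sum_{j,l in S_c} K[j,l]
    labels = labels.copy()
    n_clusters = labels.max() + 1
    for _ in range(n_iters):
        dists = np.full((K.shape[0], n_clusters), np.inf)
        for c in range(n_clusters):
            idx = labels == c
            m = idx.sum()
            if m == 0:
                continue  # empty cluster: leave its distances at infinity
            dists[:, c] = (np.diag(K)
                           - 2.0 * K[:, idx].sum(axis=1) / m
                           + K[np.ix_(idx, idx)].sum() / m ** 2)
        new_labels = dists.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break  # converged: no point changed cluster
        labels = new_labels
    return labels
```

With a very wide kernel this degenerates to basic K-means (the variance criterion); the poster's point is that a fixed small width already separates color clusters that parametric GMM fitting only recovers from a good initialization.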
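The entropy and Gini criteria behind the poster's bias analysis can be made concrete. For histograms, ML model fitting is equivalent to the entropy criterion: the negative log-likelihood of a segment under its own ML histogram equals |S|·H(S). A small sketch (helper names are ours) of both impurity measures and this identity:

```python
import numpy as np

def entropy(counts):
    # H(S): entropy of the intensity histogram of segment S
    p = counts / counts.sum()
    p = p[p > 0]
    return -(p * np.log(p)).sum()

def gini(counts):
    # G(S): Gini impurity of the intensity histogram of segment S
    p = counts / counts.sum()
    return 1.0 - (p ** 2).sum()

# ML histogram fitting <-> entropy criterion:
#   -sum_p log Pr(I_p | ML histogram of S)  ==  |S| * H(S)
intensities = np.array([0, 0, 0, 1, 2, 2])     # toy 1D intensities of segment S
counts = np.bincount(intensities)
p_hat = counts / counts.sum()                  # ML histogram estimate
neg_log_lik = -np.log(p_hat[intensities]).sum()
print(np.isclose(neg_log_lik, len(intensities) * entropy(counts)))  # True
```

Breiman [3] shows that the Gini criterion, unlike entropy, favors splitting off a small dense cluster; the poster extends that analysis to the continuous case and notes that adaptive kernels (e.g. KNN) remove the bias.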

