A Unified View of Kernel k-means, Spectral Clustering and Graph Cuts
Dhillon, Inderjit S., Yuqiang Guan, and Brian Kulis
k-means and Kernel k-means
Weighted Kernel k-means: Matrix Form
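The slide above presents weighted kernel k-means in matrix form. As a concrete illustration, here is a minimal Python sketch of the algorithm: all distances in feature space are computed from kernel-matrix entries only, using ||φ(a_i) − m_c||² = K_ii − 2·Σ_{j∈c} w_j K_ij / s_c + Σ_{j,l∈c} w_j w_l K_jl / s_c², where s_c is the total weight of cluster c. The function name and the seed-point initialization are choices of this sketch, not from the paper.

```python
import numpy as np

def weighted_kernel_kmeans(K, w, k, n_iter=100, seed=0):
    """Sketch of weighted kernel k-means.

    K : (n, n) symmetric kernel matrix
    w : (n,) positive point weights
    k : number of clusters
    """
    rng = np.random.default_rng(seed)
    n = K.shape[0]
    diag = np.diag(K)
    # Initialize by assigning each point to the nearest of k random seed points,
    # using the kernel distance ||phi_i - phi_s||^2 = K_ii - 2 K_is + K_ss.
    seeds = rng.choice(n, size=k, replace=False)
    labels = (diag[:, None] - 2 * K[:, seeds] + diag[seeds][None, :]).argmin(axis=1)
    for _ in range(n_iter):
        dist = np.empty((n, k))
        for c in range(k):
            idx = labels == c
            if not idx.any():                 # re-seed an empty cluster
                idx = np.zeros(n, bool)
                idx[rng.integers(n)] = True
            wc = w[idx]
            sc = wc.sum()                     # s_c: total cluster weight
            second = (K[:, idx] * wc).sum(axis=1) / sc
            third = wc @ K[np.ix_(idx, idx)] @ wc / sc**2
            dist[:, c] = diag - 2 * second + third
        new = dist.argmin(axis=1)
        if np.array_equal(new, labels):
            break
        labels = new
    return labels
```

With the linear kernel K = XXᵀ and unit weights this reduces to ordinary k-means in the input space, which gives an easy sanity check.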
Spectral Methods
Represented with Matrix L: Ncut, Ratio association, Ratio cut, Normalized association
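To make the four objectives on this slide concrete, here is a small sketch (the function name is ours) that evaluates them for a given partition of an affinity matrix, using links(P, Q) = Σ_{i∈P, j∈Q} A_ij and degree(P) = links(P, V):

```python
import numpy as np

def cut_objectives(A, labels, k):
    """Evaluate four graph-cut objectives for a partition of affinity matrix A.

    ratio association = sum_c links(c, c) / |c|
    ratio cut         = sum_c links(c, V \\ c) / |c|
    normalized cut    = sum_c links(c, V \\ c) / degree(c)
    normalized assoc  = sum_c links(c, c) / degree(c)
    """
    deg = A.sum(axis=1)
    ra = rc = nc = na = 0.0
    for c in range(k):
        idx = labels == c
        within = A[np.ix_(idx, idx)].sum()   # links(c, c)
        total = deg[idx].sum()               # degree(c)
        cut = total - within                 # links(c, V \ c)
        size = idx.sum()
        ra += within / size
        rc += cut / size
        nc += cut / total
        na += within / total
    return ra, rc, nc, na
```

For a graph of two disconnected cliques split along its components, the cut terms vanish (Ncut = ratio cut = 0) and the normalized association attains its maximum value k.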
Weighted Graph Cuts: Weighted association, Weighted cut
Conclusion: Spectral methods are a special case of Kernel k-means
Solving the unified problem. A standard result in linear algebra states that if we relax the trace maximization so that Y is an arbitrary orthonormal matrix, then the optimal Y is of the form V_k Q, where V_k consists of the leading k eigenvectors of W^{1/2} K W^{1/2} and Q is an arbitrary k × k orthogonal matrix. Since these eigenvectors are not indicator vectors, we must postprocess them to obtain a discrete clustering of the points.
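The relaxed solution described above can be computed directly; this sketch (function name ours) returns the top-k eigenvectors of W^{1/2} K W^{1/2}, which form an optimal orthonormal Y up to the rotation Q:

```python
import numpy as np

def relaxed_solution(K, w, k):
    """Maximize trace(Y^T W^{1/2} K W^{1/2} Y) over orthonormal Y.

    K : (n, n) symmetric kernel matrix, w : (n,) positive weights.
    Returns V_k, the leading k eigenvectors (optimal Y up to rotation Q).
    """
    Wh = np.diag(np.sqrt(w))            # W^{1/2}
    M = Wh @ K @ Wh                     # symmetric when K is symmetric
    vals, vecs = np.linalg.eigh(M)      # eigenvalues in ascending order
    return vecs[:, -k:]                 # top-k eigenvectors
```

The resulting trace equals the sum of the k largest eigenvalues, which upper-bounds the objective attainable by any discrete cluster-indicator Y.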
From Eigenvector to Cluster Indicator: normalize U to unit L2 norm
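As one concrete instance of this postprocessing step (the exact scheme varies between papers), here is a sketch that normalizes each row of the eigenvector matrix U to unit L2 norm and then runs plain k-means on the rows, in the style of Ng–Jordan–Weiss; the function name and the simple k-means loop are assumptions of this sketch:

```python
import numpy as np

def discretize(U, k, n_iter=50, seed=0):
    """Turn the k leading eigenvectors (columns of U) into cluster labels.

    Rows of U are scaled to unit L2 norm, then clustered with ordinary k-means.
    """
    norms = np.linalg.norm(U, axis=1, keepdims=True)
    Z = U / np.where(norms == 0, 1, norms)      # row-normalize, guard zeros
    rng = np.random.default_rng(seed)
    centers = Z[rng.choice(len(Z), size=k, replace=False)]
    for _ in range(n_iter):
        d = ((Z[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(axis=1)
        new = np.array([Z[labels == c].mean(axis=0) if (labels == c).any()
                        else centers[c] for c in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels
```

When the eigenvectors are close to rotated indicator vectors, the normalized rows concentrate around k directions on the unit sphere, which is why simple k-means recovers the discrete clustering.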
The Other Way: using k-means to solve the graph-cut problem (random starting points + EM-style iteration; converges only to a local optimum). To guarantee that kernel k-means converges, the kernel matrix must be positive semidefinite, which does not hold for an arbitrary kernel matrix.
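The standard remedy is a diagonal shift: adding a large enough multiple of the identity (σ W^{-1} in the weighted setting) makes the kernel matrix positive semidefinite while changing the kernel k-means objective only by a constant, so the optimal clustering is preserved. A minimal sketch for the unweighted case, with a function name of our choosing:

```python
import numpy as np

def shift_to_psd(K):
    """Diagonal-shift an indefinite kernel matrix to positive semidefiniteness.

    Returns K' = K + sigma * I with sigma = max(0, -lambda_min(K)).
    For unweighted kernel k-means this shift adds only a constant to the
    objective, so the optimal clustering is unchanged.
    """
    lam_min = np.linalg.eigvalsh(K)[0]   # smallest eigenvalue
    sigma = max(0.0, -lam_min)
    return K + sigma * np.eye(K.shape[0]), sigma
```

In practice σ is chosen at least as large as −λ_min(K); any larger value also works but slows convergence by inflating the self-similarity terms.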
The effect of the regularization: the distance from a_i to a cluster mean is analyzed in two cases, a_i in the cluster and a_i not in the cluster; in both, the diagonal shift changes the objective only by a constant, leaving the optimal clustering unchanged.
Experiment results
Results (ratio association)
Results (normalized association)
Image Segmentation
Thank you. Any questions?