Real-Time Tracking of Non-Rigid Objects using Mean Shift
Dorin Comaniciu, Visvanathan Ramesh (Imaging & Visualization Dept., Siemens Corporate Research Inc.), Peter Meer (Rutgers University)
Outline
Introduction
Mean Shift Analysis
Tracking Algorithm
Experiments
Conclusion
Introduction
The proposed tracking method is appropriate for a large variety of objects with different color/texture patterns.
Mean shift iterations are employed to find the target candidate that is most similar to a given target model, with the similarity expressed by a metric based on the Bhattacharyya coefficient.
Sample Mean Shift
Kernel Density Estimation
Multivariate kernel density estimation
Kernels:
–Gaussian
–Epanechnikov
Kernel Density Estimation (2)
In statistics, kernel density estimation (KDE) is a non-parametric way to estimate the probability density function (PDF) of a random variable.
Kernel: in non-parametric statistics, a kernel is a weighting function used in non-parametric estimation techniques. Kernels are used in kernel density estimation to estimate the density functions of random variables.
Nonparametric Statistics
Nonparametric statistics are statistics not based on parameterized families of probability distributions.
Non-parametric Models (1)
A histogram is a simple nonparametric estimate of a probability distribution.
Kernel density estimation provides smoother, generally more accurate estimates of the density than histograms.
Non-parametric Models (2)
Kernel density estimates are closely related to histograms, but can be endowed with properties such as smoothness or continuity by using a suitable kernel.
Histogram vs. Kernel Density Estimator
Using these six data points: x1 = −2.1, x2 = −1.3, x3 = −0.4, x4 = 1.9, x5 = 5.1, x6 =
Left: histogram with bins of width 2. Right: KDE with a normal kernel of variance 2.25 (the individual kernels are indicated by the red dashed lines).
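As an illustrative sketch (not part of the original slides), the KDE in this example can be reproduced with a few lines of NumPy; the value of the sixth data point is missing in the source, so only the first five points are used here:

```python
import numpy as np

def gaussian_kde(samples, xs, h):
    """Kernel density estimate at points xs with a Gaussian kernel of bandwidth h:
    f(x) = (1/n) * sum_i N(x; x_i, h^2)."""
    samples = np.asarray(samples, dtype=float)
    xs = np.asarray(xs, dtype=float)
    # One Gaussian bump per sample; the estimate is their average.
    diffs = (xs[:, None] - samples[None, :]) / h
    bumps = np.exp(-0.5 * diffs**2) / np.sqrt(2.0 * np.pi)
    return bumps.mean(axis=1) / h

# Five of the six data points listed on the slide (x6 is lost in the source).
data = [-2.1, -1.3, -0.4, 1.9, 5.1]
grid = np.linspace(-6, 9, 400)
density = gaussian_kde(data, grid, h=1.5)  # h = sqrt(2.25), as in the figure
```

The bandwidth h plays the same role as the histogram bin width: larger h smooths the estimate more.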
Kernel and Kernel Profile (1)
The kernel density estimator:
  \hat{f}(x) = \frac{1}{n h^d} \sum_{i=1}^{n} K\left(\frac{x - x_i}{h}\right)
A special class of radially symmetric kernels:
  K(x) = c_k\, k(\|x\|^2)
where c_k makes K(x) integrate to 1; k(·) is called the profile of the kernel.
Kernel and Kernel Profile (2)
We can use the profile to describe the estimator:
  \hat{f}_K(x) = \frac{c_k}{n h^d} \sum_{i=1}^{n} k\left(\left\|\frac{x - x_i}{h}\right\|^2\right)
Derivative Kernel and Profile
Define the derivative of the kernel profile:
  g(x) = -k'(x)
The kernel corresponding to g(x) is
  G(x) = c_g\, g(\|x\|^2)
The kernel K(x) is called the shadow of G(x).
Density Gradient Estimation
Let's compute the gradient of the kernel estimate:
  \nabla \hat{f}_K(x) = \frac{2 c_k}{n h^{d+2}} \sum_{i=1}^{n} (x_i - x)\, g\left(\left\|\frac{x - x_i}{h}\right\|^2\right)
Mean-Shift Vector (1)
Look at
  \hat{f}_G(x) = \frac{c_g}{n h^d} \sum_{i=1}^{n} g\left(\left\|\frac{x - x_i}{h}\right\|^2\right)
which is the kernel density estimator using kernel g(·). So we have
  \nabla \hat{f}_K(x) = \frac{2 c_k}{h^2 c_g}\, \hat{f}_G(x)\, m_G(x)
In other words, the mean-shift vector is
  m_G(x) = \frac{\sum_{i} x_i\, g(\|(x - x_i)/h\|^2)}{\sum_{i} g(\|(x - x_i)/h\|^2)} - x
Mean-Shift Vector (2)
We can treat \nabla \hat{f}_K(x) / \hat{f}_G(x) as a normalized density gradient estimate:
  m_G(x) = \frac{h^2 c_g}{2 c_k} \frac{\nabla \hat{f}_K(x)}{\hat{f}_G(x)}
A large local mean shift corresponds to a large density change, so shifting each point to its local mean climbs toward a mode of the density.
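The mode-seeking iteration above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: it assumes the Epanechnikov kernel K, whose shadow relation makes g(·) constant, so the mean-shift step is simply the mean of the neighbors within the bandwidth:

```python
import numpy as np

def mean_shift_mode(points, start, h, tol=1e-5, max_iter=100):
    """Follow the mean-shift vector m(x) = mean(neighbors within h) - x
    until it vanishes, i.e. until x sits on a local density mode."""
    points = np.asarray(points, dtype=float)
    x = np.asarray(start, dtype=float)
    for _ in range(max_iter):
        nbrs = points[np.linalg.norm(points - x, axis=1) <= h]
        if len(nbrs) == 0:
            break                              # no support under the kernel
        shift = nbrs.mean(axis=0) - x          # the mean-shift vector m_G(x)
        x = x + shift
        if np.linalg.norm(shift) < tol:
            break                              # converged to a mode
    return x

# Two well-separated clusters; starting near one converges to its center.
rng = np.random.default_rng(0)
pts = np.vstack([rng.normal(0.0, 0.3, (100, 2)),
                 rng.normal(5.0, 0.3, (100, 2))])
mode = mean_shift_mode(pts, start=[0.8, 0.8], h=1.0)
```

Because each step moves to a local weighted mean, the sequence climbs the density surface without ever estimating the full density.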
Non-Rigid Object Tracking
Mean-Shift Object Tracking
General Framework: Target Representation
Choose a feature space
Represent the model in the chosen feature space
Choose a reference model in the current frame
Mean-Shift Object Tracking
General Framework: Target Localization
Start from the position of the model in the current frame
Search in the model's neighborhood in the next frame
Find the best candidate by maximizing a similarity function
Repeat the same process in the next pair of frames
Mean-Shift Object Tracking
Target Representation
Choose a feature space (quantized color space)
Choose a reference target model
Represent the model by its PDF in the feature space
(Kernel-Based Object Tracking, by Comaniciu, Ramesh, Meer)
Mean-Shift Object Tracking
Finding the PDF of the target model
Model (centered at 0), with target pixel locations \{x_i\}:
  \hat{q}_u = C \sum_{i=1}^{n} k(\|x_i\|^2)\, \delta[b(x_i) - u]   (probability of feature u in the model)
Candidate (centered at y):
  \hat{p}_u(y) = C_h \sum_{i=1}^{n_h} k\left(\left\|\frac{y - x_i}{h}\right\|^2\right) \delta[b(x_i) - u]   (probability of feature u in the candidate)
k(·) is a differentiable, isotropic, convex, monotonically decreasing kernel profile: it assigns low weight to peripheral pixels, which are the ones most affected by occlusion and background interference. C and C_h are normalization factors; k(·) supplies the pixel weights.
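A minimal sketch of such a kernel-weighted color histogram, assuming an Epanechnikov profile k(r) = 1 − r and a uniformly quantized RGB space (the function name and normalization are illustrative, not from the paper):

```python
import numpy as np

def kernel_histogram(patch, n_bins=16):
    """q_u = C * sum_i k(||x_i||^2) * delta[b(x_i) - u] over an RGB patch.
    patch: (H, W, 3) uint8 image region centered on the target."""
    H, W, _ = patch.shape
    ys, xs = np.mgrid[0:H, 0:W]
    # Normalized coordinates: the patch border sits near radius 1.
    ny = (ys - (H - 1) / 2) / (H / 2)
    nx = (xs - (W - 1) / 2) / (W / 2)
    k = np.maximum(1.0 - (ny**2 + nx**2), 0.0)   # Epanechnikov profile weights
    # b(x_i): map each pixel to a bin of the quantized RGB space.
    q = (patch // (256 // n_bins)).astype(int)
    bins = ((q[..., 0] * n_bins + q[..., 1]) * n_bins + q[..., 2]).ravel()
    hist = np.bincount(bins, weights=k.ravel(), minlength=n_bins**3)
    return hist / hist.sum()                      # C makes the q_u sum to 1
```

Peripheral pixels contribute little because k(·) decays toward the patch border, which is exactly the robustness argument on the slide.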
Mean-Shift Object Tracking
Similarity Function
Target model: \hat{q} = \{\hat{q}_u\}_{u=1,\dots,m}, with \sum_{u=1}^{m} \hat{q}_u = 1
Target candidate: \hat{p}(y) = \{\hat{p}_u(y)\}_{u=1,\dots,m}, with \sum_{u=1}^{m} \hat{p}_u = 1
Similarity function (the Bhattacharyya coefficient):
  \hat{\rho}(y) \equiv \rho[\hat{p}(y), \hat{q}] = \sum_{u=1}^{m} \sqrt{\hat{p}_u(y)\, \hat{q}_u}
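The coefficient itself is a one-liner; a small sketch:

```python
import numpy as np

def bhattacharyya(p, q):
    """rho(p, q) = sum_u sqrt(p_u * q_u): 1 for identical distributions,
    0 for distributions with disjoint support."""
    return float(np.sum(np.sqrt(np.asarray(p) * np.asarray(q))))
```

Geometrically, it is the cosine of the angle between the unit vectors (sqrt(p_1), ..., sqrt(p_m)) and (sqrt(q_1), ..., sqrt(q_m)).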
Mean-Shift Object Tracking
Target Localization Algorithm
Start from the position of the model in the current frame
Search in the model's neighborhood in the next frame
Find the best candidate by maximizing a similarity function
Mean-Shift Object Tracking
Approximating the Similarity Function
Model location: y_0. Candidate location: y.
Linear approximation of the Bhattacharyya coefficient around \hat{p}(y_0):
  \rho[\hat{p}(y), \hat{q}] \approx \frac{1}{2} \sum_{u=1}^{m} \sqrt{\hat{p}_u(y_0)\, \hat{q}_u} + \frac{1}{2} \sum_{u=1}^{m} \hat{p}_u(y) \sqrt{\frac{\hat{q}_u}{\hat{p}_u(y_0)}}
The first term is independent of y. Substituting \hat{p}_u(y), the second term becomes
  \frac{C_h}{2} \sum_{i=1}^{n_h} w_i\, k\left(\left\|\frac{y - x_i}{h}\right\|^2\right), \quad w_i = \sum_{u=1}^{m} \sqrt{\frac{\hat{q}_u}{\hat{p}_u(y_0)}}\, \delta[b(x_i) - u]
which is a density estimate (as a function of y).
Mean-Shift Object Tracking
Maximizing the Similarity Function
The mode of this density estimate is the sought maximum.
Important assumptions:
There is one mode in the searched neighborhood
The target representation provides sufficient discrimination
Mean-Shift Object Tracking
Applying Mean-Shift
Original mean shift: find the mode of \hat{f}(y) = C \sum_{i} k(\|(y - x_i)/h\|^2) using
  y_1 = \frac{\sum_{i} x_i\, g(\|(y_0 - x_i)/h\|^2)}{\sum_{i} g(\|(y_0 - x_i)/h\|^2)}
Extended mean shift: find the mode of C \sum_{i} w_i\, k(\|(y - x_i)/h\|^2) using
  y_1 = \frac{\sum_{i} x_i\, w_i\, g(\|(y_0 - x_i)/h\|^2)}{\sum_{i} w_i\, g(\|(y_0 - x_i)/h\|^2)}
Mean-Shift Object Tracking
Bhattacharyya Coefficient Maximization Algorithm
1. Given the target model \hat{q} and its location y_0 in the previous frame, compute \hat{p}(y_0) in the current frame and evaluate \rho[\hat{p}(y_0), \hat{q}].
2. Derive the weights w_i.
3. Apply the mean-shift step to find the new location y_1.
4. Compute \hat{p}(y_1) and \rho[\hat{p}(y_1), \hat{q}].
5. While \rho[\hat{p}(y_1), \hat{q}] < \rho[\hat{p}(y_0), \hat{q}], set y_1 \leftarrow \frac{1}{2}(y_0 + y_1).
6. If \|y_1 - y_0\| < \epsilon, stop; otherwise set y_0 \leftarrow y_1 and go to step 2.
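The localization loop can be sketched as follows. This is an illustrative simplification, not the authors' code: it uses unweighted histograms over a square window, assumes a flat g(·) (as with the Epanechnikov profile), and omits the step-halving check on rho for brevity; all names are hypothetical:

```python
import numpy as np

def track_step(frame, q_model, y0, half, n_bins=16, eps=0.5, max_iter=20):
    """One frame of mean-shift localization.
    frame: (H, W, 3) uint8; q_model: model histogram over n_bins**3 RGB bins;
    y0: (row, col) previous target center; half: half-size of the window."""

    def window(y):
        r, c = int(round(y[0])), int(round(y[1]))
        patch = frame[r - half:r + half + 1, c - half:c + half + 1]
        H, W, _ = patch.shape
        ys, xs = np.mgrid[0:H, 0:W]
        coords = np.stack([ys + (r - half), xs + (c - half)],
                          axis=-1).reshape(-1, 2).astype(float)
        q = (patch // (256 // n_bins)).astype(int)
        bins = ((q[..., 0] * n_bins + q[..., 1]) * n_bins + q[..., 2]).ravel()
        return coords, bins

    y = np.asarray(y0, dtype=float)
    for _ in range(max_iter):
        coords, bins = window(y)
        p = np.bincount(bins, minlength=n_bins**3).astype(float)
        p /= p.sum()                              # candidate histogram p(y)
        # Weights w_i = sqrt(q_u / p_u) for the bin u of each pixel.
        with np.errstate(divide="ignore", invalid="ignore"):
            ratio = np.sqrt(np.where(p > 0, q_model / p, 0.0))
        w = ratio[bins]
        # Mean-shift step: weighted centroid of the window pixels.
        y_new = (coords * w[:, None]).sum(axis=0) / w.sum()
        if np.linalg.norm(y_new - y) < eps:
            return y_new                          # converged (step 6)
        y = y_new
    return y
```

On a synthetic frame with a single uniformly colored target, one or two iterations already land on the target centroid, which matches the low iteration counts reported in the experiments.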
Mean-Shift Object Tracking
Results
Feature space: RGB color space quantized into 32 × 32 × 32 bins
Target: manually selected on the 1st frame
Average mean-shift iterations:
Frames of 352 × 240 pixels
Mean-Shift Object Tracking
Results
Partial occlusion (#105), distraction (#140), motion blur (#150)
Conclusion
By exploiting the spatial gradient of the statistical measure (the Bhattacharyya coefficient), the new method achieves real-time tracking performance while remaining robust to background clutter and partial occlusions.