1
A Probabilistic Framework for Segmentation and Tracking of Multiple Non-Rigid Objects for Video Surveillance. Aleksandar Ivanovic, Thomas S. Huang. ICIP 2004.
2
Outline
Introduction
Content segmentation
–Pixel probability model
–Foreground probability model
Object tracking method
Object detection
Experimental results
3
Introduction
In video surveillance, reliable segmentation of moving objects is essential for successful event recognition.
Tracking non-rigid objects raises several difficulties, such as handling occlusion, disjoint objects, and new-object detection.
Park and Aggarwal proposed performing segmentation at the pixel, blob, and object levels.
4
Pixel Probability Model
Use the Lu*v* color space.
Use a single Gaussian to model the color distribution of each pixel p(x, y) at image coordinate (x, y).
Use the Mahalanobis distance M_b(x, y) for background segmentation.
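The per-pixel background test above can be sketched as follows. This is a minimal illustration, assuming a diagonal covariance for each pixel's Gaussian (the paper does not specify the covariance structure); the function name and threshold value are illustrative, not from the paper.

```python
import numpy as np

def mahalanobis_background(frame, mean, var, threshold=3.0):
    """Per-pixel Mahalanobis distance to a single-Gaussian background model.

    frame, mean: (H, W, 3) arrays in the chosen color space (e.g. Lu*v*);
    var: (H, W, 3) per-channel variances (diagonal covariance assumption).
    Returns the distance map M_b and a boolean foreground mask.
    """
    d2 = ((frame - mean) ** 2 / np.maximum(var, 1e-6)).sum(axis=2)
    m_b = np.sqrt(d2)                      # Mahalanobis distance M_b(x, y)
    return m_b, m_b > threshold            # pixels far from the model = foreground

# toy usage: a 2x2 image where one pixel deviates strongly from the model
mean = np.zeros((2, 2, 3))
var = np.ones((2, 2, 3))
frame = mean.copy()
frame[0, 0] = [10.0, 0.0, 0.0]
m_b, fg = mahalanobis_background(frame, mean, var)
```

With unit variances the distance reduces to the Euclidean distance from the mean, so the deviating pixel gets M_b = 10 and is flagged as foreground.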
5
Foreground Probability Model
F(x, y): foreground label
A(x, y): feature vector
–A(x, y) = [M_b(x, y), D(x, y), P_h(x, y)]
–D(x, y): absolute color distance
 D(x, y) = |R(x, y) − R_mean(x, y)| + |G(x, y) − G_mean(x, y)| + |B(x, y) − B_mean(x, y)|
–P_h(x, y): color similarity measure
Bayesian Network (BN) modeling
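The absolute distance D(x, y) defined above is a simple sum of per-channel differences; a minimal sketch (function name is illustrative):

```python
import numpy as np

def absolute_color_distance(frame, mean):
    """D(x, y) = |R - R_mean| + |G - G_mean| + |B - B_mean| per pixel."""
    return np.abs(frame.astype(float) - mean.astype(float)).sum(axis=2)

# one-pixel example: each channel differs by 10, so D = 30
frame = np.array([[[100, 150, 200]]], dtype=np.uint8)
mean = np.array([[[90, 160, 190]]], dtype=np.uint8)
d = absolute_color_distance(frame, mean)
```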
6
Foreground Probability Model (cont.)
P(A|F=0) and P(A|F=1):
–modeled with Gaussian mixture models
Gaussian mixture model:
–v = [H, S, V]^T, a random variable
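Evaluating a Gaussian mixture density, as used here for P(A|F=0) and P(A|F=1), can be sketched as below. This assumes diagonal covariances and hypothetical parameters; in the paper the mixture parameters would be learned from labeled pixels.

```python
import numpy as np

def gmm_density(v, weights, means, variances):
    """Density of a diagonal-covariance Gaussian mixture at point v.

    weights: (K,) mixture weights summing to 1; means, variances: (K, D).
    """
    v = np.asarray(v, dtype=float)
    diff2 = (v - means) ** 2 / variances                  # (K, D) squared terms
    log_norm = -0.5 * np.log(2 * np.pi * variances).sum(axis=1)
    comp = np.exp(log_norm - 0.5 * diff2.sum(axis=1))     # per-component density
    return float(np.dot(weights, comp))                   # weighted sum

# two-component mixture over a 3-D feature vector (e.g. v = [H, S, V])
w = np.array([0.5, 0.5])
mu = np.array([[0.0, 0.0, 0.0], [5.0, 5.0, 5.0]])
var = np.ones((2, 3))
p = gmm_density([0.0, 0.0, 0.0], w, mu, var)
```

At the first component's mean, the density is dominated by that component: roughly 0.5 times the peak of a standard 3-D Gaussian.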
7
Blob Formation
Foreground pixels with the same color are labeled as belonging to the same class.
Connected component analysis is used to relabel the disjoint blobs, applying:
–Adjacency criterion
–Color similarity criterion
–Small blob criterion
–Skin blob criterion (specific to the human model)
8
Connected Components Matching
Connected components:
–4-connected components
–8-connected components
Objects are tracked by matching the connected components to the foreground objects in the previous frame:
–One-to-one match
–Many-to-one match
–One-to-many match
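The difference between 4- and 8-connectivity mentioned above can be illustrated with a simple flood-fill labeler. This is a minimal pure-Python sketch, not the paper's implementation:

```python
from collections import deque

def label_components(mask, connectivity=4):
    """Label connected components of a binary mask (list of lists of 0/1).

    connectivity: 4 (edge neighbors) or 8 (edge + diagonal neighbors).
    Returns a label grid (0 = background, 1..N = component ids) and N.
    """
    h, w = len(mask), len(mask[0])
    nbrs = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    if connectivity == 8:
        nbrs += [(-1, -1), (-1, 1), (1, -1), (1, 1)]
    labels = [[0] * w for _ in range(h)]
    count = 0
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not labels[y][x]:
                count += 1
                labels[y][x] = count
                q = deque([(y, x)])
                while q:                        # breadth-first flood fill
                    cy, cx = q.popleft()
                    for dy, dx in nbrs:
                        ny, nx = cy + dy, cx + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny][nx] and not labels[ny][nx]):
                            labels[ny][nx] = count
                            q.append((ny, nx))
    return labels, count

# two diagonally adjacent pixels: two components under 4-connectivity,
# a single component under 8-connectivity
mask = [[1, 0], [0, 1]]
_, n4 = label_components(mask, connectivity=4)
_, n8 = label_components(mask, connectivity=8)
```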
9
Connected Components Matching (cont.)
(f(i), c(j)): (foreground object, connected component) pair
k(t): feature vector describing f(i) or c(j)
–k(t) = [x_s(t), y_s(t), S(t), H(t), x_c(t), y_c(t)]
 x_s(t), y_s(t): horizontal and vertical size of the bounding box
 S(t): size in pixels
 H(t): color histogram of the object/connected component
 x_c(t), y_c(t): centroid of the pixels of the object/connected component
m(i, j): information for matching f(i) to c(j)
–m(i, j) = [SC(i, j), ED(i, j), HS(i, j), XC(i, j), YC(i, j)]
 SC(i, j): size change, S(j) / S(i)
 ED(i, j): Euclidean distance between (x_c(i), y_c(i)) and (x_c(j), y_c(j))
 HS(i, j): similarity between H(i) and H(j)
 XC(i, j), YC(i, j): x_s(j) / x_s(i) and y_s(j) / y_s(i)
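Computing m(i, j) from two feature vectors k(t) can be sketched as below. The dict field names are illustrative, and histogram intersection is assumed for the similarity HS (the paper does not specify which histogram similarity it uses):

```python
import numpy as np

def match_features(f, c):
    """Compute m(i, j) = [SC, ED, HS, XC, YC] between a foreground object f
    and a connected component c. Each is a dict with keys:
    'size' S, 'bbox' (x_s, y_s), 'centroid' (x_c, y_c), 'hist' (normalized).
    """
    sc = c['size'] / f['size']                              # size change S(j)/S(i)
    ed = float(np.hypot(c['centroid'][0] - f['centroid'][0],
                        c['centroid'][1] - f['centroid'][1]))
    hs = float(np.minimum(f['hist'], c['hist']).sum())      # histogram intersection
    xc = c['bbox'][0] / f['bbox'][0]                        # x_s(j)/x_s(i)
    yc = c['bbox'][1] / f['bbox'][1]                        # y_s(j)/y_s(i)
    return sc, ed, hs, xc, yc

f = {'size': 100, 'bbox': (10, 20), 'centroid': (5.0, 5.0),
     'hist': np.array([0.5, 0.5])}
c = {'size': 110, 'bbox': (11, 22), 'centroid': (8.0, 9.0),
     'hist': np.array([0.4, 0.6])}
m = match_features(f, c)
```

Here the centroids differ by (3, 4), so ED = 5, and the histogram intersection is min(0.5, 0.4) + min(0.5, 0.6) = 0.9.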
10
Probability Model
A BN is used to model the matching from foreground objects to connected components.
M: match label (M = 1 if matched)
11
Probability Model (cont.)
Occlusion case: group several objects into one.
Similar-to-background case: match several objects at the same time.
12
Object Detection
Connected components not matched to any foreground object are considered to be new objects.
Calculate the size of each candidate:
–does not work very well with small objects
Define the feature vector T = [S, LC, SH, CS]
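Assembling the detection features T = [S, LC, SH, CS] for an unmatched component can be sketched as below. The aspect-ratio choice for SH and the field names are assumptions for illustration; the paper only says SH is a simple shape feature:

```python
import numpy as np

def detection_features(component, prev_objects):
    """Features T = [S, LC, SH, CS] for a candidate new object.

    component: dict with 'size', 'centroid' (x, y), 'bbox' (x_s, y_s), 'cs'.
    prev_objects: list of (x, y) locations where foreground objects appeared.
    """
    s = component['size']
    if prev_objects:   # LC: distance to the nearest previous object location
        lc = min(float(np.hypot(component['centroid'][0] - px,
                                component['centroid'][1] - py))
                 for px, py in prev_objects)
    else:
        lc = float('inf')
    sh = component['bbox'][1] / component['bbox'][0]  # aspect ratio as a stand-in SH
    cs = component['cs']                              # color similarity, precomputed
    return s, lc, sh, cs

comp = {'size': 250, 'centroid': (10.0, 10.0), 'bbox': (5, 10), 'cs': 0.3}
t = detection_features(comp, [(13.0, 14.0), (100.0, 100.0)])
```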
13
Experimental Results
d, g: objects segmented using only the background model
e, h: objects segmented using the foreground probability P_f
b: probability map based only on the background model
c: foreground probability P_f
14
Color Similarity Measure P_h(x, y)
For all tracked objects:
P_h(x, y) = (number of pixels in the histogram bin that contains p(x, y)) / (total number of pixels in the color histogram)
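The ratio above can be computed from a tracked object's color histogram as sketched below. A 1-D hue histogram is used here for simplicity (the paper's histogram dimensionality is not specified); the function name is illustrative:

```python
import numpy as np

def color_similarity(hue, hist, bin_edges):
    """P_h(x, y): fraction of a tracked object's histogram mass that falls
    in the bin containing the pixel's color value."""
    b = int(np.clip(np.digitize(hue, bin_edges) - 1, 0, len(hist) - 1))
    return hist[b] / hist.sum()

edges = np.linspace(0.0, 1.0, 5)            # 4 hue bins over [0, 1]
hist = np.array([10.0, 30.0, 50.0, 10.0])   # object's hue histogram (100 pixels)
p_h = color_similarity(0.6, hist, edges)    # hue 0.6 falls in the third bin
```

Here the pixel's hue lands in the bin holding 50 of the 100 histogram pixels, so P_h = 0.5.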
15
BN Model for Object Detection
S: size of the connected component
LC: distance to the nearest location where a foreground object appeared
SH: a simple, frequently used shape feature
CS: color similarity