Image Segmentation with Level Sets
Group reading, 22-04-05
Outline
1. Active Contours
2. Level sets in Malladi 1995
3. Prior knowledge in Leventon 2000
1. Active Contours
1.1 Introduction to AC
1.2 Types of curves
1.3 Types of influences
1.4 Summary
1.1 Introduction to AC
AC = curve fitted iteratively to an image
– based on its shape and the image values
– until it stabilises (ideally on an object's boundary)
Curve representation:
– polygon = parametric AC
– continuous = geometric AC
1.1 Introduction to AC
Example: cell nucleus
Initialisation: external
Fitting: iterative
– using the current geometry (contour influence) and the image contents (image influence)
– influence: scalar = energy; vector = force
– until the energy is minimal, or the forces are null
Image: Ian Diblik 2004
1.2 Types of curves
Parametric AC:
– stored as vertices
– each vertex is moved iteratively
Geometric AC:
– stored as coefficients
– sampled before each iteration
– each sample is moved
– new coefficients are computed (interpolation)
1.3 Types of influences
Each vertex / sample v_i is moved:
– in a direction that minimises its energy:
  E(v_i) = E_contour(v_i) + E_image(v_i)
– or along the direction of the force:
  F(v_i) = F_contour(v_i) + F_image(v_i)
( linear combinations of influences )
1.3 Types of influences
Contour influence:
– meant to control the geometry locally around v_i
– corresponds to the prior knowledge on shape
Energy, example (α, β weight the first and second derivatives of the curve):
  E_contour(v_i) = α ||v_i'||² + β ||v_i''||²
Force, example:
  F_contour(v_i) = α v_i'' − β v_i'''' + δ n_i
where n_i is the normal of the contour at v_i and δ is a constant pressure (balloon)
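The contour energy above can be sketched numerically. A minimal sketch, assuming a closed polygon and discretising the derivatives with finite differences (`contour_energy`, `alpha`, `beta` are illustrative names, not from the paper):

```python
import numpy as np

# Hedged sketch of the internal (contour) energy of a parametric snake,
# E_contour = alpha * ||v'||^2 + beta * ||v''||^2, discretised on a closed
# polygon with circular finite differences. alpha and beta are assumed weights.
def contour_energy(verts, alpha=0.5, beta=0.1):
    # verts: (N, 2) array of polygon vertices (closed curve)
    d1 = np.roll(verts, -1, axis=0) - verts                                  # v'
    d2 = np.roll(verts, -1, axis=0) - 2 * verts + np.roll(verts, 1, axis=0)  # v''
    return alpha * np.sum(d1 ** 2) + beta * np.sum(d2 ** 2)

# A smaller circle has shorter edges and smaller second differences,
# so the internal energy prefers short, smooth curves.
theta = np.linspace(0, 2 * np.pi, 50, endpoint=False)
big = np.stack([10 * np.cos(theta), 10 * np.sin(theta)], axis=1)
small = 0.5 * big
print(contour_energy(big) > contour_energy(small))  # True
```

This illustrates why a contour driven only by this energy shrinks: both terms decrease as the curve gets shorter and smoother, which is exactly what the balloon pressure term is there to counteract.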
1.3 Types of influences
Image influence:
– meant to drive the contour locally
– corresponds to the prior knowledge on the object boundaries
Energy, examples:
  E_image(v_i) = ( I(v_i) − I_target )²
  E_image(v_i) = 1 / ( 1 + ||∇I(v_i)|| )   (Malladi 1995)
  E_image(v_i) = − ||∇I(v_i)||   (Leventon 2000)
Force, examples:
  F_image(v_i) = E_image(v_i) n_i
  F_image(v_i) = − ∇I(v_i)
1.3 Types of influences
Exotic influences:
– E_contour(v_i) proportional to the deformations of a template
– E_image(v_i) based on region statistics, e.g. intensities I_inside vs I_outside
( Bredno et al. 2000; Hamarneh et al. 2001 )
1.4 Summary
Active Contours:
– function v : ℕ → P( [0, I_width] × [0, I_height] ), t ↦ { (x,y) }
– v(0) is provided externally
– v(t+1) is computed by moving sample points v_i of v(t)
– moved using a linear combination of influences
– on convergence, v(t_end) is the border of the object
Issues:
– fixed topology
– possible self-crossing
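The iterative scheme summarised above can be sketched end to end. A minimal, hedged sketch of a force-driven parametric contour: only the internal smoothing force and the balloon pressure are included (image forces omitted to stay self-contained), and `evolve_contour`, `alpha`, `pressure` are illustrative names under those assumptions:

```python
import numpy as np

# Hedged sketch of a force-driven parametric active contour: each iteration
# moves every vertex by a linear combination of an internal smoothing force
# (discrete v'') and a constant balloon pressure along the normal.
def evolve_contour(verts, n_iters=100, alpha=0.2, pressure=-0.05, step=1.0):
    v = verts.astype(float).copy()
    for _ in range(n_iters):
        smooth = np.roll(v, -1, axis=0) - 2 * v + np.roll(v, 1, axis=0)  # v''
        tangent = np.roll(v, -1, axis=0) - np.roll(v, 1, axis=0)
        normal = np.stack([tangent[:, 1], -tangent[:, 0]], axis=1)  # rotate -90 deg
        normal /= np.maximum(np.linalg.norm(normal, axis=1, keepdims=True), 1e-12)
        v += step * (alpha * smooth + pressure * normal)            # F_contour only
    return v

# A circle under smoothing plus negative (inward) pressure shrinks.
theta = np.linspace(0, 2 * np.pi, 60, endpoint=False)
circle = np.stack([10 * np.cos(theta), 10 * np.sin(theta)], axis=1)
out = evolve_contour(circle)
print(np.linalg.norm(out, axis=1).mean() < 10)  # True: radius decreased
```

Note how the fixed vertex list makes the topology issue concrete: no update of this array can split the curve in two, which is the limitation level sets remove.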
2. Level sets in Malladi 1995
2.1 Introduction to LS
2.2 Equation of motion
2.3 Algorithm 1
2.4 Narrow-band extension
2.5 Summary
2.1 Introduction to LS
Transition from Active Contours:
– contour v(t) → front Γ(t)
– contour energy → forces F_A, F_G
– image energy → speed function k_I
Level set:
– the level set c_0 at time t of a function φ(x,y,t) is the set of arguments { (x,y), φ(x,y,t) = c_0 }
– idea: define a function φ(x,y,t) so that at any time, Γ(t) = { (x,y), φ(x,y,t) = 0 }
– there are many such φ
– φ has many other level sets, more or less parallel to Γ
– only Γ has a meaning for segmentation, not any other level set of φ
2.1 Introduction to LS
Usual choice for φ: signed distance to the front Γ(0)
  φ(x,y,0) = − d(x,y,Γ)   if (x,y) is inside the front
           = 0            on Γ
           = + d(x,y,Γ)   outside
(figure: grid of signed-distance values φ(x,y,t) around the front Γ(t))
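The signed-distance initialisation above can be sketched on a toy front. A hedged sketch, assuming a circular front around the grid centre (`signed_distance_to_circle` is an illustrative name; real code would use a distance transform on an arbitrary front):

```python
import numpy as np

# Hedged sketch: initialise phi(x, y, 0) as the signed distance to the front,
# here a circle of radius r around the grid centre (an assumed toy front).
def signed_distance_to_circle(shape, r):
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    dist_to_centre = np.sqrt((yy - cy) ** 2 + (xx - cx) ** 2)
    # negative inside the front, zero on it, positive outside
    return dist_to_centre - r

phi = signed_distance_to_circle((21, 21), r=6.0)
print(phi[10, 10] < 0)   # True: the centre is inside the front
print(phi[0, 0] > 0)     # True: the corner is outside
```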
2.1 Introduction to LS
φ(x,y,t+1) = φ(x,y,t) + Δφ(x,y,t)
– no movement, only change of values
– the front may change its topology
– the front location may be between samples
(figure: the grid of values of φ before and after one update)
2.1 Introduction to LS
Segmentation with LS:
1. Initialise the front Γ(0)
2. Compute φ(x,y,0)
3. Iterate: φ(x,y,t+1) = φ(x,y,t) + Δφ(x,y,t), until convergence
4. Mark the front Γ(t_end)
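The loop above can be sketched with a placeholder update. This is a hedged sketch only: the real Δφ comes from the equation of motion of section 2.2; here a uniform inward decrement stands in for it, so the zero level set simply expands (`segment`, `dt` are illustrative names):

```python
import numpy as np

# Hedged sketch of the level-set segmentation loop, with an assumed toy
# update: every value decreases by dt, which moves the zero level set
# outward. A real update would use the speed function of section 2.2.
def segment(phi0, n_iters=20, dt=0.1):
    phi = phi0.copy()
    for _ in range(n_iters):
        delta_phi = -dt * np.ones_like(phi)   # placeholder for the real speed term
        phi = phi + delta_phi                 # phi(t+1) = phi(t) + delta_phi(t)
    return phi

# phi0: signed distance to a circle of radius 3 on a 21x21 grid
yy, xx = np.mgrid[0:21, 0:21]
phi0 = np.sqrt((yy - 10.0) ** 2 + (xx - 10.0) ** 2) - 3.0
phi = segment(phi0)
# the front {phi = 0} moved outward: more pixels are now inside (phi < 0)
print((phi < 0).sum() > (phi0 < 0).sum())  # True
```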
2.2 Equation of motion
Equation 17, p. 162: a link between spatial and temporal derivatives, but not the same type of motion as contours!
  φ_t + k̂_I ( F_A + F_G(κ) ) |∇φ| = 0
– φ_t: temporal derivative, discretised as φ(x,y,t+1) − φ(x,y,t)
– F_A: constant force (balloon pressure)
– F_G(κ): smoothing force depending on the local curvature κ (contour influence)
– k̂_I: extension of the speed function k_I (image influence)
– |∇φ|: spatial derivative, multiplied by the product of influences
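The curvature κ feeding the smoothing force F_G can be computed directly from φ as κ = div( ∇φ / |∇φ| ). A hedged numpy sketch using central differences (papers typically use specific upwind schemes instead; `curvature` is an illustrative name):

```python
import numpy as np

# Hedged sketch of the curvature of the level sets of phi:
# kappa = div( grad(phi) / |grad(phi)| ), via central differences.
def curvature(phi, eps=1e-12):
    gy, gx = np.gradient(phi)                 # axis 0 = y (rows), axis 1 = x (cols)
    norm = np.sqrt(gx ** 2 + gy ** 2) + eps
    nyy, _ = np.gradient(gy / norm)           # d(n_y)/dy
    _, nxx = np.gradient(gx / norm)           # d(n_x)/dx
    return nxx + nyy

# Signed distance to a circle of radius 5: front curvature should be ~ 1/5.
yy, xx = np.mgrid[0:31, 0:31]
phi = np.sqrt((yy - 15.0) ** 2 + (xx - 15.0) ** 2) - 5.0
k = curvature(phi)
print(abs(k[15, 20] - 1.0 / 5.0) < 0.1)  # True: kappa ~ 1/r on the front
```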
2.2 Equation of motion
Speed function:
– k_I is meant to stop the front on the object's boundaries
– similar to an image energy: k_I(x,y) = 1 / ( 1 + ||∇I(x,y)|| )
– only makes sense for the front (level set 0)
– yet the same equation drives all level sets → extend k_I to all level sets, defining k̂_I
– possible extension: k̂_I(x,y) = k_I(x',y'), where (x',y') is the point of the front closest to (x,y)
( such a k̂_I depends on the front location )
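The speed function k_I = 1 / (1 + ||∇I||) above can be sketched directly. A minimal sketch on a toy image (`speed_function` is an illustrative name; real pipelines usually smooth I with a Gaussian before taking the gradient):

```python
import numpy as np

# Hedged sketch of the speed function k_I = 1 / (1 + ||grad I||):
# close to 1 in flat image regions, close to 0 on strong edges,
# so the front slows down on object boundaries.
def speed_function(image):
    gy, gx = np.gradient(image.astype(float))
    grad_mag = np.sqrt(gx ** 2 + gy ** 2)
    return 1.0 / (1.0 + grad_mag)

# toy image: a bright square on a dark background
img = np.zeros((20, 20))
img[5:15, 5:15] = 100.0
k = speed_function(img)
print(k[10, 10] > 0.9)   # True: flat interior, front moves freely
print(k[10, 5] < 0.1)    # True: on the edge, front nearly stops
```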
2.3 Algorithm 1 (p. 163)
1. Compute the speed k_I on the front; extend it to all other level sets
2. Compute φ(x,y,t+1) = φ(x,y,t) + Δφ(x,y,t)
3. Find the front location (for the next iteration); modify φ(x,y,t+1) by linear interpolation
(figure: grids of values of φ(x,y,t) before and after one full iteration)
2.4 Narrow-band extension
Weaknesses of Algorithm 1:
– update of all φ(x,y,t): inefficient, we only care about the front
– speed extension: computationally expensive
Improvement:
– narrow band: only update a few level sets around Γ
– other extended speed: k̂_I(x,y) = 1 / ( 1 + ||∇I(x,y)|| )
(figure: grid of values of φ restricted to a narrow band around the front)
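Narrow-band selection can be sketched as a boolean mask over the grid. A hedged sketch, assuming the band is defined by a half-width on |φ| (`narrow_band`, `half_width` are illustrative names):

```python
import numpy as np

# Hedged sketch of narrow-band selection: only grid points whose |phi| is
# below a half-width are updated; the band must be re-selected when the
# front approaches its rim.
def narrow_band(phi, half_width=3.0):
    return np.abs(phi) <= half_width

yy, xx = np.mgrid[0:41, 0:41]
phi = np.sqrt((yy - 20.0) ** 2 + (xx - 20.0) ** 2) - 10.0  # front: radius-10 circle
band = narrow_band(phi)
print(band[20, 30])       # True: on the front, inside the band
print(not band[20, 20])   # True: centre (phi = -10) is outside the band
print(band.mean() < 0.5)  # True: only a small fraction of the grid is updated
```

The last line is the point of the extension: the cost per iteration scales with the band area, not the full grid.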
2.4 Narrow-band extension
Caution:
– extrapolate the curvature at the edges of the band
– re-select the narrow band regularly: an empty pixel cannot get a value, which may restrict the evolution of the front
(figure: band of values of φ around the front)
2.5 Summary
Level sets:
– function φ : [0, I_width] × [0, I_height] × ℕ → ℝ, (x,y,t) ↦ φ(x,y,t)
– embeds a curve: Γ(t) = { (x,y), φ(x,y,t) = 0 }
– Γ(0) is provided externally; φ(x,y,0) is computed
– φ(x,y,t+1) is computed by changing the values of φ(x,y,t)
– changes use a product of influences
– on convergence, Γ(t_end) is the border of the object
Issue:
– computation time (improved with the narrow band)
3. Prior knowledge in Leventon 2000
3.1 Introduction
3.2 Level set model
3.3 Learning prior shapes
3.4 Using prior knowledge
3.5 Summary
3.1 Introduction
New notations:
– function φ(x,y,t) → shape u(t)   ( still a function of (x,y,t) )
– front Γ(t) → parametric curve C(q)   ( = level set 0 of u(t) )
– extended speed k̂_I → stopping function g(I)
Malladi's model, choosing F_A = −c (constant value) and F_G(κ) = −κ (penalises high curvatures)
3.1 Introduction
Roadmap for one iteration on u(t):
1. Use a level set model (based on the current geometry and local image values) to compute a variation Δu(t): u(t+1) = u(t) + Δu(t)
2. Use prior knowledge and global image values to find the prior shape closest to u(t): call it u*(t)
3. Combine these two linearly to find the next iteration:
  u(t+1) = u(t) + λ₁ Δu(t) + λ₂ ( u*(t) − u(t) )
3.2 Level set model
3.2 Level set model
Not Malladi's model: it's Caselles' model.
  u_t = g(I) ( c + κ ) |∇u|   ( implicit geometric active contour )
  u_t = g(I) ( c + κ ) |∇u| + ∇g(I) · ∇u   ( implicit geodesic active contour )
Use the geodesic form in: u(t+1) = u(t) + λ₁ Δu(t) + λ₂ ( u*(t) − u(t) )
That's all about level sets. Now it's all about u*.
3.3 Learning prior shapes
From functions to vectors:
– u(t) (continuous): sample it on the grid
– put the values in one column = vector u(t) (discrete)
(figure: grid of sampled values flattened into a single column vector)
3.3 Learning prior shapes
Storing the training set:
– set of n images manually segmented: { C_i, 1 ≤ i ≤ n }
– compute the embedding functions: T = { u_i, 1 ≤ i ≤ n }
– compute the mean and offsets of T: μ = (1/n) Σ_i u_i, û_i = u_i − μ
– build the matrix M of offsets: M = ( û_1 û_2 … û_n )
– build the covariance matrix (1/n) M Mᵀ and decompose it as:
  (1/n) M Mᵀ = U Σ Uᵀ
  U orthogonal, columns: orthogonal modes of shape variation
  Σ diagonal, values: corresponding singular values
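The training step above can be sketched with the SVD, which yields the same modes as decomposing the covariance. A hedged sketch on random stand-ins for the signed-distance maps (real data would be the sampled embedding functions u_i):

```python
import numpy as np

# Hedged sketch of the training step: stack embedding functions u_i as
# columns, subtract the mean, and take the SVD of the offset matrix M.
# The left singular vectors U are the modes of shape variation, equivalent
# to the eigenvectors of the covariance (1/n) M M^T.
rng = np.random.default_rng(0)
n, n_pixels = 12, 100                      # n training shapes, N_d samples each
shapes = rng.normal(size=(n_pixels, n))    # columns u_1 .. u_n (toy data)

mu = shapes.mean(axis=1, keepdims=True)    # mean shape
M = shapes - mu                            # offsets u_i - mu
U, S, _ = np.linalg.svd(M, full_matrices=False)

# the covariance (1/n) M M^T has eigenvalue S[i]**2 / n along mode U[:, i]
cov = (M @ M.T) / n
mode0 = U[:, 0]
print(np.allclose(cov @ mode0, (S[0] ** 2 / n) * mode0))  # True
```

Working with the SVD of M avoids ever forming the N_d × N_d covariance matrix, which matters when N_d = I_width × I_height.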
3.3 Learning prior shapes
Reducing the number of dimensions:
– u has dimension N_d (number of samples, e.g. I_width × I_height)
– keep only the first k columns of U: U_k (most significant modes)
– represent any shape u as: α = U_kᵀ ( u − μ )
  α = shape parameter of u, dimension k only
– restore a shape from α: ũ = U_k α + μ
– other parameter defining a shape u: pose p (position)
– prior shape: u*(α, p)
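The projection and restoration above can be sketched in a few lines. A hedged sketch with toy data: `U_k` is built as an arbitrary orthonormal basis, standing in for the top-k modes of the previous slide:

```python
import numpy as np

# Hedged sketch of dimensionality reduction: map a shape u to its shape
# parameter alpha = U_k^T (u - mu), and restore u_tilde = U_k alpha + mu.
rng = np.random.default_rng(1)
n_pixels, k = 100, 5
U_k = np.linalg.qr(rng.normal(size=(n_pixels, k)))[0]   # orthonormal columns
mu = rng.normal(size=n_pixels)                           # mean shape

u = mu + U_k @ rng.normal(size=k)      # a shape lying exactly in the k modes
alpha = U_k.T @ (u - mu)               # shape parameter, dimension k only
u_tilde = U_k @ alpha + mu             # restored shape
print(np.allclose(u, u_tilde))         # True: u was in the span of U_k
```

For a shape not in the span of U_k, ũ is the closest shape the model can express; the residual is what the k kept modes cannot represent.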
3.3 Learning prior shapes
At the end of the training, the modes range from the most common variation to the least common one. The most common shapes have a higher probability.
(figure: shapes generated along the most and least common modes of variation)
3.4 Using prior knowledge
3.4 Using prior knowledge
Finding the prior shape u* closest to u(t):
– given the current shape u(t) and the image I, we want:
  u*_MAP = ( α_MAP, p_MAP ) = arg max P( α, p | u(t), I )
– Bayes' rule (the left side has to be maximal):
  P( α, p | u, I ) = P( u | α, p ) · P( I | u, α, p ) · P(α) · P(p) / P( u, I )
– P( u | α, p ) = P( u | u* ) = exp( − V_outside ): compares u and u*
– P( I | u, α, p ) = P( I | u, u* ) ≈ P( I | u* ) = exp( − || h(u*) − |∇I| ||² ): compares u* with the image features
– P(α): measures how likely u* is as a prior shape
– P(p) = U(−∞, +∞): uniform distribution of poses (no prior knowledge)
– P( u, I ): constant
3.4 Using prior knowledge
P( u | u* ):
  P( u | u* ) = exp( − V_outside ), where V_outside is the volume of u outside of u*:
  V_outside = Card { (x,y), u(x,y) < 0 < u*(x,y) }
– meant to choose the prior shape u* most similar to u
– only useful when C(q) gets close to the final boundaries
(figure: the region of the current shape u lying outside the prior u*)
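V_outside can be sketched as a pixel count. A hedged sketch assuming the signed-distance convention used earlier (negative inside); `v_outside` and the toy circles are illustrative:

```python
import numpy as np

# Hedged sketch of V_outside: with signed distances (negative inside),
# count grid points inside the current shape u but outside the prior u*.
def v_outside(u, u_star):
    return int(np.sum((u < 0) & (u_star > 0)))

yy, xx = np.mgrid[0:21, 0:21]
u = np.sqrt((yy - 10.0) ** 2 + (xx - 10.0) ** 2) - 5.0              # radius-5 shape
u_star_big = np.sqrt((yy - 10.0) ** 2 + (xx - 10.0) ** 2) - 8.0     # encloses u
u_star_shifted = np.sqrt((yy - 10.0) ** 2 + (xx - 2.0) ** 2) - 5.0  # shifted prior

print(v_outside(u, u_star_big) == 0)     # True: u fully inside, P(u|u*) = 1
print(v_outside(u, u_star_shifted) > 0)  # True: part of u sticks out, P(u|u*) < 1
```

This matches the asymmetry of the term: a prior that generously encloses u(t) is not penalised, only a prior that u(t) spills out of.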
3.4 Using prior knowledge
(figure: three prior shapes u*_1, u*_2, u*_3 around a current shape u(t); priors that fully enclose u(t) get P(u(t) | u*) = 1, while priors that u(t) spills out of get lower values, e.g. 0.6 or 0)
3.4 Using prior knowledge
P( I | u, u* ):
– meant to choose the prior shape u* closest to the image
– depends on the boundary features (image influence)
  P( I | u* ) = exp( − || h(u*) − |∇I| ||² )
(figure: image I(x,y), level sets u* = −2, 0, 2, and the feature map h(u*))
3.4 Using prior knowledge
(figure: two candidate priors with P(I | u*_1) = 0.9 and P(I | u*_2) = 0.6; the update pulls u(t) towards the better prior: u(t) + λ₂ ( u*_1 − u(t) ))
3.5 Summary
Training:
– compute the mean and covariance of the training set
– reduce the dimension
Segmentation:
– initialise C, compute u(0) as a signed distance
– to find u(t+1) from u(t), compute:
  a variation Δu(t), using the current shape and local image values
  a prior shape u*, so that:
    u* is a likely prior shape
    u* encloses u(t)
    u* is close to the object boundaries
– combine u(t), Δu(t) and u* to get u(t+1)
Conclusions
Active contours:
– use image values and a target geometry
– topology: fixed, and not robust (self-crossing)
Level sets:
– also use image values and a target geometry
– but in a higher dimension, with a different kind of motion
– topology: flexible and robust
Prior knowledge:
– prior shapes influence the segmentation
– not specific to level sets
– the ideas can be adapted to other features (boundaries, pose)
That's all, folks!