CS 4487/9587 Algorithms for Image Analysis, The University of Ontario
Segmentation with Boundary Regularization
Acknowledgements: many slides from the University of Manchester; demos from the Visual Dynamics Group (University of Oxford).
Boundary Regularization
- Objective functions for segmentation
- "Intelligent scissors" (a.k.a. live-wire): contrast-weighted graphs; Dijkstra's algorithm
- Active contours (a.k.a. "snakes"): gradient descent; dynamic programming (DP) and the Viterbi algorithm; DP versus Dijkstra
Extra reading: Sonka et al., Sections 5.2.5 and 8.2; "Active Contours" by Blake and Isard.
Intelligent Scissors (a.k.a. live-wire)
[Eric Mortensen, William Barrett, 1995]
Intelligent Scissors
The approach answers a basic question.
Q: How do we find a path from the seed point to the mouse position that follows the object boundary as closely as possible?
A: Define a path that stays as close as possible to image edges.
Intelligent Scissors: Basic Idea
Find the lowest-cost path from the seed to the mouse position on a graph (e.g. the 8-connected pixel grid N8) weighted by intensity contrast. A simple example of a weight: some local measure of "contrast" using the magnitude of the intensity gradient.
NOTE: it is common to define weights directly on the edges of the graph/grid and use a shortest-path algorithm (Dijkstra's).
Q: How do we find such a low-cost path?
Shortest Path Search (Dijkstra)
Dijkstra's algorithm computes the minimum-cost path from the seed to all other pixels; once all paths are pre-computed, each path can be shown instantly as the mouse moves around. Assume w(p,q) is the directed edge cost between pixels p and q on the 8-neighborhood (N8) graph.
NOTE: diagonal edges are scaled by sqrt(2) (see next slide).
Segmentation Should Be Invariant to Image Rotation
After object rotation, a path's cost along the top boundary should match its cost along the top-left boundary. This holds once diagonal links are adjusted by a factor of sqrt(2).
[Figure: image-gradient scores on the grid before and after rotation]
Other Ways to Define Edge Costs
- Graph: a node for every pixel p; a link between every adjacent pair of pixels e = (p,q); a cost w(e) = w(p,q) for each link.
- Note: here each link has a cost. This differs from our first example, where the pixels (graph nodes) had contrast-based costs.
Other Ways to Define Edge Costs
We want the path to hug image edges: how do we define the cost of a link from pixel intensities?
- The link should follow the intensity edge: we want the intensity to change rapidly in the direction orthogonal to the link.
- The cost of an edge should be small when the difference of intensity orthogonal to the link is large.
Other Ways to Define Edge Costs
First, estimate the image derivative in the direction orthogonal to the edge. Use finite differences (or kernels/filters).
Other Ways to Define Edge Costs
Second, apply some penalty function g( ) that assigns a low penalty to large directional derivatives and a large penalty to small ones.
Other Ways to Define Edge Costs
Finally, the cost of an edge e should be its local contrast score adjusted by the edge length. Why?
Other Ways to Define Edge Costs
When computing the shortest path we approximate a contour C minimizing a continuous, geometrically meaningful functional: the cost of a path (the sum of its edge costs) approximates the integral of the contrast penalty along the contour.
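The edge-cost construction above can be sketched in code. A minimal example, assuming a Gaussian-style penalty g (the slides leave g unspecified, so this particular form is an assumption) and the sqrt(2) length scaling for diagonal links:

```python
import math

def edge_cost(deriv_across, p, q, sigma=1.0):
    """Contrast penalty for the link (p, q) between neighboring pixels.
    deriv_across: image derivative orthogonal to the link.
    g(d) = exp(-d^2 / 2 sigma^2) is one common (assumed) choice: low
    penalty for large directional derivatives, high penalty for small
    ones.  The penalty is scaled by the edge length so that a path's
    cost approximates the contour integral of g (diagonal links get
    factor sqrt(2))."""
    g = math.exp(-deriv_across ** 2 / (2.0 * sigma ** 2))
    length = math.hypot(q[0] - p[0], q[1] - p[1])
    return g * length

# A flat region (zero derivative) along a diagonal link costs sqrt(2);
# a strong edge costs much less than a weak one.
flat_diag = edge_cost(0.0, (0, 0), (1, 1))
strong = edge_cost(3.0, (0, 0), (0, 1))
weak = edge_cost(0.5, (0, 0), (0, 1))
```

With such weights, running a shortest-path search from the seed yields a contour that hugs high-contrast boundaries.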
Dijkstra's Shortest-Path Algorithm
(see Cormen et al., "Introduction to Algorithms", p. 595)
ALGORITHM (maintains 3 sets of nodes: FREE, ACTIVE, DONE)
1. Initialize node costs (distances) to infinity; set p = seed point, cost(p) = 0.
2. Expand p as follows: for each of p's neighbors q that are not expanded, set cost(q) = min(cost(p) + c_pq, cost(q)).
[Figure: grid of link costs]
Dijkstra's Shortest-Path Algorithm (continued)
2. (continued) If q's cost changed, make q point back to p, and put q on the ACTIVE list (if not already there).
3. Set r = the node with minimum cost on the ACTIVE list.
4. Repeat Step 2 for p = r.
[Figures: successive states of the cost grid and back-pointers as the front expands]
Path Search (basic idea)
Dijkstra's algorithm maintains: processed nodes (whose distance to A is known), active nodes (the front), and the active node with the smallest distance value, which is expanded next.
[Figure: search front expanding from A toward B]
Dijkstra's Shortest-Path Algorithm: Properties
- It computes the minimum-cost path from the seed to every node in the graph; this set of minimum paths is represented as a tree.
- Running time, with N pixels: O(N^2) if you use an active list; O(N log N) if you use an active priority queue (heap); under a second for a typical (640x480) image.
- Once this tree is computed, we can extract the optimal path from any point to the seed in O(N) time, so it runs in real time as the mouse moves.
Q: What happens when the user specifies a new seed?
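The algorithm above can be sketched with a heap-based ACTIVE list. A minimal, illustrative implementation; the dict-of-adjacency-lists graph representation and the toy node names are assumptions for the example, not from the slides:

```python
import heapq

def dijkstra(weights, seed):
    """Single-source shortest paths.
    weights: dict mapping node -> list of (neighbor, edge_cost).
    Returns (cost, parent); following parent links from any node back
    to the seed recovers its minimum-cost path (the tree of paths)."""
    cost = {seed: 0.0}
    parent = {seed: None}
    done = set()              # DONE: final distance known
    active = [(0.0, seed)]    # ACTIVE front as a min-heap
    while active:
        c, p = heapq.heappop(active)
        if p in done:         # stale heap entry, skip
            continue
        done.add(p)
        for q, w in weights.get(p, []):
            if q not in done and c + w < cost.get(q, float("inf")):
                cost[q] = c + w      # relax the edge (p, q)
                parent[q] = p        # back-pointer toward the seed
                heapq.heappush(active, (cost[q], q))
    return cost, parent

# Toy 4-node graph (edge costs are arbitrary example values)
g = {"A": [("B", 1.0), ("C", 4.0)],
     "B": [("C", 2.0), ("D", 6.0)],
     "C": [("D", 3.0)]}
cost, parent = dijkstra(g, "A")
```

Following `parent` from "D" gives the path D <- C <- B <- A, matching the back-pointer tree described on the slides.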
Live-Wire Extensions
- Directed graphs
- Restricted search space: restricted domain (e.g. near an a priori model); restricted backward search
- Different edge-weight functions: image-edge strength; image-edge curvature; proximity to a known approximate model/boundary
- Multi-resolution processing
Results
http://www.cs.washington.edu/education/courses/455/03wi/projects/project1/artifacts/index.html
"Live-Wire" vs. "Snakes"
Intelligent scissors [Mortensen, Barrett 1995]; live-wire [Falcao, Udupa, Samarasekera, Sharma 1998]. Shortest paths on an image-based graph connect seeds placed on the object boundary.
"Live-Wire" vs. "Snakes"
Snakes, or active contours [Kass, Witkin, Terzopoulos 1987]; deformable models in general are widely used.
Given: an initial contour (model) near the desirable object.
Goal: evolve the contour to fit the exact object boundary.
Tracking via Deformable Models
1. Use the final contour/model extracted at frame t as an initial solution for frame t+1.
2. Evolve the initial contour to fit the exact object boundary at frame t+1.
3. Repeat steps 1 and 2 for t' = t+1.
Tracking via Deformable Models
Applications: traffic monitoring, human-computer interaction, animation, surveillance, computer-assisted diagnosis in medical imaging.
Acknowledgements: Visual Dynamics Group, Dept. of Engineering Science, University of Oxford.
Tracking via Deformable Models
Tracking heart ventricles.
"Snakes"
- A smooth 2D curve which matches the image data
- Initialized near the target, then iteratively refined
- Can restore missing data
Q: How does that work? By optimization of the snake's quality function. But first, we need to know how to represent a snake.
[Figure: initial, intermediate, and final contours]
Parametric Curve Representation (continuous case)
A curve can be represented by two functions of a parameter s: v(s) = (x(s), y(s)), either an open or a closed curve. Here, a contour is a point in a space of functions.
Note: in computer vision and medical imaging the term "snake" is commonly associated with such a parametric representation of contours. (Other representations will be discussed later!)
Parametric Curve Representation (discrete case)
A curve can be represented by a set of n 2D points v_i = (x_i, y_i), indexed by the parameter i. Here, a contour is a point in R^{2n}.
Measuring a Snake's Quality: the Energy Function
Contours can be seen as points C in R^{2n} (or in a space of functions). We can define an energy function E(C) that assigns a number (a quality measure) to every possible snake: E maps contours C to scalars.
Q: Did we use any function (energy) to measure the quality of segmentation results in 1) image thresholding, 2) region growing, 3) K-means, 4) mean-shift, 5) live-wire? For some of these (e.g. K-means and live-wire) yes, for others no.
WHY? A somewhat philosophical question, but specifying a quality function E(C) is an objective way to define what "good" means for contours C. Moreover, one can find "the best" contour (segmentation) by optimizing the energy E(C).
The Energy Function
Usually the total energy of a snake is a combination of internal and external energies.
- Internal energy encourages smoothness or any particular shape. It incorporates prior knowledge about the object boundary, allowing us to extract the boundary even if some image data is missing.
- External energy encourages the curve to move onto image structures (e.g. image edges).
Internal Energy (continuous case)
The smoothness energy at a contour point v(s) can be evaluated from the first derivative (elasticity, stretching) and the second derivative (bending). The interior (smoothness) energy of the whole snake is then the integral along the contour, in the classic form
E_internal = integral of ( alpha |v'(s)|^2 + beta |v''(s)|^2 ) ds.
No worries: an intuitive discrete version follows on the next slide.
Internal Energy (discrete case)
- Elastic energy (elasticity), weight alpha: sum over i of |v_{i+1} - v_i|^2.
- Bending energy (stiffness), weight beta: sum over i of |v_{i+1} - 2 v_i + v_{i-1}|^2.
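The discrete elastic and bending terms can be sketched directly from these finite-difference formulas. A minimal example; the alpha/beta weighting and the wrap-around indexing for a closed snake follow the usual convention and are assumed here:

```python
def internal_energy(pts, alpha=1.0, beta=1.0):
    """Discrete internal energy of a closed snake:
    alpha * sum_i |v_{i+1} - v_i|^2        (elasticity)
    + beta * sum_i |v_{i+1} - 2 v_i + v_{i-1}|^2   (bending),
    with indices taken modulo n (closed contour)."""
    n = len(pts)
    elastic = 0.0
    bend = 0.0
    for i in range(n):
        vm = pts[(i - 1) % n]
        v = pts[i]
        vp = pts[(i + 1) % n]
        # first difference: stretching between consecutive nodes
        ex, ey = vp[0] - v[0], vp[1] - v[1]
        elastic += ex * ex + ey * ey
        # second difference: local bending at node i
        bx = vp[0] - 2 * v[0] + vm[0]
        by = vp[1] - 2 * v[1] + vm[1]
        bend += bx * bx + by * by
    return alpha * elastic + beta * bend

# Unit square: four unit-length sides, sharp 90-degree corners
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
e_elastic = internal_energy(square, alpha=1.0, beta=0.0)
e_bend = internal_energy(square, alpha=0.0, beta=1.0)
```

A smoother (more circular) contour with the same perimeter would score a lower bending energy than the square's sharp corners.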
External Energy
- The external energy describes how well the curve matches the image data locally.
- Numerous forms can be used, attracting the curve toward different image features.
External Energy
Suppose we have an image I(x,y). We can compute the image gradient at any point: by partial derivatives in the continuous case, or finite differences in the discrete case. The edge strength at pixel (x,y) is the gradient magnitude |grad I(x,y)|. The external energy of a contour point v = (x,y) could be E_ext(v) = -|grad I(v)|^2, and the external energy term for the whole snake is the sum of E_ext(v_i) over all nodes.
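This edge-strength external energy can be sketched with central differences. A minimal example on a plain list-of-lists grayscale image; clamping at the image border is an assumed implementation detail:

```python
def external_energy(image, pts):
    """External energy of a discrete snake: the negative squared
    gradient magnitude at each node, so the energy drops where image
    edges are strong.  image: 2D list of intensities, indexed
    image[y][x]; gradients by central differences, clamped at the
    image border."""
    h, w = len(image), len(image[0])
    e = 0.0
    for (x, y) in pts:
        x0, x1 = max(x - 1, 0), min(x + 1, w - 1)
        y0, y1 = max(y - 1, 0), min(y + 1, h - 1)
        ix = (image[y][x1] - image[y][x0]) / (x1 - x0)  # d I / dx
        iy = (image[y1][x] - image[y0][x]) / (y1 - y0)  # d I / dy
        e -= ix * ix + iy * iy   # minus: strong edges lower the energy
    return e

# A vertical step edge between columns 1 and 2
img = [[0, 0, 10, 10],
       [0, 0, 10, 10],
       [0, 0, 10, 10]]
e_on_edge = external_energy(img, [(1, 1)])   # node next to the step
e_flat = external_energy(img, [(0, 1)])      # node in a flat region
```

Nodes sitting on the intensity step get a lower (better) energy than nodes in flat regions, which is what pulls the snake onto edges.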
Basic Elastic Snake
The total energy of a basic elastic snake (discrete case) combines an elastic smoothness term (interior energy) and an image data term (exterior energy):
E(C) = alpha * sum_i |v_{i+1} - v_i|^2 - sum_i |grad I(v_i)|^2.
- The elastic term can make a curve shrink to a point.
- The image term makes a curve stick to intensity edges.
Basic Elastic Snake
The problem is to find the contour that minimizes E(C). This is an optimization problem for a function of 2n variables:
- we can compute local minima via gradient descent (coming soon);
- a potentially more robust option is dynamic programming (later).
Basic Elastic Snake
Synthetic example. [Figure: four stages (1)-(4) of snake evolution]
Basic Elastic Snake: Dealing with Missing Data
The smoothness constraint can deal with missing data.
Basic Elastic Snake: Relative Weighting
Notice that the strength of the internal elastic component can be controlled by a parameter alpha. Larger alpha increases the stiffness of the curve.
[Figure: results for small, medium, and large alpha]
Encouraging Point Spacing
To stop the curve from shrinking to a point, use an elastic term that encourages a given point separation, e.g. sum_i (|v_{i+1} - v_i| - d)^2 for a target spacing d.
Simple Shape Prior
If the object is a small variation on a known shape, add a term penalizing the Euclidean distance ||v_i - v̄_i|| (kept below some epsilon), where the points v̄_i define the "prior" shape. One can also use a statistical (Gaussian) shape model, replacing the Euclidean distance with the Mahalanobis distance ||v_i - v̄_i||_Sigma.
Interactive (External) Forces
Snakes were originally developed for interactive segmentation. An initial snake result can be nudged where it goes wrong: simply add extra external energy terms to
- pull nearby points toward the cursor, or
- push nearby points away from the cursor.
Interactive (External) Forces
Pull points toward the cursor position p: nearby points get pulled hardest. The negative sign gives better (lower) energy for positions near p.
Interactive (External) Forces
Push points away from the cursor position p: nearby points get pushed hardest. The positive sign gives better energy for positions far from p.
Dynamic Snakes
- Add motion parameters as variables (for each snake node).
- Introduce energy terms for motion consistency.
- Primarily useful for tracking (nodes represent real tissue elements with mass and kinetic energy).
Open vs. Closed Snakes
A closed snake assumes v_n = v_0. When using an open curve we can impose constraints on the end points (e.g. the end points may have fixed positions).
Q: What is similar or different between live-wire and an open snake whose end points are fixed?
Optimization of Snakes
- At each iteration we compute a new snake position within proximity of the previous snake.
- The new snake energy should be smaller than the previous one.
- Stop when the energy cannot be decreased within a local neighborhood of the snake (a local energy minimum).
Optimization methods: 1. gradient descent; 2. dynamic programming.
Toy Example: Local Optimization for a Function of One (Scalar) Variable
Assume some energy function f(x) describes the snake's "quality". "Derivative descent" for scalar functions: move from x_i in the direction where the function decreases (left or right, depending on the sign of the derivative f'(x_i)), until a local minimum of f(x) is reached.
Gradient Descent
Example: minimization of a function E(x,y) of two variables.
- The direction of the (negative) gradient at a point (x,y) is the direction of steepest descent toward lower values of E.
- The magnitude of the gradient at (x,y) gives the value of the slope.
Gradient Descent
Example: minimization of a function of two variables. The update equation for a point p = (x,y) is p <- p - tau * grad E(p); stop at a local minimum, where grad E = 0. Gradient-descent updates are iterative moves toward minima of a function.
BTW: mean-shift is an example of gradient ascent toward modes (i.e. maxima) of a data-density function.
Gradient Descent
Example: minimization of a function of two variables; note the sensitivity to initialisation! In general, gradient descent uses the same update equation for functions E(p) where p contains more than two variables, but that is harder to illustrate. Yet gradient descent for snakes can be nicely visualized by a "vector field".
Gradient Descent for Snakes (updates at each iteration)
For a simple elastic snake energy, the energy is a function of 2n variables, C = (x_0, y_0, ..., x_{n-1}, y_{n-1}). The update equation for the whole snake is C <- C - tau * grad E(C), and the updates can be written for each node: v_i <- v_i - tau * dE/dv_i.
Gradient Descent for Snakes (updates at each iteration)
The snake energy gradient can be visualized as a vector field over the nodes.
Q: Do the points move independently? NO: the motion of point i depends on the positions of the neighboring points. Each node's update combines a term from the exterior (image) energy and a term from the interior (smoothness) energy.
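The per-node update can be sketched for the interior (elastic) term alone. A minimal example; the image-force term is omitted for brevity, and the step size tau is an assumed parameter:

```python
def elastic_step(pts, tau=0.1):
    """One gradient-descent update of a closed elastic snake with
    energy E = sum_i |v_{i+1} - v_i|^2.  Differentiating,
    dE/dv_i = 2 * (2 v_i - v_{i-1} - v_{i+1}), so each node moves
    toward the average of its neighbors; this is why the motion of
    point i depends on its neighbors.  The image force, -dE_ext/dv_i,
    would simply be added into the same update."""
    n = len(pts)
    out = []
    for i in range(n):
        xm, ym = pts[(i - 1) % n]
        x, y = pts[i]
        xp, yp = pts[(i + 1) % n]
        gx = 2 * (2 * x - xm - xp)   # dE/dx_i
        gy = 2 * (2 * y - ym - yp)   # dE/dy_i
        out.append((x - tau * gx, y - tau * gy))
    return out

# One step pulls each corner of a square toward its neighbors,
# shrinking the contour (the elastic term alone shrinks to a point).
square = [(0.0, 0.0), (2.0, 0.0), (2.0, 2.0), (0.0, 2.0)]
new = elastic_step(square, tau=0.1)
```

Iterating this step without an image term collapses the contour, which is exactly the shrinking behavior noted for the elastic energy.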
Gradient Descent for Snakes (optional slide)
In the per-node update, the exterior term moves v_i toward higher magnitudes of image gradients, while the interior term moves v_i so as to reduce the contour's bending. The interior term for v_i depends on its neighbors v_{i-1} and v_{i+1}.
"Gradient Flow" of Snakes
Contour evolution via gradient flow: C evolves to C'. The same per-node update equation applies; the snake energy gradient can be visualized as a vector field. Stopping criterion: a local minimum of the energy E.
Difficulties with Gradient Descent
- It is very difficult to obtain accurate estimates of high-order derivatives on images (due to noise); e.g. the image term of the update requires computing second image derivatives.
- Gradient descent is not trivial even for functions over R^1; robust numerical performance in R^{2n} may be problematic.
- The choice of the step-size parameter tau is non-trivial: too small and the algorithm may be too slow; too large and it may never converge.
- Even when "converged" to a good local minimum, the snake oscillates near it.
Alternative Solution for 2D Snakes: Dynamic Programming (DP)
- The basic elastic snake energy can be written as a sum of pair-wise interaction potentials E_i(v_i, v_{i+1}).
- More generally, a snake energy is a sum of higher-order interaction potentials (e.g. triple interactions).
Snake Energy: Pair-Wise Interactions
Example: the basic elastic snake energy decomposes into pair-wise terms E_i(v_i, v_{i+1}) combining the elastic penalty and the image term.
Q: Can you give an example of a snake with triple-interaction potentials?
DP Snakes [Amini, Weymouth, Jain, 1990]
The energy E is minimized via dynamic programming in a locally restricted search space: a small box of candidate positions around each control point, with first-order interactions. Iterate until the optimal position of each point is at the box center, i.e. the current snake is optimal within the local search space.
Dynamic Programming (DP): the Viterbi Algorithm
Here we focus on first-order interactions and assume an open snake. Each of the n sites (nodes) takes one of m states (candidate positions 1, 2, ..., m); the algorithm maintains internal "energy counters" at each node i and state k. Complexity: O(n m^2), with the worst case equal to the best case.
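The Viterbi recursion for an open snake with pair-wise terms can be sketched as follows. A minimal example; the unary/pair decomposition mirrors the energy on the preceding slides, while the table layout and the toy energies below are assumptions for illustration:

```python
def viterbi(unary, pair):
    """Minimize sum_i unary[i][s_i] + sum_i pair(s_i, s_{i+1}) over
    state sequences (open snake, first-order interactions).
    unary: n x m table of per-node costs; pair: function of two
    states.  Returns (minimum energy, best state sequence).
    Complexity O(n m^2): for each of n nodes, every pair of states."""
    n, m = len(unary), len(unary[0])
    E = [list(unary[0])]   # running "energy counters" at node 0
    back = []              # back-pointers for path recovery
    for i in range(1, n):
        row, b = [], []
        for k in range(m):
            best, arg = min(
                (E[i - 1][j] + pair(j, k), j) for j in range(m))
            row.append(best + unary[i][k])
            b.append(arg)
        E.append(row)
        back.append(b)
    k = min(range(m), key=lambda s: E[-1][s])
    path = [k]
    for b in reversed(back):   # trace back-pointers to node 0
        k = b[k]
        path.append(k)
    return min(E[-1]), path[::-1]

# 3 nodes, 2 states: unary costs favor alternating states, the pair
# term charges 1.0 for a state change.
unary = [[0, 10], [10, 0], [0, 10]]
energy, path = viterbi(unary, lambda a, b: 1.0 if a != b else 0.0)
```

In a DP snake, the m states of node i would be the candidate pixel positions in its search box, the unary term the image energy, and the pair term the elastic penalty.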
Dynamic Programming and Hidden Markov Models (HMMs)
DP is widely used in speech recognition: hidden variables (the words), ordered in time, are estimated from the observed audible signal.
Snakes Can Also Be Seen as Hidden Markov Models (HMMs)
- The positions of the snake nodes are the hidden variables.
- Temporal order is replaced with spatial order.
- The observed audible signal is replaced with the image.
Dynamic Programming for a Closed Snake?
Clearly, DP can be applied to optimize an open-ended snake. Can we use DP for a "looped" energy in the case of a closed snake?
Dynamic Programming for a Closed Snake
1. We can use Viterbi to optimize the snake energy when the position c of one node is fixed (in this case the energy effectively has no loop).
2. Use Viterbi to optimize the snake for all m possible values of c and choose the best of the obtained m solutions.
For this exact solution the complexity increases to O(n m^3).
Dynamic Programming for a Closed Snake
DP has problems with "loops" (even one loop increases the complexity). However, some approximation tricks can be used in practice:
1. Use DP to optimize the snake energy with one node fixed (according to the given initial snake position).
2. Use DP to optimize the snake energy again, this time fixing the position of an intermediate node at the optimal position obtained in step 1.
This is only an approximation, but the complexity stays O(n m^2).
Dynamic Programming for Snakes with Higher-Order Interactions
(e.g. if bending energy is added into the "model" of the snake)
The Viterbi algorithm can be generalized to the 3-clique case, but its complexity increases to O((n-1) m^3). One approach: combine each pair of neighboring nodes into one super-node (with m^2 states). Each triple interaction can then be represented as a pair-wise interaction between two super-nodes, and the Viterbi algorithm needs only m^3 operations per super-node (why?).
DP Snakes (open case): Summary of Complexity
- Unary potentials (interaction order d=1): O(n m)
- Pair-wise potentials (d=2): O((n-1) m^2)
- Triple potentials (d=3): O((n-2) m^3)
- Complete connectivity (d=n): O(m^n), i.e. exhaustive search
(*) Adding a single loop increases the complexity by a factor of m^{d-1}.
Problems with Snakes
- May be sensitive to initialization: the snake may get stuck in a local energy minimum near the initial contour.
- Numerical stability can be an issue for gradient descent, e.g. it requires computing second-order derivatives.
- The general concept of snakes (deformable models) does generalize to 3D (deformable meshes), but some robust optimization methods suitable for 2D snakes do not apply in 3D; e.g. dynamic programming only works for 2D snakes.
Problems with Snakes
- Results depend on the number and spacing of control points.
- It is not trivial to prevent the curve from self-intersecting.
- Snakes cannot follow topological changes of objects.
(More examples in the next slides.)
Problems with Snakes
[Slides borrowed from Daniel Cremers: Cremers, Tischhäuser, Weickert, Schnörr, "Diffusion Snakes", IJCV '02]
Problems with Snakes
- A fixed topology requires heuristic splitting mechanisms.
- Insufficient resolution / control-point density requires control-point regridding mechanisms.
[A slide borrowed from Daniel Cremers]
Problems with Snakes
External energy: one may need to diffuse the image gradients, otherwise the snake does not really "see" object boundaries in the image unless it gets very close to them, because image gradients are large only directly on the boundary.
Diffusing Image Gradients
Image gradients diffused via Gradient Vector Flow (GVF), Chenyang Xu and Jerry Prince, 1998. http://iacl.ece.jhu.edu/projects/gvf/
An Alternative Way to Improve the External Energy
Use D(v) instead of the gradient-based term, where D( ) is a Distance Transform (for detected binary image features, e.g. edges) or a Generalized Distance Transform (directly for image gradients). A Distance Transform can be visualized as a gray-scale image.
Distance Transform (see pp. 20-21 of the textbook)
The Distance Transform is a function that, for each image pixel p, assigns a non-negative number corresponding to the distance from p to the nearest feature in the image I.
[Figure: 2D image features and the resulting distance-transform grid]
The Distance Transform Can Be Computed Very Efficiently
The forward-backward pass algorithm computes shortest paths in O(n) on a grid graph with regular 4-neighbor connectivity and homogeneous edge weights of 1. Alternatively, Dijkstra's algorithm can also compute a distance map (a trivial generalization to multiple sources), but it would take O(n log n): Dijkstra is slower, but it is a more general method applicable to arbitrary weighted graphs.
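The forward-backward pass can be sketched for the Manhattan (L1) metric. A minimal example; representing the binary features as a set of (row, col) pixels is an assumption for the sketch:

```python
def distance_transform(features, h, w):
    """Chamfer (Manhattan, L1) distance transform on an h x w grid via
    the two-pass forward-backward algorithm, O(h*w).
    features: set of (row, col) feature pixels (distance 0).
    The forward sweep propagates distances from the top/left
    neighbors; the backward sweep from the bottom/right."""
    INF = float("inf")
    D = [[0 if (r, c) in features else INF for c in range(w)]
         for r in range(h)]
    for r in range(h):                 # forward pass
        for c in range(w):
            if r > 0:
                D[r][c] = min(D[r][c], D[r - 1][c] + 1)
            if c > 0:
                D[r][c] = min(D[r][c], D[r][c - 1] + 1)
    for r in range(h - 1, -1, -1):     # backward pass
        for c in range(w - 1, -1, -1):
            if r < h - 1:
                D[r][c] = min(D[r][c], D[r + 1][c] + 1)
            if c < w - 1:
                D[r][c] = min(D[r][c], D[r][c + 1] + 1)
    return D

# Single feature pixel at the top-left corner of a 3x3 grid
D = distance_transform({(0, 0)}, 3, 3)
```

Initializing the grid with a non-binary function F instead of the 0/infinity pattern turns the same two passes into a Generalized Distance Transform, as discussed on the following slides.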
Distance Transform: an Alternative Way to Think About It
Assuming F(p) = 0 at the locations of binary image features and F(p) = infinity elsewhere, D(p) = min over q of ( F(q) + ||p - q|| ) is the standard Distance Transform of the image features.
Distance Transform vs. Generalized Distance Transform
For a general function F, D(p) = min over q of ( F(q) + ||p - q|| ) is called the Generalized Distance Transform of F.
- F(p) may represent non-binary image features (e.g. the image intensity gradient).
- D(p) may trade off the "strength" of F(p) against proximity.
Generalized Distance Transforms (see Felzenszwalb and Huttenlocher, IJCV 2005)
- The same forward-backward algorithm can be applied to any initial array; binary (0/infinity) initial values are non-essential.
- If the initial array contains the values of a function F(x,y), then the output of the forward-backward algorithm is a Generalized Distance Transform of F.
- The "scope of attraction" of image gradients can be extended via an external energy based on a generalized distance transform of a gradient-based penalty.
Metric Properties of Discrete Distance Transforms
- Manhattan (L1) metric: the forward mask propagates cost 1 from the left and top neighbors, the backward mask from the right and bottom neighbors; the set of equidistant points is a diamond.
- Adding diagonal mask entries of 1.4 gives a better approximation of the Euclidean (L2) metric, whose equidistant sets are circles.
In fact, the "exact" Euclidean Distance Transform can be computed fairly efficiently (in linear or near-linear time) without bigger masks:
1) www.cs.cornell.edu/~dph/matchalgs/
2) the Fast Marching Method (Tsitsiklis; Sethian).
Summary
- Boundary regularization: live-wire, snakes
- Optimization is not trivial: gradient descent; dynamic programming (DP), which handles second- (and higher-) order energies but no loops (e.g. it can be done on trees)
- Generalized distance maps
Next topic: combining color and boundary.