The University of Ontario 5-1 CS 4487/9587 Algorithms for Image Analysis: Segmentation with Boundary Regularization. Acknowledgements: many slides from the University of Manchester, demos from the Visual Dynamics Group (University of Oxford).

The University of Ontario 5-2 CS 4487/9587 Algorithms for Image Analysis: Boundary Regularization
- Objective functions for segmentation
- "Intelligent scissors" (a.k.a. live-wire): contrast-weighted graphs, Dijkstra
- Active contours (a.k.a. "snakes"): gradient descent; dynamic programming (DP), Viterbi algorithm; DP versus Dijkstra
Extra reading: Sonka et al. and 8.2; Active Contours by Blake and Isard

The University of Ontario Intelligent Scissors (a.k.a. live-wire) [Eric Mortensen, William Barrett, 1995]

The University of Ontario Intelligent Scissors
The approach answers a basic question.
Q: How do we find a path from the seed to the mouse position that follows the object boundary as closely as possible?
A: Define a path cost so that the optimal path stays as close as possible to image edges.

The University of Ontario Intelligent Scissors
Basic idea: find the lowest-cost path from the seed to the mouse position on a graph (e.g. an 8-connected pixel grid N8) weighted by intensity contrast.
Simple example: use some local measure of "contrast", such as the magnitude of the intensity gradient.
NOTE: it is common to define weights directly on the edges of the graph/grid and use a shortest-path algorithm (Dijkstra).
Q: How do we find such a low-cost path?

The University of Ontario Shortest Path Search (Dijkstra)
Computes the minimum-cost path from the seed to all other pixels; once all paths are pre-computed, each path can be shown instantly as the mouse moves around.
Assume w(p,q) is the directed edge cost between pixels p and q on the 8-neighborhood (N8) graph.
NOTE: diagonal edges are scaled by their length √2 (see next slide).
Q: How do we find such a low-cost path?

The University of Ontario Segmentation should be Invariant to Image Rotation
[Figure: after object rotation, the path's cost along the top boundary should match the path's cost along the top-left (diagonal) boundary once the diagonal links are adjusted by edge length and image gradient scores.]

The University of Ontario Other ways to define edge costs
- A graph node for every pixel p; a link between every adjacent pair of pixels e = (p,q); a cost w(e) = w(p,q) for each link.
- Note: here each link has a cost. This is different from our first example, where the pixels (graph nodes) had contrast-based costs.

The University of Ontario Other ways to define edge costs
We want the path to hug image edges: how do we define the cost of a link (p,q) from pixel intensities?
- The link should follow an intensity edge: we want the intensity to change rapidly in the direction orthogonal (⊥) to the link.
- The cost of the link should be small when the difference of intensities orthogonal to the link is large.

The University of Ontario Other ways to define edge costs
- First, estimate the image derivative in the direction orthogonal to the link.
- Use finite differences (or kernels/filters), e.g. the difference of pixel intensities on the two sides of the link.

The University of Ontario Other ways to define edge costs
- Second, use some penalty function g(·) assigning a low penalty to large directional derivatives and a large penalty to small derivatives (for example, a decreasing function such as g(x) = 1/(1 + |x|)).

The University of Ontario Other ways to define edge costs
- Finally, the cost of an edge e = (p,q) should be the local contrast penalty adjusted by the edge length, w(p,q) = g(|∇⊥I|)·||p-q||, so that diagonal links cost √2 times more than horizontal/vertical ones. Why? (See the next slide and the sketch below.)
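Below is a minimal sketch (not the original live-wire implementation) of one way to turn these ideas into link weights on an N8 grid, assuming a grayscale NumPy image and a hypothetical penalty g(x) = 1/(1+x):

```python
import numpy as np

def link_weight(img, p, q):
    """Cost of the directed link p->q on an 8-connected grid.

    The image derivative is estimated orthogonally to the link by a
    finite difference, a decreasing penalty g() makes strong edges
    cheap, and the result is scaled by the link length so diagonal
    links cost sqrt(2) times more than horizontal/vertical ones.
    """
    (py, px), (qy, qx) = p, q
    dy, dx = qy - py, qx - px                  # link direction
    oy, ox = -dx, dy                           # direction orthogonal to the link
    H, W = img.shape

    def at(y, x):                              # sample with border clamping
        return float(img[min(max(y, 0), H - 1), min(max(x, 0), W - 1)])

    # finite difference across the link, averaged over both endpoints
    d_perp = 0.5 * (abs(at(py + oy, px + ox) - at(py - oy, px - ox)) +
                    abs(at(qy + oy, qx + ox) - at(qy - oy, qx - ox)))
    g = 1.0 / (1.0 + d_perp)                   # low penalty for strong contrast
    return g * np.hypot(dy, dx)                # scale by edge length (1 or sqrt(2))
```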

The University of Ontario Other ways to define edge costs
When computing the shortest path we approximate a contour C minimizing a continuous, geometrically meaningful functional: the cost of one edge is g(|∇⊥I|)·|e|, so the cost of a PATH, Σ_e g(|∇⊥I|)·|e|, approximates the integral of the contrast penalty along the contour, ∫_C g(|∇I|) ds.

The University of Ontario Dijkstra's shortest path algorithm (see Cormen et al., "Introduction to Algorithms", p. 595)
The algorithm maintains 3 sets of nodes: FREE, ACTIVE, DONE.
1. Init node costs (distances) to ∞; set p = seed point, cost(p) = 0.
2. Expand p as follows: for each of p's neighbors q that are not expanded, set cost(q) = min(cost(p) + c_pq, cost(q)); if q's cost changed, make q point back to p; put q on the ACTIVE list (if not already there).
3. Set r = the node with minimum cost on the ACTIVE list.
4. Repeat Step 2 for p = r.

The University of Ontario Path Search (basic idea)
[Figure: Dijkstra's algorithm expanding from A towards B. Legend: processed nodes (distance to A is known); active nodes (the front); the active node with the smallest distance value is expanded next.]

The University of Ontario Dijkstra's shortest path algorithm
Properties:
- It computes the minimum-cost path from the seed to every node in the graph. This set of minimum paths is represented as a tree of back-pointers.
- Running time, with N pixels: O(N^2) with a plain active list, O(N log N) with a priority queue (heap) for the active set; it takes less than a second for a typical (640x480) image.
- Once this tree is computed, we can extract the optimal path from any point back to the seed in O(N) time, so it runs in real time as the mouse moves.
Q: What happens when the user specifies a new seed?
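A compact sketch of Dijkstra's algorithm with a heap as the ACTIVE list and back-pointers for path extraction; the generic `neighbors`/`weight` interface is an assumption for illustration, not the Mortensen-Barrett code:

```python
import heapq
import numpy as np

def dijkstra_tree(num_nodes, neighbors, weight, seed):
    """Minimum costs from `seed` to every node, plus a back-pointer tree.

    neighbors(u) yields the nodes adjacent to u; weight(u, v) is the
    non-negative cost of the directed edge u->v.
    """
    cost = np.full(num_nodes, np.inf)
    back = np.full(num_nodes, -1, dtype=int)    # shortest-path tree
    done = np.zeros(num_nodes, dtype=bool)
    cost[seed] = 0.0
    active = [(0.0, seed)]                      # priority queue (heap)
    while active:
        c, p = heapq.heappop(active)
        if done[p]:
            continue                            # skip stale heap entries
        done[p] = True
        for q in neighbors(p):
            new_c = c + weight(p, q)
            if not done[q] and new_c < cost[q]:
                cost[q] = new_c
                back[q] = p                     # q points back to p
                heapq.heappush(active, (new_c, q))
    return cost, back

def extract_path(back, node):
    """Follow back-pointers from `node` to the seed."""
    path = [node]
    while back[path[-1]] != -1:
        path.append(back[path[-1]])
    return path[::-1]
```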

The University of Ontario Livewire extensions
- Directed graphs
- Restricted search space: restricted domain (e.g. near an a priori model), restricted backward search
- Different edge weight functions: image-edge strength, image-edge curvature, proximity to a known approximate model/boundary
- Multi-resolution processing

The University of Ontario Results

The University of Ontario 5-23 "Live-wire" vs. "Snakes"
Intelligent scissors [Mortensen, Barrett 1995] and live-wire [Falcao, Udupa, Samarasekera, Sharma 1998]: shortest paths on an image-based graph connect seeds placed on the object boundary.

The University of Ontario 5-24 "Live-wire" vs. "Snakes"
Snakes, or active contours [Kass, Witkin, Terzopoulos 1987]. Given: an initial contour (model) near the desired object. In general, deformable models are widely used.

The University of Ontario 5-25 "Live-wire" vs. "Snakes"
Snakes, or active contours [Kass, Witkin, Terzopoulos 1987]. Given: an initial contour (model) near the desired object. Goal: evolve the contour to fit the exact object boundary. In general, deformable models are widely used.

The University of Ontario 5-26 Tracking via deformable models
1. Use the final contour/model extracted at frame t as an initial solution for frame t+1.
2. Evolve the initial contour to fit the exact object boundary at frame t+1.
3. Repeat steps 1 and 2 for t := t+1.

The University of Ontario 5-27 Tracking via deformable models
Acknowledgements: Visual Dynamics Group, Dept. of Engineering Science, University of Oxford.
Applications: traffic monitoring, human-computer interaction, animation, surveillance, computer-assisted diagnosis in medical imaging.

The University of Ontario 5-28 Tracking via deformable models: tracking heart ventricles

The University of Ontario 5-29 "Snakes"
- A smooth 2D curve which matches the image data
- Initialized near the target, then iteratively refined (initial → intermediate → final)
- Can restore missing data
Q: How does that work? A: by optimizing the snake's quality (energy) function. But first, we need to know how to represent a snake…

The University of Ontario 5-30 Parametric Curve Representation (continuous case)
A curve can be represented by 2 functions of a parameter s: v(s) = (x(s), y(s)), s ∈ [0,1] for an open curve, with v(0) = v(1) for a closed curve.
Here, a contour is a point in a space of functions.
Note: in computer vision and medical imaging the term "snake" is commonly associated with such a parametric representation of contours. (Other representations will be discussed later!)

The University of Ontario 5-31 Parametric Curve Representation (discrete case)
A curve can be represented by a set of n 2D points v_i = (x_i, y_i), indexed by the discrete parameter i = 1,…,n.
Here, a contour is a point in R^(2n).

The University of Ontario 5-32 Measuring a snake's quality: energy function
Contours can be seen as points C in R^(2n) (or in a space of functions). We can define some energy function E(C) that assigns a number (quality measure) to every possible snake: E maps contours C to scalars.
Q: Did we use any function (energy) to measure the quality of segmentation results in 1) image thresholding, 2) region growing, 3) K-means, 4) mean-shift, 5) live-wire? (Slide answers: NO, YES, NO, YES.)
WHY use an energy? A somewhat philosophical question, but specifying a quality function E(C) is an objective way to define what "good" means for contours C. Moreover, one can find "the best" contour (segmentation) by optimizing the energy E(C).

The University of Ontario 5-33 Energy function
Usually the total energy of a snake is a combination of internal and external energies, E = E_internal + E_external.
- Internal energy encourages smoothness or any particular shape; it incorporates prior knowledge about the object boundary, allowing the boundary to be extracted even if some image data is missing.
- External energy encourages the curve to move onto image structures (e.g. image edges).

The University of Ontario 5-34 Internal Energy (continuous case)
The smoothness energy at a contour point v(s) could be evaluated as α·|dv/ds|² + β·|d²v/ds²|², where the first term penalizes stretching (elasticity) and the second penalizes bending.
Then the interior (smoothness) energy of the whole snake is E_internal = ∫ ( α·|dv/ds|² + β·|d²v/ds²|² ) ds.
(No worries: an intuitive discrete version is on the next slide.)

The University of Ontario Internal Energy (discrete case)
Elastic energy (elasticity): Σ_i |v_{i+1} - v_i|².
Bending energy: Σ_i |v_{i+1} - 2·v_i + v_{i-1}|².

The University of Ontario 5-36 Internal Energy (discrete case)
E_internal = α·Σ_i |v_{i+1} - v_i|² + β·Σ_i |v_{i+1} - 2·v_i + v_{i-1}|², where α controls elasticity and β controls stiffness (bending).
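A small sketch of this discrete internal energy, assuming the snake is stored as an (n, 2) NumPy array of node coordinates (closed-curve wrap-around used for concreteness):

```python
import numpy as np

def internal_energy(snake, alpha=1.0, beta=1.0):
    """alpha * sum |v_{i+1}-v_i|^2  +  beta * sum |v_{i+1}-2v_i+v_{i-1}|^2."""
    v_prev = np.roll(snake, 1, axis=0)          # v_{i-1} (closed snake)
    v_next = np.roll(snake, -1, axis=0)         # v_{i+1}
    elastic = np.sum((v_next - snake) ** 2)     # stretching term
    bending = np.sum((v_next - 2.0 * snake + v_prev) ** 2)
    return float(alpha * elastic + beta * bending)
```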

The University of Ontario 5-37 External energy
- The external energy describes how well the curve matches the image data locally.
- Numerous forms can be used, attracting the curve toward different image features.

The University of Ontario 5-38 External energy
- Suppose we have an image I(x,y).
- We can compute the image gradient ∇I(x,y) = (∂I/∂x, ∂I/∂y) at any point (finite differences in the discrete case, derivatives in the continuous case); the edge strength at pixel (x,y) is |∇I(x,y)|.
- The external energy of a contour point v = (x,y) could be E_ext(v) = -|∇I(v)|².
- The external energy term for the whole snake is then E_external = Σ_i E_ext(v_i).
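A sketch of this external energy, using NumPy finite differences for the image gradient and nearest-pixel sampling at the nodes (bilinear interpolation or smoothed derivatives would be reasonable refinements):

```python
import numpy as np

def external_energy(snake, img):
    """Sum of -|grad I(v_i)|^2 over the snake nodes.

    snake : (n, 2) array of (row, col) node positions.
    img   : 2-D grayscale image.
    """
    gy, gx = np.gradient(img.astype(float))     # finite-difference derivatives
    edge_strength_sq = gx ** 2 + gy ** 2        # |grad I|^2
    r = np.clip(np.round(snake[:, 0]).astype(int), 0, img.shape[0] - 1)
    c = np.clip(np.round(snake[:, 1]).astype(int), 0, img.shape[1] - 1)
    return -float(np.sum(edge_strength_sq[r, c]))
```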

The University of Ontario 5-39 Basic Elastic Snake
The total energy of a basic elastic snake (discrete case) is E(C) = α·Σ_i |v_{i+1} - v_i|² - Σ_i |∇I(v_i)|², an elastic smoothness term (interior energy) plus an image data term (exterior energy).

The University of Ontario 5-40 Basic Elastic Snake
In the total energy E(C) = α·Σ_i |v_{i+1} - v_i|² - Σ_i |∇I(v_i)|², the elastic smoothness term (interior energy) can make the curve shrink to a point, while the image data term (exterior energy) makes the curve stick to intensity edges.

The University of Ontario 5-41 Basic Elastic Snake
The problem is to find the contour C = (v_1,…,v_n) that minimizes E(C): an optimization problem for a function of 2n variables.
- We can compute local minima via gradient descent (coming soon).
- A potentially more robust option: dynamic programming (later).
(A sketch of the full objective follows below.)
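Putting the pieces together, a hedged sketch of the basic elastic snake objective as a function of the 2n node coordinates (the quantity that gradient descent or DP will minimize):

```python
import numpy as np

def basic_snake_energy(snake, img, alpha=1.0):
    """E(C) = alpha * sum |v_{i+1}-v_i|^2  -  sum |grad I(v_i)|^2 (closed snake)."""
    elastic = np.sum((np.roll(snake, -1, axis=0) - snake) ** 2)
    gy, gx = np.gradient(img.astype(float))
    r = np.clip(np.round(snake[:, 0]).astype(int), 0, img.shape[0] - 1)
    c = np.clip(np.round(snake[:, 1]).astype(int), 0, img.shape[1] - 1)
    image_term = -np.sum(gx[r, c] ** 2 + gy[r, c] ** 2)
    return float(alpha * elastic + image_term)
```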

The University of Ontario 5-42 Basic Elastic Snake
[Figure: synthetic example, snake evolution shown in four stages (1)-(4).]

The University of Ontario 5-43 Basic Elastic Snake: dealing with missing data
The smoothness constraint can deal with missing data: the internal energy carries the contour across gaps where image evidence is absent.

The University of Ontario 5-44 Basic Elastic Snake: relative weighting
Notice that the strength of the internal elastic component can be controlled by a parameter α. A larger α increases the stiffness of the curve (figure: results for large, medium, and small α).

The University of Ontario 5-45 Encouraging point spacing
To stop the curve from shrinking to a point, replace the elastic term with one that encourages a given point separation d, for example Σ_i (|v_{i+1} - v_i| - d)².

The University of Ontario 5-46 Simple shape prior
- If the object is a small variation on a known shape, add a prior term such as Σ_i ||v_i - v̄_i||², where the points v̄_i define the "prior" shape (Euclidean distance ||v - v̄||).
- Can also use a statistical (Gaussian) shape model, replacing the Euclidean distance with the Mahalanobis distance ||v - v̄||_Σ.

The University of Ontario 5-47 Interactive (external) forces
Snakes were originally developed for interactive segmentation. The initial snake result can be nudged where it goes wrong: simply add extra external energy terms to pull nearby points toward the cursor, or push nearby points away from the cursor.

The University of Ontario 5-48 Interactive (external) forces
Pull points towards the cursor at position p: nearby points get pulled hardest; the negative sign gives better (lower) energy for positions near p.

The University of Ontario 5-49 Interactive (external) forces
Push points away from the cursor at position p: nearby points get pushed hardest; the positive sign gives better (lower) energy for positions far from p.
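A sketch of the extra interactive energy terms; the 1/distance² profile and the parameter names are illustrative assumptions, not the exact form used on the slides:

```python
import numpy as np

def interactive_energy(snake, cursor, strength=1.0, push=False, eps=1e-3):
    """Extra external energy from the user's cursor position.

    Pull mode (push=False): negative near the cursor, so nearby snake
    points are attracted hardest.  Push mode (push=True): positive near
    the cursor, so nearby points are repelled.  eps avoids division by
    zero when a node coincides with the cursor.
    """
    d_sq = np.sum((snake - np.asarray(cursor, dtype=float)) ** 2, axis=1)
    sign = 1.0 if push else -1.0
    return float(np.sum(sign * strength / (d_sq + eps)))
```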

The University of Ontario 5-50 Dynamic snakes
- Add motion parameters as variables (for each snake node).
- Introduce energy terms for motion consistency.
- Primarily useful for tracking (nodes represent real tissue elements with mass and kinetic energy).

The University of Ontario 5-51 Open vs. closed snakes
When using an open curve we can impose constraints on the end points (e.g. the end points may have fixed positions); a closed curve assumes v_{n+1} = v_1.
Q: What is similar or different compared with live-wire if the end points of an open snake are fixed?

The University of Ontario 5-52 Optimization of snakes
- At each iteration we compute a new snake position within proximity of the previous snake.
- The new snake energy should be smaller than the previous one.
- Stop when the energy cannot be decreased within a local neighborhood of the snake (a local energy minimum).
Optimization methods: 1. Gradient Descent; 2. Dynamic Programming.

The University of Ontario 5-53 Toy example: local optimization for a function of one (scalar) variable
Assume some energy function f(x) describing the snake's "quality". "Derivative descent" for scalar functions: move from x_i in the direction where the function decreases (left or right, depending on the sign of the derivative f'(x_i)) until a local minimum of f(x) is reached.

The University of Ontario 5-54 Gradient Descent for functions of two or more variables
Example: minimization of a function E of two variables (x,y).
- The direction of the (negative) gradient -∇E at point (x,y) is the direction of steepest descent towards lower values of the function E.
- The magnitude of the gradient |∇E| at (x,y) gives the value of the slope.

The University of Ontario 5-55 Gradient Descent
Example: minimization of a function E(p) of two variables p = (x,y). Update equation for a point p: p ← p - λ·∇E(p). Stop at a local minimum, where ∇E = 0.
BTW: gradient descent makes iterative moves towards function minima; mean-shift is an example of gradient ascent towards the modes (i.e. maxima) of a data density (function).
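A toy sketch of this two-variable update rule with numerical gradients; the step size λ, tolerance, and iteration cap are illustrative choices:

```python
import numpy as np

def gradient_descent_2d(E, p0, lam=0.1, tol=1e-6, max_iter=1000, h=1e-5):
    """Minimize E(x, y) starting from p0 by repeating p <- p - lam * grad E(p)."""
    p = np.asarray(p0, dtype=float)
    for _ in range(max_iter):
        grad = np.array([
            (E(p[0] + h, p[1]) - E(p[0] - h, p[1])) / (2 * h),   # dE/dx
            (E(p[0], p[1] + h) - E(p[0], p[1] - h)) / (2 * h),   # dE/dy
        ])
        if np.linalg.norm(grad) < tol:          # local minimum: grad E ~ 0
            break
        p = p - lam * grad
    return p

# usage: a simple bowl with its minimum at (1, -2)
print(gradient_descent_2d(lambda x, y: (x - 1) ** 2 + (y + 2) ** 2, (3.0, 0.0)))
```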

The University of Ontario 5-56 Gradient Descent
Example: minimization of a function of two variables. Note the sensitivity to initialisation!
In general, gradient descent uses the same update equation for functions E(p) where p contains more than two variables, but it is harder to illustrate. YET: gradient descent for snakes can be nicely visualized by a "vector field"…

The University of Ontario 5-57 – 5-62 Gradient Descent for Snakes (updates at each iteration)
Simple elastic snake energy: E(v_1,…,v_n) = α·Σ_i |v_{i+1} - v_i|² - Σ_i |∇I(v_i)|²; here the energy is a function of 2n variables.
Update equation for the whole snake: C ← C - λ·∇E(C); the updates can be written for each node as v_i ← v_i - λ·∂E/∂v_i. The snake energy gradient can be visualized as a vector field.
Q: Do the points move independently? NO: the motion of point i depends on the positions of the neighboring points, because ∂E/∂v_i has two parts:
- from the exterior (image) energy: motion of v_i towards a higher magnitude of image gradients;
- from the interior (smoothness) energy: motion of v_i reducing the contour's bending; this term for v_i depends on its neighbors v_{i-1} and v_{i+1}.
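A sketch of one gradient-descent iteration for a closed elastic snake: the smoothness part is differentiated analytically, the image part is sampled from a precomputed |∇I|² map; parameter names and sampling choices are assumptions for illustration:

```python
import numpy as np

def snake_gradient_step(snake, img, alpha=1.0, lam=0.05):
    """One update v_i <- v_i - lam * dE/dv_i for the basic elastic snake."""
    # interior (smoothness) part: derivative of alpha * sum |v_{i+1}-v_i|^2
    v_prev = np.roll(snake, 1, axis=0)
    v_next = np.roll(snake, -1, axis=0)
    grad_interior = 2.0 * alpha * (2.0 * snake - v_prev - v_next)

    # exterior (image) part: derivative of -|grad I|^2, sampled at the nodes
    gy, gx = np.gradient(img.astype(float))
    edge_sq = gx ** 2 + gy ** 2
    ey, ex = np.gradient(edge_sq)               # spatial gradient of the edge map
    r = np.clip(np.round(snake[:, 0]).astype(int), 0, img.shape[0] - 1)
    c = np.clip(np.round(snake[:, 1]).astype(int), 0, img.shape[1] - 1)
    grad_exterior = -np.stack([ey[r, c], ex[r, c]], axis=1)

    return snake - lam * (grad_interior + grad_exterior)
```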

The University of Ontario 5-63 "Gradient Flow" of snakes
Contour evolution via gradient flow: at each iteration the contour C moves to C' by applying the update equation at each node (the snake energy gradient can be visualized as a vector field). Stopping criterion: a local minimum of the energy E.

The University of Ontario 5-64 Difficulties with Gradient Descent
- It is very difficult to obtain accurate estimates of high-order derivatives on images (due to noise); e.g., estimating the gradient of the external term requires computing second image derivatives.
- Gradient descent is not trivial even for functions over R^1, and robust numerical performance in R^(2n) may be problematic.
- The choice of the step-size parameter λ is non-trivial: if it is small, the algorithm may be too slow; if it is large, the algorithm may never converge.
- Even when it has "converged" to a good local minimum, the snake may oscillate near it.

The University of Ontario 5-65 Alternative solution for 2D snakes: Dynamic Programming (DP)
- The basic elastic snake energy can be written as a sum of pair-wise interaction potentials: E(v_1,…,v_n) = Σ_i E_i(v_i, v_{i+1}).
- More generally, a snake energy is a sum of higher-order interaction potentials (e.g. triple interactions E_i(v_{i-1}, v_i, v_{i+1})).

The University of Ontario 5-66 Snake energy: pair-wise interactions
Example: the basic elastic snake energy E(v_1,…,v_n) = Σ_i E_i(v_i, v_{i+1}), where E_i(v_i, v_{i+1}) = α·|v_{i+1} - v_i|² - |∇I(v_i)|².
Q: Give an example of a snake energy with triple-interaction potentials.

The University of Ontario 5-67 DP Snakes [Amini, Weymouth, Jain, 1990]
The energy E with first-order (pair-wise) interactions between control points is minimized via Dynamic Programming in a locally restricted search space (a small box of candidate positions around each control point).

The University of Ontario 5-68 DP Snakes [Amini, Weymouth, Jain, 1990]
Iterate until the optimal position of each control point is at the center of its box, i.e. the current snake is optimal within the local search space.

The University of Ontario 5-69 Dynamic Programming (DP): Viterbi Algorithm
Here we focus on first-order interactions. Each of the n sites (snake nodes; assume an open snake!) has m candidate states (positions 1, 2, …, m). Let D_i(k) be the internal "energy counter" at node i and state k: the minimum energy of the partial snake v_1,…,v_i with v_i in state k.
Complexity: O(n·m²); the worst case equals the best case.
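A sketch of the Viterbi recursion for an open snake with pair-wise energies; the (n, m) unary and (n-1, m, m) pair-wise arrays are a hypothetical interface chosen to make the O(n·m²) cost explicit:

```python
import numpy as np

def viterbi_open_snake(unary, pairwise):
    """Exactly minimize  sum_i U_i(k_i) + sum_i P_i(k_i, k_{i+1})  over states.

    unary    : (n, m) array, U_i(k) = energy of node i in state k.
    pairwise : (n-1, m, m) array, P_i(k, l) = interaction between nodes i, i+1.
    Returns the optimal state index for each of the n nodes.
    """
    n, m = unary.shape
    D = np.empty((n, m))                   # D[i, k]: best energy of v_1..v_i with v_i = k
    arg = np.zeros((n, m), dtype=int)      # best predecessor state for backtracking
    D[0] = unary[0]
    for i in range(1, n):                  # total cost O(n * m^2)
        cand = D[i - 1][:, None] + pairwise[i - 1]   # (previous state, current state)
        arg[i] = np.argmin(cand, axis=0)
        D[i] = cand[arg[i], np.arange(m)] + unary[i]
    states = np.empty(n, dtype=int)
    states[-1] = int(np.argmin(D[-1]))
    for i in range(n - 1, 0, -1):          # backtrack the optimal states
        states[i - 1] = arg[i, states[i]]
    return states
```

For a closed snake, one could run this with the state of v_1 clamped to each of its m candidate positions (the exact O(n·m³) scheme of slide 5-73) or use the two-pass approximation of slide 5-74.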

The University of Ontario 5-70 Dynamic Programming and Hidden Markov Models (HMM)
DP is widely used in speech recognition: hidden variables (words), ordered in time, are estimated from the observed audible signal.

The University of Ontario 5-71 Snakes can also be seen as Hidden Markov Models (HMM)
- Positions of the snake nodes are the hidden variables.
- Temporal order is replaced with spatial order.
- The observed audible signal is replaced with the image.

The University of Ontario 5-72 Dynamic Programming for a closed snake?
Clearly, DP can be applied to optimize an open-ended snake. Can we use DP for a "looped" energy E = Σ_i E_i(v_i, v_{i+1}) + E_n(v_n, v_1) in the case of a closed snake?

The University of Ontario 5-73 Dynamic Programming for a closed snake
1. We can use Viterbi to optimize the snake energy in case v_1 = c is fixed (in this case the energy above effectively has no loop).
2. Use Viterbi to optimize the snake for all m possible values of c and choose the best of the m obtained solutions.
For the exact solution the complexity increases to O(n·m³).

The University of Ontario 5-74 Dynamic Programming for a closed snake
DP has problems with "loops" (even one loop increases the complexity). However, some approximation tricks can be used in practice:
1. Use DP to optimize the snake energy with v_1 fixed (according to a given initial snake position).
2. Use DP to optimize the snake energy again, this time fixing the position of an intermediate node v_k = v̂_k, where v̂_k is the optimal position obtained in step 1.
This is only an approximation, but the complexity is good: O(n·m²).

The University of Ontario 5-75 Dynamic Programming for snakes with higher-order interactions
(e.g. if a bending energy is added into the "model" of the snake)
The Viterbi algorithm can be generalized to the 3-clique case, but its complexity increases to O((n-1)·m³).
One approach: combine each pair of neighboring nodes into one super-node (m² states). Each triple interaction can then be represented as a pair-wise interaction between 2 super-nodes. The Viterbi algorithm needs only m³ operations per super-node (why? consecutive super-nodes share a snake node, so only m³ of their m⁴ joint states are consistent).

The University of Ontario 5-76 DP snakes (open case): Summary of Complexity
energy type (order of interactions d) -> complexity:
- unary potentials (d=1): O(n·m)
- pair-wise potentials (d=2): O((n-1)·m²) *
- triple potentials (d=3): O((n-2)·m³) *
- complete connectivity (d=n): O(m^n), exhaustive search
* adding a single loop increases the complexity by a factor of m^(d-1)

The University of Ontario 5-77 Problems with snakes
- May be sensitive to initialization: may get stuck in a local energy minimum near the initial contour.
- Numerical stability can be an issue for gradient descent; e.g. it requires computing second-order derivatives.
- The general concept of snakes (deformable models) does generalize to 3D (deformable meshes), but some robust optimization methods suitable for 2D snakes do not apply in 3D; e.g. dynamic programming only works for 2D snakes.

The University of Ontario 5-78 Problems with snakes
- The result depends on the number and spacing of control points.
- It is not trivial to prevent the curve from self-intersecting.
- Snakes cannot follow topological changes of objects (more examples in the next slides).

The University of Ontario 6-79 Problems with snakes
[A slide borrowed from Daniel Cremers] Cremers, Tischhäuser, Weickert, Schnörr, "Diffusion Snakes", IJCV '02.

The University of Ontario 6-80 Problems with snakes
[A slide borrowed from Daniel Cremers] Cremers, Tischhäuser, Weickert, Schnörr, "Diffusion Snakes", IJCV '02.

The University of Ontario 6-81 Problems with snakes [a slide borrowed from Daniel Cremers]
- Fixed topology requires heuristic splitting mechanisms.
- Insufficient resolution / control point density requires control point re-gridding mechanisms.

The University of Ontario 5-82 Problems with snakes
External energy: we may need to diffuse the image gradients, otherwise the snake does not really "see" object boundaries in the image unless it gets very close to them (image gradients are large only directly on the boundary).

The University of Ontario 5-83 Diffusing Image Gradients
Image gradients diffused via Gradient Vector Flow (GVF) [Chenyang Xu and Jerry Prince, 1998].

The University of Ontario 5-84 Alternative Way to Improve External Energy
Use D(v_i) instead of -|∇I(v_i)|², where D(·) is a Distance Transform of detected binary image features (e.g. edges). The Distance Transform can be visualized as a gray-scale image. A Generalized Distance Transform can be computed directly from the image gradients.

The University of Ontario 5-85 Distance Transform (see the text book)
The Distance Transform is a function that, for each image pixel p, assigns a non-negative number corresponding to the distance from p to the nearest feature in the image I.

The University of Ontario 5-86 Distance Transform can be very efficiently computed

The University of Ontario 5-87 Distance Transform can be very efficiently computed

The University of Ontario 5-88 Distance Transform can be very efficiently computed
The Forward-Backward pass algorithm computes shortest paths in O(n) on a grid graph with regular 4-N connectivity and homogeneous edge weights equal to 1.
Alternatively, Dijkstra's algorithm can also compute a distance map (a trivial generalization for multiple sources), but it would take O(n·log n). Dijkstra is slower, but it is a more general method applicable to arbitrary weighted graphs.
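A sketch of the two-pass (forward-backward) L1 distance transform; initializing D with an arbitrary cost array F instead of 0/∞ gives the Generalized Distance Transform discussed on the next slides:

```python
import numpy as np

def distance_transform_l1(features):
    """Two-pass L1 (Manhattan) distance transform of a binary feature mask.

    features : boolean 2-D array, True at feature pixels (e.g. edges).
    Returns D with D[p] = L1 distance from pixel p to the nearest feature.
    """
    H, W = features.shape
    INF = float(H + W)                              # exceeds any possible L1 distance
    D = np.where(features, 0.0, INF)                # init: 0 on features, "infinity" elsewhere
    for y in range(H):                              # forward pass (top-left neighbors)
        for x in range(W):
            if y > 0:
                D[y, x] = min(D[y, x], D[y - 1, x] + 1)
            if x > 0:
                D[y, x] = min(D[y, x], D[y, x - 1] + 1)
    for y in range(H - 1, -1, -1):                  # backward pass (bottom-right neighbors)
        for x in range(W - 1, -1, -1):
            if y < H - 1:
                D[y, x] = min(D[y, x], D[y + 1, x] + 1)
            if x < W - 1:
                D[y, x] = min(D[y, x], D[y, x + 1] + 1)
    return D
```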

The University of Ontario 5-89 Distance Transform: an alternative way to think about it
Assuming F(q) = 0 at the locations of binary image features and F(q) = ∞ elsewhere, then D(p) = min_q ( F(q) + ||p - q|| ) is the standard Distance Transform (of the image features).

The University of Ontario 5-90 Distance Transform vs. Generalized Distance Transform
For a general function F, D(p) = min_q ( F(q) + ||p - q|| ) is called the Generalized Distance Transform of F. Here F may represent non-binary image features (e.g. the image intensity gradient), and D(p) may trade the "strength" of F(q) off against the proximity of q to p.

The University of Ontario 5-91 Generalized Distance Transforms (see Felzenszwalb and Huttenlocher, IJCV 2005)
- The same "Forward-Backward" algorithm can be applied to any initial array; binary (0/∞) initial values are non-essential.
- If the initial array contains the values of a function F(x,y), then the output of the "Forward-Backward" algorithm is a Generalized Distance Transform of F.
- The "scope of attraction" of image gradients can be extended via an external energy based on a generalized distance transform of a gradient-based penalty.

The University of Ontario 5-92 Metric properties of discrete Distance Transforms
The forward and backward masks determine the metric: the simplest masks give the Manhattan (L1) metric (diamond-shaped sets of equidistant points); larger masks give a better approximation of the Euclidean (L2) metric.
In fact, an "exact" Euclidean Distance Transform can be computed fairly efficiently (in linear or near-linear time) without bigger masks: 1) …, 2) the Fast Marching Method [Tsitsiklis; Sethian].

The University of Ontario 5-93 Summary
- Boundary regularization: live-wire, snakes
- Optimization is not trivial: gradient descent; Dynamic Programming (DP) for second- (and higher-) order energies with no loops (e.g. it can be done on trees)
- Generalized distance maps
Next topic: combining color and boundary