1
Learning to Combine Bottom-Up and Top-Down Segmentation Anat Levin and Yair Weiss School of CS&Eng, The Hebrew University of Jerusalem, Israel
2
Bottom-up segmentation. Bottom-up approaches use low-level cues to group similar pixels: Malik et al. 2000; Sharon et al. 2001; Comaniciu and Meer 2002; …
3
Bottom-up segmentation is ill-posed: many possible segmentations are equally good based on low-level cues alone. Images from Borenstein and Ullman 02.
4
Top-down segmentation: class-specific, top-down segmentation (Borenstein & Ullman, ECCV 02); Winn and Jojic 05; Leibe et al. 04; Yuille and Hallinan 02; Liu and Sclaroff 01; Yu and Shi 03.
5
Combining top-down and bottom-up segmentation. Find a segmentation that: 1. is similar to the top-down model, and 2. aligns with image edges.
6
Previous approaches train the top-down and bottom-up models independently: Borenstein et al. 04, Combining top-down and bottom-up segmentation; Tu et al. ICCV 03, Image parsing: segmentation, detection, and recognition; Kumar et al. CVPR 05, Obj-Cut; Shotton et al. ECCV 06, TextonBoost.
7
Why learn top-down and bottom-up models simultaneously? Example: an octopus has a large number of degrees of freedom in its tentacles' configuration, which requires a complex deformable top-down model. On the other hand, its colors are rather uniform, so low-level segmentation is easy.
8
Our approach: learn the top-down and bottom-up models simultaneously. At run time, segmentation reduces to energy minimization with binary labels (graph min-cut).
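As a concrete illustration of that reduction, here is a minimal Python sketch of binary-label energy minimization via min-cut. It is not the paper's implementation: the graph layout (terminal edges carrying the unary costs, symmetric pairwise edges carrying the smoothness penalty) is the standard construction, and networkx stands in for a dedicated max-flow solver.

```python
import networkx as nx

def mincut_segmentation(unary, pairwise):
    """Binary-label energy minimization via graph min-cut.

    unary:    list of (cost_if_label_0, cost_if_label_1) per pixel.
    pairwise: dict {(i, j): penalty for labeling i and j differently}.
    """
    G = nx.DiGraph()
    for i, (c0, c1) in enumerate(unary):
        G.add_edge("s", i, capacity=c0)  # severed iff pixel i gets label 0
        G.add_edge(i, "t", capacity=c1)  # severed iff pixel i gets label 1
    for (i, j), w in pairwise.items():   # smoothness / edge-alignment term
        G.add_edge(i, j, capacity=w)
        G.add_edge(j, i, capacity=w)
    _, (source_side, _) = nx.minimum_cut(G, "s", "t")
    # Pixels still connected to the source take label 1.
    return [1 if i in source_side else 0 for i in range(len(unary))]
```

For example, mincut_segmentation([(0.0, 2.0), (2.0, 0.0)], {(0, 1): 0.5}) returns [0, 1]. Production systems typically use a dedicated solver such as Boykov–Kolmogorov max-flow instead of networkx, but the same graph construction applies.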
9
Energy model: consistency with the fragments' segmentation + segmentation alignment with image edges. Minimizing the combined energy with graph min-cut yields the resulting segmentation.
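Schematically, with notation assumed rather than taken from the paper, the energy combines the two terms as

$$E(x) \;=\; \sum_k \lambda_k\, d\bigl(x,\, \hat{x}_k\bigr) \;+\; \sum_{(i,j)} \nu_{ij}\,\lvert x_i - x_j\rvert$$

where $x$ is the binary labeling, $\hat{x}_k$ is fragment $k$'s mask at its detected location, $d$ measures their disagreement, and $\nu_{ij}$ is small across strong image edges so that label discontinuities are cheap exactly where edges occur.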
12
Learning from segmented class images. Training data: class images with their ground-truth segmentations. Goal: learn fragments for an energy function.
13
Learning energy functions using conditional random fields. Theory of CRFs: Lafferty et al. 2001; LeCun and Huang 2005. CRFs for vision: Kumar and Hebert 2003; Ren et al. 2006; He et al. 2004, 2006; Quattoni et al. 2005; Torralba et al. 04.
14
Learning energy functions using conditional random fields: minimize the energy E(x) of the true segmentation and maximize the energy of all other configurations. "It's not enough to succeed. Others must fail." – Gore Vidal
15
Equivalently, in terms of the distribution P(x): maximize the probability of the true segmentation and minimize the probability of all other configurations.
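The two views are linked by the standard CRF (Gibbs) form, written here with assumed notation:

$$P(x \mid I;\, w) \;=\; \frac{1}{Z(I;\, w)}\,\exp\bigl(-E(x;\, I, w)\bigr)$$

so lowering the energy of the true segmentation relative to all other configurations is equivalent to raising its probability.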
16
Differentiating the CRF log-likelihood: the gradient with respect to the weights is the expected feature response minus the observed feature response. The log-likelihood is convex with respect to the weights.
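In the usual notation, with weights $w_k$ and feature responses $f_k$, the gradient described on this slide reads:

$$\frac{\partial \log P(x^{*};\, w)}{\partial w_k} \;=\; \mathbb{E}_{P(x;\, w)}\bigl[f_k(x)\bigr] \;-\; f_k(x^{*})$$

Evaluating the expectation requires marginal probabilities under the current model, which is the computational challenge taken up next.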
17
Conditional random fields: computational challenges. Evaluating the cost requires the partition function; evaluating the derivatives requires marginal probabilities. Use approximate estimates: sampling, or belief propagation with the Bethe free energy. Used in this work: tree-reweighted belief propagation and the tree-reweighted upper bound (Wainwright et al. 03).
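To make the "derivatives need marginals" point concrete, here is a toy Python sketch that computes the feature expectations exactly by enumeration. The enumeration is a stand-in for the tree-reweighted approximation, and the `feats` function (labeling → feature-response vector) is hypothetical.

```python
import itertools
import numpy as np

def expected_features(feats, w, n_pixels):
    """Exact feature expectations by brute-force enumeration.

    A toy stand-in for tree-reweighted BP: `feats` maps a binary
    labeling to its feature-response vector (hypothetical helper).
    """
    Z = 0.0
    f_sum = np.zeros_like(np.asarray(w, dtype=float))
    for x in itertools.product([0, 1], repeat=n_pixels):
        p = np.exp(-np.dot(w, feats(x)))   # unnormalized P(x) = exp(-E(x))
        Z += p
        f_sum += p * np.asarray(feats(x), dtype=float)
    return f_sum / Z

def loglik_gradient(x_true, feats, w, n_pixels):
    """CRF log-likelihood gradient: expected minus observed response."""
    return expected_features(feats, w, n_pixels) - np.asarray(feats(x_true), dtype=float)
```

With 2^n configurations this is only feasible for a handful of pixels, which is exactly why the slide resorts to tree-reweighted BP and its upper bound.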
18
Fragment selection: build a pool of candidate fragments, then greedily design the energy by adding fragments one at a time.
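A hedged sketch of how such a candidate pool might be built. The paper extracts image fragments from training images, but the specific scheme below (uniformly random subwindows of a fixed size, paired with their ground-truth sub-masks) is an assumption for illustration.

```python
import numpy as np

def sample_fragment_pool(images, masks, n_fragments, size=(20, 20), seed=0):
    """Build a candidate fragment pool from training images.

    Hypothetical sampling scheme: uniformly random subwindows together
    with the corresponding ground-truth sub-masks; the paper's actual
    pool construction may differ.
    """
    rng = np.random.default_rng(seed)
    h, w = size
    pool = []
    for _ in range(n_fragments):
        k = int(rng.integers(len(images)))
        img, msk = images[k], masks[k]
        y = int(rng.integers(img.shape[0] - h + 1))
        x = int(rng.integers(img.shape[1] - w + 1))
        pool.append((img[y:y + h, x:x + w], msk[y:y + h, x:x + w]))
    return pool
```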
19
Fragment selection challenges: straightforward computation of the likelihood improvement is impractical. 2000 fragments × 50 training images × 10 selection iterations = 1,000,000 inference operations!
20
Fragment selection: use a first-order approximation to the log-likelihood gain. It favors fragments with low error on the training set that are not yet accounted for by the existing model. A similar idea appears in different contexts: Zhu et al. 1997; Lafferty et al. 2004; McCallum 2003.
21
First-order approximation to the log-likelihood gain: a single inference pass with the previous iteration's energy evaluates the approximation for all fragments, and each evaluation is linear in the fragment size.
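One way such a first-order expansion can be written (a schematic reconstruction with assumed notation, not the paper's exact expression; $\lambda$ is the candidate fragment's weight, $f_{\mathrm{new}}$ its feature response, and $x^{*}$ the true segmentation):

$$\Delta\ell(\lambda) \;\approx\; \lambda\Bigl(\mathbb{E}_{P(x;\,w)}\bigl[f_{\mathrm{new}}(x)\bigr] \;-\; f_{\mathrm{new}}(x^{*})\Bigr)$$

The expectation is taken under the previous iteration's model, so one inference pass scores every candidate at once; the observed response $f_{\mathrm{new}}(x^{*})$ costs time linear in the fragment size. A fragment scores well exactly when its error on the true segmentation is low and the current model does not yet predict it.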
22
Fragment selection, summary. Initialization: start from the low-level term. For k = 1:K: run TRBP inference using the previous iteration's energy; approximate the likelihood gain of each candidate fragment; add the fragment with maximal gain to the energy. A schematic sketch of this loop follows.
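The loop might look like the following Python sketch; `infer`, `gain`, and `energy.add_fragment` are hypothetical stand-ins for TRBP inference, the first-order likelihood-gain approximation, and extending the energy with a new fragment term.

```python
def select_fragments(candidates, energy, n_iters, infer, gain):
    """Greedy fragment selection, mirroring the loop on this slide.

    `infer`, `gain`, and `energy.add_fragment` are hypothetical
    stand-ins for the components named in the text.
    """
    chosen = []
    for _ in range(n_iters):
        marginals = infer(energy)  # one inference pass per iteration
        best = max(candidates, key=lambda frag: gain(frag, marginals))
        candidates.remove(best)
        energy = energy.add_fragment(best)
        chosen.append(best)
    return chosen, energy
```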
23
Training the horses model
24
Training the horses model: one fragment
25
Training the horses model: two fragments
26
Training the horses model: three fragments
27
Results: horses dataset
28
[Plot: percent of mislabeled pixels as a function of the number of fragments.] Comparable to previous results (Kumar et al., Borenstein et al.) but with far fewer fragments.
29
Results: artificial octopi
30
Results: cows dataset (from the TU Darmstadt database)
31
Results: cows dataset. [Plot: percent of mislabeled pixels as a function of the number of fragments.]
32
Conclusions: We simultaneously learn top-down and bottom-up segmentation cues. Learning is formulated as estimation in conditional random fields. A novel, efficient fragment selection algorithm is introduced. The algorithm achieves state-of-the-art performance with a significantly smaller number of fragments.