

Abstract

We present a model of curvilinear grouping using piecewise linear representations of contours and a conditional random field (CRF) to capture continuity and the frequency of different junction types. Potential completions are generated by building a constrained Delaunay triangulation (CDT) over the set of contours found by a local edge detector (Pb). Maximum-likelihood parameters for the model are learned from human-labeled ground truth. Using held-out test data, we measure how the model, by incorporating continuity structure, improves boundary detection over the local edge detector. We also compare performance with a baseline local classifier that operates on pairs of edgels. Both algorithms consistently dominate the low-level boundary detector at all thresholds. To our knowledge, this is the first time that curvilinear continuity has been shown to be quantitatively useful for a large variety of natural images. Better boundary detection has immediate application to object detection and recognition.

Properties of the CDT graph:
- Using P_human, the soft ground-truth label defined on CDT graphs, precision is close to 100%.
- Pb averaged over CDT edges is no worse than the original Pb.
- The asymptotic recall rate increases, thanks to the completion of gradientless contours.

(Figure: results on an example image comparing Pb, Local, and Global models, and an illustration of the hierarchy of parts at different viewing distances.)

Desired properties:
1) Scale invariance, supported by natural image statistics (e.g. power-law distributions).
2) Avoid too many spurious completions: the output should be better than the input!
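The precision-recall comparison described in the abstract can be sketched as a simple threshold sweep over per-edge boundary probabilities. This is a minimal illustration with synthetic scores and labels, not the authors' benchmark code:

```python
import numpy as np

# Sketch of a precision-recall sweep: each candidate edge has a score
# (e.g. a boundary probability) and a ground-truth label; we threshold
# the scores and compare against the labels. The data here are made up.

def precision_recall(scores, labels, thresholds):
    """Precision and recall of boundary detection at each threshold."""
    pr = []
    for t in thresholds:
        predicted = scores >= t
        tp = np.sum(predicted & (labels == 1))          # true positives
        precision = tp / max(np.sum(predicted), 1)      # of predicted, how many correct
        recall = tp / max(np.sum(labels == 1), 1)       # of true boundaries, how many found
        pr.append((precision, recall))
    return pr

scores = np.array([0.9, 0.8, 0.7, 0.4, 0.3, 0.1])
labels = np.array([1,   1,   0,   1,   0,   0])
curve = precision_recall(scores, labels, thresholds=[0.2, 0.5])
# lowering the threshold trades precision for recall
```

One model "dominates" another, in the sense used above, when its curve gives higher precision at every recall level.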
Scale-Invariant Contour Completion Using Conditional Random Fields
Xiaofeng Ren, Charless Fowlkes and Jitendra Malik, UC Berkeley

Contour Completion in Natural Scenes

Our solution: trace detected edges, recursively split contours based on angle, and generate potential completions using a constrained Delaunay triangulation (CDT). This yields a scale-invariant construction with a small number of potential completions. Starting from thresholded edges (Pb > 0.2), the CDT edges capture most of the image boundaries, and low-contrast boundaries are included among the potential completions! Curvilinear continuity improves boundary detection.

A Local Classifier

Our baseline continuity model uses the average contrast and the angle between neighboring edges to estimate a posterior probability for each edge in the CDT graph independently. The "bi-gram" model combines contrast and continuity in a logistic classifier, treating completion as a binary classification of (0,0) vs. (1,1) edge-pair configurations; a "tri-gram" model extends the same idea to triples of neighboring edges.

A Global Random Field

We also consider a conditional random field (CRF) with a binary random variable X_e for each edge in the CDT. Singleton potentials incorporate the average contrast, while junction potentials assign an energy to each possible configuration of edges incident on a vertex V. When only two edges are turned on, the junction potential also incorporates the angle between them. Maximum-likelihood CRF parameters are fit via gradient descent. We use loopy belief propagation to perform inference, in particular estimating the edge marginals P(X_e).

Results

We evaluate the performance of the local and global models on three different segmentation datasets. We project each CDT edge back down onto the pixel grid and measure the tradeoff between precision and recall of human-marked boundaries. We find that the local model yields a gain in performance but is dominated by the CRF marginals.
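The CRF described above, with singleton contrast potentials and junction potentials that reward exactly-two-edge continuations, can be illustrated on a toy graph small enough for exact enumeration (the poster uses loopy belief propagation on real CDT graphs). The potentials, weights, and graph below are invented stand-ins, not the learned model:

```python
import numpy as np
from itertools import product

# Toy CRF over CDT edges, solved by brute force. Each edge e carries a
# binary variable X_e. Singleton potentials reward high average contrast;
# junction potentials add a continuity bonus when exactly two edges at a
# vertex are turned on (standing in for the angle-dependent term).

def edge_marginals(contrast, junctions, angle_bonus, w_contrast=3.0):
    """Exact marginals P(X_e = 1) over all 2^n edge configurations.

    contrast    : average Pb contrast per edge, in [0, 1]
    junctions   : tuples of edge indices incident on each vertex
    angle_bonus : score added when exactly two junction edges are on
    """
    n = len(contrast)
    configs = list(product([0, 1], repeat=n))
    weights = np.empty(len(configs))
    for i, x in enumerate(configs):
        # singleton potentials: reward turning on high-contrast edges
        energy = w_contrast * sum(xe * c for xe, c in zip(x, contrast))
        # junction potentials: reward smooth two-edge continuations
        for edges in junctions:
            if sum(x[e] for e in edges) == 2:
                energy += angle_bonus
        weights[i] = np.exp(energy)
    z = weights.sum()  # partition function
    return np.array([
        sum(w for w, x in zip(weights, configs) if x[e]) / z
        for e in range(n)
    ])

# three edges meeting at a single vertex
marg = edge_marginals(contrast=[0.9, 0.8, 0.1],
                      junctions=[(0, 1, 2)], angle_bonus=1.0)
```

On real CDT graphs exact enumeration is intractable, which is why the marginals are instead estimated with loopy belief propagation.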