Shape Classification. Alex Yakubovich, Elderlab, Oct 7, 2011.

Based on: John Wilder, Jacob Feldman, Manish Singh, "Superordinate shape classification using natural shape statistics," Cognition, Volume 119, Issue 3, June 2011.

Table of Contents: Introduction – Ground Truth (Psychophysics) – Modeling (Feature Extraction, Classifier Design, Evaluation) – Analysis

Motivation Human object categories are hierarchical: superordinate, basic, and subordinate levels.

Superordinate Classification Superordinate classification is important – knowing only the rough category allows a rapid response. Nontrivial problem – broad classes → large intra-class variability. Humans do it well → rapid animal detection (Thorpe '02) – 20 ms: animal or not? – 150 ms: which animal?

Parameterization of Shape Shape is the feature explored in this paper. Many representations exist: Fourier descriptors, boundary moments, shape grammars, chain codes, geons, codons, convex hull deficiency, shape matrices, medial axis. Yet "none have been quantitatively validated with respect to ecological shape categories."

Medial Axis Representation Proposed by Blum (1973): encode a shape by its axis of symmetry (skeleton).

Medial Axis Representation Advantages – Psychophysical correlates (Kovacs & Julesz '94, '98) – Captures the part structure of shapes → evidence that human shape representation relies on this (Barenholtz & Feldman '03, Hoffman & Richards '84, …) – Amenable to computation – Stable, if computed using a Bayesian estimator (Feldman & Singh '06)

Bayesian skeleton estimation (Feldman & Singh 2006) Problem: given a shape {x_1 … x_N}, compute the most probable skeleton. Build a prior over skeletons. Assumptions: – The shape is generated by K axes – Within each axis, successive turning angles are independent – Axes (branches) sprout with equal probability
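The independence assumptions above mean a skeleton's log-prior decomposes into a sum over its turning angles. A minimal sketch, assuming a von Mises-style density concentrated at zero (straight axes preferred); the function names, the concentration parameter `b`, and the exact density form are illustrative assumptions, not the paper's actual prior:

```python
import numpy as np

def turning_angles(axis_pts):
    """Turning angle at each interior point of one axis, given as (n, 2) points."""
    v = np.diff(axis_pts, axis=0)            # segment vectors along the axis
    headings = np.arctan2(v[:, 1], v[:, 0])  # direction of each segment
    return np.diff(headings)                 # change of direction at each joint

def log_prior(axes, b=2.0):
    """Log-prior of a skeleton: independent turning angles, each scored by an
    unnormalized von Mises log-density with mean 0 and concentration b.
    Straighter axes (angles near 0) receive a higher prior."""
    lp = 0.0
    for axis_pts in axes:
        ang = turning_angles(np.asarray(axis_pts, dtype=float))
        lp += np.sum(b * np.cos(ang))
    return lp
```

Under this sketch a straight axis is a priori more probable than a sharply bent one, which is the qualitative behavior the paper's prior is designed to capture.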

Prior Model (figure: a skeleton's axes/branches, the turning angles along each axis, and the points on each axis)

Likelihood Model

Contributions Task: classify a salient contour as one of {animal, leaf}. Evaluate human performance. Design and optimize a naïve Bayes classifier using the medial axis parameters as features. Compare performance against ground truth.

Shape databases a) Brown LEMS lab animal database b) Smithsonian leaves database

Psychophysics When establishing ground truth, subjects shouldn't use overt recognition. To guarantee this, animal & leaf shapes are morphed into 'blobs' (ranging from 100% animal to 0% animal): from 250 animals and 250 leaves, select 2 at random, sample both boundaries equally (n = 150), align them (match principal axes), and take a weighted average of matching points. Proportion animal = {.3, .4, .5, .6, .7}
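The morphing steps above (sample equally, align principal axes, weighted average) can be sketched as follows. This is a minimal reconstruction under stated assumptions: the function names are hypothetical, and the inputs are assumed to be (n, 2) boundary points already sampled at equal arc-length spacing with matched ordering:

```python
import numpy as np

def principal_axis_angle(points):
    """Angle of the first principal axis of a 2D point set (via the covariance eigenvectors)."""
    centered = points - points.mean(axis=0)
    cov = np.cov(centered.T)                 # 2x2 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)
    major = eigvecs[:, np.argmax(eigvals)]   # eigenvector of the largest eigenvalue
    return np.arctan2(major[1], major[0])

def morph(animal, leaf, w):
    """Blend two contours: w = 1 gives the pure animal, w = 0 the pure leaf.

    Both inputs are (n, 2) arrays of boundary points (n = 150 in the paper).
    """
    # Center both shapes, then rotate the leaf so the principal axes match
    a = animal - animal.mean(axis=0)
    l = leaf - leaf.mean(axis=0)
    theta = principal_axis_angle(a) - principal_axis_angle(l)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    l = l @ rot.T
    # Weighted average of matching boundary points
    return w * a + (1 - w) * l
```

Correspondence between boundary points is assumed given; the paper's exact point-matching procedure is not reproduced here.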

Psychophysics Independent variables: weights, skeletal parameters. Dependent variable: P(animal response). Each subject sees 500 blobs – random pairing/weights – and judges which is more "likely" to be an animal or a leaf.

Subject classification vs. Mixture proportion “subjects were both consistent and effective at recovering the true source of the morphed shape”

Psychophysics Remark: evidence that the mechanism for rapid classification (feed-forward) is distinct from slower processing (feedback). The experiment was repeated with shorter stimulus durations and a mask following each stimulus; same conclusion.

Feature Selection Possible parameters of MAP skeleton: Par 1: # of branches Par 2: Max depth of skeleton Par 3: mean depth Par 6: mean length of axes / root

Feature Selection To avoid redundancy, only consider parameters whose distributions differ between the two classes – Wilcoxon rank-sum test (α=.01)
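This screening step can be reproduced with SciPy's rank-sum test. A sketch with synthetic stand-in data (the values and class separation are hypothetical, not the paper's measurements):

```python
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(0)
# Hypothetical per-shape values of one skeletal parameter
# (e.g. number of branches) for each class
animal_vals = rng.normal(8.0, 2.0, size=250)
leaf_vals = rng.normal(5.0, 2.0, size=250)

# Two-sided Wilcoxon rank-sum test of whether the two samples
# come from the same distribution
stat, p = ranksums(animal_vals, leaf_vals)
keep = p < 0.01  # retain the parameter only if the classes differ at alpha = .01
```

A parameter that fails this test carries little class information on its own and is dropped before classifier design.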

Feature Selection Par 1: # of branches Par 2: max depth of skeleton Par 3: mean depth Par 6: mean length of axes / root

Classifier Design Bayesian analysis – for shape parameter x_i:

Classifier Design To pool information from k parameters, assume independence. Decision rule: choose the class c with the higher posterior, p(c) ∏_i p(x_i | c).
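The pooling rule can be written as a small classifier. A minimal sketch: the paper estimates each p(x_i | c) from the empirical parameter distributions, whereas Gaussian class-conditional densities are used here as a stand-in, so this class is illustrative rather than the paper's exact model:

```python
import numpy as np

class NaiveBayes:
    """Minimal Gaussian naive Bayes: p(c | x) ∝ p(c) * prod_i p(x_i | c)."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.mu_, self.var_, self.prior_ = {}, {}, {}
        for c in self.classes_:
            Xc = X[y == c]
            self.mu_[c] = Xc.mean(axis=0)
            self.var_[c] = Xc.var(axis=0) + 1e-9   # small floor for stability
            self.prior_[c] = len(Xc) / len(X)
        return self

    def log_posterior(self, x):
        """Unnormalized log-posterior for each class at feature vector x."""
        out = {}
        for c in self.classes_:
            ll = -0.5 * np.sum(np.log(2 * np.pi * self.var_[c])
                               + (x - self.mu_[c]) ** 2 / self.var_[c])
            out[c] = np.log(self.prior_[c]) + ll
        return out

    def predict(self, x):
        lp = self.log_posterior(x)
        return max(lp, key=lp.get)   # class with the highest posterior
```

Working in log space turns the product over parameters into a sum, which avoids underflow when many features are pooled.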

Model Selection Reduce the number of parameters using AIC = 2k − 2 ln L̂ (k = # of model parameters, L̂ = maximized likelihood for the given model). Best model (according to AIC) = {1, 8} – X1 = # of skeletal branches – X8 = total signed turning angle. Agrees with BIC.
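The subset search behind this slide is simple to sketch: score each candidate feature subset by its AIC and keep the minimum. The log-likelihood values below are made-up placeholders (chosen so that {1, 8} wins, matching the slide), not the paper's numbers:

```python
def aic(log_likelihood, k):
    """Akaike information criterion: AIC = 2k - 2 ln L-hat. Lower is better."""
    return 2 * k - 2 * log_likelihood

# Hypothetical maximized log-likelihoods for a few candidate feature subsets
loglik = {(1,): -140.0, (8,): -150.0, (1, 8): -120.0, (1, 3, 8): -119.5}

# Pick the subset minimizing AIC; the penalty 2k discourages extra features
# that buy only a marginal likelihood gain (as with (1, 3, 8) here)
best = min(loglik, key=lambda s: aic(loglik[s], k=len(s)))
```

BIC differs only in the penalty term (k ln n instead of 2k), which is why the two criteria can agree, as they do in the paper.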

Training The naïve Bayes classifier is trained on "unadulterated" shapes. The best model reached 81% classification accuracy. How do humans perform on such shapes? – Run another test group on shapes with weights = {0, 1} – 88.8 ± 0.3% accuracy

Evaluation Classifier is evaluated on database of morphed shapes Good agreement with ground truth “the human subjective judgment of “animalishness” correspond closely to the Bayesian estimate of the probability of the animal class”

Evaluation Good fit with few parameters → parameters well chosen. Can we do better using non-skeletal parameters? Options: – Aspect ratio – Compactness (perimeter²/area) – Symmetry: Hausdorff distance between the two halves of the shape (minimized over all reflection axes) – Contour complexity measure (Feldman & Singh '05). Repeat the earlier analysis.
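Two of these non-skeletal measures are straightforward to compute from a boundary polygon. A sketch, assuming the shape is given as (n, 2) polygon vertices in order (function names are hypothetical; symmetry and contour complexity are omitted as they need more machinery):

```python
import numpy as np

def polygon_area_perimeter(pts):
    """Area (shoelace formula) and perimeter of a closed polygon with (n, 2) vertices."""
    x, y = pts[:, 0], pts[:, 1]
    xn, yn = np.roll(x, -1), np.roll(y, -1)   # next vertex, wrapping around
    area = 0.5 * abs(np.sum(x * yn - xn * y))
    perim = np.sum(np.hypot(xn - x, yn - y))
    return area, perim

def compactness(pts):
    """Perimeter^2 / area; minimized (at 4*pi) by a circle, large for convoluted shapes."""
    area, perim = polygon_area_perimeter(pts)
    return perim ** 2 / area

def aspect_ratio(pts):
    """Ratio of the principal-axis extents (>= 1), from the covariance eigenvalues."""
    centered = pts - pts.mean(axis=0)
    eigvals = np.linalg.eigvalsh(np.cov(centered.T))
    return np.sqrt(eigvals.max() / max(eigvals.min(), 1e-12))
```

These scalar features could then be screened and classified exactly as the skeletal parameters were.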

Evaluation Significant difference between the animal and leaf classes for all 4 parameters (Wilcoxon rank-sum test)

Evaluation Train the classifier over the non-skeletal parameters and compare to human judgment. Only the complexity measure fits significantly, and the fit is worse than with skeletal parameters – R² = 0.55 < 0.71

Conclusion “subjects’ classification of shapes into our 2 natural categories can be well modeled by a Bayesian classifier with a very small number of shape skeletal parameters, in which the model’s assumptions are consistent with the empirical distribution in naturally-occurring shapes.” Classification (leafishness vs. animalishness) is done by applying stereotypes of shapes in each category Call for ‘naturalization’ of shape representations