Statistical Classification Methods

Statistical Classification Methods — outline: introduction, k-nearest neighbor, neural networks, decision trees, Support Vector Machines.

What is a Decision Tree? An inductive learning task: use particular facts to make more generalized conclusions. A decision tree is a predictive model based on a branching series of Boolean tests; each of these small Boolean tests is less complex than a one-stage classifier. Let's look at a sample decision tree…

Predicting Commute Time
If we leave at 10 AM and there are no cars stalled on the road, what will our commute time be? The sample tree splits first on Leave At:
    Leave At = 8 AM  -> Long
    Leave At = 9 AM  -> Accident? Yes -> Long, No -> Medium
    Leave At = 10 AM -> Stall?    Yes -> Long, No -> Short

Inductive Learning In this decision tree we made a series of Boolean decisions and followed the corresponding branch: Did we leave at 10 AM? Did a car stall on the road? Is there an accident on the road? By answering each of these yes/no questions, we came to a conclusion about how long our commute might take.

Decision Trees as Rules We did not have to represent this tree graphically; we could have represented it as a set of rules. However, rules may be much harder to read…

Decision Tree as a Rule Set

    if hour == 8am:
        commute time = long
    else if hour == 9am:
        if accident == yes: commute time = long
        else:               commute time = medium
    else if hour == 10am:
        if stall == yes: commute time = long
        else:            commute time = short

Notice that not all attributes have to be used in each path of the decision. As we will see, some attributes may not even appear in the tree at all.

How to Create a Decision Tree We first make a list of attributes that we can measure; these attributes (for now) must be discrete. We then choose a target attribute that we want to predict, and create an experience table that lists what we have seen in the past.

Sample Experience Table
Thirteen examples, D1 through D13. Each records the attributes Hour (8 AM, 9 AM or 10 AM), Weather (Sunny, Cloudy or Rainy), Accident (Yes/No) and Stall (Yes/No), plus the target Commute (Short, Medium or Long). For instance, D1 is an 8 AM, Sunny, no-accident example with a Long commute, D3 is a 10 AM example with a Short commute, and D8 has a Medium commute.

Choosing Attributes The previous experience table showed four attributes: hour, weather, accident and stall. But the decision tree only used three attributes: hour, accident and stall. Why is that?

Choosing Attributes Methods for selecting attributes (which will be described later) show that weather is not a discriminating attribute. We use the principle of Occam's Razor: given a number of competing hypotheses, the simplest one is preferable.

Choosing Attributes The basic structure of creating a decision tree is the same for most decision tree algorithms; the difference lies in how we select the attributes for the tree. We will focus on the ID3 algorithm developed by Ross Quinlan in 1975.

Decision Tree Algorithms The basic idea behind any decision tree algorithm is as follows: choose the best attribute to split the remaining instances and make that attribute a decision node; repeat this process recursively for each child; stop when all the instances have the same target attribute value, when there are no more attributes, or when there are no more instances. A sketch of this recursion appears below.
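
A minimal sketch of this recursion in Python. The function and argument names are illustrative, and the attribute-selection rule is passed in as a parameter; ID3's entropy-based choice, sketched after the next slides, could be plugged in here.

    # Generic decision-tree recursion: split on the best attribute, recurse,
    # stop on pure nodes, exhausted attributes, or empty instance sets.
    def build_tree(instances, attributes, target, choose_best_attribute):
        if not instances:                       # no more instances
            return None
        labels = [inst[target] for inst in instances]
        if len(set(labels)) == 1:               # all instances share the target value
            return labels[0]
        if not attributes:                      # no more attributes: majority label
            return max(set(labels), key=labels.count)

        # Choose the best attribute and make it a decision node.
        best = choose_best_attribute(instances, attributes, target)
        node = {"split_on": best, "children": {}}

        # Recurse on each subset of instances sharing a value of `best`.
        for value in {inst[best] for inst in instances}:
            subset = [inst for inst in instances if inst[best] == value]
            remaining = [a for a in attributes if a != best]
            node["children"][value] = build_tree(subset, remaining, target,
                                                 choose_best_attribute)
        return node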

Identifying the Best Attributes Refer back to our original decision tree (Leave At = 8 AM -> Long; 9 AM -> Accident?; 10 AM -> Stall?). How did we know to split on Leave At first, and then on Stall and Accident rather than Weather?

ID3 Heuristic To determine the best attribute, we look at the ID3 heuristic: ID3 splits attributes based on their entropy. Entropy is a measure of uncertainty (disorder) in the data…

Entropy Entropy is minimized when all values of the target attribute are the same: if we know that commute time will always be short, then entropy = 0. Entropy is maximized when there is an equal chance of all values for the target attribute (i.e. the outcome is random): if commute time is short in 3 instances, medium in 3 instances and long in 3 instances, entropy is maximized.

Entropy Calculation of entropy:

    Entropy(S) = - Σ (i = 1 to l) (|Si| / |S|) · log2(|Si| / |S|)

where S is the set of examples, Si is the subset of S whose target attribute takes value vi, and l is the size of the range of the target attribute.

ID3 ID3 splits on the attribute with the lowest expected entropy. We calculate the expected entropy of an attribute as the weighted sum of its subset entropies:

    Σ (i = 1 to k) (|Si| / |S|) · Entropy(Si)

where k is the size of the range of the attribute we are testing and Si is the subset of examples taking the i-th value. Equivalently, we can measure information gain, which is large exactly when the expected entropy is small:

    Gain(S, A) = Entropy(S) - Σ (i = 1 to k) (|Si| / |S|) · Entropy(Si)

See the worked sketch below.
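
The formulas above can be written directly in Python; a minimal sketch follows, with a small illustrative dataset rather than the slide's full experience table.

    # Entropy and information gain, as defined on the two preceding slides.
    from collections import Counter
    from math import log2

    def entropy(labels):
        """Entropy(S) = -sum over target values of (|Si|/|S|) * log2(|Si|/|S|)."""
        total = len(labels)
        return -sum((n / total) * log2(n / total) for n in Counter(labels).values())

    def information_gain(instances, attribute, target):
        """Entropy(S) minus the weighted entropy of the subsets induced by `attribute`."""
        total = len(instances)
        labels = [inst[target] for inst in instances]
        expected = 0.0
        for value in {inst[attribute] for inst in instances}:
            subset = [inst[target] for inst in instances if inst[attribute] == value]
            expected += (len(subset) / total) * entropy(subset)
        return entropy(labels) - expected

    # Tiny illustrative example (not the slide's 13-row table):
    data = [
        {"hour": "8am",  "commute": "long"},
        {"hour": "9am",  "commute": "medium"},
        {"hour": "10am", "commute": "short"},
        {"hour": "10am", "commute": "short"},
    ]
    print(information_gain(data, "hour", "commute"))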

ID3 Given our commute time sample set, we can calculate the expected entropy and information gain of each attribute at the root node:

    Attribute    Expected Entropy    Information Gain
    Hour         0.6511              0.768449
    Weather      1.28884             0.130719
    Accident     0.92307             0.496479
    Stall        1.17071             0.248842

Hour has the lowest expected entropy (highest information gain), so it is chosen as the root split.

Decision Trees: Random Forest Classifier Random forests (RF) are a combination of tree predictors such that each tree depends on the values of a random vector sampled independently and with the same distribution for all trees in the forest. The method works by growing an ensemble of trees and letting them vote for the most popular class.

Random Forest – Example Training data: N examples, each with M features. We adopt the Random Forest classifier because of its robustness to heterogeneous and noisy features. Random Forest is an ensemble classifier: briefly, given N examples and M features, bootstrap samples are created from the training data and a decision tree is grown on each.

Random Forest Classifier Step 1: create bootstrap samples from the training data, each drawn with replacement from the N examples and keeping all M features.

Random Forest Classifier Step 2: construct a decision tree from each bootstrap sample. In splitting the nodes, the Gini gain is employed.

Random Forest Classifier Step 3: at each node, choose the split feature from only m < M randomly selected features. This differs from regular decision trees, where the splitting feature is chosen from all features. The robustness of the classifier arises from the bootstrapping of the training data together with this random selection of features.

Random Forest Classifier Step 4: repeat Steps 2–3 to create a decision tree from each bootstrap sample, yielding a forest of trees.

Random Forest Classifier Step 5: finally, given an input example, run it through every tree in the forest and take the majority vote of the trees' predictions as the decision.

Random Forest: Algorithm Steps Given N training samples with M features, the RF model consists of many decision trees; for each tree we do the following: randomly choose a training set for this tree (a bootstrap sample of the N examples; the left-out examples can be used to estimate the prediction error of the tree); at each node, randomly choose m of the M features and calculate the best split based on those m features in the training set; each tree is fully grown and not pruned. A test sample is then passed through the RF model (many decision trees) to obtain its class label. A minimal sketch follows.
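
For illustration only (the practice session below uses Weka), here is a minimal sketch of these steps using scikit-learn's RandomForestClassifier, which implements the same algorithm; the parameter values are illustrative.

    # Random forest in scikit-learn: n_estimators = number of trees,
    # max_features = m features tried per split, bootstrap sampling per tree,
    # trees fully grown (no pruning) by default.
    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    rf = RandomForestClassifier(n_estimators=100,     # grow 100 trees
                                max_features="sqrt",  # m = sqrt(M) features per split
                                bootstrap=True,       # each tree sees a bootstrap sample
                                oob_score=True,       # error estimate on left-out data
                                random_state=0)
    rf.fit(X_train, y_train)

    print("out-of-bag accuracy:", rf.oob_score_)
    print("test accuracy:", rf.score(X_test, y_test))  # majority vote over the trees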

On Class Practice 1
    Data:      Iris.arff
    Method:    RF (parameters: select by yourself)
    Software:  Weka
    Step:      Explorer -> Classify -> Classifier (Trees – RandomForest)

Alternating Decision Tree Two components: decision nodes, which specify a predicate condition, and prediction nodes, which contain a single number. Note: only for 2-class (binary) problems.

ADT: Example Is that a spam email? The tree alternates decision nodes and prediction nodes; if the summed prediction > 0 the email is classified as spam, and if it is < 0 the email is classified as normal.

ADT: Example
An instance to be classified:

    Feature                       Value
    char_freq_bang                0.08
    word_freq_hp                  0.4
    capital_run_length_longest    4
    char_freq_dollar              0
    word_freq_remove              0.9
    word_freq_george              0
    other features                ...

Score for the above instance:

    Iteration    Instance value    Prediction
    1            N/A               -0.093
    2            .08 < .052 = f     0.74
    3            .4 < .195 = f     -1.446
    4            0 < .01 = t       -0.38
    5            0 < 0.005 = t      0.176
    6            .9 < .225 = f      1.66

Total score = 0.657 > 0, so the instance is classified as spam.

ADT: Algorithm Steps An input sample with M features is dropped through the tree: each decision node evaluates its condition (Condition 1, Condition 2, …, Condition M) and, depending on the outcome, contributes one of its two prediction scores (score_1 or score_2). The final prediction = SUM(scores), and the sign of the sum gives the class. A small scoring sketch follows.
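
A minimal scoring sketch. It treats the ADTree as a root prediction plus a flat list of (condition, score-if-true, score-if-false) rules; real ADTrees can nest rules under earlier decision nodes, and the thresholds and scores below are purely illustrative, not the actual spambase tree from the example.

    # Flat ADTree-style scorer: root score + decision rules, prediction = sum of scores.
    def adtree_score(instance, root_score, rules):
        """Each rule is (feature, threshold, score_if_true, score_if_false);
        the condition tested is instance[feature] < threshold."""
        total = root_score
        for feature, threshold, score_true, score_false in rules:
            total += score_true if instance[feature] < threshold else score_false
        return total

    # Illustrative rules and instance (values are made up for this sketch).
    rules = [
        ("char_freq_bang",             0.052, -1.0,  0.74),
        ("word_freq_hp",               0.195,  0.5, -1.446),
        ("capital_run_length_longest", 10.0,   0.3, -0.38),
    ]
    instance = {"char_freq_bang": 0.08, "word_freq_hp": 0.4,
                "capital_run_length_longest": 4}

    score = adtree_score(instance, root_score=-0.093, rules=rules)
    print("score =", score, "->", "spam" if score > 0 else "normal")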

On Class Practice 2
    Data:      labor.arff
    Method:    ADTree (parameters: select by yourself)
    Software:  Weka
    Step:      Explorer -> Classify -> Classifier (Trees – ADTree)

SVM Overview Introduction to Support Vector Machines (SVM); properties of SVM; applications (gene expression data classification; text categorization, if time permits); discussion.

Linear Classifiers
A linear classifier has the form f(x, w, b) = sign(w · x + b): points with w · x + b > 0 are labeled +1 and points with w · x + b < 0 are labeled -1. Given a linearly separable training set, how would you classify the data? Many different separating lines work; any of these would be fine… but which is best? A poorly placed boundary leaves some points misclassified, e.g. assigned to the +1 class when they belong to -1.

Classifier Margin For a linear classifier f(x, w, b) = sign(w · x + b), define the margin as the width by which the boundary could be increased before hitting a datapoint.

Maximum Margin The maximum margin linear classifier is the linear classifier with the maximum margin; this is the simplest kind of SVM (called an LSVM, linear SVM). Maximizing the margin is good according to intuition and PAC theory, and it implies that only the support vectors are important; the other training examples are ignorable. Empirically it works very, very well. Support vectors are the datapoints that the margin pushes up against.

Linear SVM Mathematically The plus-plane is wx+b = +1 (beyond it lies the "predict class = +1" zone), the minus-plane is wx+b = -1 (beyond it lies the "predict class = -1" zone), and the decision boundary is wx+b = 0. Let x+ be a point on the plus-plane and x- the nearest point on the minus-plane. What we know:
    w · x+ + b = +1
    w · x- + b = -1
    w · (x+ - x-) = 2
Since x+ - x- is parallel to w, the margin width is M = |x+ - x-| = 2 / ||w||.

Linear SVM Mathematically Goal: 1) Correctly classify all training data: wTxi + b ≥ +1 if yi = +1 and wTxi + b ≤ -1 if yi = -1; equivalently, yi (wTxi + b) ≥ 1 for all i. 2) Maximize the margin M = 2 / ||w||, which is the same as minimizing ½ wTw. We can therefore formulate a quadratic optimization problem and solve for w and b: minimize ½ wTw subject to yi (wTxi + b) ≥ 1 for all i.

Solving the Optimization Problem
Primal: find w and b such that Φ(w) = ½ wTw is minimized, subject to yi (wTxi + b) ≥ 1 for all {(xi, yi)}.
We need to optimize a quadratic function subject to linear constraints. Quadratic optimization problems are a well-known class of mathematical programming problems, and many (rather intricate) algorithms exist for solving them. The solution involves constructing a dual problem in which a Lagrange multiplier αi is associated with every constraint in the primal problem:
Dual: find α1…αN such that Q(α) = Σαi - ½ ΣΣ αiαjyiyj xiTxj is maximized, subject to (1) Σαiyi = 0 and (2) αi ≥ 0 for all αi.

The Optimization Problem Solution The solution has the form w = Σαiyixi and b = yk - wTxk for any xk such that αk ≠ 0. Each non-zero αi indicates that the corresponding xi is a support vector. The classifying function then has the form f(x) = Σαiyi xiTx + b. Notice that it relies on an inner product between the test point x and the support vectors xi (we will return to this later). Also keep in mind that solving the optimization problem involved computing the inner products xiTxj between all pairs of training points.
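
A small sketch of this solution form, assuming scikit-learn: a linear-kernel SVC exposes the support vectors xi and the products αi·yi, so w = Σαiyixi can be reconstructed and compared with the w the library computes internally. The toy data are illustrative.

    # Recover w = sum_i alpha_i * y_i * x_i from a trained linear SVM.
    # SVC stores alpha_i * y_i in dual_coef_ and the support vectors in
    # support_vectors_; coef_ holds the same w computed internally.
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.RandomState(0)
    X = np.vstack([rng.randn(20, 2) + [2, 2],    # class +1 cluster
                   rng.randn(20, 2) - [2, 2]])   # class -1 cluster
    y = np.array([1] * 20 + [-1] * 20)

    clf = SVC(kernel="linear", C=1.0).fit(X, y)

    w = clf.dual_coef_ @ clf.support_vectors_    # sum of (alpha_i * y_i) * x_i
    print("w from dual coefficients:", w.ravel())
    print("w from coef_:            ", clf.coef_.ravel())
    print("b (intercept):           ", clf.intercept_)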

Dataset with noise Hard margin: so far we require all data points to be classified correctly (no training error). What if the training set is noisy? Solution 1: use very powerful kernels, but this leads to OVERFITTING!

Soft Margin Classification Slack variables ξi can be added to allow misclassification of difficult or noisy examples, i.e. points (such as ξ2, ξ7, ξ11 in the figure) that fall on the wrong side of their margin plane wx+b = ±1. What should our quadratic optimization criterion be? Minimize ½ wTw + C Σξi (see the next slide).

Hard Margin vs. Soft Margin
The old formulation: find w and b such that Φ(w) = ½ wTw is minimized, subject to yi (wTxi + b) ≥ 1 for all {(xi, yi)}.
The new formulation incorporating slack variables: find w, b and ξi such that Φ(w) = ½ wTw + CΣξi is minimized, subject to yi (wTxi + b) ≥ 1 - ξi and ξi ≥ 0 for all i.
The parameter C can be viewed as a way to control overfitting.
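
A brief sketch of how C behaves in practice, assuming scikit-learn and purely illustrative data: a small C tolerates more slack (more support vectors, more training error), while a large C approaches the hard margin.

    # The soft-margin parameter C in practice (illustrative overlapping clusters).
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.RandomState(1)
    X = np.vstack([rng.randn(50, 2) + [1.5, 1.5],
                   rng.randn(50, 2) - [1.5, 1.5]])
    y = np.array([1] * 50 + [-1] * 50)

    for C in (0.01, 1.0, 100.0):
        clf = SVC(kernel="linear", C=C).fit(X, y)
        # Small C: many support vectors, wider margin; large C: fewer, tighter margin.
        print(f"C={C:6}: {len(clf.support_)} support vectors, "
              f"training accuracy {clf.score(X, y):.2f}")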

Linear SVMs: Overview
The classifier is a separating hyperplane. The most "important" training points are the support vectors; they define the hyperplane. Quadratic optimization algorithms can identify which training points xi are support vectors (those with non-zero Lagrange multipliers αi). Both in the dual formulation of the problem and in the solution, training points appear only inside dot products:
Dual: find α1…αN such that Q(α) = Σαi - ½ ΣΣ αiαjyiyj xiTxj is maximized, subject to (1) Σαiyi = 0 and (2) 0 ≤ αi ≤ C for all αi.
Solution: f(x) = Σαiyi xiTx + b.

Non-linear SVMs Datasets that are linearly separable (with some noise) work out great. But what are we going to do if the dataset is just too hard? How about mapping the data to a higher-dimensional space, e.g. mapping a 1-D input x to (x, x²)?

Non-linear SVMs: Feature spaces General idea: the original input space can always be mapped to some higher-dimensional feature space where the training set is separable: Φ: x → φ(x)

The “Kernel Trick” The linear classifier relies on the dot product between vectors: K(xi, xj) = xiTxj. If every data point is mapped into a high-dimensional space via some transformation Φ: x → φ(x), the dot product becomes K(xi, xj) = φ(xi)Tφ(xj). A kernel function is a function that corresponds to an inner product in some expanded feature space.
Example: 2-dimensional vectors x = [x1 x2]; let K(xi, xj) = (1 + xiTxj)². We need to show that K(xi, xj) = φ(xi)Tφ(xj):
    K(xi, xj) = (1 + xiTxj)²
              = 1 + xi1²xj1² + 2 xi1xj1xi2xj2 + xi2²xj2² + 2xi1xj1 + 2xi2xj2
              = [1  xi1²  √2 xi1xi2  xi2²  √2 xi1  √2 xi2]T [1  xj1²  √2 xj1xj2  xj2²  √2 xj1  √2 xj2]
              = φ(xi)Tφ(xj),   where φ(x) = [1  x1²  √2 x1x2  x2²  √2 x1  √2 x2]
A quick numerical check of this identity appears below.
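
The check below evaluates the quadratic kernel directly and through the explicit feature map φ; the two example vectors are arbitrary.

    # Numerical check of the kernel-trick identity above.
    import numpy as np

    def phi(x):
        """Explicit feature map for the degree-2 polynomial kernel in 2-D."""
        x1, x2 = x
        return np.array([1.0, x1**2, np.sqrt(2) * x1 * x2, x2**2,
                         np.sqrt(2) * x1, np.sqrt(2) * x2])

    def K(x, z):
        """Quadratic (degree-2 polynomial) kernel."""
        return (1.0 + np.dot(x, z)) ** 2

    xi = np.array([0.7, -1.2])   # arbitrary example points
    xj = np.array([2.0, 0.5])

    print(K(xi, xj))                  # kernel evaluated directly
    print(np.dot(phi(xi), phi(xj)))   # same value via the explicit feature map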

What Functions are Kernels?
For some functions K(xi, xj), checking that K(xi, xj) = φ(xi)Tφ(xj) can be cumbersome. Mercer's theorem: every positive semi-definite symmetric function is a kernel. Positive semi-definite symmetric functions correspond to a positive semi-definite symmetric Gram matrix:

    K = | K(x1,x1)  K(x1,x2)  K(x1,x3)  ...  K(x1,xN) |
        | K(x2,x1)  K(x2,x2)  K(x2,x3)  ...  K(x2,xN) |
        |   ...       ...       ...     ...    ...    |
        | K(xN,x1)  K(xN,x2)  K(xN,x3)  ...  K(xN,xN) |

Examples of Kernel Functions
Linear: K(xi, xj) = xiTxj
Polynomial of power p: K(xi, xj) = (1 + xiTxj)^p
Gaussian (radial-basis function network): K(xi, xj) = exp(-‖xi - xj‖² / (2σ²))
Sigmoid: K(xi, xj) = tanh(β0 xiTxj + β1)
Small sketches of these kernels appear below.
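
Minimal sketches of the four kernels listed above; the parameter values (p, σ, β0, β1) are illustrative.

    import numpy as np

    def linear_kernel(x, z):
        return np.dot(x, z)

    def polynomial_kernel(x, z, p=3):
        return (1.0 + np.dot(x, z)) ** p

    def gaussian_kernel(x, z, sigma=1.0):
        return np.exp(-np.linalg.norm(x - z) ** 2 / (2.0 * sigma ** 2))

    def sigmoid_kernel(x, z, beta0=0.5, beta1=-1.0):
        # Note: the sigmoid "kernel" is not positive semi-definite for all
        # parameter choices, so it is not always a valid Mercer kernel.
        return np.tanh(beta0 * np.dot(x, z) + beta1)

    x = np.array([1.0, 2.0])
    z = np.array([0.5, -1.0])
    for k in (linear_kernel, polynomial_kernel, gaussian_kernel, sigmoid_kernel):
        print(k.__name__, k(x, z))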

Non-linear SVMs Mathematically
Dual problem formulation: find α1…αN such that Q(α) = Σαi - ½ ΣΣ αiαjyiyj K(xi, xj) is maximized, subject to (1) Σαiyi = 0 and (2) αi ≥ 0 for all αi.
The solution is f(x) = Σαiyi K(xi, x) + b.
Optimization techniques for finding the αi's remain the same!

Nonlinear SVM - Overview SVM locates a separating hyperplane in the feature space and classifies points in that space. It does not need to represent the feature space explicitly; it simply defines a kernel function, which plays the role of the dot product in the feature space.

Properties of SVM
- Flexibility in choosing a similarity function
- Sparseness of solution when dealing with large data sets: only support vectors are used to specify the separating hyperplane
- Ability to handle large feature spaces: complexity does not depend on the dimensionality of the feature space
- Overfitting can be controlled by the soft margin approach
- Nice math property: a simple convex optimization problem which is guaranteed to converge to a single global solution
- Feature selection

SVM Applications SVM has been used successfully in many real-world problems:
- text (and hypertext) categorization
- image classification
- bioinformatics (protein classification, cancer classification)
- hand-written character recognition

Application 1: Cancer Classification The data form a genes-by-patients expression matrix (g-1 … g-p by p-1 … p-n) that is high dimensional (p > 1000, n < 100), imbalanced (fewer positive samples), contains many irrelevant features, and is noisy. Feature selection: in the linear case, wi² gives the ranking of dimension i. SVM is sensitive to noisy (mis-labeled) data.

Weakness of SVM It is sensitive to noise: a relatively small number of mislabeled examples can dramatically decrease performance. It only considers two classes; how to do multi-class classification with SVM? Answer: 1) with output arity m, learn m SVMs: SVM 1 learns "Output == 1" vs "Output != 1", SVM 2 learns "Output == 2" vs "Output != 2", …, SVM m learns "Output == m" vs "Output != m". 2) To predict the output for a new input, predict with each SVM and find out which one puts the prediction the furthest into the positive region. A one-vs-rest sketch follows.
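
A sketch of this one-vs-rest scheme, assuming scikit-learn; the iris dataset is used only for illustration. For each class c we train an SVM on "class c" vs "everything else", then classify a new point by the SVM whose decision value is largest (furthest into the positive region).

    import numpy as np
    from sklearn.datasets import load_iris
    from sklearn.svm import SVC

    X, y = load_iris(return_X_y=True)
    classes = np.unique(y)

    # One binary SVM per class, trained on +1 (class c) vs -1 (everything else).
    machines = {c: SVC(kernel="linear").fit(X, (y == c).astype(int) * 2 - 1)
                for c in classes}

    def predict(x):
        x = x.reshape(1, -1)
        # decision_function gives the signed distance to each SVM's hyperplane.
        scores = {c: m.decision_function(x)[0] for c, m in machines.items()}
        return max(scores, key=scores.get)

    print(predict(X[0]),   "true label:", y[0])
    print(predict(X[120]), "true label:", y[120])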

Application 2: Text Categorization Task: the classification of natural text (or hypertext) documents into a fixed number of predefined categories based on their content, e.g. email filtering, web searching, sorting documents by topic, etc. A document can be assigned to more than one category, so this can be viewed as a series of binary classification problems, one for each category.

Representation of Text IR's vector space model (aka bag-of-words representation): a document is represented by a vector indexed by a pre-fixed set or dictionary of terms; the value of an entry can be binary or a weight. Preprocessing includes normalization, stop-word removal and word stemming. Doc x => φ(x).

Text Categorization using SVM The similarity between two documents is the inner product φ(x)·φ(z); K(x, z) = φ(x)·φ(z) is a valid kernel, so SVM can be used with K(x, z) for discrimination. Why SVM? High dimensional input space; few irrelevant features (dense concept); sparse document vectors (sparse instances); text categorization problems tend to be linearly separable. A brief sketch appears below.
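
A brief sketch of this pipeline, assuming scikit-learn; the tiny corpus and labels are purely illustrative.

    # Bag-of-words / TF-IDF features + a linear SVM for text categorization.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.svm import LinearSVC

    docs = ["cheap meds buy now", "limited offer buy cheap",        # spam-like
            "meeting agenda for monday", "project status report"]   # normal
    labels = [1, 1, 0, 0]

    # phi(x): documents -> sparse, normalized term-weight vectors.
    vectorizer = TfidfVectorizer(stop_words="english")
    X = vectorizer.fit_transform(docs)

    clf = LinearSVC().fit(X, labels)
    print(clf.predict(vectorizer.transform(["buy cheap meds today"])))    # expected: spam (1)
    print(clf.predict(vectorizer.transform(["monday project meeting"])))  # expected: normal (0)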

Some Issues
- Choice of kernel: a Gaussian or polynomial kernel is the default; if ineffective, more elaborate kernels are needed; domain experts can give assistance in formulating appropriate similarity measures.
- Choice of kernel parameters: e.g. σ in the Gaussian kernel; one heuristic sets σ to the distance between the closest points with different classifications; in the absence of reliable criteria, applications rely on a validation set or cross-validation to set such parameters.
- Optimization criterion (hard margin vs. soft margin): typically a lengthy series of experiments in which various parameters are tested.
A cross-validation sketch follows.
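
A sketch of setting kernel parameters by cross-validation, assuming scikit-learn; the parameter grid and dataset are illustrative only.

    # Grid search with 5-fold cross-validation over kernel, C and gamma.
    from sklearn.datasets import load_iris
    from sklearn.model_selection import GridSearchCV
    from sklearn.svm import SVC

    X, y = load_iris(return_X_y=True)

    param_grid = {
        "kernel": ["rbf", "poly"],
        "C": [0.1, 1, 10],          # soft-margin penalty (hard vs. soft margin trade-off)
        "gamma": [0.01, 0.1, 1],    # plays the role of 1/(2*sigma^2) in the Gaussian kernel
    }

    search = GridSearchCV(SVC(), param_grid, cv=5)
    search.fit(X, y)

    print("best parameters:", search.best_params_)
    print("cross-validated accuracy:", search.best_score_)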

Additional Resources
- An excellent tutorial on VC-dimension and Support Vector Machines: C. J. C. Burges. A tutorial on support vector machines for pattern recognition. Data Mining and Knowledge Discovery, 2(2):121-167, 1998.
- The VC/SRM/SVM bible: Vladimir Vapnik, Statistical Learning Theory, Wiley-Interscience, 1998.
- http://www.kernel-machines.org/