1
Statistical Classification Methods
Outline: Introduction, k-nearest neighbor, neural networks, decision trees, Support Vector Machine
2
What is a Decision Tree? An inductive learning task
It uses particular facts to make more generalized conclusions. A decision tree is a predictive model based on a branching series of Boolean tests; these smaller Boolean tests are less complex than a one-stage classifier. Let’s look at a sample decision tree…
3
Predicting Commute Time
[Decision tree: root node "Leave At" with branches 8 AM → Long; 9 AM → Accident? (Yes → Long, No → Medium); 10 AM → Stall? (Yes → Long, No → Short).] If we leave at 10 AM and there are no cars stalled on the road, what will our commute time be?
4
Inductive Learning In this decision tree, we made a series of Boolean decisions and followed the corresponding branch: Did we leave at 10 AM? Did a car stall on the road? Is there an accident on the road? By answering each of these yes/no questions, we came to a conclusion about how long our commute might take.
5
Decision Trees as Rules
We did not have to represent this tree graphically; we could have represented it as a set of rules. However, this may be much harder to read…
6
Decision Tree as a Rule Set
if hour == 8am
    commute time = long
else if hour == 9am
    if accident == yes
        commute time = long
    else
        commute time = medium
else if hour == 10am
    if stall == yes
        commute time = long
    else
        commute time = short

Notice that all attributes do not have to be used in each path of the decision. As we will see, all attributes may not even appear in the tree.
7
How to Create a Decision Tree
We first make a list of attributes that we can measure; these attributes (for now) must be discrete. We then choose a target attribute that we want to predict. Then we create an experience table that lists what we have seen in the past.
8
Sample Experience Table
[Experience table: examples D1–D13 with attributes Hour (8 AM, 9 AM, 10 AM), Weather (Sunny, Cloudy, Rainy), Accident (Yes, No), Stall (Yes, No), and target attribute Commute (Short, Medium, Long); individual cell values not reproduced here.]
9
Choosing Attributes The previous experience table showed 4 attributes: hour, weather, accident, and stall. But the decision tree only showed 3 attributes: hour, accident, and stall. Why is that?
10
Choosing Attributes Methods for selecting attributes (which will be described later) show that weather is not a discriminating attribute. We use the principle of Occam’s Razor: given a number of competing hypotheses, the simplest one is preferable.
11
Choosing Attributes The basic structure of creating a decision tree is the same for most decision tree algorithms. The difference lies in how we select the attributes for the tree. We will focus on the ID3 algorithm, developed by Ross Quinlan in 1975.
12
Decision Tree Algorithms
The basic idea behind any decision tree algorithm is as follows: choose the best attribute(s) to split the remaining instances and make that attribute a decision node; repeat this process recursively for each child; stop when all the instances have the same target attribute value, there are no more attributes, or there are no more instances.
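A minimal sketch, in Python, of this generic recursion for discrete-valued attributes; choose_best_attribute is a hypothetical placeholder for the algorithm-specific selection heuristic (ID3's entropy-based choice appears on the following slides):

from collections import Counter

def choose_best_attribute(instances, attributes, target):
    # Placeholder heuristic: just take the first attribute.
    # ID3 would instead pick the attribute with the lowest expected entropy
    # (equivalently, the highest information gain).
    return attributes[0]

def build_tree(instances, attributes, target):
    """Recursively build a decision tree from dict-like instances."""
    target_values = [inst[target] for inst in instances]
    majority = Counter(target_values).most_common(1)[0][0]

    # Stop: all instances have the same target attribute value.
    if len(set(target_values)) == 1:
        return target_values[0]
    # Stop: no more attributes to split on -> predict the majority value.
    if not attributes:
        return majority

    best = choose_best_attribute(instances, attributes, target)
    node = {best: {}}
    for value in set(inst[best] for inst in instances):
        # Subsets built from observed values are never empty; an empty subset
        # (the "no more instances" case) would simply get the majority value.
        subset = [inst for inst in instances if inst[best] == value]
        node[best][value] = build_tree(
            subset, [a for a in attributes if a != best], target)
    return node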
13
Identifying the Best Attributes
Refer back to our original decision tree. [Decision tree: "Leave At" splits into 8 AM → Long; 9 AM → Accident? (Yes → Long, No → Medium); 10 AM → Stall? (Yes → Long, No → Short).] How did we know to split on "leave at" and then on stall and accident, and not weather?
14
ID3 Heuristic To determine the best attribute, we look at the ID3 heuristic. ID3 splits on attributes based on their entropy. Entropy is a measure of disorder (uncertainty) in the data…
15
Entropy Entropy is minimized when all values of the target attribute are the same: if we know that commute time will always be short, then entropy = 0. Entropy is maximized when there is an equal chance of all values for the target attribute (i.e. the result is random): if commute time = short in 3 instances, medium in 3 instances, and long in 3 instances, entropy is maximized.
16
Entropy Calculation of entropy
Entropy(S) = ∑(i=1 to l) -|Si|/|S| * log2(|Si|/|S|), where S is the set of examples, Si is the subset of S with value vi under the target attribute, and l is the size of the range (the number of distinct values) of the target attribute.
17
ID3 ID3 splits on attributes with the lowest entropy
We calculate the expected entropy of an attribute as the weighted sum of subset entropies: ∑(i = 1 to k) |Si|/|S| * Entropy(Si), where k is the number of values of the attribute we are testing. We can also measure information gain (which grows as the expected entropy shrinks): Entropy(S) - ∑(i = 1 to k) |Si|/|S| * Entropy(Si)
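A short Python sketch of these formulas (entropy, the weighted sum of subset entropies, and information gain), checked on the 3-short / 3-medium / 3-long example from the earlier slide:

import math
from collections import Counter

def entropy(values):
    """Entropy(S) = sum over classes of -|Si|/|S| * log2(|Si|/|S|)."""
    total = len(values)
    return -sum((n / total) * math.log2(n / total)
                for n in Counter(values).values())

def expected_entropy(instances, attribute, target):
    """Weighted sum of subset entropies after splitting on `attribute`."""
    total = len(instances)
    result = 0.0
    for v in set(inst[attribute] for inst in instances):
        subset = [inst[target] for inst in instances if inst[attribute] == v]
        result += (len(subset) / total) * entropy(subset)
    return result

def information_gain(instances, attribute, target):
    """Entropy(S) minus the expected entropy after the split."""
    all_targets = [inst[target] for inst in instances]
    return entropy(all_targets) - expected_entropy(instances, attribute, target)

# 3 short, 3 medium, 3 long commutes -> maximum entropy, log2(3) ~ 1.585
print(entropy(["Short"] * 3 + ["Medium"] * 3 + ["Long"] * 3))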
18
ID3 Given our commute time sample set, we can calculate the expected entropy and information gain of each attribute at the root node. [Table with columns Attribute, Expected Entropy, Information Gain for Hour, Weather, Accident, and Stall; e.g. Hour has an expected entropy of 0.6511; the remaining values are not reproduced here.]
19
Decision Trees: Random Forest Classifier
Random forests (RF) are a combination of tree predictors such that each tree depends on the values of a random vector sampled independently and with the same distribution for all trees in the forest. A random forest is built by growing an ensemble of trees and letting them vote for the most popular class.
20
Random Forest: Example
As the classifier of choice we adopt the Random Forest classifier, due to its robustness to heterogeneous and noisy features. Random Forest is an ensemble classifier: briefly, given training data with N examples and M features, bootstrap samples are created from the training data.
21
Random Forest Classifier
Step 1: create bootstrap samples from the training data (N examples, M features).
22
Random Forest Classifier
Step 2: construct a decision tree from each bootstrap sample. In splitting the nodes, the Gini gain is employed.
23
Random Forest Classifier
Step 3: at each node, the splitting feature is chosen from only a random subset of m < M features. This differs from regular decision trees, where every split considers all features. The robustness of the classifier arises from the bootstrapping of the training data and the random selection of features.
24
Random Forest Classifier
Step 4: repeat this to grow a decision tree from each bootstrap sample, giving an ensemble of trees.
25
Random Forest Classifier
Step 5: take the majority vote. Given an input sample, each tree in the forest makes a prediction, and the final decision is the class with the most votes.
26
Random Forest: Algorithm Steps
To create the training model (RF), many decision trees are grown from the N training samples with M features; for each decision tree we do the following: randomly choose a training set for this tree by bootstrap sampling (the left-out samples can be used to estimate the prediction error of the tree); at each node, randomly choose m of the M features and calculate the best split based on those m features in the training set; each tree is fully grown and not pruned. A test sample is then passed through the RF model (many decision trees) to obtain its class label.
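A compact sketch of these steps, assuming scikit-learn and NumPy are available; DecisionTreeClassifier's max_features argument stands in for choosing only m of the M features at each split, its default Gini criterion matches the Gini gain mentioned earlier, and the trees are left unpruned (the default):

import numpy as np
from sklearn.tree import DecisionTreeClassifier

def train_random_forest(X, y, n_trees=100, m_features="sqrt", seed=0):
    """Grow n_trees unpruned trees, each on a bootstrap sample of (X, y)."""
    rng = np.random.default_rng(seed)
    n_samples = X.shape[0]
    forest = []
    for _ in range(n_trees):
        # Bootstrap: sample N examples with replacement.
        idx = rng.integers(0, n_samples, size=n_samples)
        # Consider only a random subset of m < M features at each split.
        tree = DecisionTreeClassifier(max_features=m_features)
        forest.append(tree.fit(X[idx], y[idx]))
    return forest

def predict_random_forest(forest, X):
    """Majority vote over the trees' predictions (assumes integer class labels)."""
    votes = np.array([tree.predict(X) for tree in forest])  # (n_trees, n_samples)
    return np.array([np.bincount(col.astype(int)).argmax() for col in votes.T])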
27
In-Class Practice 1
Data: Iris.arff
Method: RF (parameters: select by yourself)
Software: Weka
Step: Explorer -> Classify -> Classifier (Trees – RandomForest)
28
Alternating Decision Tree
Two components: decision nodes, which specify a predicate condition, and prediction nodes, which contain a single number. Note: the standard ADTree handles only 2-class (binary) problems.
29
ADT: Example. Is this a spam email? [ADTree diagram with decision nodes and prediction nodes: if the summed prediction is > 0 the email is classified as spam, and if it is < 0 as normal.]
30
ADT: Example (continued). An instance to be classified:

Feature: char_freq_bang = 0.08, word_freq_hp = 0.4, capital_run_length_longest = 4, char_freq_dollar, word_freq_remove = 0.9, word_freq_george, other features …

Score for the above instance:
Iteration 1: (root) prediction -0.093
Iteration 2: .08 < .052 = f, prediction 0.74
Iteration 3: .4 < .195 = f, prediction -1.446
Iteration 4: 0 < .01 = t, prediction -0.38
Iteration 5: 0 < … = t, prediction 0.176
Iteration 6: .9 < .225 = f, prediction 1.66

Total score = -0.093 + 0.74 - 1.446 - 0.38 + 0.176 + 1.66 = 0.657 > 0, so the instance is spam.
31
ADT: Algorithm Steps. [Diagram: an input sample with M features is passed through the tree's conditions (Condition 1 … Condition M); each condition contributes one of its two scores (score_1 or score_2), and classification is based on Prediction = SUM(scores).]
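A minimal Python sketch of this sum-of-scores idea. The base score and the rules below are illustrative stand-ins shaped like the spam example above (a real ADTree can also nest further decision nodes under prediction nodes):

# Each rule: (feature, threshold, score if value < threshold, score otherwise).
# The thresholds echo the example above; the paired scores are hypothetical.
BASE_SCORE = -0.093
RULES = [
    ("char_freq_bang",   0.052, -0.5,   0.74),
    ("word_freq_hp",     0.195,  0.3,  -1.446),
    ("char_freq_dollar", 0.01,  -0.38,  0.6),
]

def adtree_score(instance):
    """Sum the base score plus one prediction value per decision node."""
    score = BASE_SCORE
    for feature, threshold, if_true, if_false in RULES:
        score += if_true if instance[feature] < threshold else if_false
    return score

def classify(instance):
    return "spam" if adtree_score(instance) > 0 else "normal"

print(classify({"char_freq_bang": 0.08, "word_freq_hp": 0.4, "char_freq_dollar": 0.0}))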
32
In-Class Practice 2
Data: labor.arff
Method: ADTree (parameters: select by yourself)
Software: Weka
Step: Explorer -> Classify -> Classifier (Trees – ADTree)
33
SVM Overview: Introduction to Support Vector Machines (SVM); Properties of SVM; Applications: Gene Expression Data Classification, Text Categorization (if time permits); Discussion
34
Linear Classifiers. f(x, w, b) = sign(w·x + b). [Scatter plot of two classes of points, one symbol denoting the +1 class; the region where w·x + b < 0 is the -1 side.] How would you classify this data?
35
Linear Classifiers. f(x, w, b) = sign(w·x + b). [The same data with a different candidate separating line.] How would you classify this data?
36
Linear Classifiers. f(x, w, b) = sign(w·x + b). [The same data with yet another candidate separating line.] How would you classify this data?
37
Linear Classifiers. f(x, w, b) = sign(w·x + b). [Several candidate separating lines drawn on the same data.] Any of these would be fine… but which is best?
38
Linear Classifiers. f(x, w, b) = sign(w·x + b). [A separating line under which a point is misclassified into the +1 class.] How would you classify this data?
39
Classifier Margin. f(x, w, b) = sign(w·x + b). Define the margin of a linear classifier as the width that the boundary could be increased by before hitting a datapoint.
40
Maximum Margin
f(x, w, b) = sign(w·x + b). The maximum margin linear classifier is the linear classifier with the, um, maximum margin. This is the simplest kind of SVM (called a linear SVM, or LSVM). Support vectors are the datapoints that the margin pushes up against. Maximizing the margin is good according to intuition and PAC theory; it implies that only the support vectors are important, while the other training examples are ignorable. Empirically it works very, very well.
41
Linear SVM Mathematically
[Figure: the hyperplanes w·x + b = +1, 0, -1 separating the "Predict Class = +1" zone from the "Predict Class = -1" zone; x+ and x- are points on the two margin hyperplanes and M is the margin width.] What we know: w·x+ + b = +1 and w·x- + b = -1, so w·(x+ - x-) = 2 and the margin width is M = 2 / ||w||.
42
Linear SVM Mathematically
Goal: 1) Correctly classify all training data: w·xi + b ≥ +1 if yi = +1, and w·xi + b ≤ -1 if yi = -1, i.e. yi(w·xi + b) ≥ 1 for all i. 2) Maximize the margin M = 2/||w||, which is the same as minimizing ½ wTw. We can formulate this as a quadratic optimization problem and solve for w and b: minimize Φ(w) = ½ wTw subject to yi(w·xi + b) ≥ 1 for all i.
43
Solving the Optimization Problem
Find w and b such that Φ(w) = ½ wTw is minimized and, for all {(xi, yi)}: yi (wTxi + b) ≥ 1. We need to optimize a quadratic function subject to linear constraints. Quadratic optimization problems are a well-known class of mathematical programming problems, and many (rather intricate) algorithms exist for solving them. The solution involves constructing a dual problem where a Lagrange multiplier αi is associated with every constraint in the primal problem: Find α1…αN such that Q(α) = Σαi - ½ΣΣαiαjyiyjxiTxj is maximized and (1) Σαiyi = 0, (2) αi ≥ 0 for all αi.
44
The Optimization Problem Solution
The solution has the form: w = Σαiyixi and b = yk - wTxk for any xk such that αk ≠ 0. Each non-zero αi indicates that the corresponding xi is a support vector. The classifying function then has the form: f(x) = ΣαiyixiTx + b. Notice that it relies on an inner product between the test point x and the support vectors xi – we will return to this later. Also keep in mind that solving the optimization problem involved computing the inner products xiTxj between all pairs of training points.
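A brief sketch of this relationship, assuming scikit-learn: after fitting a linear SVM, dual_coef_ stores the products αi·yi for the support vectors, so w = Σαiyixi can be rebuilt by hand and compared with the fitted coefficients:

import numpy as np
from sklearn.svm import SVC

# A tiny linearly separable toy problem.
X = np.array([[1.0, 1.0], [2.0, 2.5], [3.0, 3.0],
              [6.0, 5.0], [7.0, 7.5], [8.0, 6.0]])
y = np.array([-1, -1, -1, 1, 1, 1])

clf = SVC(kernel="linear", C=1e6).fit(X, y)   # very large C ~ hard margin

w = clf.dual_coef_ @ clf.support_vectors_     # w = sum_i (alpha_i * y_i) * x_i
b = clf.intercept_

print(w, clf.coef_)                           # the two should agree
print(np.sign(X @ w.ravel() + b))             # f(x) = sign(w.x + b) recovers y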
45
Dataset with noise
Hard margin: so far we require all data points to be classified correctly, with no training error. What if the training set is noisy? Solution 1: use very powerful kernels, but this leads to OVERFITTING!
46
Soft Margin Classification
Slack variables ξi can be added to allow misclassification of difficult or noisy examples. [Figure: the hyperplanes w·x + b = -1, 0, +1 with slack variables ξ2, ξ7, ξ11 marking misclassified points.] What should our quadratic optimization criterion be? Minimize ½ wTw + C Σ ξi.
47
Hard Margin vs. Soft Margin
The old formulation: find w and b such that Φ(w) = ½ wTw is minimized and, for all {(xi, yi)}: yi (wTxi + b) ≥ 1. The new formulation incorporating slack variables: find w and b such that Φ(w) = ½ wTw + CΣξi is minimized and, for all {(xi, yi)}: yi (wTxi + b) ≥ 1 - ξi and ξi ≥ 0 for all i. Parameter C can be viewed as a way to control overfitting.
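A small illustration of C as an overfitting control, assuming scikit-learn: on noisy, overlapping data a smaller C tolerates more slack and typically keeps more support vectors, while a larger C pushes toward the hard-margin solution:

from sklearn.svm import SVC
from sklearn.datasets import make_blobs

# Two overlapping (noisy) clusters.
X, y = make_blobs(n_samples=200, centers=2, cluster_std=2.5, random_state=0)

for C in (0.01, 1.0, 100.0):
    clf = SVC(kernel="linear", C=C).fit(X, y)
    print(f"C={C:g}: {clf.n_support_.sum()} support vectors")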
48
Linear SVMs: Overview The classifier is a separating hyperplane.
Most “important” training points are support vectors; they define the hyperplane. Quadratic optimization algorithms can identify which training points xi are support vectors with non-zero Lagrangian multipliers αi. Both in the dual formulation of the problem and in the solution, training points appear only inside dot products: Find α1…αN such that Q(α) = Σαi - ½ΣΣαiαjyiyjxiTxj is maximized and (1) Σαiyi = 0, (2) 0 ≤ αi ≤ C for all αi. f(x) = ΣαiyixiTx + b
49
Non-linear SVMs Datasets that are linearly separable with some noise work out great. But what are we going to do if the dataset is just too hard? How about… mapping the data to a higher-dimensional space? [Figure: 1-D data that cannot be separated on the x axis becomes separable after mapping x to (x, x²).]
50
Non-linear SVMs: Feature spaces
General idea: the original input space can always be mapped to some higher-dimensional feature space where the training set is separable: Φ: x → φ(x)
51
The “Kernel Trick” The linear classifier relies on the dot product between vectors: K(xi,xj) = xiTxj. If every data point is mapped into a high-dimensional space via some transformation Φ: x → φ(x), the dot product becomes: K(xi,xj) = φ(xi)Tφ(xj). A kernel function is a function that corresponds to an inner product in some expanded feature space. Example: for 2-dimensional vectors x = [x1 x2], let K(xi,xj) = (1 + xiTxj)². We need to show that K(xi,xj) = φ(xi)Tφ(xj):
K(xi,xj) = (1 + xiTxj)² = 1 + xi1²xj1² + 2 xi1xj1 xi2xj2 + xi2²xj2² + 2xi1xj1 + 2xi2xj2
= [1  xi1²  √2 xi1xi2  xi2²  √2xi1  √2xi2]T [1  xj1²  √2 xj1xj2  xj2²  √2xj1  √2xj2]
= φ(xi)Tφ(xj), where φ(x) = [1  x1²  √2 x1x2  x2²  √2x1  √2x2]
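A quick numeric check of this identity, sketched in Python for two arbitrary 2-D vectors:

import numpy as np

def phi(x):
    """Explicit feature map whose inner product equals (1 + x.z)^2 in 2-D."""
    x1, x2 = x
    return np.array([1, x1**2, np.sqrt(2) * x1 * x2, x2**2,
                     np.sqrt(2) * x1, np.sqrt(2) * x2])

xi = np.array([0.7, -1.2])
xj = np.array([2.0, 0.5])

lhs = (1 + xi @ xj) ** 2          # kernel computed in the input space
rhs = phi(xi) @ phi(xj)           # inner product in the expanded feature space
print(lhs, rhs)                   # identical up to floating-point rounding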
52
What Functions are Kernels? For some functions K(xi,xj), checking that K(xi,xj) = φ(xi)Tφ(xj) can be cumbersome. Mercer's theorem: every positive semi-definite symmetric function is a kernel. Positive semi-definite symmetric functions correspond to a positive semi-definite symmetric Gram matrix:
K = [ K(x1,x1)  K(x1,x2)  K(x1,x3)  …  K(x1,xN)
      K(x2,x1)  K(x2,x2)  K(x2,x3)  …  K(x2,xN)
      …
      K(xN,x1)  K(xN,x2)  K(xN,x3)  …  K(xN,xN) ]
53
Examples of Kernel Functions
Linear: K(xi,xj) = xiTxj
Polynomial of power p: K(xi,xj) = (1 + xiTxj)^p
Gaussian (radial-basis function network): K(xi,xj) = exp(-||xi - xj||² / (2σ²))
Sigmoid: K(xi,xj) = tanh(β0 xiTxj + β1)
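The four kernels sketched as plain Python functions (p, σ, β0, β1 are the usual hyperparameters; the default values chosen here are arbitrary):

import numpy as np

def linear_kernel(xi, xj):
    return xi @ xj

def polynomial_kernel(xi, xj, p=2):
    return (1 + xi @ xj) ** p

def gaussian_kernel(xi, xj, sigma=1.0):
    return np.exp(-np.linalg.norm(xi - xj) ** 2 / (2 * sigma ** 2))

def sigmoid_kernel(xi, xj, beta0=1.0, beta1=-1.0):
    return np.tanh(beta0 * (xi @ xj) + beta1)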
54
Non-linear SVMs Mathematically
Dual problem formulation: find α1…αN such that Q(α) = Σαi - ½ΣΣαiαjyiyjK(xi, xj) is maximized and (1) Σαiyi = 0, (2) αi ≥ 0 for all αi. The solution is: f(x) = ΣαiyiK(xi, x) + b. Optimization techniques for finding the αi's remain the same!
55
Nonlinear SVM - Overview
SVM locates a separating hyperplane in the feature space and classifies points in that space. It does not need to represent the space explicitly; it simply defines a kernel function. The kernel function plays the role of the dot product in the feature space.
56
Properties of SVM Flexibility in choosing a similarity function
Sparseness of solution when dealing with large data sets - only support vectors are used to specify the separating hyperplane
Ability to handle large feature spaces - complexity does not depend on the dimensionality of the feature space
Overfitting can be controlled by the soft margin approach
Nice math property: a simple convex optimization problem which is guaranteed to converge to a single global solution
Feature selection
57
SVM Applications SVM has been used successfully in many real-world problems - text (and hypertext) categorization - image classification - bioinformatics (Protein classification, Cancer classification) - hand-written character recognition
58
Application 1: Cancer Classification
High dimensional: p > 1000 genes, n < 100 patients. Imbalanced: fewer positive samples. Many irrelevant features; noisy data. [Figure: expression matrix of genes g-1 … g-p by patients p-1 … p-n.] Feature selection: in the linear case, wi² gives the ranking of dimension i. SVM is sensitive to noisy (mis-labeled) data.
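A short sketch of the wi² ranking with a linear SVM, assuming scikit-learn; the built-in breast-cancer dataset is used here only as a stand-in for gene-expression data:

import numpy as np
from sklearn.svm import LinearSVC
from sklearn.datasets import load_breast_cancer
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
X = StandardScaler().fit_transform(data.data)

clf = LinearSVC(C=1.0).fit(X, data.target)

# Rank dimensions by w_i^2, largest first.
ranking = np.argsort(clf.coef_[0] ** 2)[::-1]
print([data.feature_names[i] for i in ranking[:5]])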
59
Weakness of SVM It is sensitive to noise - a relatively small number of mislabeled examples can dramatically decrease the performance. It only considers two classes - how to do multi-class classification with SVM? Answer: 1) with output arity m, learn m SVMs: SVM 1 learns "Output==1" vs "Output != 1", SVM 2 learns "Output==2" vs "Output != 2", …, SVM m learns "Output==m" vs "Output != m". 2) To predict the output for a new input, just predict with each SVM and find out which one puts the prediction the furthest into the positive region.
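A sketch of this one-vs-rest scheme, assuming scikit-learn and using the Iris data only as a stand-in m-class problem: train one SVM per class against the rest, then predict with the classifier whose decision value is furthest into the positive region:

import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)
classes = np.unique(y)

# One SVM per class: "Output == c" vs "Output != c".
svms = {c: SVC(kernel="linear").fit(X, (y == c).astype(int)) for c in classes}

def predict(x):
    # Pick the class whose SVM puts x furthest into its positive region.
    scores = {c: svm.decision_function(x.reshape(1, -1))[0] for c, svm in svms.items()}
    return max(scores, key=scores.get)

print(predict(X[0]), y[0])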
60
Application 2: Text Categorization
Task: the classification of natural text (or hypertext) documents into a fixed number of predefined categories based on their content - filtering, web searching, sorting documents by topic, etc. A document can be assigned to more than one category, so this can be viewed as a series of binary classification problems, one for each category.
61
Representation of Text
IR’s vector space model (aka the bag-of-words representation): a doc is represented by a vector indexed by a pre-fixed set or dictionary of terms; the value of an entry can be binary or a weight. Preprocessing: normalization, stop words, word stems. Doc x => φ(x)
62
Text Categorization using SVM
The similarity between two documents is φ(x)·φ(z); K(x,z) = φ(x)·φ(z) is a valid kernel, so SVM can be used with K(x,z) for discrimination. Why SVM?
- High dimensional input space
- Few irrelevant features (dense concept)
- Sparse document vectors (sparse instances)
- Text categorization problems are linearly separable
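A minimal sketch of this setup, assuming scikit-learn: TF-IDF bag-of-words vectors φ(x) fed to a linear SVM, one binary classifier per category; the tiny corpus and labels are made up for illustration:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

# Toy binary problem for one category: 1 = "sports", 0 = "not sports".
docs = ["the team won the match", "stock prices fell sharply",
        "a great goal in the final", "the central bank raised rates"]
labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(stop_words="english"), LinearSVC())
model.fit(docs, labels)

print(model.predict(["another goal for the home team"]))  # likely [1], i.e. "sports"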
63
Some Issues
Choice of kernel
- Gaussian or polynomial kernel is the default
- if ineffective, more elaborate kernels are needed
- domain experts can give assistance in formulating appropriate similarity measures
Choice of kernel parameters
- e.g. σ in the Gaussian kernel
- σ is the distance between the closest points with different classifications
- in the absence of reliable criteria, applications rely on the use of a validation set or cross-validation to set such parameters
Optimization criterion – hard margin vs. soft margin
- a lengthy series of experiments in which various parameters are tested
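A sketch of setting such parameters by cross-validation, assuming scikit-learn's GridSearchCV, here searching over C and the Gaussian kernel width (in scikit-learn's RBF kernel, gamma = 1/(2σ²)):

from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)

param_grid = {"C": [0.1, 1, 10, 100], "gamma": [0.001, 0.01, 0.1, 1]}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5).fit(X, y)

print(search.best_params_, search.best_score_)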
64
Additional Resources An excellent tutorial on VC-dimension and Support Vector Machines: C. J. C. Burges, "A tutorial on support vector machines for pattern recognition," Data Mining and Knowledge Discovery, 2(2), 1998. The VC/SRM/SVM bible: Statistical Learning Theory by Vladimir Vapnik, Wiley-Interscience, 1998.