1
Classification (slides adapted from Rob Schapire) Eran Segal Weizmann Institute
2
Classification Scheme: labeled training examples are fed to a classification algorithm, which produces a classification rule; a test example is then passed through the rule to obtain a predicted classification.
3
Building a Good Classifier: need enough training examples; good performance on the training set; a classifier that is not too complex. Measures of complexity: the number of bits needed to write down the classifier, the number of parameters, the VC dimension.
4
Example
6
Classification Algorithms: nearest neighbors, Naïve Bayes, decision trees, boosting, neural networks, SVMs, bagging, …
7
Nearest Neighbor Classification: a popular nonlinear classifier. Find the k nearest neighbors of the unknown (test) vector among the training vectors, and assign the test vector to the most frequent class among those k neighbors. Question: how should the similarity measure between vectors be chosen? Problem: in high-dimensional data, the nearest neighbors are still not 'near'.
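To make the procedure concrete, here is a minimal k-nearest-neighbor sketch in Python, assuming Euclidean distance as the similarity measure (the function name and the choice of distance are illustrative, not from the slides):

```python
import numpy as np
from collections import Counter

def knn_predict(train_X, train_y, test_x, k=5):
    """Classify test_x by majority vote among its k nearest training vectors."""
    # Distance from the test vector to every training vector (Euclidean here;
    # choosing this similarity measure is exactly the open question above).
    dists = np.linalg.norm(train_X - test_x, axis=1)
    nearest = np.argsort(dists)[:k]              # indices of the k closest vectors
    votes = Counter(train_y[nearest].tolist())   # class frequencies among the neighbors
    return votes.most_common(1)[0][0]            # most frequent class wins
```

Other similarity measures (correlation, cosine, etc.) can be swapped in for the Euclidean distance without changing the rest of the procedure.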
8
Naïve Bayes Classifier: the input is a set of labeled examples m_1 = {c, x_1, ..., x_n}, m_2 = {c, x_1, ..., x_n}, …; a parameter estimation / learning phase fits the model; prediction then computes the assignment to classes and chooses the most likely class. Model structure: a class node C whose children are the feature nodes X_1, X_2, …, X_n (the Naïve Bayes network).
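A sketch of the two phases for discrete features, assuming the examples are given as (class, feature-vector) pairs; the function names and the Laplace smoothing are assumptions made for illustration:

```python
import math
from collections import defaultdict

def nb_train(examples):
    """Parameter estimation: class counts and per-class, per-feature value counts."""
    class_counts = defaultdict(int)
    feat_counts = defaultdict(lambda: defaultdict(int))   # (class, i) -> value -> count
    for c, x in examples:
        class_counts[c] += 1
        for i, v in enumerate(x):
            feat_counts[(c, i)][v] += 1
    return class_counts, feat_counts

def nb_predict(class_counts, feat_counts, x):
    """Prediction: choose the class maximizing log P(c) + sum_i log P(x_i | c)."""
    total = sum(class_counts.values())
    best_c, best_score = None, -math.inf
    for c, n_c in class_counts.items():
        score = math.log(n_c / total)
        for i, v in enumerate(x):
            counts = feat_counts[(c, i)]
            # Laplace-smoothed estimate of P(x_i = v | c)
            score += math.log((counts.get(v, 0) + 1) / (n_c + len(counts) + 1))
        if score > best_score:
            best_c, best_score = c, score
    return best_c
```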
9
Decision Trees
10
Decision Trees Example
11
Building Decision Trees: choose a rule to split on; divide the data into disjoint subsets based on the splitting rule; repeat recursively for each subset; stop when the leaves are (almost) "pure".
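A compact recursive sketch of this procedure in Python for binary (0/1) labels and threshold splits on numeric features; the function names, the Gini-based split chooser (one of the purity measures on the next slide), and the stopping constants are illustrative choices, not the slides' own:

```python
import numpy as np

def gini(y):
    """Gini impurity p_plus * p_minus of a set of 0/1 labels."""
    p = y.mean() if len(y) else 0.0
    return p * (1.0 - p)

def choose_split(X, y):
    """Pick the (feature, threshold) rule with the greatest decrease in impurity."""
    best = (0, X[0, 0], -np.inf)
    for f in range(X.shape[1]):
        for t in np.unique(X[:, f]):
            left = X[:, f] <= t
            if left.all() or (~left).all():
                continue                         # a split must create two non-empty subsets
            gain = gini(y) - (left.mean() * gini(y[left]) + (~left).mean() * gini(y[~left]))
            if gain > best[2]:
                best = (f, t, gain)
    return best[0], best[1]

def build_tree(X, y, depth=0, max_depth=5):
    """Split into disjoint subsets recursively; stop when a leaf is (almost) pure."""
    majority = int(y.mean() >= 0.5)
    if gini(y) < 0.01 or depth == max_depth:
        return {"leaf": majority}
    f, t = choose_split(X, y)
    left = X[:, f] <= t
    if left.all() or (~left).all():              # no useful split exists
        return {"leaf": majority}
    return {"feat": f, "thresh": t,
            "left":  build_tree(X[left],  y[left],  depth + 1, max_depth),
            "right": build_tree(X[~left], y[~left], depth + 1, max_depth)}
```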
12
Choosing the Splitting Rule: choose the rule that leads to the greatest increase in "purity". Purity measures: Entropy: -(p_+ ln p_+ + p_- ln p_-); Gini index: p_+ · p_-
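For concreteness: a node holding 8 positive and 2 negative examples has p_+ = 0.8 and p_- = 0.2, so the two purity measures evaluate roughly as below (plain arithmetic, using the natural log as on the slide):

```python
import math

p_pos, p_neg = 0.8, 0.2
entropy = -(p_pos * math.log(p_pos) + p_neg * math.log(p_neg))   # ≈ 0.500
gini = p_pos * p_neg                                             # = 0.16
# A perfectly pure node (p_pos = 1) scores 0 under both measures;
# a 50/50 node scores the maximum (ln 2 ≈ 0.693 for entropy, 0.25 for Gini).
```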
13
Tree Size vs. Prediction Accuracy: trees must be big enough to fit the training data, but not so big that they overfit (capture noise or spurious patterns).
14
Decision Tree Summary. Best-known packages: C4.5 (Quinlan) and CART (Breiman, Friedman, Olshen & Stone). Very fast to train and evaluate; relatively easy to interpret. But accuracy is often not state-of-the-art. Decision trees work well within boosting approaches.
15
Boosting. Main observation: it is easy to find simple 'rules of thumb' that are 'often' correct. General approach: concentrate on the "hard" examples; derive a 'rule of thumb' for these examples; combine the new rule with the previous rules by taking a weighted majority of all current rules; repeat T times. Boosting guarantee: given sufficient data and an algorithm that can consistently find classifiers ('rules of thumb') slightly better than random, a high-accuracy classifier can be built.
16
AdaBoost
17
Setting α: the error ε_t of the weak classifier chosen at round t determines its weight α_t. The classifier weight decreases with the classifier error, i.e., the weight increases with classification accuracy.
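A sketch of one round under these definitions, using the standard AdaBoost formulas for ε_t and α_t (the variable names are assumptions; D is the current distribution over training examples, h holds the weak rule's ±1 predictions on them, y the ±1 labels):

```python
import numpy as np

def boosting_round(D, h, y):
    """Compute the weighted error, the rule weight alpha_t, and the next distribution."""
    eps = np.sum(D * (h != y))                # epsilon_t: distribution-weighted error
    alpha = 0.5 * np.log((1.0 - eps) / eps)   # alpha_t grows as the error shrinks
    D_next = D * np.exp(-alpha * y * h)       # down-weight correctly classified examples,
    return alpha, D_next / D_next.sum()       # up-weight the "hard" ones, then renormalize
```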
18
Toy Example Weak classifiers: vertical or horizontal half-planes
19
Round 1
20
Round 2
21
Round 3
22
Final Boosting Classifier
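In standard AdaBoost notation (consistent with the weights α_t above), the final classifier is the weighted majority vote of the T weak rules:

```latex
H_{\mathrm{final}}(x) \;=\; \operatorname{sign}\!\Big( \sum_{t=1}^{T} \alpha_t \, h_t(x) \Big)
```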
23
Test Error Behavior (plots): expected vs. typical.
24
The Margins Explanation: training error measures classification accuracy, but the confidence of the classifications is also important. Recall that H_final is a weighted majority vote of the weak rules; measure confidence by the margin, i.e., the strength of the vote. There is empirical evidence and mathematical proof that large margins imply better generalization error, and that boosting tends to increase the margins of training examples.
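In the standard boosting notation, the margin of a training example (x, y) is the normalized strength of the vote for its correct label:

```latex
\operatorname{margin}(x, y) \;=\; \frac{y \sum_{t=1}^{T} \alpha_t \, h_t(x)}{\sum_{t=1}^{T} \alpha_t} \;\in\; [-1, 1]
```

A margin close to +1 means a confident correct vote; a negative margin means the example is misclassified by H_final.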
25
Boosting Summary: relatively fast (though not as fast as some other algorithms); simple and easy to program; flexible: it can be combined with any learning algorithm, e.g., C4.5 or very simple rules of thumb; provable guarantees; state-of-the-art accuracy; tends not to overfit (but does sometimes); many applications.
26
Neural Networks. Basic unit: the perceptron (a linear threshold function). A neural network is a set of perceptrons connected in a network, with weights on the edges; each unit is a perceptron.
27
Perceptron Units. Problem: the network computation is discontinuous because of the threshold g(x). Solution: approximate g(x) with a smoothed threshold function, e.g., g(x) = 1/(1 + exp(-x)). H_w(x) is then continuous and differentiable in both x and w.
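A minimal sketch of such a unit in Python (the names g and H_w follow the slide; everything else is illustrative):

```python
import numpy as np

def g(z):
    """Smoothed threshold: the logistic sigmoid 1 / (1 + exp(-z))."""
    return 1.0 / (1.0 + np.exp(-z))

def H_w(w, x):
    """A single perceptron unit: smoothed threshold of the weighted sum of inputs."""
    return g(np.dot(w, x))

# Because g is smooth, H_w is differentiable in both w and x; for example
# dH_w/dw_i = g'(w . x) * x_i with g'(z) = g(z) * (1 - g(z)), which is what
# gradient-based weight fitting (next slide) relies on.
```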
28
Finding Weights
29
Neural Network Summary: slow to converge; difficult to get the network architecture and parameters right; not state-of-the-art accuracy as a general method; but it can be tuned to specific applications and then achieve good performance.
30
Support Vector Machines (SVMs): given linearly separable data, choose the hyperplane that maximizes the minimum margin (= distance to the separating hyperplane). Intuition: separate the +'s from the –'s as much as possible.
31
Finding Max-Margin Hyperplane
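The standard way to write this optimization, for training data {(x_i, y_i)} with labels y_i ∈ {±1} (symbols assumed, not taken from the slide), is:

```latex
\min_{w,\,b}\ \tfrac{1}{2}\lVert w\rVert^{2}
\qquad \text{subject to} \qquad
y_i\,(w \cdot x_i + b) \ge 1 \quad \text{for all } i,
```

so the resulting margin is 1/‖w‖, and maximizing the margin is the same as minimizing ‖w‖.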
32
Non-Linearly Separable Data: either penalize each point by its distance from margin 1, i.e., add that penalty to the quantity being minimized, or map the data into a high-dimensional space in which the data becomes linearly separable.
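A standard way to write the first option, with slack variables ξ_i measuring each point's violation of margin 1 and a trade-off constant C (symbols assumed for illustration):

```latex
\min_{w,\,b,\,\xi}\ \tfrac{1}{2}\lVert w\rVert^{2} + C \sum_i \xi_i
\qquad \text{subject to} \qquad
y_i\,(w \cdot x_i + b) \ge 1 - \xi_i,\quad \xi_i \ge 0 .
```

The second option is usually realized with a kernel function K(x_i, x_j) that implicitly computes dot products in the high-dimensional space.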
33
SVM Summary: fast algorithms available; not simple to program; state-of-the-art accuracy; theoretical justification; many applications.
34
Assignment: classify tissue samples into various clinical states (e.g., tumor/normal) based on microarray profiles. Classifiers to compare: Naïve Bayes; Naïve Bayes + feature selection; boosting with decision tree stumps (the weak classifiers are decision trees with a single split).
35
Assignment Data: a breast cancer dataset with 295 samples. data.tab: genes on rows; the first column is the gene identifier (ID); columns 2-296 give the expression level of each gene in each array. There are 27 clinical attributes. experiment_attributes.tab: attributes on columns, samples on rows; typically a 0/1 value indicates the association of a sample with an attribute. Evaluation: use a 10-fold cross-validation scheme and compute prediction accuracy for the clinical attributes: met_in_5_yr_ eventmet_ Alive_8yr_ eventdea_ GradeIII_
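A sketch of the evaluation loop; data.tab and experiment_attributes.tab are the files named above, while the fold construction, the function names, and the train/predict interface are placeholders to be filled in with the three classifiers:

```python
import numpy as np

def ten_fold_accuracy(X, y, train_fn, predict_fn, seed=0):
    """10-fold cross-validation: train on 9 folds, test on the held-out fold, average accuracy."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(y))          # shuffle samples before splitting
    folds = np.array_split(order, 10)        # 10 roughly equal folds
    accs = []
    for k in range(10):
        test_idx = folds[k]
        train_idx = np.concatenate([folds[j] for j in range(10) if j != k])
        model = train_fn(X[train_idx], y[train_idx])   # e.g. Naive Bayes or boosted stumps
        preds = predict_fn(model, X[test_idx])
        accs.append(np.mean(preds == y[test_idx]))
    return float(np.mean(accs))

# Usage (hypothetical): X holds the 295 expression profiles from data.tab, y one clinical
# attribute column from experiment_attributes.tab, and train_fn / predict_fn one of the
# three classifiers being compared.
```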