2 Classification Slides from Heng Ji

3 Classification
- Define classes/categories
- Label text
- Extract features
- Choose a classifier
  - Naive Bayes Classifier
  - Decision Trees
  - Maximum Entropy
  - ...
- Train it
- Use it to classify new examples

4 Naïve Bayes
- More powerful than Decision Trees
- Every feature gets a say in determining which label should be assigned to a given input value.

5 Naïve Bayes: Strengths
- Very simple model
  - Easy to understand
  - Very easy to implement
- Can scale easily to millions of training examples (just need counts!)
- Very efficient: fast training and classification
- Modest storage requirements
- Widely used because it works really well for text categorization
- Linear, but non-parallel decision boundaries
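To make the "just need counts" point concrete, here is a minimal count-based Naïve Bayes text classifier. It is a sketch, not the lecture's code: the Laplace (add-one) smoothing, the whitespace tokenizer, and the toy corpus are all illustrative choices of ours.

```python
# Minimal count-based Naive Bayes: training is just counting words per class.
import math
from collections import Counter, defaultdict

class NaiveBayes:
    def fit(self, docs, labels):
        self.class_counts = Counter(labels)
        self.word_counts = defaultdict(Counter)   # per-class word counts
        self.vocab = set()
        for doc, y in zip(docs, labels):
            for w in doc.split():
                self.word_counts[y][w] += 1
                self.vocab.add(w)
        return self

    def predict(self, doc):
        n = sum(self.class_counts.values())
        best, best_lp = None, -math.inf
        for y, cy in self.class_counts.items():
            lp = math.log(cy / n)                 # log prior
            total = sum(self.word_counts[y].values())
            for w in doc.split():
                # Laplace (add-one) smoothing over the vocabulary
                lp += math.log((self.word_counts[y][w] + 1) /
                               (total + len(self.vocab)))
            best, best_lp = (y, lp) if lp > best_lp else (best, best_lp)
        return best

nb = NaiveBayes().fit(
    ["the president won the election", "the poet wrote a poem",
     "election results announced", "a new poem was published"],
    ["politics", "arts", "politics", "arts"])
print(nb.predict("the election"))   # politics
```

Note how the bag-of-words assumption shows up directly: `predict` multiplies per-word probabilities and never looks at word order.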

6 Naïve Bayes: Weaknesses
- The Naïve Bayes independence assumption has two consequences:
  - The linear ordering of words is ignored (bag-of-words model)
  - The words are assumed independent of each other given the class (in reality, "president" is more likely to occur in a context that contains "election" than in one that contains "poet")
- The Naïve Bayes assumption is inappropriate if there are strong conditional dependencies between the variables
- Nonetheless, Naïve Bayes models do well in a surprisingly large number of cases, because often we are interested in classification accuracy rather than accurate probability estimates
- Does not optimize prediction accuracy

7 The naivete of independence
- The Naïve Bayes assumption is inappropriate if there are strong conditional dependencies between the variables
- The classifier may end up "double-counting" the effect of highly correlated features, pushing the classifier closer to a given label than is justified
- Consider a name gender classifier:
  - The features ends-with(a) and ends-with(vowel) are dependent on one another: if an input value has the first feature, it must also have the second
  - For features like these, the duplicated information may be given more weight than is justified by the training set

8 Decision Trees: Strengths
- Capable of generating understandable rules
- Perform classification without requiring much computation
- Capable of handling both continuous and categorical variables
- Provide a clear indication of which features are most important for prediction or classification

9 Decision Trees: Weaknesses
- Prone to errors in classification problems with many classes and a relatively small number of training examples
  - Since each branch in the decision tree splits the training data, the amount of training data available to train nodes lower in the tree can become quite small
- Can be computationally expensive to train
  - Need to compare all possible splits
  - Pruning is also expensive

10 Decision Trees: Weaknesses
- Typically examine one field at a time
- This leads to rectangular classification boxes that may not correspond well with the actual distribution of records in the decision space
  - Such ordering limits their ability to exploit features that are relatively independent of one another
  - Naive Bayes overcomes this limitation by allowing all features to act "in parallel"

11 Linearly separable data
[figure: two classes (Class1, Class2) separated by a linear decision boundary]

12 Non-linearly separable data
[figure: two classes (Class1, Class2) that no straight line can separate]

13 Non-linearly separable data
[figure: a non-linear classifier separating Class1 from Class2]

14 Linear versus Non-Linear Algorithms
- Linearly or non-linearly separable data?
  - We can find out only empirically
- Linear algorithms (algorithms that find a linear decision boundary):
  - Use when we think the data is linearly separable
  - Advantages: simpler, fewer parameters
  - Disadvantages: high-dimensional data (as in NLP) is usually not linearly separable
  - Examples: Perceptron, Winnow, large-margin classifiers
  - Note: we can also use linear algorithms for non-linear problems (see kernel methods)

15 Linear versus Non-Linear Algorithms
- Non-linear algorithms:
  - Use when the data is not linearly separable
  - Advantages: more accurate
  - Disadvantages: more complicated, more parameters
  - Example: kernel methods
  - Note: the distinction between linear and non-linear also applies to multi-class classification (we'll see this later)

16 Simple Linear Algorithms
- Perceptron algorithm:
  - Linear
  - Binary classification
  - Online (processes data sequentially, one data point at a time)
  - Mistake-driven
  - A simple single-layer neural network

17 Linear Algebra
[figure: weight vector w and bias b define a hyperplane, with wx + b > 0 on one side and wx + b < 0 on the other]

18 Linear binary classification
- Data: {(x_i, y_i)}, i = 1...n
  - x in R^d (x is a feature vector in d-dimensional space)
  - y in {-1, +1} (the label: class, category)
- Question: design a linear decision boundary wx + b = 0 (the equation of a hyperplane) such that the classification rule associated with it has minimal probability of error
- Classification rule: y = sign(wx + b), which means:
  - if wx + b > 0 then y = +1
  - if wx + b < 0 then y = -1
(Gert Lanckriet, Statistical Learning Theory Tutorial)
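The classification rule above transcribes directly into code. This is a sketch: the function name `classify` and the toy weight vector and bias are ours, chosen only for illustration.

```python
# Linear decision rule: y = sign(w·x + b), returning +1 or -1.
def classify(w, b, x):
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if score > 0 else -1

w, b = [2.0, -1.0], 0.5              # hypothetical hyperplane in R^2
print(classify(w, b, [1.0, 1.0]))    # score = 2 - 1 + 0.5 = 1.5 > 0, so +1
print(classify(w, b, [0.0, 2.0]))    # score = 0 - 2 + 0.5 = -1.5 < 0, so -1
```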

19 Linear binary classification
- Find a good hyperplane (w, b) in R^(d+1) that correctly classifies as many data points as possible
- In online fashion: one data point at a time, updating the weights as necessary
- Decision boundary: wx + b = 0; classification rule: y = sign(wx + b)
(From Gert Lanckriet, Statistical Learning Theory Tutorial)

20 Perceptron

21 Perceptron Learning Rule
- Assuming the problem is linearly separable, there is a learning rule that converges in finite time
- Motivation: a new (unseen) input pattern that is similar to an old (seen) input pattern is likely to be classified correctly

22 Learning Rule, Ctd
- Basic idea: go over all existing data patterns, whose labeling is known, and check their classification with the current weight vector
- If correct, continue
- If not, add to the weights a quantity proportional to the product of the input pattern with the desired output Z (+1 or -1)

23 Weight Update Rule
W_{j+1} = W_j + η Z_j X_j,   j = 0, ..., n
where η is the learning rate.
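The update rule above can be sketched as a training loop: whenever a pattern X is misclassified, add η·Z·X to the weights. The learning rate, epoch cap, and tie-breaking convention (a score of exactly 0 is treated as −1) are illustrative choices; the toy data is the four-point example from the later slides.

```python
# Perceptron training loop implementing W <- W + eta * Z * X on mistakes.
def perceptron(data, eta=0.5, epochs=100):
    """data: list of (X, Z) pairs with X a tuple and Z in {+1, -1}."""
    w = [0.0] * len(data[0][0])
    for _ in range(epochs):
        mistakes = 0
        for X, Z in data:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, X)) > 0 else -1
            if pred != Z:
                w = [wi + eta * Z * xi for wi, xi in zip(w, X)]  # update rule
                mistakes += 1
        if mistakes == 0:          # converged: every pattern classified correctly
            break
    return w

data = [((0.5, 1.0), 1), ((1.0, 0.5), 1), ((-1.0, 0.5), -1), ((-1.0, 1.0), -1)]
w = perceptron(data)
print(all((1 if sum(wi * xi for wi, xi in zip(w, X)) > 0 else -1) == Z
          for X, Z in data))   # True
```

Because this toy data is linearly separable, the loop reaches a mistake-free pass, matching the convergence claim on slide 21.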

24 Hebb Rule
- In 1949, Hebb postulated that the changes in a synapse are proportional to the correlation between the firing of the neurons connected through that synapse (the pre- and post-synaptic neurons)
- "Neurons that fire together, wire together"

25 Example: a simple problem
- 4 points, linearly separable
[figure: points (1/2, 1) and (1, 1/2) with Z = 1; points (-1, 1/2) and (-1, 1) with Z = -1]

26 Initial Weights
[figure: the four points with initial weight vector W_0 = (0, 1)]

27 Updating Weights
- The upper-left point is wrongly classified
- η = 1/3, W_0 = (0, 1)
- W_1 ← W_0 + η · Z · X_1
- W_1 = (0, 1) + 1/3 · (-1) · (-1, 1/2) = (1/3, 5/6)

28 First Correction
[figure: the rotated boundary with W_1 = (1/3, 5/6)]

29 Updating Weights, Ctd
- The upper-left point is still wrongly classified
- W_2 ← W_1 + η · Z · X_1
- W_2 = (1/3, 5/6) + 1/3 · (-1) · (-1, 1/2) = (2/3, 2/3)

30 Second Correction
[figure: the boundary after the second update, W_2 = (2/3, 2/3)]

31 Example, Ctd
- All 4 points are now classified correctly
- Toy problem: only 2 updates required
- Each correction of the weights was simply a rotation of the separating hyperplane
- Rotation moves the boundary in the right direction, but may require many updates
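The two updates in the example can be recomputed exactly (using `fractions` to avoid rounding) to check the arithmetic: both corrections use the misclassified point X = (−1, 1/2) with desired output Z = −1 and η = 1/3. The sign convention (score ≤ 0 counts as −1) is our assumption; note the point (−1, 1) lands exactly on the final boundary.

```python
# Reproduce W1 and W2 from slides 27-29 with exact rational arithmetic.
from fractions import Fraction as F

eta, Z, X = F(1, 3), -1, (F(-1), F(1, 2))
W0 = (F(0), F(1))

W1 = tuple(w + eta * Z * x for w, x in zip(W0, X))
print(W1)   # (Fraction(1, 3), Fraction(5, 6))

W2 = tuple(w + eta * Z * x for w, x in zip(W1, X))
print(W2)   # (Fraction(2, 3), Fraction(2, 3))

# With W2, all four points get the right sign (treating score <= 0 as -1):
points = [((F(1, 2), F(1)), 1), ((F(1), F(1, 2)), 1),
          ((F(-1), F(1, 2)), -1), ((F(-1), F(1)), -1)]
print(all((1 if sum(w * x for w, x in zip(W2, p)) > 0 else -1) == z
          for p, z in points))   # True
```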

32 Support Vector Machines

33 Large margin classifier
- Another family of linear algorithms
- Intuition (Vapnik, 1965): if the classes are linearly separable:
  - Separate the data
  - Place the hyperplane "far" from the data: large margin
  - Statistical results guarantee good generalization
[figure: a separating hyperplane placed close to the data, marked BAD]
(Gert Lanckriet, Statistical Learning Theory Tutorial)

34 Large margin classifier
- Intuition (Vapnik, 1965): if linearly separable:
  - Separate the data
  - Place the hyperplane "far" from the data: large margin
  - Statistical results guarantee good generalization
[figure: a maximal margin classifier, marked GOOD]
(Gert Lanckriet, Statistical Learning Theory Tutorial)

35 Large margin classifier
- If not linearly separable:
  - Allow some errors
  - Still try to place the hyperplane "far" from each class
(Gert Lanckriet, Statistical Learning Theory Tutorial)

36 Large Margin Classifiers
- Advantages:
  - Theoretically better (better error bounds)
- Limitations:
  - Computationally more expensive: a large quadratic programming problem

37 Non-linear problem
[figure]

38 [figure: non-linear problem, continued]

39 Kernel methods
- A family of non-linear algorithms
- Transform the non-linear problem into a linear one (in a different feature space)
- Use linear algorithms to solve the linear problem in the new space
(Gert Lanckriet, Statistical Learning Theory Tutorial)

40 Basic principle of kernel methods
- Map Φ : R^d → R^D (with D >> d)
- Example: X = [x z], Φ(X) = [x^2 z^2 xz]
- Decision boundary in the new space: w^T Φ(x) + b = 0, i.e. f(x) = sign(w_1 x^2 + w_2 z^2 + w_3 xz + b)
(Gert Lanckriet, Statistical Learning Theory Tutorial)

41 Basic principle of kernel methods
- Linear separability: more likely in high dimensions
- Mapping: Φ maps the input into a high-dimensional feature space
- Classifier: construct a linear classifier in the high-dimensional feature space
- Motivation: an appropriate choice of Φ leads to linear separability
- We can do this efficiently!
(Gert Lanckriet, Statistical Learning Theory Tutorial)

42 Basic principle of kernel methods
- We can use the linear algorithms seen before (for example, the perceptron) for classification in the higher-dimensional space

43 Multi-class classification
- Given: some data items that belong to one of M possible classes
- Task: train the classifier and predict the class for a new data item
- Geometrically: a harder problem; no more simple geometry

44 Multi-class classification

45 Linear Classifiers
f(x, w, b) = sign(wx + b)
How would you classify this data?
[figure: points labeled +1 and -1 with one candidate boundary wx + b = 0; wx + b > 0 on one side, wx + b < 0 on the other]

46 Linear Classifiers
f(x, w, b) = sign(wx + b)
How would you classify this data?
[figure: a second candidate boundary]

47 Linear Classifiers
f(x, w, b) = sign(wx + b)
How would you classify this data?
[figure: a third candidate boundary]

48 Linear Classifiers
f(x, w, b) = sign(wx + b)
Any of these would be fine... but which is best?

49 Linear Classifiers
f(x, w, b) = sign(wx + b)
How would you classify this data?
[figure: a boundary that misclassifies a point into the +1 class]

50 Classifier Margin
f(x, w, b) = sign(wx + b)
Define the margin of a linear classifier as the width that the boundary could be increased by before hitting a datapoint.
[figure: the margin band around the decision boundary]

51 Maximum Margin
f(x, w, b) = sign(wx + b)
The maximum margin linear classifier is the linear classifier with the, um, maximum margin. This is the simplest kind of SVM, called a linear SVM (LSVM). Support vectors are those datapoints that the margin pushes up against.
1. Maximizing the margin is good according to intuition and PAC theory
2. Implies that only support vectors are important; other training examples are ignorable
3. Empirically it works very, very well

52 Let me digress to... what is PAC theory?
- Two important aspects of complexity in machine learning:
  - First, sample complexity: in many learning problems, training data is expensive and we should hope not to need too much of it
  - Second, computational complexity: a neural network, for example, which takes an hour to train may be of no practical use in complex financial prediction problems
- It is important that both the amount of training data required for a prescribed level of performance and the running time of the learning algorithm in learning from this data do not increase too dramatically as the "difficulty" of the learning problem increases

53 Let me digress to... what is PAC theory?
- Such issues have been formalised and investigated over the past decade within the field of "computational learning theory"
- One popular framework for discussing such problems is the probabilistic framework which has become known as the "probably approximately correct", or PAC, model of learning

54 Linear SVM Mathematically
What we know:
- w · x+ + b = +1 (the closest +1 point lies on the plane wx + b = 1)
- w · x- + b = -1 (the closest -1 point lies on the plane wx + b = -1)
- Subtracting: w · (x+ - x-) = 2
- Projecting x+ - x- onto the unit normal w/||w|| gives the margin width M = 2/||w||
[figure: the "Predict Class = +1" and "Predict Class = -1" zones, the planes wx + b = 1, 0, -1, and the margin width M]
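The relation w · (x+ − x−) = 2 above implies the margin width M = 2/||w||, and we can check that numerically: starting from a point on the plane w·x + b = −1 and stepping a distance 2/||w|| along the unit normal lands exactly on w·x + b = +1. The weight vector and bias here are arbitrary sample values.

```python
# Numeric check that the planes wx+b = -1 and wx+b = +1 are 2/||w|| apart.
from math import sqrt, isclose

w, b = (3.0, 4.0), -1.0
norm = sqrt(sum(wi * wi for wi in w))            # ||w|| = 5.0

# A point on the plane w·x + b = -1 (choosing x proportional to w):
t = (-1 - b) / norm**2
x_minus = tuple(t * wi for wi in w)

# Step along the unit normal w/||w|| by M = 2/||w||:
M = 2 / norm
x_plus = tuple(xi + M * wi / norm for xi, wi in zip(x_minus, w))

# x_plus lands on the plane w·x + b = +1:
print(isclose(sum(wi * xi for wi, xi in zip(w, x_plus)) + b, 1.0))  # True
print(isclose(M, 0.4))                                              # True
```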

55 Linear SVM Mathematically
Goal:
1) Correctly classify all training data: wx_i + b ≥ 1 if y_i = +1 and wx_i + b ≤ -1 if y_i = -1; equivalently, y_i (wx_i + b) ≥ 1 for all i
2) Maximize the margin M = 2/||w||, which is the same as minimizing ½ w^T w
We can formulate this as a quadratic optimization problem and solve for w and b:
Minimize ½ w^T w subject to y_i (w^T x_i + b) ≥ 1 for all i

56 Solving the Optimization Problem
- We need to optimize a quadratic function subject to linear constraints:
  Find w and b such that Φ(w) = ½ w^T w is minimized, subject to y_i (w^T x_i + b) ≥ 1 for all {(x_i, y_i)}
- Quadratic optimization problems are a well-known class of mathematical programming problems, and many (rather intricate) algorithms exist for solving them
- The solution involves constructing a dual problem in which a Lagrange multiplier α_i is associated with every constraint in the primal problem:
  Find α_1 ... α_N such that Q(α) = Σ α_i - ½ ΣΣ α_i α_j y_i y_j x_i^T x_j is maximized, subject to (1) Σ α_i y_i = 0 and (2) α_i ≥ 0 for all i

57 A digression... Lagrange Multipliers
- In mathematical optimization, the method of Lagrange multipliers provides a strategy for finding the maxima and minima of a function subject to constraints
- For instance, consider the optimization problem: maximize f(x, y) subject to g(x, y) = c
- We introduce a new variable λ, called a Lagrange multiplier, and study the Lagrange function Λ(x, y, λ) = f(x, y) + λ (g(x, y) - c) (the λ term may be either added or subtracted)
- If (x, y) is a maximum for the original constrained problem, then there exists a λ such that (x, y, λ) is a stationary point for the Lagrange function (stationary points are those points where the partial derivatives of Λ are zero)
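A concrete instance of the stationarity condition above (the example is ours, not from the slides): maximize f(x, y) = x + y subject to g(x, y) = x² + y² = 1. The maximum is at x = y = 1/√2 with multiplier λ = −1/(2x), and all three partial derivatives of the Lagrange function vanish there, which we can verify with finite differences.

```python
# Check that (1/sqrt(2), 1/sqrt(2), -1/(2x)) is a stationary point of
# the Lagrange function for: maximize x + y subject to x^2 + y^2 = 1.
from math import sqrt

def Lagrange(x, y, lam):
    f = x + y                    # objective
    g = x * x + y * y            # constraint function; constraint is g = 1
    return f + lam * (g - 1)

x = y = 1 / sqrt(2)
lam = -1 / (2 * x)               # from dL/dx = 1 + 2*lam*x = 0

# Central-difference partial derivatives of the Lagrange function:
h = 1e-6
dx = (Lagrange(x + h, y, lam) - Lagrange(x - h, y, lam)) / (2 * h)
dy = (Lagrange(x, y + h, lam) - Lagrange(x, y - h, lam)) / (2 * h)
dl = (Lagrange(x, y, lam + h) - Lagrange(x, y, lam - h)) / (2 * h)
print(all(abs(d) < 1e-6 for d in (dx, dy, dl)))   # True: stationary point
```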

58 The Optimization Problem Solution
- The solution has the form:
  w = Σ α_i y_i x_i
  b = y_k - w^T x_k   for any x_k such that α_k ≠ 0
- Each non-zero α_i indicates that the corresponding x_i is a support vector
- The classifying function then has the form:
  f(x) = Σ α_i y_i x_i^T x + b
- Notice that it relies on an inner product between the test point x and the support vectors x_i
- Also keep in mind that solving the optimization problem involved computing the inner products x_i^T x_j between all pairs of training points

59 Dataset with noise
- Hard margin: so far we require all data points to be classified correctly - no training error
- What if the training set is noisy?
  - Solution 1: use very powerful kernels - but this leads to OVERFITTING!
[figure: +1 and -1 points with a contorted boundary that overfits the noise]

60 Soft Margin Classification
- Slack variables ξ_i can be added to allow misclassification of difficult or noisy examples
- What should our quadratic optimization criterion be? Minimize ½ w^T w + C Σ ξ_i
[figure: the planes wx + b = 1, 0, -1, with slack variables ξ_1, ξ_2, ξ_7 marking examples inside or beyond the margin]

61 Hard Margin vs. Soft Margin
- The old formulation:
  Find w and b such that Φ(w) = ½ w^T w is minimized, subject to y_i (w^T x_i + b) ≥ 1 for all {(x_i, y_i)}
- The new formulation, incorporating slack variables:
  Find w and b such that Φ(w) = ½ w^T w + C Σ ξ_i is minimized, subject to y_i (w^T x_i + b) ≥ 1 - ξ_i and ξ_i ≥ 0 for all i
- The parameter C can be viewed as a way to control overfitting

62 Linear SVMs: Overview
- The classifier is a separating hyperplane
- The most "important" training points are the support vectors; they define the hyperplane
- Quadratic optimization algorithms can identify which training points x_i are support vectors, i.e. have non-zero Lagrange multipliers α_i
- Both in the dual formulation of the problem and in the solution, training points appear only inside dot products:
  Find α_1 ... α_N such that Q(α) = Σ α_i - ½ ΣΣ α_i α_j y_i y_j x_i^T x_j is maximized, subject to (1) Σ α_i y_i = 0 and (2) 0 ≤ α_i ≤ C for all i
  f(x) = Σ α_i y_i x_i^T x + b

63 Non-linear SVMs
- Datasets that are linearly separable with some noise work out great
- But what are we going to do if the dataset is just too hard?
- How about mapping the data to a higher-dimensional space?
[figure: 1-D data along the x-axis that is not separable becomes separable after mapping x → (x, x^2)]

64 Non-linear SVMs: Feature spaces
- General idea: the original input space can always be mapped to some higher-dimensional feature space where the training set is separable:
  Φ: x → φ(x)

65 The "Kernel Trick"
- The linear classifier relies on the dot product between vectors: K(x_i, x_j) = x_i^T x_j
- If every data point is mapped into a high-dimensional space via some transformation Φ: x → φ(x), the dot product becomes K(x_i, x_j) = φ(x_i)^T φ(x_j)
- A kernel function is a function that corresponds to an inner product in some expanded feature space
- Example: for 2-dimensional vectors x = [x_1 x_2], let K(x_i, x_j) = (1 + x_i^T x_j)^2
- We need to show that K(x_i, x_j) = φ(x_i)^T φ(x_j):
  K(x_i, x_j) = (1 + x_i^T x_j)^2
              = 1 + x_i1^2 x_j1^2 + 2 x_i1 x_j1 x_i2 x_j2 + x_i2^2 x_j2^2 + 2 x_i1 x_j1 + 2 x_i2 x_j2
              = [1  x_i1^2  √2 x_i1 x_i2  x_i2^2  √2 x_i1  √2 x_i2]^T [1  x_j1^2  √2 x_j1 x_j2  x_j2^2  √2 x_j1  √2 x_j2]
              = φ(x_i)^T φ(x_j),  where φ(x) = [1  x_1^2  √2 x_1 x_2  x_2^2  √2 x_1  √2 x_2]
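The algebraic identity above is easy to spot-check numerically: evaluate K(x, y) = (1 + x·y)² directly, and compare it against the explicit inner product φ(x)·φ(y). The sample vectors are arbitrary.

```python
# Numeric check of the kernel trick identity K(x, y) = phi(x)·phi(y).
from math import sqrt, isclose

def K(x, y):
    return (1 + x[0] * y[0] + x[1] * y[1]) ** 2

def phi(x):
    x1, x2 = x
    return [1, x1 * x1, sqrt(2) * x1 * x2, x2 * x2, sqrt(2) * x1, sqrt(2) * x2]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

x, y = (0.3, -1.2), (2.0, 0.7)
print(isclose(K(x, y), dot(phi(x), phi(y))))   # True
```

Note the asymmetry in cost: K touches 2 dimensions, while the explicit φ works in 6; this gap is what makes the trick pay off in high dimensions.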

66 What Functions are Kernels?
- For some functions K(x_i, x_j), checking that K(x_i, x_j) = φ(x_i)^T φ(x_j) can be cumbersome
- Mercer's theorem: every positive semi-definite symmetric function is a kernel

67 Examples of Kernel Functions
- Linear: K(x_i, x_j) = x_i^T x_j
- Polynomial of power p: K(x_i, x_j) = (1 + x_i^T x_j)^p
- Gaussian (radial-basis function network): K(x_i, x_j) = exp(-||x_i - x_j||^2 / (2σ^2))
- Sigmoid: K(x_i, x_j) = tanh(β_0 x_i^T x_j + β_1)
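The four kernels above translate directly into code. The parameter defaults (p, σ, β_0, β_1) are illustrative choices, not values from the slides.

```python
# Direct implementations of the four kernel functions listed above.
from math import exp, tanh

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def linear(x, y):
    return dot(x, y)

def polynomial(x, y, p=2):
    return (1 + dot(x, y)) ** p

def gaussian(x, y, sigma=1.0):
    sq_dist = sum((xi - yi) ** 2 for xi, yi in zip(x, y))
    return exp(-sq_dist / (2 * sigma ** 2))

def sigmoid(x, y, beta0=1.0, beta1=0.0):
    return tanh(beta0 * dot(x, y) + beta1)

x = (1.0, 0.0)
print(linear(x, x), polynomial(x, x), gaussian(x, x))   # 1.0 4.0 1.0
```

Each of these can be dropped into the dual formulation on the next slide in place of x_i^T x_j.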

68 Non-linear SVMs Mathematically
- Dual problem formulation:
  Find α_1 ... α_N such that Q(α) = Σ α_i - ½ ΣΣ α_i α_j y_i y_j K(x_i, x_j) is maximized, subject to (1) Σ α_i y_i = 0 and (2) α_i ≥ 0 for all i
- The solution is:
  f(x) = Σ α_i y_i K(x_i, x) + b
- The optimization techniques for finding the α_i remain the same!

69 Non-linear SVM - Overview
- The SVM locates a separating hyperplane in the feature space and classifies points in that space
- It does not need to represent the feature space explicitly; it simply defines a kernel function
- The kernel function plays the role of the dot product in the feature space

70 Properties of SVM
- Flexibility in choosing a similarity function
- Sparseness of solution when dealing with large data sets
  - Only support vectors are used to specify the separating hyperplane
- Ability to handle large feature spaces
  - Complexity does not depend on the dimensionality of the feature space
- Overfitting can be controlled by the soft margin approach
- Nice math property:
  - A simple convex optimization problem which is guaranteed to converge to a single global solution
- Feature selection

71 SVM Applications
SVM has been used successfully in many real-world problems:
- text (and hypertext) categorization
- image classification - different types of sub-problems
- bioinformatics (protein classification, cancer classification)
- hand-written character recognition

72 Application 1: Cancer Classification
- High dimensional: p > 1000, n < 100
- Imbalanced: fewer positive samples
- Many irrelevant features
- Noisy
[table: expression matrix with genes g-1 ... g-p as columns and patients p-1 ... p-n as rows]
- FEATURE SELECTION: in the linear case, w_i^2 gives the ranking of dimension i
- SVM is sensitive to noisy (mis-labeled) data

73 Weakness of SVM
- It is sensitive to noise
  - A relatively small number of mislabeled examples can dramatically decrease the performance
- It only considers two classes
  - How to do multi-class classification with SVM?
  - Answer:
    1. With output arity m, learn m SVMs: SVM 1 learns "Output == 1" vs "Output != 1", SVM 2 learns "Output == 2" vs "Output != 2", ..., SVM m learns "Output == m" vs "Output != m"
    2. To predict the output for a new input, predict with each SVM and find the one that puts the prediction the furthest into the positive region
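The one-vs-rest scheme described above can be sketched as follows. In place of m trained SVMs, this uses hypothetical per-class linear scorers w_m·x + b_m with made-up weights; the prediction step (pick the class whose binary scorer pushes x furthest into the positive region) is the part the slide describes.

```python
# One-vs-rest prediction: choose the class with the largest binary score.
def ovr_predict(models, x):
    """models: {class_label: (w, b)}; returns the label whose scorer
    w·x + b is largest (i.e. furthest into the positive region)."""
    def score(w, b):
        return sum(wi * xi for wi, xi in zip(w, x)) + b
    return max(models, key=lambda m: score(*models[m]))

models = {                       # hypothetical "trained" binary models
    "A": ((1.0, 0.0), 0.0),      # scores high when x1 is large
    "B": ((0.0, 1.0), 0.0),      # scores high when x2 is large
    "C": ((-1.0, -1.0), 0.5),    # scores high when both coordinates are negative
}
print(ovr_predict(models, (2.0, 0.5)))    # A
print(ovr_predict(models, (-1.0, -1.0)))  # C
```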

74 Application: Text Categorization
- Task: the classification of natural text (or hypertext) documents into a fixed number of predefined categories based on their content
  - email filtering, web searching, sorting documents by topic, etc.
- A document can be assigned to more than one category, so this can be viewed as a series of binary classification problems, one for each category

75 Application: Face Expression Recognition
- Construct a feature space, by use of eigenvectors or other means
- A multiple-class problem: several expressions
- Use a multi-class SVM

76 Some Issues
- Choice of kernel:
  - A Gaussian or polynomial kernel is the default
  - If ineffective, more elaborate kernels are needed
- Choice of kernel parameters:
  - e.g. σ in the Gaussian kernel
  - σ is the distance between the closest points with different classifications
  - In the absence of reliable criteria, applications rely on the use of a validation set or cross-validation to set such parameters
- Optimization criterion - hard margin vs. soft margin:
  - A lengthy series of experiments in which various parameters are tested

77 Additional Resources
- LibSVM
- An excellent tutorial on VC-dimension and Support Vector Machines: C. J. C. Burges. A tutorial on support vector machines for pattern recognition. Data Mining and Knowledge Discovery, 2(2):955-974, 1998.
- The VC/SRM/SVM bible: Statistical Learning Theory by Vladimir Vapnik, Wiley-Interscience, 1998
- http://www.kernel-machines.org/

78 References
- Support Vector Machine Classification of Microarray Gene Expression Data. Michael P. S. Brown, William Noble Grundy, David Lin, Nello Cristianini, Charles Sugnet, Manuel Ares, Jr., David Haussler.
- www.cs.utexas.edu/users/mooney/cs391L/svm.ppt
- Text categorization with Support Vector Machines: learning with many relevant features. T. Joachims, ECML-98.

