
1 Machine Learning Reading: Chapter 18

2 Classification Learning
Input: a set of attributes and values
Output: a discrete-valued function
Learning a continuous-valued function is called regression
Binary (boolean) classification: the category is either true or false

3 Learning Decision Trees
Each node tests the value of an input attribute
Branches from the node correspond to possible values of the attribute
Leaf nodes supply the values to be returned if that leaf is reached
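To make this structure concrete, here is a minimal sketch of such a tree in Python (the type and function names are illustrative, not from the slides):

```python
from dataclasses import dataclass, field

@dataclass
class Leaf:
    value: str                  # class label returned if this leaf is reached

@dataclass
class Node:
    attribute: str              # the input attribute tested at this node
    branches: dict = field(default_factory=dict)   # attribute value -> subtree

def classify(tree, example):
    # Walk from the root, following the branch matching the tested attribute's value.
    while isinstance(tree, Node):
        tree = tree.branches[example[tree.attribute]]
    return tree.value
```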

4 Example: http://www.ics.uci.edu/~mlearn/MLSummary.html
Iris Plant Database: which of 3 classes is a given Iris plant?
  Iris Setosa
  Iris Versicolour
  Iris Virginica
Attributes:
  Sepal length in cm
  Sepal width in cm
  Petal length in cm
  Petal width in cm

5 Summary Statistics
               Min  Max  Mean  SD    Class Correlation
sepal length:  4.3  7.9  5.84  0.83   0.7826
sepal width:   2.0  4.4  3.05  0.43  -0.4194
petal length:  1.0  6.9  3.76  1.76   0.9490 (high!)
petal width:   0.1  2.5  1.20  0.76   0.9565 (high!)

Rules to learn:
If sepal length > 6 and sepal width > 3.8 and petal length < 2.5 and petal width < 1.5, then class = Iris Setosa
If sepal length > 5 and sepal width > 3 and petal length > 5.5 and petal width > 2, then class = Iris Versicolour
If sepal length ≥ 3 and petal length ≥ 2.5 and ≤ 5.5 and petal width ≥ 1.5 and ≤ 2, then class = Iris Virginica

6 Data
#  S-length  S-width  P-length  P-width  Class
1  6.8       3        6.3       2.3      Versicolour
2  7         3.9      2.4       1.1      Setosa
3  2         3        2.6       1.7      Verginica
4  3         3.4      2.5       1.5      Verginica
5  5.5       3.6      6.8       2.4      Versicolour
6  7.7       4.1      1.2       1.4      Setosa
7  6.3       4.3      1.6       1.2      Setosa
8  1         3.7      2.8       2        Verginica
9  6         4.2      5.6       2.1      Versicolour

7 Data
#  S-length  S-width  P-length  Class
1  6.8       3        6.3       Versicolour
2  7         3.9      2.4       Setosa
3  2         3        2.6       Verginica
4  3         3.4      2.5       Verginica
5  5.5       3.6      6.8       Versicolour
6  7.7       4.1      1.2       Setosa
7  6.3       4.3      1.6       Setosa
8  1         3.7      2.8       Verginica
9  6         4.2      5.6       Versicolour

8 Constructing the Decision Tree
Goal: find the smallest decision tree consistent with the examples
Find the attribute that best splits the examples
Form a tree with root = best attribute
For each value v_i (or range) of the best attribute:
  Select those examples with best = v_i
  Construct subtree_i by recursively calling decision tree with the subset of examples and all attributes except best
  Add a branch to the tree with label = v_i and subtree = subtree_i
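A minimal recursive sketch of this procedure, reusing the Leaf/Node types above (best_attribute is assumed to be the selection measure introduced on later slides):

```python
def build_tree(examples, attributes, best_attribute):
    # Examples are dicts mapping attribute names (plus "class") to values.
    classes = {e["class"] for e in examples}
    if len(classes) == 1:                    # all examples agree: return a leaf
        return Leaf(classes.pop())
    if not attributes:                       # no attributes left: majority class
        labels = [e["class"] for e in examples]
        return Leaf(max(set(labels), key=labels.count))
    best = best_attribute(examples, attributes)
    tree = Node(best)
    for v in {e[best] for e in examples}:    # one branch per observed value
        subset = [e for e in examples if e[best] == v]
        remaining = [a for a in attributes if a != best]
        tree.branches[v] = build_tree(subset, remaining, best_attribute)
    return tree
```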

9 Construct example decision tree

10 Tree and Rules Learned

S-length?
  < 5.5  -> {3, 4, 8}
  ≥ 5.5  -> P-length?
              ≥ 5.6 -> {1, 5, 9}
              ≤ 2.4 -> {2, 6, 7}

If S-length < 5.5, then Verginica
If S-length ≥ 5.5 and P-length ≥ 5.6, then Versicolour
If S-length ≥ 5.5 and P-length ≤ 2.4, then Setosa
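Expressed with the Leaf/Node sketch from earlier (the branch keys here are range labels, since these attributes are continuous; the classify sketch above assumes exact value matches, so real code would test thresholds instead):

```python
# The tree learned on slide 10, as a Node/Leaf structure.
learned = Node("S-length", {
    "< 5.5":  Leaf("Verginica"),
    ">= 5.5": Node("P-length", {
        ">= 5.6": Leaf("Versicolour"),
        "<= 2.4": Leaf("Setosa"),
    }),
})
```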

11 Comparison of Target and Learned

Learned:
If S-length < 5.5, then Verginica
If S-length ≥ 5.5 and P-length ≥ 5.6, then Versicolour
If S-length ≥ 5.5 and P-length ≤ 2.4, then Setosa

Target:
If sepal length > 6 and sepal width > 3.8 and petal length < 2.5 and petal width < 1.5, then class = Iris Setosa
If sepal length > 5 and sepal width > 3 and petal length > 5.5 and petal width > 2, then class = Iris Versicolour
If sepal length ≥ 3 and petal length ≥ 2.5 and ≤ 5.5 and petal width ≥ 1.5 and ≤ 2, then class = Iris Virginica

12 Text Classification
Is text_i a finance news article?
Positive / Negative

13 20 attributes (with counts)
Investors 2, Dow 2, Jones 2, Industrial 1, Average 3, Percent 5, Gain 6, Trading 8, Broader 5, stock 5, Indicators 6, Standard 2, Rolling 1, Nasdaq 3, Early 10, Rest 12, More 13, first 11, Same 12, The 30

14 20 attributes
Men’s Basketball Championship, UConn Huskies, Georgia Tech, Women, Playing, Crown, Titles, Games, Rebounds, All-America, early, rolling, Celebrates, Rest, More, First, The, same

15 Example
#   stock  rolling  the  class
1   0      3        40   other
2   6      8        35   finance
3   7      7        25   other
4   5      7        14   other
5   8      2        20   finance
6   9      4        25   finance
7   5      6        20   finance
8   0      2        35   other
9   0      11       25   finance
10  0      15       28   other

16 Constructing the Decision Tree (recap)
Goal: find the smallest decision tree consistent with the examples
Find the attribute that best splits the examples
Form a tree with root = best attribute
For each value v_i (or range) of the best attribute:
  Select those examples with best = v_i
  Construct subtree_i by recursively calling decision tree with the subset of examples and all attributes except best
  Add a branch to the tree with label = v_i and subtree = subtree_i
(See the sketch after slide 8.)

17 Choosing the Best Attribute: Binary Classification
Want a formal measure that returns a maximum value when an attribute makes a perfect split and a minimum when it makes no distinction
Information theory (Shannon and Weaver, 1949):
H(P(v_1), ..., P(v_n)) = Σ_{i=1}^{n} -P(v_i) log2 P(v_i)
H(1/2, 1/2) = -1/2 log2(1/2) - 1/2 log2(1/2) = 1 bit
H(1/100, 99/100) = -0.01 log2(0.01) - 0.99 log2(0.99) ≈ 0.08 bits
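The definition translates directly into code; a small sketch:

```python
import math

def entropy(probabilities):
    # H(p1, ..., pn) = sum over i of -p_i * log2(p_i); zero terms contribute nothing.
    return sum(-p * math.log2(p) for p in probabilities if p > 0)

print(entropy([0.5, 0.5]))     # 1.0 bit
print(entropy([0.01, 0.99]))   # ~0.081 bits
```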

18 Information Based on Attributes
Remainder(A) = Σ_{i=1}^{v} (p_i + n_i)/(p + n) · H(p_i/(p_i + n_i), n_i/(p_i + n_i))
Here p = n = 5 and p + n = 10, so H(1/2, 1/2) = 1 bit

19 Information Gain
Information gain (from the attribute test) = the difference between the original information requirement and the new requirement:
Gain(A) = H(p/(p+n), n/(p+n)) - Remainder(A)
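A sketch of Remainder and Gain for binary classification, building on the entropy function above (p and n are the counts of positive and negative examples; splits holds the per-value counts):

```python
def remainder(splits):
    # splits: list of (p_i, n_i) pairs, one per value of attribute A.
    total = sum(p_i + n_i for p_i, n_i in splits)
    return sum((p_i + n_i) / total
               * entropy([p_i / (p_i + n_i), n_i / (p_i + n_i)])
               for p_i, n_i in splits)

def gain(p, n, splits):
    # Gain(A) = H(p/(p+n), n/(p+n)) - Remainder(A)
    return entropy([p / (p + n), n / (p + n)]) - remainder(splits)
```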

20 Example
#   stock  rolling  the  class
1   0      3        40   other
2   6      8        35   finance
3   7      7        25   other
4   5      7        14   other
5   8      2        20   finance
6   9      4        25   finance
7   5      6        20   finance
8   0      2        35   other
9   0      11       25   finance
10  0      15       28   other

21
Candidate splits:
  stock:   < 5 -> {1, 8, 9, 10};   ≥ 5 -> {2, 3, 4, 5, 6, 7}
  rolling: < 5 -> {1, 5, 6, 8};   5-10 -> {2, 3, 4, 7};   ≥ 10 -> {9, 10}

Gain(stock) = 1 - [4/10 · H(1/4, 3/4) + 6/10 · H(4/6, 2/6)]
            = 1 - [0.4 · 0.811 + 0.6 · 0.918] ≈ 0.125
Gain(rolling) = 1 - [4/10 · H(1/2, 1/2) + 4/10 · H(1/2, 1/2) + 2/10 · H(1/2, 1/2)] = 0
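Running the gain sketch from slide 19 on the counts read off this split reproduces the numbers:

```python
# stock:  <5 -> 1 finance, 3 other;  >=5 -> 4 finance, 2 other
print(gain(5, 5, [(1, 3), (4, 2)]))          # ~0.125
# rolling: each of the three subsets is half finance, half other
print(gain(5, 5, [(2, 2), (2, 2), (1, 1)]))  # 0.0
```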

22 Other Cases
What if the class is discrete-valued, not binary?
What if an attribute has many values (e.g., one per instance)?

23 Training vs. Testing
A learning algorithm is good if it uses its learned hypothesis to make accurate predictions on unseen data:
  Collect a large set of examples (with classifications)
  Divide it into two disjoint sets: the training set and the test set
  Apply the learning algorithm to the training set, generating hypothesis h
  Measure the percentage of examples in the test set that are correctly classified by h
  Repeat for different sizes of training sets and different randomly selected training sets of each size
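A minimal sketch of this train/test methodology (learn would be something like the build_tree sketch above; the data layout and 70/30 split are my assumptions):

```python
import random

def evaluate(examples, learn, test_fraction=0.3, seed=0):
    # Divide into two disjoint sets, learn h on one, measure accuracy on the other.
    shuffled = examples[:]
    random.Random(seed).shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_fraction))
    train, test = shuffled[:cut], shuffled[cut:]
    h = learn(train)
    correct = sum(classify(h, e) == e["class"] for e in test)
    return correct / len(test)
```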

24 (figure-only slide; no text was extracted)

25 Division into 3 Sets
Avoids inadvertent peeking at the test set when parameters must be learned (e.g., how to split values):
  Generate different hypotheses for different parameter values on the training data
  Choosing the values that perform best on the testing data would peek, so choose them on a separate validation set instead
Why do we need to do this when selecting the best attributes?
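One way to implement the three-way division (a hedged sketch; the 60/20/20 proportions and parameter interface are my choices, not the slides'):

```python
def select_parameters(examples, learn_with, candidate_params):
    # Train on the training set, pick parameter values on the validation set,
    # and touch the held-out test set only once at the very end.
    n = len(examples)
    train = examples[:int(0.6 * n)]
    validation = examples[int(0.6 * n):int(0.8 * n)]
    test = examples[int(0.8 * n):]

    def accuracy(h, data):
        return sum(classify(h, e) == e["class"] for e in data) / len(data)

    best = max(candidate_params,
               key=lambda p: accuracy(learn_with(train, p), validation))
    return best, accuracy(learn_with(train, best), test)
```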

26 Overfitting
Learning algorithms may use irrelevant attributes to make decisions (for news: the day published and the newspaper)
Decision tree pruning:
  Prune away attributes with low information gain
  Use statistical significance to test whether the gain is meaningful
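The statistical test mentioned here is usually the chi-squared pruning test from the assigned reading (AIMA, Chapter 18); a sketch, assuming binary classes and scipy:

```python
from scipy.stats import chi2

def split_is_significant(p, n, splits, alpha=0.05):
    # Compare observed per-branch counts to the counts expected if the
    # attribute were irrelevant; a large deviation means the gain is meaningful.
    deviation = 0.0
    for p_i, n_i in splits:
        expected_p = p * (p_i + n_i) / (p + n)
        expected_n = n * (p_i + n_i) / (p + n)
        deviation += (p_i - expected_p) ** 2 / expected_p
        deviation += (n_i - expected_n) ** 2 / expected_n
    return deviation > chi2.ppf(1 - alpha, df=len(splits) - 1)
```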

27 K-fold Cross-Validation
To reduce overfitting:
  Run k experiments
  Use a different 1/k of the data for testing each time
  Average the results
Common choices: 5-fold, 10-fold, leave-one-out
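A sketch of k-fold cross-validation over the same example interface used above:

```python
import random

def k_fold_accuracy(examples, learn, k=10, seed=0):
    # Each fold serves as the test set exactly once; results are averaged.
    data = examples[:]
    random.Random(seed).shuffle(data)
    folds = [data[i::k] for i in range(k)]
    scores = []
    for i, test in enumerate(folds):
        train = [e for j, fold in enumerate(folds) if j != i for e in fold]
        h = learn(train)
        scores.append(sum(classify(h, e) == e["class"] for e in test) / len(test))
    return sum(scores) / k
```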

28 Ensemble Learning
Learn a collection of hypotheses and combine their predictions, e.g., by majority voting
This enlarges the hypothesis space
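Majority voting is a one-liner on top of the earlier classify sketch:

```python
from collections import Counter

def ensemble_classify(hypotheses, example):
    # Each hypothesis votes; the most common predicted class wins.
    votes = Counter(classify(h, example) for h in hypotheses)
    return votes.most_common(1)[0][0]
```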

29 Boosting
Uses a weighted training set: each example has an associated weight w_j ≥ 0, and higher-weighted examples have higher importance
Initially w_j = 1 for all examples; generate hypothesis h_1
Next round: increase the weights of misclassified examples and decrease the others; from the new weighted set, generate hypothesis h_2
Continue until M hypotheses have been generated
Final ensemble hypothesis = weighted-majority combination of all M hypotheses, each weighted according to how well it did on the training data
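A sketch of this weighting scheme in the AdaBoost style of the next slide (weak_learn(examples, weights) returning a callable hypothesis is an assumed interface):

```python
import math

def boost(examples, weak_learn, M):
    n = len(examples)
    w = [1.0 / n] * n                            # uniform initial weights
    hypotheses, hyp_weights = [], []
    for _ in range(M):
        h = weak_learn(examples, w)
        error = sum(wi for wi, e in zip(w, examples) if h(e) != e["class"])
        if error >= 0.5:                         # no better than random: stop
            break
        error = max(error, 1e-10)
        for i, e in enumerate(examples):         # downweight correct examples
            if h(e) == e["class"]:
                w[i] *= error / (1 - error)
        total = sum(w)
        w = [wi / total for wi in w]             # renormalize
        hypotheses.append(h)
        hyp_weights.append(math.log((1 - error) / error))

    def weighted_majority(example):
        # Weighted-majority combination of the M hypotheses.
        votes = {}
        for h, hw in zip(hypotheses, hyp_weights):
            votes[h(example)] = votes.get(h(example), 0.0) + hw
        return max(votes, key=votes.get)
    return weighted_majority
```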

30 AdaBoost
If the input learning algorithm L is a weak learning algorithm (L always returns a hypothesis whose weighted error on the training set is slightly better than random), then AdaBoost returns a hypothesis that classifies the training data perfectly for large enough M
That is, boosting boosts the accuracy of the original learning algorithm on the training data

31 Issues
Representation: how to map from a representation in the domain to a representation used for learning?
Training data: how can training data be acquired?
Amount of training data: how well does the algorithm do as we vary the amount of data?
Which attributes influence learning most?
Does the learning algorithm provide insight into the generalizations made?

