
Decision Trees A “divisive” method (splits)




1

2 Decision Trees: A “divisive” method (splits)
Start with a “root node” (all observations in one group), then derive splitting rules. The response is often binary, and the result is a “tree”. Examples: Loan Defaults, the Framingham Heart Study, Automobile Accidents.

3 Recursive Splitting
Diagram: the predictor space is split on X1 = Debt To Income Ratio and X2 = Age into rectangles labeled “Default” or “No default”, with estimated probabilities Pr{default} = 0.0001, 0.003, 0.008, and 0.012 in the different regions.

4 Some Actual Data: Framingham Heart Study
Target: first-stage coronary heart disease. P{CHD} = function of: Age, Cholesterol (no drugs yet!), Systolic BP. Import the data.

5 Example of a “tree” Using Default Settings

6 Example of a “tree”  Using Custom Settings
(Gini splits, N=4 leaves)

7 How to make splits? Contingency tables
Candidate BP cut point: 180? 240? Two hypothetical 2x2 tables:

  DEPENDENT (effect)          Heart Disease
                              No      Yes     Total
    Low BP                    95       5      100
    High BP                   55      45      100
    Total                    150      50      200

  INDEPENDENT (no effect)     Heart Disease
                              No      Yes     Total
    Low BP                    75      25      100
    High BP                   75      25      100
    Total                    150      50      200

8 How to make splits? Contingency tables
Observed (Expected) counts, expecting 75/25 in each row if BP has no effect:

                 Heart Disease
                 No          Yes
  Low BP         95 (75)      5 (25)
  High BP        55 (75)     45 (25)

Every cell has (Observed - Expected)² = 20² = 400, so χ² = Σ (Observed - Expected)²/Expected = 2(400/75) + 2(400/25) = 42.67. Compare to χ² tables – Significant! (Why do we say “Significant”???) DEPENDENT (effect).
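The same χ² arithmetic can be reproduced with PROC FREQ on the cell counts; a minimal sketch (the data set and variable names are made up for illustration):

  data bp_chd;
    input bp $ chd $ count;
    datalines;
  Low  No  95
  Low  Yes  5
  High No  55
  High Yes 45
  ;
  run;

  proc freq data=bp_chd;
    weight count;            /* the data are cell counts, not raw observations */
    tables bp*chd / chisq;   /* prints the 2x2 table and the Pearson chi-square (42.67, 1 df) */
  run;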

9 Beyond reasonable doubt
Courtroom analogy – H0: Innocence, H1: Guilt, standard of proof: beyond reasonable doubt. Statistics – H0: No association, H1: BP and heart disease are associated, standard: P < 0.05. For the observed table (95, 5 vs. 55, 45) the p-value is far below 0.05. Framingham conclusion: sufficient evidence against the (null) hypothesis of no relationship.

10 Demo 1: Chi-Square for Titanic Survival (SAS)
The Titanic counts (Alive vs. Dead for Crew, 3rd Class, 2nd Class, 1st Class) are examined for all possible splits; for each split a “logworth” = -log10(χ² p-value) is computed. Candidate splits: Crew vs. other; Crew & 3rd versus 1st & 2nd; First versus other. (The χ², p-value, and logworth columns are not reproduced in this transcript.)
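A sketch of computing a logworth for one candidate split outside Enterprise Miner, assuming a small data set of counts (the numbers below are illustrative, not the actual Titanic figures):

  data split1;                       /* candidate split: Crew vs. other */
    input grp $ status $ count;
    datalines;
  Crew  Alive 210
  Crew  Dead  670
  Other Alive 500
  Other Dead  820
  ;
  run;

  ods output ChiSq=chi;              /* capture PROC FREQ's chi-square table */
  proc freq data=split1;
    weight count;
    tables grp*status / chisq;
  run;

  data logworth;
    set chi;
    where Statistic = 'Chi-Square';
    logworth = -log10(Prob);         /* "logworth" = -log10(p-value) */
  run;

  proc print data=logworth; var Statistic Value Prob logworth; run;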

11 Demo 2: Titanic (EM)
Open Enterprise Miner, File → New → Project. Specify workspace; Startup Code – specify the data library. New Data Source → Titanic. Data has class (1, 2, 3, crew) & MWC. Bring in the data, Decision Tree (defaults).

12 3 possible splits for M, W, C
10 possible splits for class (is that fair?)

13 Demo 3: Framingham (EM)
Open Enterprise Miner, File → New → Project. Specify workspace; Startup Code – specify the data library. New Data Source → Framingham. Firstchd = target, reject obs. Explore Firstchd (pie) vs. Age (histogram). Bring in the data, Decision Tree (defaults).

14 Other Split Criteria: Gini Diversity Index, Shannon Entropy
(1) {A A A A B A B B C B}: pick 2 with replacement, Pr{different} = 1 - Pr{AA} - Pr{BB} - Pr{CC} = 1 - 0.5² - 0.4² - 0.1² = 0.58.
(2) {A A B C B A A B C C}: 1 - 0.4² - 0.3² - 0.3² = 0.66, so group (2) is more diverse (less pure).
Shannon Entropy = -Σi pi log2(pi); larger means more diverse (less pure). For the same proportions, {0.5, 0.4, 0.1} gives 1.36 and {0.4, 0.3, 0.3} gives 1.57 (more diverse).
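A minimal DATA step sketch computing both impurity measures for the two groups above:

  data impurity;
    input group $ pA pB pC;
    gini    = 1 - (pA**2 + pB**2 + pC**2);                 /* Pr{two draws differ}, with replacement */
    entropy = -( pA*log2(pA) + pB*log2(pB) + pC*log2(pC) );
    datalines;
  g1 0.5 0.4 0.1
  g2 0.4 0.3 0.3
  ;
  run;

  proc print data=impurity; run;   /* gini 0.58, 0.66; entropy 1.36, 1.57 */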

15 How to make splits? Which variable to use? Where to split?
Cholesterol > ____? Systolic BP > ____? Idea: pick the BP cutoff to minimize the p-value for χ². But the split point is data-derived! What does “significance” mean now?

16 Multiple testing
α = Pr{falsely reject hypothesis 1}, α = Pr{falsely reject hypothesis 2}, so Pr{falsely reject one or the other} < 2α. Desired: probability 0.05 or less. Solution: compare 2(p-value) to 0.05.

17 Validation Traditional stats – small dataset, need all observations to estimate parameters of interest. Data mining – loads of data, can afford a “holdout sample”. Variation: n-fold cross validation. Randomly divide the data into n sets; estimate on n-1, validate on 1; repeat n times, using each set as the holdout. The Titanic and Framingham examples did not use a holdout.
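A minimal sketch of creating a 75/25 holdout split in SAS (the input data set name is hypothetical):

  data train valid;
    set framingham;                              /* hypothetical input data set */
    if ranuni(27513) < 0.75 then output train;   /* ~75% to training ("fit") data */
    else output valid;                           /* ~25% held out for validation */
  run;
  /* for n-fold cross validation, instead assign fold = ceil(n*ranuni(seed))
     and hold out one fold at a time */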

18 Pruning Grow bushy tree on the “fit data”
Classify the validation (holdout) data. The farthest-out branches likely do not improve, and may even hurt, the fit on the validation data. Prune non-helpful branches. What is “helpful”? What is a good discriminator criterion?

19 Goals Split (or keep a split) if the diversity in the parent “node” exceeds the summed diversities in the child nodes. Prune to optimize estimates, decisions, or rankings in the validation data.

20 Assessment for: Decisions – minimize incorrect decisions (model versus realized). Estimates – error mean square (average error). Ranking – C (concordance) statistic = proportion concordant + ½ (proportion tied). Example with 5 observations (actual 0/1 responses and probabilities from the model): concordant pairs (1,3), (1,4), (2,4); discordant pairs (3,5), (4,5); tied pair (2,3). There are 6 ways to get a pair with two different responses, so C = 3/6 + (1/2)(1/6) = 7/12 = 0.5833.
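A brute-force check of C, using hypothetical probabilities chosen only to reproduce the concordant/discordant/tied pattern above (the slide's actual model probabilities are not shown):

  data scores;
    input id prob actual;
    datalines;
  1 0.20 0
  2 0.50 0
  3 0.50 1
  4 0.60 1
  5 0.80 0
  ;
  run;

  /* form every (actual=1, actual=0) pair and classify it */
  data pairs;
    set scores(rename=(id=id1 prob=p1 actual=y1));
    do j = 1 to nobs;
      set scores(rename=(id=id2 prob=p2 actual=y2)) point=j nobs=nobs;
      if y1 = 1 and y2 = 0 then do;
        concordant = (p1 > p2);
        tied       = (p1 = p2);
        output;
      end;
    end;
  run;

  proc sql;
    select mean(concordant) + 0.5*mean(tied) as C   /* 3/6 + (1/2)(1/6) = 0.5833 */
    from pairs;
  quit;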

21 Accounting for Costs Pardon me (sir, ma’am) can you spare some change?
Say “sir” to male +$2.00 Say “ma’am” to female +$5.00 Say “sir” to female -$1.00 (balm for slapped face) Say “ma’am” to male -$10.00 (nose splint)

22 Including Probabilities
A leaf has Pr(M) = 0.7, Pr(F) = 0.3. You say “Sir” or “Ma’am”. Expected profit is 2(0.7) - 1(0.3) = $1.10 if I say “Sir” and 5(0.3) - 10(0.7) = -$5.50 (a loss) if I say “Ma’am”. Weight leaf profits by leaf size (# of observations) and sum. Prune (and split) to maximize profits.

  True gender       M           F          Expected
  Say “Sir”      0.7 (+2)    0.3 (-1)      +$1.10
  Say “Ma’am”    0.7 (-10)   0.3 (+5)      -$5.50
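The same arithmetic as a tiny DATA step sketch:

  data leaf_profit;
    prM = 0.7;  prF = 0.3;                        /* leaf probabilities from the slide */
    e_sir  = 2*prM + (-1)*prF;                    /*  $1.10 */
    e_maam = (-10)*prM + 5*prF;                   /* -$5.50 */
    decision = ifc(e_sir > e_maam, 'Sir', 'Maam');
    put e_sir= e_maam= decision=;
  run;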

23 Regression Trees
Continuous response Y. The predicted response Pi is constant within each region i = 1, …, 5. Diagram: the (X1, X2) plane is split into five rectangles with predictions 80, 50, 130, 20, and 100.

24 Regression Trees
Predict Pi in cell i; Yij = jth response in cell i. Split to minimize Σi Σj (Yij - Pi)².
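A sketch of scoring one candidate split by that criterion: compute the within-cell sum of squares for each child and add them (the data set, variable names, and cut point are hypothetical):

  data cells;
    set mydata;                    /* hypothetical data: response y, input x1 */
    cell = (x1 <= 10);             /* candidate cut point: 1 = left child, 0 = right child */
  run;

  proc means data=cells noprint;
    class cell;
    var y;
    output out=fit css=cell_css;   /* CSS = sum of (y - cell mean)**2 within each cell */
  run;

  proc sql;                        /* criterion for this split = sum over the two children */
    select sum(cell_css) as split_sse
    from fit
    where _type_ = 1;              /* per-cell rows only (exclude the overall row) */
  quit;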

25 Real data example: Traffic accidents in Portugal*
Y = injury-induced “cost to society”. *Tree developed by Guilhermina Torrao (used with permission), NCSU Institute for Transportation Research & Education. Help – I ran into a “tree”!

26 Demo 4: Chemicals (EM)
Hypothetical potency versus chemical content (proportions of A, B, C, D). New Data Source → Chemicals; {Sum, Dead} rejected, Potency → Target. Partition node: training 75%, validation 25%. Connect and run the Tree node, look at results. Tree node Results: View → Model → Variable Importance.

27 Demo 4 (cont.)
Variable importance table (Variable Name, Number of Splitting Rules, Importance, Validation Importance, Ratio of Validation to Training Importance): only the A and B amounts contribute to splits; C and D do not (their ratios are NaN). B splits reduce variation more; A reduces it by about 2/3 as much as B. Properties panel → Exported data → Explore → 3D graph, X=A, Y=B, Z=P_Potency.

28 Tree diagram: the splits use only B and A.

29

30 Logistic Regression
Logistic regression – another classifier, an older “tried & true” method. It predicts the probability of response from input variables (“features”). Linear regression gives an infinite range of predictions, but 0 < probability < 1, so plain linear regression won’t do.

31 Logistic Regression: three logistic curves of p = e^(a+bX) / (1 + e^(a+bX)) plotted against X.

32 Example: Seat Fabric Ignition
Flame exposure time = X; Y = 1 if ignited, Y = 0 if it did not ignite. Y = 0 for X = 3, 5, …; Y = 1 for X = …, 15, …, 25, 30 (some X values not shown in the transcript). Likelihood: Q = (1-p1)(1-p2)(1-p3)(1-p4) p5 p6 (1-p7) p8 p9 (1-p10) p11 p12 p13, with the p’s all different: pi = f(a + bXi) = e^(a+bXi) / (1 + e^(a+bXi)). Find a, b to maximize Q(a, b).

33 Logistic idea
Given temperature X, compute L(X) = a + bX, then p = e^L / (1 + e^L); that is, p(i) = e^(a+bXi) / (1 + e^(a+bXi)). Write p(i) if observation i responded and 1 - p(i) if not, multiply all n of these together, and find a, b to maximize this “likelihood”. The estimates maximizing Q(a, b) here are a = -2.6 and b = 0.23, so the estimated L = -2.6 + 0.23X.
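A minimal PROC LOGISTIC sketch of the same maximum-likelihood idea; the exposure times below are hypothetical placeholders (only the response pattern follows the Q shown on the previous slide):

  data ignite;
    input time ignited @@;         /* X = flame exposure time, Y = 1 if ignited */
    datalines;
  3 0  5 0  7 0  9 0  11 1  12 1  13 0  15 1  16 1  18 0  22 1  25 1  30 1
  ;
  run;

  proc logistic data=ignite;
    model ignited(event='1') = time;    /* finds a, b maximizing the likelihood Q(a,b) */
  run;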

34 Demo 5: Space Shuttle Challenger

35 Example: Shuttle Missions
O-rings failed in the Challenger disaster. Prior flights had “erosion” and “blowby” in the O-rings (6 per mission). Feature: temperature at liftoff. Target: 1 = erosion or blowby vs. 0 = no problem. Fitted model: L = ___ – ___ (temperature), p = e^L / (1 + e^L) (the estimated coefficients are not reproduced here; the temperature coefficient is negative).

36

37 Neural Networks
Very flexible functions with “hidden layers” – a “Multilayer Perceptron”. Inputs (X1, X2, X3, X4) feed hidden nodes H1 and H2, and the output is Pr{1} in (0,1): a logistic function of logistic functions** of the data. (**Note: hyperbolic tangent functions are just reparameterized logistic functions.)

38 Example: Y = a + b1 H1 + b2 H2 + b3 H3
Diagram: X feeds hidden nodes H1, H2, H3; “bias” a and “weights” b1, b2, b3 combine them into Y. The arrows on the right represent linear combinations of “basis functions” ranging from -1 to 1, e.g. hyperbolic tangents (reparameterized logistic curves).
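A tiny forward-pass sketch of this architecture with made-up biases and weights (all numbers hypothetical, just to show the functional form):

  data nnet_surface;
    a = 0.1;  b1 = 1.2;  b2 = -0.8;  b3 = 0.5;   /* hypothetical output bias and weights */
    do x = -3 to 3 by 0.1;
       h1 = tanh(-1.0 + 2.0*x);                  /* hidden units: tanh basis functions of X */
       h2 = tanh( 0.5 - 1.5*x);
       h3 = tanh( 0.2 + 0.7*x);
       y  = a + b1*h1 + b2*h2 + b3*h3;           /* output: Y = a + b1 H1 + b2 H2 + b3 H3 */
       output;
    end;
  run;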

39 A Complex Neural Network Surface
Diagram: a network with inputs X1, X2 and output P; the numbers in parentheses (-10, -1, -13, 20) are “biases” and the others (-0.4, 3, 0.8, -1, 0.25, -0.9, 2.5, 0.01) are weights.

40 Demo 6: EM part
The Model Comparison node picks the Neural Net. From results: View → Score Code. A SAS program uses the score code (copy & paste).

41 Demo 6 (EM & SAS): Framingham Neural Network
Note: no validation data used – the surfaces are unnecessarily complex. “Slice” of the 4-D surface at age 25, nonsmoker and smoker panels (axes Cholesterol and SBP; predicted probabilities range from about .001 to .057). Code for the graphs is in NNFramingham.sas.

42 Framingham Neural Network Surfaces
“Slice” at age 55 – are these patterns real? Nonsmoker and smoker panels (axes Cholesterol and SBP; predicted probabilities up to about .26).

43 Handling Neural Net Complexity
(1) Use validation data: stop iterating when the fit gets worse on the validation data. Problems for Framingham (50-50 split): the fit gets worse at step 1 (no effect of inputs), or the algorithm fails to converge (likely reason: too many parameters). (2) Use regression to omit predictor variables (“features”) and their parameters; this gives the previous graphs. (3) Do both (1) and (2). Run Demo 6A in SAS to see the results.

44 Cumulative Lift Chart
Go from the leaf with the highest predicted response to the lowest. Lift = proportion responding in the first p% ÷ overall population response rate. In the chart the lift starts near 3.3 at the left (predicted response high) and falls to 1 at the right (predicted response low).

45 Receiver Operating Characteristic Curve
Cut point 1: select the most likely p1% according to the model (or any model predictor, ordered from most to least likely to respond); call these 1, the rest 0 (call almost everything 0). Y = proportion of all 1’s correctly identified (Y near 0). X = proportion of all 0’s incorrectly called 1’s (X near 0). (Diagram: distributions of the logits of the 1’s in red and of the 0’s in black, with the cut point drawn on them.)

46 Receiver Operating Characteristic Curve
Cut point 2: select the most likely p2% according to the model; call these 1, the rest 0. Y = proportion of all 1’s correctly identified. X = proportion of all 0’s incorrectly called 1’s.

47 Receiver Operating Characteristic Curve
Cut point 3: select the most likely p3% according to the model; call these 1, the rest 0. Y = proportion of all 1’s correctly identified. X = proportion of all 0’s incorrectly called 1’s.

48 Receiver Operating Characteristic Curve
Cut point 3.5: select the most likely 50% according to the model; call these 1, the rest 0. Y = proportion of all 1’s correctly identified. X = proportion of all 0’s incorrectly called 1’s.

49 Receiver Operating Characteristic Curve
Cut point 4: select the most likely p5% according to the model; call these 1, the rest 0. Y = proportion of all 1’s correctly identified. X = proportion of all 0’s incorrectly called 1’s.

50 Receiver Operating Characteristic Curve
Cut point 5: select the most likely p6% according to the model; call these 1, the rest 0. Y = proportion of all 1’s correctly identified. X = proportion of all 0’s incorrectly called 1’s.

51 Receiver Operating Characteristic Curve
Cut point 6: select the most likely p7% according to the model; call these 1, the rest 0 (call almost everything 1). Y = proportion of all 1’s correctly identified (Y near 1). X = proportion of all 0’s incorrectly called 1’s (X near 1).

52 Why is the area under the curve the same as the proportion of concordant pairs + (proportion of ties)/2?
Tree with Leaf 1: 20 0’s, 60 1’s; Leaf 2: 20 0’s, 20 1’s; Leaf 3: 60 0’s, 20 1’s. At a cut point, everything to the left is called 1 and everything to the right is called 0. X = number of 0’s incorrectly called 1 = number of 0’s to the left of the cut point; Y = number of 1’s correctly called 1 = number of 1’s to the left of the cut point; (X, Y) is a point on the count-scale ROC curve. Cut 0: everything is called 0, so X = 0 (no points called 1, none wrong) and Y = 0 (no 1 decisions): point (0, 0). Cut 1 (after Leaf 1): point (20, 60); Leaf 1 contributes (20)(60) = 1200 tied pairs, and its 60 1’s are concordant with the 20 + 60 = 80 0’s in Leaves 2 and 3, i.e. 60(20+60) = 4800 concordant pairs. In the plot, the area under this part of the curve is the tied pairs of Leaf 1 (the ROC line splits that rectangle in two) plus the concordant pairs with the 0’s in Leaves 2 and 3.

53 Why is the area under the curve the same as the number of concordant pairs + (number of ties)/2? (continued)
Cut point 2: 20 more 1’s are to the left and 60 0’s remain to the right; Leaf 2 contributes 20×20 = 400 tied pairs; point on the count-scale ROC curve: (40, 80). Cut point 3: call everything a 1; all 100 1’s are correctly called and all 100 0’s are incorrectly called: point (100, 100). The ROC curve is in terms of proportions, so the joined points are (0,0), (.2,.6), (.4,.8), and (1,1). For logistic regression or neural nets, each predicted value is a potential cut point for the ROC curve.
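The claim is easy to check numerically for this three-leaf tree; a minimal sketch using only the leaf counts from the slide:

  data _null_;
    ones_1 = 60; zeros_1 = 20;   /* leaf 1 */
    ones_2 = 20; zeros_2 = 20;   /* leaf 2 */
    ones_3 = 20; zeros_3 = 60;   /* leaf 3 */

    pairs      = (ones_1+ones_2+ones_3) * (zeros_1+zeros_2+zeros_3);      /* 10000 */
    concordant = ones_1*(zeros_2+zeros_3) + ones_2*zeros_3;               /*  6000 */
    tied       = ones_1*zeros_1 + ones_2*zeros_2 + ones_3*zeros_3;        /*  2800 */
    c_stat     = (concordant + 0.5*tied) / pairs;                         /*  0.74 */

    /* trapezoidal area under the ROC points (0,0), (.2,.6), (.4,.8), (1,1) */
    area = 0.5*(0 + .6)*(.2-0) + 0.5*(.6+.8)*(.4-.2) + 0.5*(.8+1)*(1-.4); /*  0.74 */

    put c_stat= area=;
  run;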

54 Misclassification Rate, Avg. Squared Error, Area Under ROC
Framingham ROC curves for 4 models plus the baseline 45-degree line; the neural net (highlighted) has area 0.780. Sensitivity: Pr{calling a 1 a 1} = Y coordinate. Specificity: Pr{calling a 0 a 0} = 1 - X. Model Comparison results, area under ROC: Neural Network 0.780 (selected), Decision Tree 0.720, Gini Tree N=4 0.675, Regression 0.734 (the misclassification-rate and average-squared-error columns are not reproduced).

55 A Combined Example: Cell Phone Texting Locations
Black circle: phone moved > 50 feet in the first two minutes of texting. Green dot: phone moved < 50 feet.

56 Three Models: Tree, Neural Net, Logistic Regression
Training data lift charts, validation data lift charts, and the resulting surfaces.

57 Demo 7 (EM): Three Breast Cancer Models
Features are computed from a digitized image of a fine needle aspirate (FNA) of a breast mass. They describe characteristics of the cell nuclei present in the image.
1. Sample code number (id number)
2. Clump Thickness
3. Uniformity of Cell Size
4. Uniformity of Cell Shape
5. Marginal Adhesion
6. Single Epithelial Cell Size
7. Bare Nuclei
8. Bland Chromatin
9. Normal Nucleoli
10. Mitoses
11. Class (2 for benign, 4 for malignant)

58 Use Decision Tree to Pick Variables

59 Decision Tree Needs Only BARE & SIZE

60 Can save scoring code from EM

  data A;
    do bare = 0 to 10;
      do size = 0 to 10;
        /* (score code from EM pasted here) */
        output;
      end;
    end;
  run;

Model Comparison node results (cancer screening), area under ROC (train / validation): Decision Tree 0.975 / 0.933 (selected), Neural Network 0.994 / 0.985, Regression 0.992 / 0.987 (the misclassification-rate and average-squared-error columns are not reproduced).

61 Demo 7 (SAS): Three Breast Cancer Models

62 Association Analysis is just elementary probability with new names
A: Purchase Milk, B: Purchase Cereal. Venn diagram: Pr{A} = 0.5, Pr{B} = 0.3, Pr{A and B} = 0.2, so A only = 0.3, B only = 0.1, neither = 0.4; total = 1.0.

63 Cereal => Milk
Rule B => A: “people who buy B will buy A”. Support = Pr{A and B} = 0.2. Confidence = Pr{A|B} = Pr{A and B}/Pr{B} = 2/3. Independence would mean Pr{A|B} = Pr{A} = 0.5, so Pr{A} = 0.5 is the expected confidence if there is no relation to B. (Is the confidence in B => A the same as the confidence in A => B? yes / no?) Lift = confidence / E{confidence} = (2/3) / (1/2) = 1.33, a gain of 33%: marketing A to the 30% of people who buy B will result in 33% better sales than marketing to a random 30% of the people.
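The three association measures as a one-line-each computation (probabilities from the Venn diagram above):

  data assoc;
    prA = 0.5;  prB = 0.3;  prAB = 0.2;    /* milk, cereal, both */
    support    = prAB;                     /* 0.20 */
    confidence = prAB / prB;               /* Pr{A|B} = 2/3 */
    exp_conf   = prA;                      /* expected confidence under no relation */
    lift       = confidence / exp_conf;    /* 1.33 -> 33% gain */
    put support= confidence= exp_conf= lift=;
  run;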

64 Demo 8 (EM): Grocery cart items (hypothetical)
Sort criterion: Confidence; maximum items: 2; minimum (single item) support: 20%.
Transactions (item, cart): bread 1; milk 1; soap 2; meat 3; bread 3; bread 4; cereal 4; milk 4; soup 5; bread 5; cereal 5; milk 5.
Association report columns: Expected Confidence (%), Confidence (%), Support (%), Lift, Transaction Count, Rule. Rules listed: cereal ==> milk, milk ==> cereal, meat ==> milk, bread ==> milk, soup ==> milk, milk ==> bread, bread ==> cereal, cereal ==> bread, soup ==> cereal, cereal ==> soup, milk ==> meat, milk ==> soup (numeric values not reproduced).

65 Link Graph. Sort criterion: Confidence; maximum items: 3 (e.g. milk & cereal => bread); minimum support: 5%; slider bar at 62%.

66 Unsupervised Learning
We have the “features” (predictors) but NOT the response, even on a training data set (UNsupervised). Another name for clustering. The EM approach: a large number of clusters with k-means (k clusters), then Ward’s method to combine them (fewer clusters, say r < k), then one more k-means pass for the final r-cluster solution.
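A rough SAS sketch of that two-stage idea outside Enterprise Miner (data set and variable names are hypothetical):

  /* stage 1: many k-means clusters */
  proc fastclus data=cancer maxclusters=50 out=prelim mean=centers;
    var clump size shape adhesion;         /* hypothetical feature names */
  run;

  /* stage 2: Ward's method on the preliminary cluster centers */
  proc cluster data=centers method=ward outtree=tree;
    var clump size shape adhesion;
    freq _freq_;                           /* weight each center by its cluster size */
  run;

  proc tree data=tree nclusters=3 out=wardgrp noprint;   /* cut to r = 3 groups */
  run;
  /* stage 3 (not shown): one more PROC FASTCLUS run using the r group means as seeds */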

67 Demo 9 (EM) : Cluster the breast cancer data (disregard target)
Plot clusters and actual target values:

68 Segment Profiler

69 Text Mining
Hypothetical collection of news releases (“corpus”):
release 1: Did the NCAA investigate the basketball scores and vote for sanctions?
release 2: Republicans voted for and Democrats voted against it for the win.
(etc.)
Compute word counts for each release for terms such as NCAA, basketball, score, vote, Republican, Democrat, win (the count table is not reproduced).

70 Text Mining Mini-Example: Word counts in 16 e-mails
Columns (words): Basketball, NCAA, Tournament, Score_V, Score_N, Wins, Speech, Voters, Liar, Election, Republican, President, Democrat; rows: the 16 e-mails (the 16 x 13 count table is not reproduced).

71 Principal components of the word counts
Eigenvalues of the correlation matrix (Eigenvalue, Difference, Proportion, Cumulative): 55% of the variation in these 13-dimensional vectors occurs in one dimension. Loadings on Prin1: Basketball -0.320074, NCAA -0.314093, Tournament -0.277484, … (the remaining loadings for Score_V, Score_N, Wins, Speech, Voters, Liar, Election, Republican, President, Democrat are not shown). Plot: Prin2 vs. Prin1.
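A minimal sketch of producing those loadings and eigenvalues with PROC PRINCOMP (data set name hypothetical; variable names as listed on the slide):

  proc princomp data=emails out=scores;
    var Basketball NCAA Tournament Score_V Score_N Wins Speech Voters
        Liar Election Republican President Democrat;
  run;

  proc sgplot data=scores;              /* Prin2 vs. Prin1 plot of the 16 e-mails */
    scatter y=Prin2 x=Prin1;
  run;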


73 Plot of Prin2 vs. Prin1: a Sports cluster and a Politics cluster.

74 Documents sorted by Prin1: the Sports documents and the Politics documents separate, with the biggest gap between the two groups (columns are the word-count variables plus CLUSTER and Prin1).

75 PROC CLUSTER (single linkage) agrees !

76 Demo 10 (EM): Memory Based Reasoning (optional)
Usual name: Nearest Neighbor Analysis. For a probe point, look at its nearest neighbors: with 9 blue and 7 red among the 16 nearest neighbors, classify as BLUE and estimate Pr{Blue} as 9/16.
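Outside Enterprise Miner, k-nearest-neighbor classification can be sketched with PROC DISCRIM’s nonparametric method (data set and variable names are hypothetical):

  proc discrim data=train testdata=grid testout=scored
               method=npar k=16;          /* classify each probe point by its 16 nearest neighbors */
    class color;                          /* e.g. blue vs. red */
    var x1 x2;
  run;
  /* testout= contains the estimated posterior probabilities, e.g. Pr{Blue} = 9/16 at the probe point */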

77 Note: Enterprise Miner -> Exported Data -> Explore Chooses colors

78 Panels: Original Data; Score Data (grid) scored by the Enterprise Miner Nearest Neighbor method; Original Data scored by the Enterprise Miner Nearest Neighbor method.

79 Fisher’s Linear Discriminant Analysis
- an older method of classification (optional) Assumes multivariate normal distribution of inputs (features) Computes a “posterior probability” for each of several classes, given the observations. Based on statistical theory

80 Example: One input (financial stress index), Three Populations:
(pay all credit card debt, pay part, default). Normal distributions with same variance (9). Classify a credit card holder by financial stress index: pay all (stress<20), pay part (20<stress<27.5), default(stress>27.5) Data are hypothetical. Means are 15, 25, 30.

81 Example: Differing variances. Not just closest mean now.
Normal distributions with different variances. Classify a credit card holder by financial stress index: pay all (stress<19.48), pay part (19.48<stress<26.40), default(26.40<stress<36.93), pay part (stress>36.93) Data are hypothetical. Means are 15, 25, 30. Standard Deviations 3,6,3.

82 Example: 20% defaulters, 40% pay part, 40% pay all
Normal distributions with “priors”. Classify a credit card holder by financial stress index: pay all (stress<19.48), pay part (19.48<stress<28.33), default (28.33<stress<35.00), pay part (stress>35.00). Data are hypothetical. Means are 15, 25, 30; standard deviations 3, 6, 3; population size ratios 2:2:1.

83 Here μ = 15, 25, or 30 and σ = 3. The part that changes from one population to another is Fisher’s linear discriminant function; here it is (μx - μ²/2)/σ², that is, (μx - μ²/2)/9 (the x² term in the normal density is common to all three populations, so it cancels). The bigger this is, the more likely it is that X came from the population with mean μ. The three probability densities are in the same proportions as their values of exp((μx - μ²/2)/9).

84 Example: Stress index X = 21. Classify as someone that pays only part of the credit card bill, because 21 is closer to 25 than to 15 or 30. The probabilities 0.24, 0.74, 0.02 give more detail. With μ = 15, 25, or 30 and σ = 3, the values of exp((μx - μ²/2)/9) are about 59.11×10⁸, 179.5×10⁸, and 4.85×10⁸, and each probability is that value divided by their sum.
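The 0.24 / 0.74 / 0.02 posteriors can be reproduced directly from the discriminant function on the previous slide:

  data posterior;
    x = 21;  s2 = 9;                      /* stress index and common variance */
    array mu{3} mu1-mu3 (15 25 30);       /* the three population means */
    array g{3}  g1-g3;
    do j = 1 to 3;
       g{j} = exp( (mu{j}*x - mu{j}**2/2) / s2 );   /* exp of Fisher's linear discriminant */
       total + g{j};
    end;
    do j = 1 to 3;
       m = mu{j};
       prob = g{j} / total;               /* 0.243, 0.737, 0.020 */
       put m= prob=;
    end;
  run;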

85 Fisher’s discriminant for the credit card data was (μx - μ²/2)/9 when the variance was 9 in every group:
(μx - μ²/2)/9 = -12.5 + 1.67X for mean 15
(μx - μ²/2)/9 = -34.7 + 2.78X for mean 25
(μx - μ²/2)/9 = -50.0 + 3.33X for mean 30
For 2 inputs it again has a linear form ___ + ___X1 + ___X2, where the coefficients come from the bivariate mean vector and the 2x2 covariance matrix assumed constant across populations. When variances differ, the formulas become more complicated, the discriminant becomes quadratic, and the boundaries are not linear.

86 Example: Two inputs, three populations Multivariate normals, same 2x2 covariance matrix. Viewed from above, boundaries appear to be linear.

87 Diagram: regions labeled Defaults, Pays part only, Pays in full, with axes Debt and Income.

88 Generated sample: 1000 per group (scatter plot of Debt index versus Income index).

89 proc discrim data=a; class group; var X1 X2; run;
Linear discriminant function for group j: Constant = -0.5 * Xbar_j' * COV⁻¹ * Xbar_j, Coefficient Vector = COV⁻¹ * Xbar_j. The printed table lists, for each group, the constant and the coefficients of X1 and X2.

90 Number of Observations and Percent Classified into group
PROC DISCRIM also prints the cross-classification (“From group” vs. classified group, with totals and priors) and the Error Count Estimates for each group and in total (numeric values not reproduced).

91 Generated sample: 1000 per group
The sample discriminants differ somewhat from the theoretical ones (plot of Debt index versus Income index with regions Defaults, Pays part only, Pays in full).




