
1 CS 541: Artificial Intelligence Lecture IX: Supervised Machine Learning Slides Credit: Peter Norvig and Sebastian Thrun

2 Schedule
Week 1, 08/29/2012: Introduction & Intelligent Agent. Reading: Ch 1 & 2. Homework: N/A
Week 2, 09/05/2012: Search Strategy and Heuristic Search. Reading: Ch 3 & 4s. Homework: HW1 (Search)
Week 3, 09/12/2012: Constraint Satisfaction & Adversarial Search. Reading: Ch 4s, 5 & 6. Homework: Teaming Due
Week 4, 09/19/2012: Logic: Logic Agent & First Order Logic. Reading: Ch 7 & 8s. Homework: HW1 due, Midterm Project (Game)
Week 5, 09/26/2012: Logic: Inference on First Order Logic. Reading: Ch 8s & 9
Week 6, 10/03/2012: No class
Week 7, 10/10/2012: Uncertainty and Bayesian Network. Reading: Ch 13 & 14s. Homework: HW2 (Logic)
Week 8, 10/17/2012: Midterm Presentation. Midterm Project Due
Week 9, 10/24/2012: Inference in Bayesian Network. Reading: Ch 14s. Homework: HW2 due
Week 10, 10/31/2012: Homework: HW3 (PR & ML)
Week 11, 11/07/2012: Machine Learning. Reading: Ch 18 & 20
Week 12, 11/14/2012: Probabilistic Reasoning over Time. Reading: Ch 15
Week 13, 11/21/2012: No class. Homework: HW3 due (11/19/2012)
Week 14, 11/28/2012: Markov Decision Process. Reading: Ch 16. Homework: HW4 due
Week 15, 12/05/2012: Reinforcement Learning. Reading: Ch 21
Week 16, 12/12/2012: Final Project Presentation. Final Project Due

3 Re-cap of Lecture VIII  Temporal models use state and sensor variables replicated over time  Markov assumptions and stationarity assumption, so we need  Transition model P(X t |X t-1 )  Sensor model P(E t |X t )  Tasks are filtering, prediction, smoothing, most likely sequence;  All done recursively with constant cost per time step  Hidden Markov models have a single discrete state variable;  Widely used for speech recognition  Kalman filters allow n state variables, linear Gaussian, O(n 3 ) update  Dynamic Bayesian nets subsume HMMs, Kalman filters; exact update intractable  Particle filtering is a good approximate filtering algorithm for DBNs

4 Outline  Machine Learning  Classification (Naïve Bayes)  Classification (Decision Tree)  Regression (Linear, Smoothing)  Linear Separation (Perceptron, SVMs)  Non-parametric classification (KNN)

5 Machine Learning

6  Up until now: how to reason in a given model  Machine learning: how to acquire a model on the basis of data / experience  Learning parameters (e.g. probabilities)  Learning structure (e.g. BN graphs)  Learning hidden concepts (e.g. clustering)

7 Machine Learning Lingo
What? Parameters, structure, hidden concepts
What from? Supervised, unsupervised, reinforcement, self-supervised
What for? Prediction, diagnosis, compression, discovery
How? Passive, active, online, offline
Output? Classification, regression, clustering, ranking
Details?? Generative, discriminative, smoothing

8 Supervised Machine Learning Given a training set: (x1, y1), (x2, y2), (x3, y3), … (xn, yn), where each yi was generated by an unknown function y = f(x), discover a function h that approximates the true function f.
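To make the setup concrete, here is a minimal sketch; the target function f, the noise level, and the candidate hypothesis h are all invented for illustration and stand in for whatever a learning algorithm would produce:

```python
# A minimal sketch of the supervised-learning setup: (x, y) pairs generated by
# an unknown f, and a hypothesis h judged by how well it approximates f.
import random

def f(x):                      # the "unknown" target (we pretend not to know it)
    return 2 * x + 1

# Training set: pairs (x_i, y_i) with y_i generated by f plus a little noise.
train = [(x, f(x) + random.gauss(0, 0.1)) for x in range(10)]

def h(x):                      # a candidate hypothesis a learner might propose
    return 2.1 * x + 0.8

# Judge h by how closely it matches the observed pairs.
mse = sum((y - h(x)) ** 2 for x, y in train) / len(train)
print("mean squared error of h on the training set:", mse)
```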

9 Classification (Naïve Bayes)

10 Classification Example: Spam Filter  Input: x = email  Output: y = “spam” or “ham”  Setup:  Get a large collection of example emails, each labeled “spam” or “ham”  Note: someone has to hand label all this data!  Want to learn to predict labels of new, future emails  Features: The attributes used to make the ham / spam decision  Words: FREE!  Text Patterns: $dd, CAPS  Non-text: SenderInContacts  … Dear Sir. First, I must solicit your confidence in this transaction, this is by virture of its nature as being utterly confidencial and top secret. … TO BE REMOVED FROM FUTURE MAILINGS, SIMPLY REPLY TO THIS MESSAGE AND PUT "REMOVE" IN THE SUBJECT. 99 MILLION EMAIL ADDRESSES FOR ONLY $99 Ok, Iknow this is blatantly OT but I'm beginning to go insane. Had an old Dell Dimension XPS sitting in the corner and decided to put it to use, I know it was working pre being stuck in the corner, but when I plugged it in, hit the power nothing happened.

11 A Spam Filter  Naïve Bayes spam filter  Data:  Collection of emails, labeled spam or ham  Note: someone has to hand label all this data!  Split into training, held-out, test sets  Classifiers  Learn on the training set  (Tune it on a held-out set)  Test it on new emails Dear Sir. First, I must solicit your confidence in this transaction, this is by virture of its nature as being utterly confidencial and top secret. … TO BE REMOVED FROM FUTURE MAILINGS, SIMPLY REPLY TO THIS MESSAGE AND PUT "REMOVE" IN THE SUBJECT. 99 MILLION EMAIL ADDRESSES FOR ONLY $99 Ok, Iknow this is blatantly OT but I'm beginning to go insane. Had an old Dell Dimension XPS sitting in the corner and decided to put it to use, I know it was working pre being stuck in the corner, but when I plugged it in, hit the power nothing happened.

12 Naïve Bayes for Text  Bag-of-Words Naïve Bayes:  Predict unknown class label (spam vs. ham)  Assume evidence features (e.g. the words) are independent  Generative model  Tied distributions and bag-of-words  Usually, each variable gets its own conditional probability distribution P(F|Y)  In a bag-of-words model  Each position is identically distributed  All positions share the same conditional probs P(W|C)  Why make this assumption? Word at position i, not i th word in the dictionary!

13 General Naïve Bayes  General probabilistic model: the full joint P(Y, F1, …, Fn), which needs |Y| x |F|^n parameters  General naïve Bayes model: class Y with conditionally independent features F1, …, Fn, which needs |Y| parameters for P(Y) plus n x |F| x |Y| parameters for the feature distributions  We only specify how each feature depends on the class  Total number of parameters is linear in n

14 Example: Spam Filtering  Model:  What are the parameters?  Where do these tables come from? the : 0.0156 to : 0.0153 and : 0.0115 of : 0.0095 you : 0.0093 a : 0.0086 with: 0.0080 from: 0.0075... the : 0.0210 to : 0.0133 of : 0.0119 2002: 0.0110 with: 0.0108 from: 0.0107 and : 0.0105 a : 0.0100... ham : 0.66 spam: 0.33 Counts from examples!
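As a rough sketch of "counts from examples": the tables are just relative frequencies estimated from the labeled training emails. The tiny corpus below is made up for illustration, and no smoothing is applied yet:

```python
# Estimating Naive Bayes parameters P(Y) and P(W|Y) from word counts.
from collections import Counter

training = [
    ("spam", "free money order now"),
    ("spam", "order free credit now"),
    ("ham",  "meeting moved to friday"),
    ("ham",  "lunch order for friday"),
]

class_counts = Counter(label for label, _ in training)
word_counts = {"spam": Counter(), "ham": Counter()}
for label, text in training:
    word_counts[label].update(text.split())

# P(Y): fraction of training emails with each label.
prior = {c: class_counts[c] / len(training) for c in class_counts}

# P(W|Y): fraction of word tokens in class c that are the word w.
def p_word_given_class(w, c):
    return word_counts[c][w] / sum(word_counts[c].values())

print(prior)
print(p_word_given_class("free", "spam"), p_word_given_class("free", "ham"))
```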

15 Spam Example
Word | P(w|spam) | P(w|ham) | Tot Spam | Tot Ham
(prior) | 0.33333 | 0.66666 | -1.1 | -0.4
Gary | 0.00002 | 0.00021 | -11.8 | -8.9
would | 0.00069 | 0.00084 | -19.1 | -16.0
you | 0.00881 | 0.00304 | -23.8 | -21.8
like | 0.00086 | 0.00083 | -30.9 | -28.9
to | 0.01517 | 0.01339 | -35.1 | -33.2
lose | 0.00008 | 0.00002 | -44.5 | -44.0
weight | 0.00016 | 0.00002 | -53.3 | -55.0
while | 0.00027 | 0.00027 | -61.5 | -63.2
you | 0.00881 | 0.00304 | -66.2 | -69.0
sleep | 0.00006 | 0.00001 | -76.0 | -80.5
P(spam | words) = e^-76 / (e^-76 + e^-80.5) = 98.9%
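The last line can be checked directly; this assumes the "Tot" columns are running sums of log probabilities (log prior plus the log of each word probability):

```python
# Reproducing the final step of the table from the two running log totals.
import math

log_spam, log_ham = -76.0, -80.5   # totals after the word "sleep"

p_spam = math.exp(log_spam) / (math.exp(log_spam) + math.exp(log_ham))
# Numerically safer, equivalent form: 1 / (1 + exp(log_ham - log_spam))
print(round(p_spam, 3))            # 0.989, i.e. the 98.9% on the slide
```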

16 Example: Overfitting  Posteriors determined by relative probabilities (odds ratios): south-west : inf nation : inf morally : inf nicely : inf extent : inf seriously : inf... What went wrong here? screens : inf minute : inf guaranteed : inf $205.00 : inf delivery : inf signature : inf...

17 Generalization and Overfitting  Raw counts will overfit the training data!  Unlikely that every occurrence of “minute” is 100% spam  Unlikely that every occurrence of “seriously” is 100% ham  What about all the words that don’t occur in the training set at all? 0/0?  In general, we can’t go around giving unseen events zero probability  At the extreme, imagine using the entire email as the only feature  Would get the training data perfect (if deterministic labeling)  Wouldn’t generalize at all  Just making the bag-of-words assumption gives us some generalization, but isn’t enough  To generalize better: we need to smooth or regularize the estimates

18 Estimation: Smoothing  Maximum likelihood estimates: P_ML(x) = count(x) / N, the relative frequency in the data  Problems with maximum likelihood estimates:  If I flip a coin once, and it’s heads, what’s the estimate for P(heads)?  What if I flip 10 times with 8 heads?  What if I flip 10M times with 8M heads?  Basic idea:  We have some prior expectation about parameters (here, the probability of heads)  Given little evidence, we should skew towards our prior  Given a lot of evidence, we should listen to the data

19 Estimation: Laplace Smoothing  Laplace’s estimate (extended): pretend you saw every outcome k extra times, so P_LAP,k(x) = (count(x) + k) / (N + k|X|)  What’s Laplace with k = 0?  k is the strength of the prior  Laplace for conditionals:  Smooth each condition independently: P_LAP,k(x|y) = (count(x, y) + k) / (count(y) + k|X|)  (Running example: three coin flips H, H, T)
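A minimal sketch of add-k (Laplace) smoothing on the coin-flip example (observations H, H, T); the helper name and the dictionary representation are my own choices:

```python
# Laplace (add-k) smoothing: pretend every outcome was seen k extra times.
from collections import Counter

def laplace_estimate(counts, outcomes, k=1):
    """P_LAP,k(x) = (count(x) + k) / (N + k * |X|)."""
    n = sum(counts.values())
    return {x: (counts[x] + k) / (n + k * len(outcomes)) for x in outcomes}

flips = Counter({"H": 2, "T": 1})                  # the H, H, T example
print(laplace_estimate(flips, ["H", "T"], k=0))    # ML estimate: 2/3, 1/3
print(laplace_estimate(flips, ["H", "T"], k=1))    # 3/5, 2/5
print(laplace_estimate(flips, ["H", "T"], k=100))  # close to the 1/2, 1/2 prior
```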

20 Estimation: Linear Interpolation  In practice, Laplace often performs poorly for P(X|Y):  When |X| is very large  When |Y| is very large  Another option: linear interpolation  Also get P(X) from the data  Make sure the estimate of P(X|Y) isn’t too different from P(X): P_LIN(x|y) = α P_ML(x|y) + (1 - α) P_ML(x)  What if α is 0? 1?

21 Real NB: Smoothing  For real classification problems, smoothing is critical  New odds ratios: helvetica : 11.4 seems : 10.8 group : 10.2 ago : 8.4 areas : 8.3... verdana : 28.8 Credit : 28.4 ORDER : 27.2 : 26.9 money : 26.5... Do these make more sense?

22 Tuning on Held-Out Data  Now we’ve got two kinds of unknowns  Parameters: the probabilities P(X|Y), P(Y)  Hyperparameters, like the amount of smoothing to do: k  How to learn?  Learn parameters from training data  Must tune hyperparameters on different data  Why?  For each value of the hyperparameters, train and test on the held-out (validation) data  Choose the best value and do a final test on the test data

23 How to Learn  Data: labeled instances, e.g. emails marked spam/ham  Training set  Held out (validation) set  Test set  Features: attribute-value pairs which characterize each x  Experimentation cycle  Learn parameters (e.g. model probabilities) on training set  Tune hyperparameters on held-out set  Compute accuracy on test set  Very important: never “peek” at the test set!  Evaluation  Accuracy: fraction of instances predicted correctly  Overfitting and generalization  Want a classifier which does well on test data  Overfitting: fitting the training data very closely, but not generalizing well to test data Training Data Held-Out Data Test Data

24 What to Do About Errors?  Need more features– words aren’t enough!  Have you emailed the sender before?  Have 1K other people just gotten the same email?  Is the sending information consistent?  Is the email in ALL CAPS?  Do inline URLs point where they say they point?  Does the email address you by (your) name?  Can add these information sources as new variables in the Naïve Bayes model

25 A Digit Recognizer  Input: x = pixel grids  Output: y = a digit 0-9

26 Example: Digit Recognition  Input: x = images (pixel grids)  Output: y = a digit 0-9  Setup:  Get a large collection of example images, each labeled with a digit  Note: someone has to hand label all this data!  Want to learn to predict labels of new, future digit images  Features: The attributes used to make the digit decision  Pixels: (6,8)=ON  Shape Patterns: NumComponents, AspectRatio, NumLoops  … 0 1 2 1 ??

27 Naïve Bayes for Digits  Simple version:  One feature F ij for each grid position  Boolean features  Each input maps to a feature vector, e.g.  Here: lots of features, each is binary valued  Naïve Bayes model:

28 Learning Model Parameters [Tables of learned parameters estimated from counts on the training images: P(Y) for each digit class and P(F(i,j) = on | Y) for individual pixel features.]

29 Problem: Overfitting 2 wins!!

30 Classification (Decision Tree) Slides credit: Guoping Qiu

31 Trees  Node  Root  Leaf  Branch  Path  Depth 31

32 Decision Trees  A hierarchical data structure that represents data by implementing a divide and conquer strategy  Can be used as a non-parametric classification method.  Given a collection of examples, learn a decision tree that represents it.  Use this representation to classify new examples 32

33 Decision Trees  Each node is associated with a feature (one of the elements of the feature vector that represents an object)  Each node tests the value of its associated feature  There is one branch for each value of the feature  Leaves specify the categories (classes)  Can categorize instances into multiple disjoint categories (multi-class) [Example tree: Outlook splits into Sunny, Overcast, Rain; Sunny → Humidity (High → No, Normal → Yes); Overcast → Yes; Rain → Wind (Strong → No, Weak → Yes)]

34 Decision Trees  Play Tennis example  Feature vector = (Outlook, Temperature, Humidity, Wind) [Example tree shown: Outlook at the root, with Humidity and Wind subtrees as above]

35 Decision Trees [Same example tree, highlighting that each internal node is associated with a feature]

36 Decision Trees  Play Tennis Example  Feature values:  Outlook = (sunny, overcast, rain)  Temperature =(hot, mild, cool)  Humidity = (high, normal)  Wind =(strong, weak) 36

37 Decision Trees  Outlook = (sunny, overcast, rain) [Same example tree, highlighting that there is one branch for each value of the feature]

38 Decision Trees  Class = (Yes, No) [Same example tree, highlighting that the leaf nodes specify the classes]

39 Decision Trees  Design Decision Tree Classifier  Picking the root node  Recursively branching 39

40 Decision Trees  Picking the root node  Consider data with two Boolean attributes (A,B) and two classes + and – { (A=0,B=0), - }: 50 examples { (A=0,B=1), - }: 50 examples { (A=1,B=0), - }: 3 examples { (A=1,B=1), + }: 100 examples 40

41 Decision Trees  Picking the root node  The trees look structurally similar; which attribute should we choose? [Figure: two candidate trees for the data above, one splitting on A at the root and then on B, the other splitting on B at the root and then on A]

42 Decision Trees  Picking the root node  The goal is to have the resulting decision tree be as small as possible (Occam’s Razor)  The main decision in the algorithm is the selection of the next attribute to condition on (starting from the root node)  We want attributes that split the examples into sets that are relatively pure in one label; this way we are closer to a leaf node  The most popular heuristic is based on information gain, which originated with Quinlan’s ID3 system

43 Entropy  S is a sample of training examples  p+ is the proportion of positive examples in S  p- is the proportion of negative examples in S  Entropy measures the impurity of S: Entropy(S) = -p+ log2 p+ - p- log2 p-

44 [Figure: two clouds of +/- examples. A highly disorganized (mixed) sample has high entropy and requires much information to describe; a highly organized (separated) sample has low entropy and requires little information.]

45 Information Gain  Gain(S, A) = expected reduction in entropy due to sorting on A: Gain(S, A) = Entropy(S) - Σ_{v in Values(A)} (|S_v| / |S|) Entropy(S_v)  Values(A) is the set of all possible values for attribute A; S_v is the subset of S for which attribute A has value v; |S| and |S_v| are the number of samples in set S and set S_v respectively  Gain(S, A) is the expected reduction in entropy caused by knowing the value of attribute A.

46 Information Gain Example: Choose A or B?  Split on A: A = 1 gives 100+, 3-; A = 0 gives 0+, 100-  Split on B: B = 1 gives 100+, 50-; B = 0 gives 0+, 53-
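The choice can be checked numerically. The sketch below recomputes entropy and information gain for the two candidate splits, using the counts from the example above:

```python
# Which root, A or B, gives more information gain on the 203 examples?
from math import log2

def entropy(pos, neg):
    total = pos + neg
    result = 0.0
    for p in (pos / total, neg / total):
        if p > 0:
            result -= p * log2(p)
    return result

def gain(parent, splits):
    """parent: (pos, neg) of the whole set; splits: one (pos, neg) per branch."""
    n = sum(p + q for p, q in splits)
    return entropy(*parent) - sum((p + q) / n * entropy(p, q) for p, q in splits)

parent = (100, 103)                              # 100 positive, 103 negative overall
print(gain(parent, [(0, 100), (100, 3)]))        # split on A: roughly 0.90
print(gain(parent, [(0, 53), (100, 50)]))        # split on B: roughly 0.32, so A wins
```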

47 Example  Play Tennis example [Table of 14 training examples (Outlook, Temperature, Humidity, Wind → PlayTennis) shown on the slide]

48 Example  Splitting on Humidity: High gives 3+, 4- (E = 0.985); Normal gives 6+, 1- (E = 0.592)  Gain(S, Humidity) = 0.94 - 7/14 * 0.985 - 7/14 * 0.592 = 0.151

49 Example  Splitting on Wind: Weak gives 6+, 2- (E = 0.811); Strong gives 3+, 3- (E = 1.0)  Gain(S, Wind) = 0.94 - 8/14 * 0.811 - 6/14 * 1.0 = 0.048

50 Example  Splitting on Outlook: Sunny gives examples 1, 2, 8, 9, 11 (2+, 3-, E = 0.970); Overcast gives examples 3, 7, 12, 13 (4+, 0-, E = 0.0); Rain gives examples 4, 5, 6, 10, 14 (3+, 2-)  Gain(S, Outlook) = 0.246

51 Example  Pick Outlook as the root, since it has the largest gain:  Gain(S, Outlook) = 0.246  Gain(S, Humidity) = 0.151  Gain(S, Wind) = 0.048  Gain(S, Temperature) = 0.029

52 Example  Pick Outlook as the root  Overcast branch (examples 3, 7, 12, 13: 4+, 0-) → Yes  Sunny branch (examples 1, 2, 8, 9, 11: 2+, 3-) → ?  Rain branch (examples 4, 5, 6, 10, 14: 3+, 2-) → ?  Continue until every attribute is included in the path, or all examples in the leaf have the same label

53 Example  Expanding the Sunny branch (examples 1, 2, 8, 9, 11: 2+, 3-); Overcast → Yes  Gain(S_sunny, Humidity) = 0.97 - (3/5) * 0 - (2/5) * 0 = 0.97  Gain(S_sunny, Temp) = 0.97 - 0 - (2/5) * 1 = 0.57  Gain(S_sunny, Wind) = 0.97 - (2/5) * 1 - (3/5) * 0.92 = 0.02

54 Example  Humidity has the highest gain on the Sunny branch, so it becomes the next node: Sunny → Humidity (High → No, Normal → Yes); Overcast → Yes  Gain(S_sunny, Humidity) = 0.97 - (3/5) * 0 - (2/5) * 0 = 0.97  Gain(S_sunny, Temp) = 0.97 - 0 - (2/5) * 1 = 0.57  Gain(S_sunny, Wind) = 0.97 - (2/5) * 1 - (3/5) * 0.92 = 0.02

55 Example  Remaining: the Rain branch (examples 4, 5, 6, 10, 14: 3+, 2-)  Gain(S_rain, Humidity) =  Gain(S_rain, Temp) =  Gain(S_rain, Wind) =

56 Example  Final tree: Outlook at the root; Sunny → Humidity (High → No, Normal → Yes); Overcast → Yes; Rain → Wind (Strong → No, Weak → Yes)

57 Tutorial/Exercise Questions An experiment has produced the following 3-D feature vectors X = (x1, x2, x3) belonging to two classes. Design a decision tree classifier to classify the unknown feature vector X = (1, 2, 1).
x1 x2 x3 | Class
1 1 1 | 1
1 1 1 | 2
1 1 1 | 1
2 1 1 | 2
2 1 2 | 1
2 2 2 | 2
2 2 2 | 1
2 2 1 | 2
1 2 2 | 2
1 1 2 | 1
1 2 1 | ?
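A hedged sketch of the ID3 procedure from the preceding slides, applied to the exercise data; the tuple encoding, the attribute indexing, and the majority-vote tie-break at impure leaves are my own choices rather than part of the exercise:

```python
# ID3-style tree building: pick the attribute with the highest information
# gain, split, and recurse until leaves are pure or attributes run out.
from collections import Counter
from math import log2

data = [  # rows of (x1, x2, x3, class) from the exercise table
    (1, 1, 1, 1), (1, 1, 1, 2), (1, 1, 1, 1), (2, 1, 1, 2), (2, 1, 2, 1),
    (2, 2, 2, 2), (2, 2, 2, 1), (2, 2, 1, 2), (1, 2, 2, 2), (1, 1, 2, 1),
]

def entropy(rows):
    counts = Counter(r[-1] for r in rows)
    n = len(rows)
    return -sum(c / n * log2(c / n) for c in counts.values())

def gain(rows, attr):
    groups = {}
    for r in rows:
        groups.setdefault(r[attr], []).append(r)
    n = len(rows)
    return entropy(rows) - sum(len(g) / n * entropy(g) for g in groups.values())

def id3(rows, attrs):
    labels = Counter(r[-1] for r in rows)
    if len(labels) == 1 or not attrs:          # pure leaf, or attributes exhausted
        return labels.most_common(1)[0][0]     # majority label at the leaf
    best = max(attrs, key=lambda a: gain(rows, a))
    branches = {}
    for value in sorted(set(r[best] for r in rows)):
        subset = [r for r in rows if r[best] == value]
        branches[value] = id3(subset, [a for a in attrs if a != best])
    return (best, branches)                    # internal node: (attribute index, children)

tree = id3(data, [0, 1, 2])                    # attribute indices for x1, x2, x3
print(tree)

def classify(tree, x):                         # assumes every value was seen in training
    while isinstance(tree, tuple):
        attr, branches = tree
        tree = branches[x[attr]]
    return tree

print(classify(tree, (1, 2, 1)))               # the unknown vector from the exercise
```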

58 Regression (Linear, smoothing)

59 Regression  Start with a very simple example  Linear regression  What you learned in high school math  From a new perspective  Linear model  y = m x + b  h_w(x) = y = w1 x + w0  Find best values for parameters  “maximize goodness of fit”  “maximize probability” or “minimize loss”

60 Regression: Minimizing Loss  Assume the true function f is given by y = f(x) = m x + b + noise, where the noise is normally distributed  Then the most probable values of the parameters are found by minimizing squared-error loss: Loss(h_w) = Σ_j (y_j - h_w(x_j))^2

61 Regression: Minimizing Loss

62 Choose weights to minimize sum of squared errors

63 Regression: Minimizing Loss

64 y = w 1 x + w 0 Linear algebra gives an exact solution to the minimization problem

65 Linear Algebra Solution  Setting the partial derivatives of the loss to zero gives the closed-form solution: w1 = (N Σ_j x_j y_j - Σ_j x_j Σ_j y_j) / (N Σ_j x_j^2 - (Σ_j x_j)^2), w0 = (Σ_j y_j - w1 Σ_j x_j) / N

66 Don’t Always Trust Linear Models

67 Regression by Gradient Descent
w ← any point
loop until convergence do:
    for each w_i in w do:
        w_i ← w_i - α ∂Loss(w) / ∂w_i
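Here is the same loop written out for the univariate model h_w(x) = w1·x + w0 with squared-error loss; the toy data, the learning rate, and the fixed iteration count stand in for a real convergence test:

```python
# Batch gradient descent on Loss(w) = sum_j (y_j - (w1*x_j + w0))^2.
data = [(0.0, 1.1), (1.0, 2.9), (2.0, 5.2), (3.0, 6.8), (4.0, 9.1)]
w0, w1 = 0.0, 0.0
alpha = 0.01                                # learning rate

for _ in range(5000):                       # "loop until convergence" (fixed count here)
    # Partial derivatives of the squared-error loss with respect to w0 and w1.
    grad_w0 = sum(-2 * (y - (w1 * x + w0)) for x, y in data)
    grad_w1 = sum(-2 * (y - (w1 * x + w0)) * x for x, y in data)
    w0 -= alpha * grad_w0
    w1 -= alpha * grad_w1

print(w0, w1)                               # approaches roughly 1 and 2 for this data
```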

68 Multivariate Regression  You learned this in math class too  h_w(x) = w · x = w xᵀ = Σ_i w_i x_i  The most probable set of weights, w* (minimizing squared error):  w* = (XᵀX)⁻¹ Xᵀ y
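A small sketch of the closed-form solution, assuming NumPy is available; the data matrix is invented, and a column of ones is appended so that one component of w* acts as the intercept w0:

```python
# Closed-form multivariate regression: w* = (X^T X)^(-1) X^T y.
import numpy as np

X = np.array([[1.0, 2.0], [2.0, 0.5], [3.0, 1.0], [4.0, 3.0]])
y = np.array([5.0, 4.5, 7.0, 11.0])

X1 = np.hstack([X, np.ones((X.shape[0], 1))])      # add an intercept column

# pinv (or np.linalg.lstsq) is the numerically safer route in practice.
w_star = np.linalg.pinv(X1.T @ X1) @ X1.T @ y
print(w_star)
print(X1 @ w_star)                                 # predictions on the training inputs
```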

69 Overfitting  To avoid overfitting, don’t just minimize loss  Maximize probability, including a prior over w  Can be stated as minimization:  Cost(h) = EmpiricalLoss(h) + λ Complexity(h)  For linear models, consider  Complexity(h_w) = L_q(w) = Σ_i |w_i|^q  L1 regularization minimizes the sum of absolute values  L2 regularization minimizes the sum of squares

70 Regularization and Sparsity L 1 regularizationL 2 regularization Cost(h) = EmpiricalLoss(h) + λ Complexity(h)
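A rough illustration of the sparsity point, assuming scikit-learn is available; the synthetic data has only two relevant features, so the L1-penalized fit (Lasso) drives the other coefficients exactly to zero while the L2-penalized fit (Ridge) merely shrinks them:

```python
# L1 vs. L2 regularization on data where only two of five features matter.
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.1, size=100)

print(Lasso(alpha=0.1).fit(X, y).coef_)   # several coefficients exactly 0 (sparse)
print(Ridge(alpha=0.1).fit(X, y).coef_)   # coefficients shrunk, but none exactly 0
```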

71 Linear Separation Perceptron, SVM

72 Linear Separator

73 Perceptron

74 Perceptron Algorithm  Start with random w0, w1  Pick a training example (x, y)  Update (α is the learning rate)  w1 ← w1 + α (y - f(x)) x  w0 ← w0 + α (y - f(x))  Converges to a linear separator (if one exists)  Picks “a” linear separator (a good one?)
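A small sketch of the update rule on a made-up 1-D problem, using a threshold prediction f(x) = 1 if w1·x + w0 ≥ 0 else 0; the data, initial weights, and learning rate are arbitrary:

```python
# Perceptron updates: nudge the weights only when the prediction is wrong.
data = [(-2.0, 0), (-1.0, 0), (1.0, 1), (2.0, 1)]   # (x, label) pairs
w0, w1 = 0.5, -0.5                                   # "start with random w0, w1"
alpha = 0.1

def f(x):
    return 1 if w1 * x + w0 >= 0 else 0

for _ in range(100):                                 # repeatedly sweep the training examples
    for x, y in data:
        error = y - f(x)                             # 0 if correct, +/-1 if wrong
        w1 += alpha * error * x
        w0 += alpha * error

print(w0, w1)                                        # a separating line for this data
print([f(x) for x, _ in data])                       # should be [0, 0, 1, 1]
```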

75 What Linear Separator to Pick?

76 Maximizes the “margin” Support Vector Machines

77 Support vector machines  Find the hyperplane that maximizes the margin between the positive and negative examples  Distance between a point x_i and the hyperplane w·x + b = 0: |w·x_i + b| / ||w||  For support vectors, |w·x_i + b| = 1  Therefore, the margin is 2 / ||w|| (C. Burges, A Tutorial on Support Vector Machines for Pattern Recognition, Data Mining and Knowledge Discovery, 1998)

78 Finding the maximum margin hyperplane 1. Maximize the margin 2/||w|| 2. Correctly classify all training data: y_i (w·x_i + b) ≥ 1  Quadratic optimization problem: minimize (1/2)||w||^2 subject to y_i (w·x_i + b) ≥ 1 (C. Burges, A Tutorial on Support Vector Machines for Pattern Recognition, Data Mining and Knowledge Discovery, 1998)

79 Finding the maximum margin hyperplane Solution: w = Σ_i α_i y_i x_i, a linear combination of the training points in which the learned weights α_i are nonzero only for the support vectors (C. Burges, A Tutorial on Support Vector Machines for Pattern Recognition, Data Mining and Knowledge Discovery, 1998)

80 Finding the maximum margin hyperplane Solution: b = y_i - w·x_i for any support vector  Classification function (decision boundary): f(x) = sign(Σ_i α_i y_i x_i·x + b)  Notice that it relies on an inner product between the test point x and the support vectors x_i  Solving the optimization problem also involves computing the inner products x_i·x_j between all pairs of training points (C. Burges, A Tutorial on Support Vector Machines for Pattern Recognition, Data Mining and Knowledge Discovery, 1998)
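As a sketch of fitting a maximum-margin separator in practice (assuming scikit-learn is available; this is not part of the original slides), a linear-kernel SVC on invented 2-D points exposes the support vectors, the weight vector w, the intercept b, and hence the margin 2/||w||:

```python
# Linear SVM on a tiny separable dataset.
import numpy as np
from sklearn.svm import SVC

X = np.array([[1.0, 1.0], [2.0, 1.5], [1.5, 2.0],        # positive class
              [-1.0, -1.0], [-2.0, -1.5], [-1.5, -2.0]])  # negative class
y = np.array([1, 1, 1, -1, -1, -1])

clf = SVC(kernel="linear", C=1e6).fit(X, y)   # very large C ~ hard margin
print(clf.support_vectors_)                   # the points that define the margin
print(clf.coef_, clf.intercept_)              # w and b
print(2 / np.linalg.norm(clf.coef_))          # margin width 2 / ||w||
print(clf.predict([[0.5, 0.5], [-0.5, -1.0]]))
```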

81 Nonlinear SVMs  Datasets that are linearly separable work out great  But what if the dataset is just too hard?  We can map it to a higher-dimensional space, e.g. x → (x, x^2), where it becomes separable (Slide credit: Andrew Moore)

82 General idea: the original input space can always be mapped to some higher-dimensional feature space where the training set is separable: Φ: x → φ(x) Nonlinear SVMs Slide credit: Andrew Moore

83 Nonlinear SVMs  The kernel trick: instead of explicitly computing the lifting transformation φ(x), define a kernel function K such that K(x_i, x_j) = φ(x_i) · φ(x_j) (to be valid, the kernel function must satisfy Mercer’s condition)  This gives a nonlinear decision boundary in the original feature space: f(x) = sign(Σ_i α_i y_i K(x_i, x) + b) (C. Burges, A Tutorial on Support Vector Machines for Pattern Recognition, Data Mining and Knowledge Discovery, 1998)

84 Nonlinear kernel: Example  Consider the mapping φ(x) = (x, x^2)  Then φ(x) · φ(y) = xy + x^2 y^2, so the corresponding kernel is K(x, y) = xy + x^2 y^2
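The identity can be checked numerically; this sketch assumes the mapping above, φ(x) = (x, x^2):

```python
# The kernel trick in miniature: dot products of lifted points equal K(x, y),
# so the lifting phi never has to be computed explicitly.
def phi(x):
    return (x, x * x)

def K(x, y):
    return x * y + (x * x) * (y * y)

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

for x, y in [(1.0, 2.0), (-3.0, 0.5), (2.5, -1.5)]:
    print(dot(phi(x), phi(y)), K(x, y))      # the two values match for every pair
```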

85 Non-parametric classification

86 Nonparametric Models  If the process of learning good values for parameters is prone to overfitting, can we do without parameters?

87 Nearest-Neighbor Classification  Nearest neighbor for digits:  Take a new image  Compare it to all training images  Assign the label of the closest example  Encoding: image is a vector of intensities:  What’s the similarity function?  Dot product of two image vectors?  Usually normalize vectors so ||x|| = 1  min = 0 (when?), max = 1 (when?)
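A minimal nearest-neighbor sketch following the recipe above: normalize each image vector, score candidates with a dot product, and return the label of the closest training example. The four-pixel "images" are invented stand-ins for real digit images:

```python
# 1-nearest-neighbor with normalized dot-product (cosine) similarity.
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / n for x in v]

train = [([0.9, 0.8, 0.1, 0.0], "1"),     # (pixel intensities, digit label)
         ([0.1, 0.0, 0.9, 0.8], "0"),
         ([0.8, 0.9, 0.2, 0.1], "1")]
train = [(normalize(v), label) for v, label in train]

def classify(image):
    image = normalize(image)
    sims = [(sum(a * b for a, b in zip(image, v)), label) for v, label in train]
    return max(sims)[1]                    # label of the most similar training example

print(classify([0.7, 0.9, 0.0, 0.2]))      # -> "1"
```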

88 Earthquakes and Explosions Using logistic regression (similar to linear regression) to do linear classification

89 K =1 Nearest Neighbors Using nearest neighbors to do classification

90 K =5 Nearest Neighbors Even with no parameters, you still have hyperparameters!

91 Curse of Dimensionality Average neighborhood size for 10-nearest neighbors, n dimensions, 1M uniform points
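A back-of-the-envelope version of this figure, under the assumption that the neighborhood of a query point is roughly a hypercube containing k of the N uniformly distributed points, so its edge length is about (k/N)^(1/n) of each dimension's range:

```python
# In high dimensions, the "local" neighborhood of 10 nearest neighbors
# stretches across almost the entire range of every dimension.
k, N = 10, 1_000_000

for n in (1, 2, 3, 10, 100, 1000):
    edge = (k / N) ** (1 / n)
    print(f"n={n:5d}  neighborhood edge ~ {edge:.3f} of each dimension")
```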

92 Curse of Dimensionality Proportion of points that are within the outer shell, 1% of thickness of the hypercube

93 Summary  Machine Learning  Classification (Naïve Bayes)  Classification (Decision Tree)  Regression (Linear, Smoothing)  Linear Separation (Perceptron, SVMs)  Non-parametric classification (KNN)

94 Machine Learning Lingo
What? Parameters, structure, hidden concepts
What from? Supervised, unsupervised, reinforcement, self-supervised
What for? Prediction, diagnosis, compression, discovery
How? Passive, active, online, offline
Output? Classification, regression, clustering, ranking
Details?? Generative, discriminative, smoothing

