Machine Learning in Practice, Lecture 8
Carolyn Penstein Rosé, Language Technologies Institute / Human-Computer Interaction Institute
Plan for the Day
Announcements: you should be finalizing plans for your term project
Weka helpful hints
Spam dataset
Overcoming some limits of linear functions
Discussing ordinal attributes in light of linear functions
Weka Helpful Hints
Feature Selection
Feature selection algorithms pick out a subset of the features that work best. Usually they evaluate each feature in isolation.
[The original slides step through the interface with screenshots: click to start setting up feature selection, pick your base classifier just like before, and finally configure the feature selection itself.]

Setting Up Feature Selection
[Screenshot walkthrough: select ChiSquaredAttributeEval as the attribute evaluator, select Ranker as the search method, then set the number of features you want.]
Setting Up Feature Selection
The number of features you pick should not be larger than the number of features available, and it should not be larger than the number of coded examples you have.
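Outside the point-and-click interface, the same setup can be scripted against Weka's Java API. Below is a minimal sketch, assuming the standard Weka meta classifier AttributeSelectedClassifier (the slides do not name the class, so this pairing is an assumption), a data file spam.arff with the class as the last attribute, J48 as the base classifier, and the top 20 features:

```java
import weka.attributeSelection.ChiSquaredAttributeEval;
import weka.attributeSelection.Ranker;
import weka.classifiers.Evaluation;
import weka.classifiers.meta.AttributeSelectedClassifier;
import weka.classifiers.trees.J48;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class FeatureSelectionSketch {
    public static void main(String[] args) throws Exception {
        // Assumed file name; the class is taken to be the last attribute.
        Instances data = DataSource.read("spam.arff");
        data.setClassIndex(data.numAttributes() - 1);

        // Evaluator: scores each feature in isolation with chi-squared.
        ChiSquaredAttributeEval eval = new ChiSquaredAttributeEval();

        // Search: Ranker sorts features by score; keep the top 20
        // (never more than the number of features you actually have).
        Ranker ranker = new Ranker();
        ranker.setNumToSelect(20);

        // Wrap the base classifier so selection is redone inside each
        // training fold and never peeks at the test data.
        AttributeSelectedClassifier asc = new AttributeSelectedClassifier();
        asc.setEvaluator(eval);
        asc.setSearch(ranker);
        asc.setClassifier(new J48());

        Evaluation evaluation = new Evaluation(data);
        evaluation.crossValidateModel(asc, data, 10, new java.util.Random(1));
        System.out.println(evaluation.toSummaryString());
    }
}
```

Wrapping selection inside the classifier, rather than filtering the whole dataset up front, keeps the cross-validation honest.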
Examining Which Features Are Most Predictive
If you use feature selection, you can find a ranked list of features in the Performance Report, with a predictiveness score and a frequency for each feature.
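If your interface does not surface such a report, here is a minimal sketch of printing the same kind of ranked list directly from Weka's attribute selection API (file name and class position are assumptions, as above):

```java
import weka.attributeSelection.AttributeSelection;
import weka.attributeSelection.ChiSquaredAttributeEval;
import weka.attributeSelection.Ranker;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class RankedFeatures {
    public static void main(String[] args) throws Exception {
        Instances data = DataSource.read("spam.arff");
        data.setClassIndex(data.numAttributes() - 1);

        AttributeSelection selector = new AttributeSelection();
        selector.setEvaluator(new ChiSquaredAttributeEval());
        selector.setSearch(new Ranker()); // no cutoff: rank every feature
        selector.SelectAttributes(data);

        // Each row of rankedAttributes() is [attribute index, merit score].
        for (double[] row : selector.rankedAttributes()) {
            System.out.printf("%-30s %.4f%n",
                    data.attribute((int) row[0]).name(), row[1]);
        }
    }
}
```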
Spam Data Set
Features are word frequencies and runs of $, !, and capitalization; all attributes are numeric. The class is Spam versus NotSpam. Which algorithm will work best?
Spam Data Set
Decision trees: .85 Kappa
SMO (linear function): .79 Kappa
Naïve Bayes: .6 Kappa
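A minimal sketch of how such a comparison can be run with the Weka Java API, assuming spam.arff, J48 as the decision tree learner, and 10-fold cross-validation (exact Kappa values will vary with version and seed):

```java
import weka.classifiers.Classifier;
import weka.classifiers.Evaluation;
import weka.classifiers.bayes.NaiveBayes;
import weka.classifiers.functions.SMO;
import weka.classifiers.trees.J48;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class SpamComparison {
    public static void main(String[] args) throws Exception {
        Instances data = DataSource.read("spam.arff");
        data.setClassIndex(data.numAttributes() - 1);

        // Decision tree, linear function (SVM), and Naive Bayes.
        Classifier[] learners = { new J48(), new SMO(), new NaiveBayes() };
        for (Classifier learner : learners) {
            Evaluation eval = new Evaluation(data);
            eval.crossValidateModel(learner, data, 10, new java.util.Random(1));
            System.out.printf("%-12s Kappa = %.2f%n",
                    learner.getClass().getSimpleName(), eval.kappa());
        }
    }
}
```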
What did SMO learn?
Decision tree model
More on Linear Functions … exploring the idea of nonlinearity
Limits of linear functions
Numeric Prediction with the CPU Data
Predicting CPU performance from computer configuration. All attributes are numeric, as is the output.
Numeric Prediction with the CPU Data
You could discretize the output and predict good, mediocre, or bad performance, but numeric prediction allows you to make arbitrarily many distinctions.
Linear Regression: R-squared = .87
Outliers: notice that here it's the really high values that fit the line least well. That's not always the case.
The two most highly weighted features
Exploring the Attribute Space: identify outliers with respect to typical attribute values.
The two most highly weighted features, within 1 standard deviation of the mean value.
Trees for Numeric Prediction
It looks like we may need a representation that allows for a nonlinear solution. Regression trees can handle a combination of numeric and nominal attributes. M5P computes a linear regression function at each leaf node of the tree. Look at the CPU performance data and compare a simple linear regression (R = .93) with M5P (R = .98).
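A minimal sketch of that comparison in the Weka Java API, assuming the CPU data lives in cpu.arff with the performance column last, and reporting the cross-validated correlation coefficient R:

```java
import weka.classifiers.Classifier;
import weka.classifiers.Evaluation;
import weka.classifiers.functions.LinearRegression;
import weka.classifiers.trees.M5P;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class CpuRegression {
    public static void main(String[] args) throws Exception {
        Instances data = DataSource.read("cpu.arff");
        data.setClassIndex(data.numAttributes() - 1); // numeric class

        // Global linear fit versus a regression tree with a linear
        // model at each leaf.
        Classifier[] learners = { new LinearRegression(), new M5P() };
        for (Classifier learner : learners) {
            Evaluation eval = new Evaluation(data);
            eval.crossValidateModel(learner, data, 10, new java.util.Random(1));
            System.out.printf("%-18s R = %.2f%n",
                    learner.getClass().getSimpleName(),
                    eval.correlationCoefficient());
        }
    }
}
```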
Results on CPU data with M5P
[Result plots; callouts mark the region with more data and the region with the biggest outliers.]
Multi-Layer Networks can learn arbitrarily complex functions
Multilayer Perceptron
Best Results So Far
Forcing a Linear Function
Note that it weights the features differently than the linear regression, partly because of normalization. The regression trees split on MMAX; the NN emphasizes MMIN.
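In Weka's MultilayerPerceptron the hidden-layer spec controls this directly: below is a sketch comparing the default one-hidden-layer network with a "0"-hidden-layer network, which forces a linear function (file name cpu.arff is an assumption):

```java
import weka.classifiers.Evaluation;
import weka.classifiers.functions.MultilayerPerceptron;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class MlpCpu {
    public static void main(String[] args) throws Exception {
        Instances data = DataSource.read("cpu.arff");
        data.setClassIndex(data.numAttributes() - 1);

        // "a" = one hidden layer with (attributes + classes) / 2 nodes,
        // the default source of nonlinearity.
        MultilayerPerceptron mlp = new MultilayerPerceptron();
        mlp.setHiddenLayers("a");

        // "0" = no hidden layer at all: the network is forced to learn
        // a linear function of the (normalized) inputs.
        MultilayerPerceptron linear = new MultilayerPerceptron();
        linear.setHiddenLayers("0");

        for (MultilayerPerceptron m : new MultilayerPerceptron[] { mlp, linear }) {
            Evaluation eval = new Evaluation(data);
            eval.crossValidateModel(m, data, 10, new java.util.Random(1));
            System.out.printf("hiddenLayers=%-3s R = %.2f%n",
                    m.getHiddenLayers(), eval.correlationCoefficient());
        }
    }
}
```

MultilayerPerceptron normalizes attributes by default, which is one reason its weights are not directly comparable to LinearRegression's coefficients.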
Review of Ordinal Attributes
Feature Space Design for Linear Functions
Often features will be numeric, with continuous values. You may be more likely to generalize properly with discretized values. We discussed the fact that discretizing loses ordering and distance; with respect to linear functions, it may be more important that you lose the ability to think in terms of ranges. Explicitly coding ranges allows for a simple form of nonlinearity.
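For reference, a minimal sketch of discretizing numeric attributes with Weka's unsupervised Discretize filter (equal-width binning; the file name and the choice of 4 bins are assumptions):

```java
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;
import weka.filters.Filter;
import weka.filters.unsupervised.attribute.Discretize;

public class DiscretizeSketch {
    public static void main(String[] args) throws Exception {
        Instances data = DataSource.read("cpu.arff");

        // Equal-width binning into 4 ranges per numeric attribute.
        Discretize disc = new Discretize();
        disc.setBins(4);
        disc.setInputFormat(data); // must precede Filter.useFilter
        Instances discretized = Filter.useFilter(data, disc);

        // The first attribute is now nominal, with one label per range.
        System.out.println(discretized.attribute(0));
    }
}
```

Plain discretization produces one nominal feature per attribute; the temperature coding on the next slide restores the ability to talk about ranges.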
Ordinal Values
Weka technically does not have ordinal attributes, but you can simulate them with "temperature coding". Suppose the observed values of X are .2, .25, .28, .31, .35, .45, .47, .52, .6, .63, grouped into four ranges A, B, C, D. Instead of one nominal feature, code four binary features:
Feat1 = A
Feat2 = A or B
Feat3 = A or B or C
Feat4 = A or B or C or D
Now how would you represent "If X less than or equal to .35"? Feat2 = 1.
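A minimal sketch of temperature coding in plain Java, assuming the four ranges have upper edges .28, .35, .52, and .63 (inferred from the values and the answer above; the helper is hypothetical):

```java
import java.util.Arrays;

public class TemperatureCoding {
    // feats[i] = 1 iff x falls in range i or any range below it,
    // i.e. iff x <= the upper edge of range i.
    static int[] encode(double x, double[] upperEdges) {
        int[] feats = new int[upperEdges.length];
        for (int i = 0; i < upperEdges.length; i++) {
            feats[i] = (x <= upperEdges[i]) ? 1 : 0;
        }
        return feats;
    }

    public static void main(String[] args) {
        double[] edges = { .28, .35, .52, .63 }; // assumed edges of A, B, C, D
        // X = .31 lies in range B, so Feat2 ("A or B") through Feat4 fire:
        System.out.println(Arrays.toString(encode(.31, edges)));
        // prints [0, 1, 1, 1]; Feat2 = 1 is exactly "X <= .35"
    }
}
```

Because each feature covers a cumulative range, a linear function can put weight directly on a threshold like "X <= .35", which is the simple form of nonlinearity mentioned above.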
Take Home Message
Linear functions cannot learn interactions between attributes. If you need to account for interactions, you can use:
multiple layers
tree-like representations
attributes that represent ranges
Later in the semester we'll talk about other approaches.