Additive Models, Trees, and Related Models

Additive Models, Trees, and Related Models. Prof. Liqing Zhang, Dept. of Computer Science & Engineering, Shanghai Jiaotong University.

Introduction
9.1: Generalized Additive Models
9.2: Tree-Based Methods
9.3: PRIM: Bump Hunting
9.4: MARS: Multivariate Adaptive Regression Splines
9.5: HME: Hierarchical Mixture of Experts

9.1 Generalized Additive Models
In the regression setting, a generalized additive model has the form
E(Y | X1, X2, ..., Xp) = α + f1(X1) + f2(X2) + ... + fp(Xp).
Here the fj's are unspecified smooth ("nonparametric") functions. Instead of using the linear basis expansions (LBE) of Chapter 5, we fit each function using a scatterplot smoother (e.g. a cubic smoothing spline).
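As a concrete illustration of what a scatterplot smoother does, here is a minimal sketch that fits a cubic smoothing spline to a single predictor with SciPy; the simulated data and the smoothing parameter s are illustrative choices, not from the slides.

```python
# Minimal sketch: a cubic smoothing spline as a scatterplot smoother (one term of a GAM).
# The data and the smoothing parameter `s` are illustrative.
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 10, 200))
y = np.sin(x) + rng.normal(scale=0.3, size=x.size)   # noisy nonlinear signal

smoother = UnivariateSpline(x, y, k=3, s=len(x) * 0.3**2)  # cubic spline, roughness controlled by s
f_hat = smoother(x)                                        # the fitted smooth term f_j(X_j)
```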

GAM (cont.)
For two-class classification, the additive logistic regression model is
log[ μ(X) / (1 - μ(X)) ] = α + f1(X1) + ... + fp(Xp),
where μ(X) = Pr(Y = 1 | X).

GAM (cont.)
In general, the conditional mean μ(X) of a response Y is related to an additive function of the predictors via a link function g:
g[μ(X)] = α + f1(X1) + ... + fp(Xp).
Examples of classical link functions:
Identity: g(μ) = μ, for Gaussian response data.
Logit: g(μ) = log[μ / (1 - μ)], or Probit: g(μ) = Φ^{-1}(μ), for binomial probabilities.
Log: g(μ) = log(μ), for Poisson count data.

Fitting Additive Models
The additive model has the form
Y = α + Σ_{j=1}^p f_j(X_j) + ε,
where the error ε has mean zero. Given observations (x_i, y_i), i = 1, ..., N, a criterion like the penalized residual sum of squares (9.7) can be specified for this problem:
PRSS(α, f_1, ..., f_p) = Σ_{i=1}^N ( y_i - α - Σ_{j=1}^p f_j(x_ij) )^2 + Σ_{j=1}^p λ_j ∫ f_j''(t_j)^2 dt_j,
where the λ_j ≥ 0 are tuning parameters.

Fitting Additive Models (cont.)
Conclusions:
The minimizer of PRSS is an additive cubic spline model: each f_j is a cubic spline in X_j. Without further restrictions, however, the solution is not unique.
If Σ_{i=1}^N f_j(x_ij) = 0 for all j holds, it is easy to see that α̂ = ave(y_i).
If, in addition to this restriction, the matrix of input values has full column rank, then (9.7) is a strictly convex criterion and the minimizer is unique. If the matrix is singular, then the linear part of the f_j cannot be uniquely determined (Buja et al., 1989).

Learning GAM: Backfitting
Backfitting algorithm:
Initialize: α̂ = ave(y_i), f̂_j ≡ 0 for all j.
Cycle over j = 1, 2, ..., p, 1, 2, ..., p, ... (m cycles), refitting each f̂_j to the partial residuals,
until the functions change less than a prespecified threshold.

Backfitting: Points to Ponder
Computational advantage?
Convergence?
How to choose the fitting functions?

Algorithm 9.1: The Backfitting Algorithm for Additive Models.
1. Initialize: α̂ = (1/N) Σ_i y_i, f̂_j ≡ 0 for all j.
2. Cycle over j = 1, 2, ..., p, 1, 2, ..., p, ...:
   f̂_j ← S_j[ { y_i - α̂ - Σ_{k ≠ j} f̂_k(x_ik) }_{i=1}^N ]   (smooth the partial residuals against X_j),
   f̂_j ← f̂_j - (1/N) Σ_{i=1}^N f̂_j(x_ij)   (center),
   until the functions f̂_j change less than a prespecified threshold.
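A minimal numpy sketch of the backfitting loop, assuming a pluggable one-dimensional smoother; here a simple running-mean smoother stands in for a cubic smoothing spline, and the tolerance and cycle cap are illustrative.

```python
# Sketch of Algorithm 9.1 (backfitting) with a stand-in scatterplot smoother.
import numpy as np

def running_mean_smoother(x, r, window=15):
    """Smooth partial residuals r against predictor x with a moving average."""
    order = np.argsort(x)
    smoothed = np.convolve(r[order], np.ones(window) / window, mode="same")
    out = np.empty_like(r)
    out[order] = smoothed
    return out

def backfit(X, y, smoother=running_mean_smoother, max_cycles=20, tol=1e-4):
    N, p = X.shape
    alpha = y.mean()
    f = np.zeros((N, p))                        # fitted values f_j(x_ij)
    for _ in range(max_cycles):
        change = 0.0
        for j in range(p):
            partial_residual = y - alpha - f.sum(axis=1) + f[:, j]
            f_new = smoother(X[:, j], partial_residual)
            f_new -= f_new.mean()               # center each function
            change = max(change, np.abs(f_new - f[:, j]).max())
            f[:, j] = f_new
        if change < tol:
            break
    return alpha, f
```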

Additive Logistic Regression
The generalized additive logistic model has the form
log[ Pr(Y = 1 | X) / Pr(Y = 0 | X) ] = α + f1(X1) + ... + fp(Xp).
The functions f1, ..., fp are estimated by a backfitting algorithm within a Newton-Raphson procedure.

Logistic Regression
Model the K class posteriors in terms of K - 1 log-odds (logits):
log[ Pr(G = k | X = x) / Pr(G = K | X = x) ] = β_k0 + β_k^T x, k = 1, ..., K - 1.
The decision boundary between classes k and l is the set of points {x : δ_k(x) = δ_l(x)}, a hyperplane, where δ_k(x) = β_k0 + β_k^T x is the linear discriminant function for class k. Classify to the class with the largest value of δ_k(x).

Logistic Regression (cont.)
Parameter estimation: maximize the conditional log-likelihood, which is solved by IRLS (iteratively reweighted least squares). In particular, for the two-class case, coding y_i ∈ {0, 1} and using the Newton-Raphson algorithm to solve the score equations, the objective function is
l(β) = Σ_{i=1}^N [ y_i β^T x_i - log(1 + e^{β^T x_i}) ].
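A compact numpy sketch of the Newton-Raphson / IRLS update β ← β + (X^T W X)^{-1} X^T (y - p) for the two-class case; the iteration cap and convergence tolerance are illustrative choices.

```python
# Sketch of IRLS (Newton-Raphson) for two-class logistic regression.
# X is assumed to already contain a leading column of ones for the intercept.
import numpy as np

def fit_logistic_irls(X, y, max_iter=50, tol=1e-8):
    N, d = X.shape
    beta = np.zeros(d)
    for _ in range(max_iter):
        eta = X @ beta
        p = 1.0 / (1.0 + np.exp(-eta))   # fitted probabilities
        W = p * (1.0 - p)                # diagonal of the weight matrix
        grad = X.T @ (y - p)             # score vector
        hess = X.T @ (X * W[:, None])    # X^T W X
        step = np.linalg.solve(hess, grad)
        beta += step
        if np.abs(step).max() < tol:
            break
    return beta
```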

Logistic Regression (cont.)
Feature selection: find a subset of the variables that is sufficient to explain their joint effect on the response.
One way is to repeatedly drop the least significant coefficient and refit the model until no further terms can be dropped.
Another strategy is to refit each model with one variable removed, and perform an analysis of deviance to decide which variable to exclude.
Regularization: maximum penalized likelihood, shrinking the parameters via an L1 constraint; this imposes a margin constraint in the separable case.

Additive Logistic Regression: Backfitting
Fitting logistic regression and fitting additive logistic regression follow the same iterative scheme:
1. Start with initial estimates (β̂ for the linear model; α̂ and f̂_j for the additive model), which determine fitted probabilities p̂_i.
2. Iterate:
   a. Form the working response z_i from the current fit and the residuals y_i - p̂_i.
   b. Form the weights w_i = p̂_i(1 - p̂_i).
   c. For logistic regression, use weighted least squares to fit a linear model to z_i with weights w_i; for additive logistic regression, use the weighted backfitting algorithm to fit an additive model to z_i with weights w_i. This gives new estimates.
3. Continue step 2 until convergence.

Algorithm 9.2: Local Scoring Algorithm for the Additive Logistic Regression Model.
1. Compute starting values: α̂ = log[ȳ / (1 - ȳ)], where ȳ = ave(y_i), the sample proportion of ones, and set f̂_j ≡ 0 for all j.
2. Define η̂_i = α̂ + Σ_j f̂_j(x_ij) and p̂_i = 1 / [1 + exp(-η̂_i)]. Iterate:
   a. Construct the working target variable z_i = η̂_i + (y_i - p̂_i) / [p̂_i(1 - p̂_i)].
   b. Construct weights w_i = p̂_i(1 - p̂_i).
   c. Fit an additive model to the targets z_i with weights w_i, using a weighted backfitting algorithm. This gives new estimates α̂, f̂_j for all j.
3. Continue step 2 until the change in the functions falls below a prespecified threshold.
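A sketch of the local scoring loop. A faithful step 2(c) needs a weighted smoother; here a simple weighted moving average stands in for a weighted cubic smoothing spline, and the loop counts are illustrative.

```python
# Sketch of Algorithm 9.2 (local scoring) for additive logistic regression.
import numpy as np

def weighted_smoother(x, r, w, window=15):
    """Weighted moving-average smooth of residuals r against predictor x."""
    order = np.argsort(x)
    kernel = np.ones(window)
    num = np.convolve((r * w)[order], kernel, mode="same")
    den = np.convolve(w[order], kernel, mode="same")
    out = np.empty_like(r)
    out[order] = num / np.maximum(den, 1e-12)
    return out

def local_scoring(X, y, n_outer=10, n_backfit=10):
    N, p = X.shape
    ybar = y.mean()
    alpha = np.log(ybar / (1.0 - ybar))               # starting value
    f = np.zeros((N, p))
    for _ in range(n_outer):                          # Newton-Raphson (outer) loop
        eta = alpha + f.sum(axis=1)
        prob = 1.0 / (1.0 + np.exp(-eta))
        z = eta + (y - prob) / (prob * (1.0 - prob))  # working target variable
        w = prob * (1.0 - prob)                       # weights
        for _ in range(n_backfit):                    # weighted backfitting (inner) loop
            alpha = np.average(z - f.sum(axis=1), weights=w)
            for j in range(p):
                partial = z - alpha - f.sum(axis=1) + f[:, j]
                fj = weighted_smoother(X[:, j], partial, w)
                fj -= np.average(fj, weights=w)       # center each function
                f[:, j] = fj
    return alpha, f
```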

SPAM Detection via Additive Logistic Regression
Input variables (predictors):
48 quantitative predictors: the percentage of words in the email that match a given word. Examples include business, address, internet, free, and george. The idea was that these could be customized for individual users.
6 quantitative predictors: the percentage of characters in the email that match a given character. The characters are ch;, ch(, ch[, ch!, ch$, and ch#.
The average length of uninterrupted sequences of capital letters: CAPAVE.
The length of the longest uninterrupted sequence of capital letters: CAPMAX.
The sum of the lengths of uninterrupted sequences of capital letters: CAPTOT.
Output variable: SPAM (1) or Email (0).
The f_j's are taken as cubic smoothing splines.

SPAM Detection: Results
Confusion matrix (percentage of observations), true class by predicted class:
                     Predicted Email (0)   Predicted SPAM (1)
True Email (0)             58.5%                  2.5%
True SPAM (1)               2.7%                 36.2%
Sensitivity: probability of predicting spam given the true state is spam = 36.2 / (36.2 + 2.7) ≈ 93.1%.
Specificity: probability of predicting email given the true state is email = 58.5 / (58.5 + 2.5) ≈ 95.9%.

GAM: Summary
Useful, flexible extensions of linear models.
The backfitting algorithm is simple and modular.
The interpretability of the predictors (input variables) is not obscured.
Not suitable for very large data mining applications (why?).

9.2 Tree-Based Methods
Background: tree-based models partition the feature space into a set of rectangles, and then fit a simple model (like a constant) in each one. A popular method for tree-based regression and classification is CART (Classification And Regression Trees).

CART

CART Example: let's consider a regression problem with continuous response Y and inputs X1 and X2. To simplify matters, we consider the recursive binary partition shown in the top right panel of Figure 9.2. The corresponding regression model predicts Y with a constant c_m in region R_m:
f̂(X) = Σ_{m=1}^5 c_m I{(X1, X2) ∈ R_m}.
For illustration, we choose c1 = -5, c2 = -7, c3 = 0, c4 = 2, c5 = 4, as shown in the bottom right panel of Figure 9.2.

Regression Tree
Suppose we have a partition into M regions R1, R2, ..., RM. We model the response Y as a constant c_m in each region:
f(x) = Σ_{m=1}^M c_m I(x ∈ R_m).
If we adopt as our criterion minimization of the residual sum of squares Σ_i (y_i - f(x_i))^2, it is easy to see that the best ĉ_m is just the average of y_i in region R_m:
ĉ_m = ave(y_i | x_i ∈ R_m).

Regression Tree (cont.)
Finding the best binary partition in terms of minimum RSS is generally computationally infeasible, so a greedy algorithm is used. Starting with all of the data, consider a splitting variable j and split point s, and define the pair of half-planes
R1(j, s) = {X | X_j ≤ s} and R2(j, s) = {X | X_j > s}.
Then we seek the splitting variable j and split point s that solve
min_{j,s} [ min_{c1} Σ_{x_i ∈ R1(j,s)} (y_i - c1)^2 + min_{c2} Σ_{x_i ∈ R2(j,s)} (y_i - c2)^2 ].

Regression Tree (cont.)
For any choice of j and s, the inner minimization is solved by
ĉ1 = ave(y_i | x_i ∈ R1(j, s)) and ĉ2 = ave(y_i | x_i ∈ R2(j, s)).
For each splitting variable X_j, the determination of the split point s can be done very quickly by scanning through the observed values, so determination of the best pair (j, s) is feasible. Having found the best split, we partition the data into the two resulting regions and repeat the splitting process in each of the two regions, as sketched below.
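A minimal numpy sketch of the greedy split search: for each candidate (j, s) it computes the two region means and the resulting RSS, then recurses on the two regions. The stopping rules (minimum node size, maximum depth) are illustrative simplifications.

```python
# Sketch of the greedy CART split search for a regression tree.
import numpy as np

def best_split(X, y):
    """Return (j, s, rss) for the split minimizing the two-region RSS."""
    N, p = X.shape
    best = (None, None, np.inf)
    for j in range(p):
        for s in np.unique(X[:, j])[:-1]:            # candidate split points on X_j
            left = X[:, j] <= s
            right = ~left
            rss = ((y[left] - y[left].mean()) ** 2).sum() + \
                  ((y[right] - y[right].mean()) ** 2).sum()
            if rss < best[2]:
                best = (j, s, rss)
    return best

def grow_tree(X, y, min_node_size=5, depth=0, max_depth=3):
    """Recursively partition the data; each leaf predicts the region mean."""
    if len(y) < 2 * min_node_size or depth >= max_depth:
        return {"leaf": True, "c": y.mean()}
    j, s, _ = best_split(X, y)
    if j is None:
        return {"leaf": True, "c": y.mean()}
    left = X[:, j] <= s
    return {"leaf": False, "j": j, "s": s,
            "left": grow_tree(X[left], y[left], min_node_size, depth + 1, max_depth),
            "right": grow_tree(X[~left], y[~left], min_node_size, depth + 1, max_depth)}
```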

Regression Tree (cont.)
We index terminal nodes by m, with node m representing region R_m, and let |T| denote the number of terminal nodes in the subtree T. With
N_m = #{x_i ∈ R_m}, ĉ_m = (1/N_m) Σ_{x_i ∈ R_m} y_i, Q_m(T) = (1/N_m) Σ_{x_i ∈ R_m} (y_i - ĉ_m)^2   (9.15),
we define the cost complexity criterion
C_α(T) = Σ_{m=1}^{|T|} N_m Q_m(T) + α|T|.

Classification Tree
Let p̂_mk = (1/N_m) Σ_{x_i ∈ R_m} I(y_i = k) be the proportion of class k observations in node m, and let k(m) = argmax_k p̂_mk be the majority class in node m. Instead of the Q_m(T) defined in (9.15) for regression, we have different measures Q_m(T) of node impurity, including the following:
Misclassification error: (1/N_m) Σ_{x_i ∈ R_m} I(y_i ≠ k(m)) = 1 - p̂_m,k(m).
Gini index: Σ_{k ≠ k'} p̂_mk p̂_mk' = Σ_k p̂_mk (1 - p̂_mk).
Cross-entropy (deviance): -Σ_k p̂_mk log p̂_mk.

Classification Tree (cont.)
Example: for two classes, if p is the proportion in the second class, these three measures are
1 - max(p, 1 - p), 2p(1 - p), and -p log(p) - (1 - p) log(1 - p), respectively.
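A tiny sketch that evaluates the three two-class impurity measures as functions of p; the evaluation grid is illustrative.

```python
# Sketch: the three node-impurity measures for a two-class problem.
import numpy as np

def misclassification_error(p):
    return 1.0 - np.maximum(p, 1.0 - p)

def gini_index(p):
    return 2.0 * p * (1.0 - p)

def cross_entropy(p):
    p = np.clip(p, 1e-12, 1.0 - 1e-12)   # avoid log(0) at the endpoints
    return -p * np.log(p) - (1.0 - p) * np.log(1.0 - p)

p = np.linspace(0.0, 1.0, 5)
print(misclassification_error(p))   # 0 at p = 0 or 1, maximal (0.5) at p = 0.5
print(gini_index(p))                # maximal (0.5) at p = 0.5
print(cross_entropy(p))             # maximal (log 2) at p = 0.5
```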

9.3 PRIM: Bump Hunting
The patient rule induction method (PRIM) finds boxes (rectangular regions) in the feature space in which the response average is high. Hence it looks for maxima in the target function, an exercise known as bump hunting. A sketch of the basic top-down peeling step follows.
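A minimal numpy sketch of top-down peeling as formulated in Hastie et al.: at each pass, a fraction alpha of the remaining observations is peeled off one face of the current box, choosing the face whose removal leaves the highest response mean. The peeling fraction and minimum box support are illustrative choices, and the subsequent pasting and box-removal steps are not shown.

```python
# Sketch of PRIM top-down peeling: shrink a box so the mean response inside stays high.
import numpy as np

def prim_peel(X, y, alpha=0.1, min_support=10):
    N, p = X.shape
    box = np.column_stack([X.min(axis=0), X.max(axis=0)])   # box[j] = (low_j, high_j)
    inside = np.ones(N, dtype=bool)
    while inside.sum() > min_support:
        best_mean, best_box, best_keep = -np.inf, None, None
        for j in range(p):
            xj = X[inside, j]
            for side, q in (("low", alpha), ("high", 1.0 - alpha)):
                cut = np.quantile(xj, q)
                trial = box.copy()
                if side == "low":
                    trial[j, 0] = cut    # peel off the lowest alpha fraction on X_j
                else:
                    trial[j, 1] = cut    # peel off the highest alpha fraction on X_j
                keep = np.all((X >= trial[:, 0]) & (X <= trial[:, 1]), axis=1)
                if keep.sum() >= min_support and y[keep].mean() > best_mean:
                    best_mean, best_box, best_keep = y[keep].mean(), trial, keep
        if best_box is None:
            break
        box, inside = best_box, best_keep
    return box, inside
```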

9.4 MARS: Multivariate Adaptive Regression Splines
MARS uses expansions in piecewise linear basis functions of the form (x - t)+ and (t - x)+, where (z)+ = max(z, 0). We call the two functions a reflected pair.

MARS (cont.)
The idea of MARS is to form reflected pairs for each input X_j with knots at each observed value x_ij of that input. Therefore, the collection of basis functions is
C = { (X_j - t)+, (t - X_j)+ : t ∈ {x_1j, x_2j, ..., x_Nj}, j = 1, 2, ..., p }.
The model has the form
f(X) = β_0 + Σ_{m=1}^M β_m h_m(X),
where each h_m(X) is a function in C, or a product of two or more such functions.

MARS (cont.)
We start with only the constant function h_0(X) = 1 in the model set M, and all functions in the set C are candidate functions. At each stage we add to the model set M the pair of terms of the form
β_{M+1} h_l(X)·(X_j - t)+ + β_{M+2} h_l(X)·(t - X_j)+, with h_l ∈ M,
that produces the largest decrease in training error (the coefficients are estimated by least squares). The process is continued until the model set M contains some preset maximum number of terms, as in the sketch below.
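A simplified numpy sketch of the forward pass: each candidate is a reflected pair formed from an existing basis function and a (variable, knot) combination, scored by the least-squares training RSS. The term limit and the exhaustive candidate loop are illustrative simplifications; practical MARS implementations restrict repeated variables within a term and use fast update formulas.

```python
# Simplified sketch of the MARS forward pass.
import numpy as np

def hinge(x, t, sign):
    """Reflected-pair pieces: sign=+1 gives (x - t)+, sign=-1 gives (t - x)+."""
    return np.maximum(sign * (x - t), 0.0)

def rss_of_fit(B, y):
    """Training RSS of the least-squares fit of y on basis matrix B."""
    coef, *_ = np.linalg.lstsq(B, y, rcond=None)
    return ((y - B @ coef) ** 2).sum()

def mars_forward(X, y, max_terms=11):
    N, p = X.shape
    B = np.ones((N, 1))                        # start with the constant function h_0 = 1
    while B.shape[1] < max_terms:
        best = (np.inf, None)
        for m in range(B.shape[1]):            # existing basis function h_m
            for j in range(p):                 # splitting variable X_j
                for t in np.unique(X[:, j]):   # knot at each observed value
                    pair = np.column_stack([B[:, m] * hinge(X[:, j], t, +1),
                                            B[:, m] * hinge(X[:, j], t, -1)])
                    candidate = np.column_stack([B, pair])
                    rss = rss_of_fit(candidate, y)
                    if rss < best[0]:
                        best = (rss, candidate)
        if best[1] is None:
            break
        B = best[1]                            # keep the pair with the largest RSS decrease
    coef, *_ = np.linalg.lstsq(B, y, rcond=None)
    return B, coef
```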

MARS (cont.)
This process typically overfits the data, and so a backward deletion procedure is applied. The term whose removal causes the smallest increase in residual squared error is deleted from the model at each step, producing an estimated best model f̂_λ of each size (number of terms) λ. Generalized cross-validation is applied to estimate the optimal value of λ:
GCV(λ) = Σ_{i=1}^N ( y_i - f̂_λ(x_i) )^2 / ( 1 - M(λ)/N )^2.
The value M(λ) is the effective number of parameters in the model.

9.5 Hierarchical Mixtures of Experts
The HME method can be viewed as a variant of tree-based methods. The main difference is that the tree splits are not hard decisions but rather soft probabilistic ones. In an HME, a linear (or logistic regression) model is fitted in each terminal node, instead of a constant as in CART.

HME (cont.)
A simple two-level HME model is shown in Figure 9.13. It can be viewed as a tree with soft splits at each non-terminal node.

HME (cont.)
The terminal nodes are called experts and the non-terminal nodes are called gating networks. The idea: each expert provides a prediction about the response Y, and these are combined together by the gating networks.

HME (cont.)
The top gating network has the output
g_j(x, γ_j) = exp(γ_j^T x) / Σ_{k=1}^K exp(γ_k^T x), j = 1, 2, ..., K,
where each γ_j is a vector of unknown parameters. This represents a soft K-way split (K = 2 in Figure 9.13). Each g_j(x, γ_j) is the probability of assigning an observation with feature vector x to the j-th branch.

HME (cont.)
At the second level, the gating networks have a similar form:
g_{l|j}(x, γ_jl) = exp(γ_jl^T x) / Σ_{k=1}^K exp(γ_jk^T x), l = 1, 2, ..., K.
At the experts (terminal nodes), we have a model for the response variable of the form
Y ~ Pr(y | x, θ_jl),
which differs according to the problem.

HME (cont.)
Regression: the Gaussian linear regression model is used, with θ_jl = (β_jl, σ_jl^2) and Y = β_jl^T x + ε, ε ~ N(0, σ_jl^2).
Classification: the linear logistic regression model is used, Pr(Y = 1 | x, θ_jl) = 1 / (1 + e^{-θ_jl^T x}).
Denoting the collection of all parameters by Ψ = {γ_j, γ_jl, θ_jl}, the total probability that Y = y is
Pr(y | x, Ψ) = Σ_{j=1}^K g_j(x, γ_j) Σ_{l=1}^K g_{l|j}(x, γ_jl) Pr(y | x, θ_jl).
This is a mixture model, with the mixture probabilities determined by the gating network models.
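A small numpy sketch of the two-level mixture above for the regression case: softmax gating at both levels and Gaussian linear experts. All parameter shapes and values here are illustrative, and fitting (e.g. by the EM algorithm) is not shown.

```python
# Sketch: forward pass of a two-level HME (regression case). Parameters are illustrative.
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def hme_density(y, x, gamma_top, gamma_lower, beta, sigma2):
    """Total mixture density Pr(y | x) for a two-level HME with K branches per level.

    gamma_top:   (K, d)     top gating parameters
    gamma_lower: (K, K, d)  second-level gating parameters, one set per top branch
    beta:        (K, K, d)  Gaussian expert regression coefficients
    sigma2:      (K, K)     Gaussian expert variances
    """
    K = gamma_top.shape[0]
    g_top = softmax(gamma_top @ x)                  # soft K-way split at the root
    total = 0.0
    for j in range(K):
        g_low = softmax(gamma_lower[j] @ x)         # soft split within branch j
        for l in range(K):
            mean = beta[j, l] @ x
            expert = np.exp(-(y - mean) ** 2 / (2 * sigma2[j, l])) / np.sqrt(2 * np.pi * sigma2[j, l])
            total += g_top[j] * g_low[l] * expert
    return total

# Illustrative usage with K = 2 branches and d = 3 features (including an intercept term).
rng = np.random.default_rng(0)
x = np.array([1.0, 0.5, -1.2])
params = dict(gamma_top=rng.normal(size=(2, 3)),
              gamma_lower=rng.normal(size=(2, 2, 3)),
              beta=rng.normal(size=(2, 2, 3)),
              sigma2=np.ones((2, 2)))
print(hme_density(0.3, x, **params))
```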

End of Talk