
Kansas State University, Department of Computing and Information Sciences
CIS 830: Advanced Topics in Artificial Intelligence
Lecture 16: Artificial Neural Networks Discussion (4 of 4): Modularity in Neural Learning Systems
Monday, February 28, 2000
William H. Hsu, Department of Computing and Information Sciences, KSU
Readings:
– “Modular and Hierarchical Learning Systems”, M. I. Jordan and R. Jacobs (Reference)
– Section 7.5, Mitchell (Reference)
– Lectures 21-22, CIS 798 (Fall, 1999)

Lecture Outline
Outside Reading
– Section 7.5, Mitchell
– Section 5, MLC++ manual, Kohavi and Sommerfield
– Lectures 21-22, CIS 798 (Fall, 1999)
This Week’s Paper Review: “Bagging, Boosting, and C4.5”, J. R. Quinlan
Combining Classifiers
– Problem definition and motivation: improving accuracy in concept learning
– General framework: collection of weak classifiers to be improved
Examples of Combiners (Committee Machines)
– Weighted Majority (WM), Bootstrap Aggregating (Bagging), Stacked Generalization (Stacking), Boosting the Margin
– Mixtures of Experts, Hierarchical Mixtures of Experts (HME)
Committee Machines
– Static structures: ignore input signal
– Dynamic structures (multi-pass): use input signal to improve classifiers

Combining Classifiers
Problem Definition
– Given: training data set D for supervised learning
  D drawn from a common instance space X
  Collection of inductive learning algorithms and hypothesis languages (inducers)
– Hypotheses produced by applying inducers to s(D)
  s: X → X′ (sampling, transformation, partitioning, etc.)
  Can think of hypotheses as definitions of prediction algorithms (“classifiers”)
– Return: a new prediction algorithm (not necessarily ∈ H) for x ∈ X that combines outputs from the collection of prediction algorithms
Desired Properties
– Guarantees of performance of the combined prediction
– e.g., mistake bounds; ability to improve weak classifiers
Two Solution Approaches (see the sketch below)
– Train and apply each inducer; learn combiner function(s) from the result
– Train inducers and combiner function(s) concurrently
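
The first solution approach can be pictured as a short training loop. Below is a minimal Python sketch, not from the original slides: the function names (train_sequential_combiner, combine_fit) and the fit/predict interface are illustrative assumptions, standing in for arbitrary inducers and transformations s.

import numpy as np

def train_sequential_combiner(inducers, samplers, D_X, D_y, combine_fit):
    """First approach from the slide: train each inducer on a transformed or
    subsampled view s(D), then learn a combiner from the resulting predictions.
    `inducers` have fit/predict; `samplers` map (X, y) -> (X', y'); `combine_fit`
    maps the matrix of base predictions plus y to a combining function."""
    classifiers = []
    for L, s in zip(inducers, samplers):
        Xs, ys = s(D_X, D_y)              # s: X -> X' (sampling, transformation, partitioning)
        classifiers.append(L.fit(Xs, ys))
    # collect each classifier's predictions on the original training data
    P = np.column_stack([h.predict(D_X) for h in classifiers])
    combiner = combine_fit(P, D_y)        # learn the combiner function(s) from the result
    def predict(X_new):
        P_new = np.column_stack([h.predict(X_new) for h in classifiers])
        return combiner(P_new)
    return predict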

Combining Classifiers: Ensemble Averaging
Intuitive Idea
– Combine experts (aka prediction algorithms, classifiers) using a combiner function
– Combiner may be a weight vector (WM), a vote (bagging), or a trained inducer (stacking)
Weighted Majority (WM)
– Weights each algorithm in proportion to its training set accuracy
– Use this weight in the performance element (and on test set predictions)
– Mistake bound for WM
Bootstrap Aggregating (Bagging)
– Voting system for a collection of algorithms
– Training set for each member: sampled with replacement
– Works for unstable inducers (search for h sensitive to perturbation in D)
Stacked Generalization (aka Stacking)
– Hierarchical system for combining inducers (ANNs or other inducers)
– Training sets for “leaves”: sampled with replacement; combiner: validation set
Single-Pass: Train Classification and Combiner Inducers Serially
Static Structures: Ignore Input Signal

Principle: Improving Weak Classifiers
[Figure: decision regions of the first classifier, the second classifier, the mixture model, and both classifiers combined]

Framework: Data Fusion and Mixtures of Experts
What Is a Weak Classifier?
– One not guaranteed to do better than random guessing (1 / number of classes)
– Goal: combine multiple weak classifiers, get one at least as accurate as the strongest
Data Fusion
– Intuitive idea
  Multiple sources of data (sensors, domain experts, etc.)
  Need to combine systematically, plausibly
– Solution approaches
  Control of intelligent agents: Kalman filtering
  General: mixture estimation (sources of data → predictions to be combined)
Mixtures of Experts
– Intuitive idea: “experts” express hypotheses (drawn from a hypothesis space)
– Solution approach (next time)
  Mixture model: estimate mixing coefficients
  Hierarchical mixture models: divide-and-conquer estimation method

Weighted Majority: Idea
Weight-Based Combiner
– Weighted votes: each prediction algorithm (classifier) h_i maps x ∈ X to h_i(x)
– Resulting prediction is in the set of legal class labels
– NB: as for the Bayes Optimal Classifier, the resulting predictor is not necessarily in H
Intuitive Idea
– Collect votes from a pool of prediction algorithms for each training example
– Decrease the weight associated with each algorithm that guessed wrong (by a multiplicative factor)
– Combiner predicts the weighted majority label
Performance Goals
– Improving training set accuracy
  Want to combine weak classifiers
  Want to bound the number of mistakes in terms of the minimum made by any one algorithm
– Hope that this results in good generalization quality
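
A minimal Python sketch of the online procedure just described, for binary labels; the multiplicative penalty beta = 1/2 and the variable names are illustrative assumptions, not fixed by the slide.

import numpy as np

def weighted_majority(predictions, labels, beta=0.5):
    """Online Weighted Majority over k fixed prediction algorithms.
    predictions: (m, k) array of each algorithm's 0/1 label per example.
    labels: length-m array of true 0/1 labels.  Returns final weights and
    the number of mistakes the combiner made."""
    m, k = predictions.shape
    w = np.ones(k)
    mistakes = 0
    for i in range(m):
        votes = predictions[i]
        # weighted vote: predict 1 iff the weight behind label 1 is at least as large
        y_hat = 1 if w[votes == 1].sum() >= w[votes == 0].sum() else 0
        if y_hat != labels[i]:
            mistakes += 1
        # penalize every algorithm that guessed wrong on this example
        w[votes != labels[i]] *= beta
    return w, mistakes

With beta = 1/2, the standard analysis (e.g., Mitchell, Section 7.5) bounds the combiner's mistakes by roughly 2.4 (log2 k + m*), where m* is the number of mistakes made by the best single algorithm.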

Bagging: Idea
Bootstrap Aggregating aka Bagging
– Application of bootstrap sampling
  Given: set D containing m training examples
  Create S[i] by drawing m examples at random with replacement from D
  S[i] of size m: expected to leave out about 0.37 (≈ 1/e) of the examples in D
– Bagging
  Create k bootstrap samples S[1], S[2], …, S[k]
  Train a distinct inducer on each S[i] to produce k classifiers
  Classify a new instance by classifier vote (equal weights)
Intuitive Idea
– “Two heads are better than one”
– Produce multiple classifiers from one data set
  NB: the same inducer (multiple instantiations) or different inducers may be used
  Differences in samples will “smooth out” sensitivity of L, H to D
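
A minimal Python sketch of this loop; the choice of scikit-learn decision trees as the (unstable) base inducer, k = 25, the fixed seed, and integer-coded class labels are assumptions of the sketch.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

def bagging_fit(X, y, k=25, make_inducer=DecisionTreeClassifier, seed=0):
    """Draw k bootstrap samples S[i] of size m with replacement from D and
    train one classifier per sample.  Each S[i] omits about (1 - 1/m)^m ~ 37%
    of the distinct examples in D."""
    rng = np.random.default_rng(seed)
    m = len(y)
    classifiers = []
    for _ in range(k):
        idx = rng.integers(0, m, size=m)      # m indices drawn with replacement
        classifiers.append(make_inducer().fit(X[idx], y[idx]))
    return classifiers

def bagging_predict(classifiers, X):
    """Classify by equal-weight vote over the k bootstrap-trained classifiers
    (assumes non-negative integer-coded class labels)."""
    votes = np.stack([h.predict(X) for h in classifiers])   # shape (k, n)
    return np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, votes)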

Stacked Generalization: Idea
Stacked Generalization aka Stacking
Intuitive Idea
– Train multiple learners
  Each uses a subsample of D
  May be an ANN, decision tree, etc.
– Train the combiner on a validation segment
– See [Wolpert, 1992; Bishop, 1995]
[Figure: stacked generalization network. Leaf inducers take input partitions x_11, x_12, x_21, x_22; their predictions feed combiner inducers, whose predictions feed a top-level combiner that produces y]
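
A minimal Python sketch of stacking under the setup described here (leaves trained on bootstrap samples of the training segment, combiner trained on the validation segment); the particular base learners and the logistic-regression combiner are illustrative assumptions.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

def stacking_fit(X_train, y_train, X_val, y_val):
    """Level-0 inducers train on bootstrap samples of the training segment;
    the level-1 combiner inducer trains on their predictions over the
    separate validation segment."""
    rng = np.random.default_rng(0)
    m = len(y_train)
    bases = []
    for make in (DecisionTreeClassifier, KNeighborsClassifier):
        idx = rng.integers(0, m, size=m)                    # sample with replacement
        bases.append(make().fit(X_train[idx], y_train[idx]))
    # level-1 training data: each base learner's prediction on the validation set
    Z_val = np.column_stack([h.predict(X_val) for h in bases])
    combiner = LogisticRegression().fit(Z_val, y_val)
    def predict(X_new):
        Z = np.column_stack([h.predict(X_new) for h in bases])
        return combiner.predict(Z)
    return predict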

Other Combiners
So Far: Single-Pass Combiners
– First, train each inducer
– Then, train the combiner on their output and evaluate based on a criterion
  Weighted majority: training set accuracy
  Bagging: training set accuracy
  Stacking: validation set accuracy
– Finally, apply the combiner function to get a new prediction algorithm (classifier)
  Weighted majority: weight coefficients (penalized based on mistakes)
  Bagging: voting committee of classifiers
  Stacking: validated hierarchy of classifiers with trained combiner inducer
Next: Multi-Pass Combiners
– Train inducers and combiner function(s) concurrently
– Learn how to divide and balance the learning problem across multiple inducers
– Framework: mixture estimation

Single-Pass Combiners
Combining Classifiers
– Problem definition and motivation: improving accuracy in concept learning
– General framework: collection of weak classifiers to be improved (data fusion)
Weighted Majority (WM)
– Weighting system for a collection of algorithms
  Weights each algorithm in proportion to its training set accuracy
  Use this weight in the performance element (and on test set predictions)
– Mistake bound for WM
Bootstrap Aggregating (Bagging)
– Voting system for a collection of algorithms
– Training set for each member: sampled with replacement
– Works for unstable inducers
Stacked Generalization (aka Stacking)
– Hierarchical system for combining inducers (ANNs or other inducers)
– Training sets for “leaves”: sampled with replacement; combiner: validation set
Next: Boosting the Margin, Hierarchical Mixtures of Experts

Boosting: Idea
Intuitive Idea
– Another type of static committee machine: can be used to improve any inducer
– Learn a set of classifiers from D, but reweight examples to emphasize the misclassified ones
– Final classifier ≡ weighted combination of classifiers
Different from Ensemble Averaging
– WM: all inducers trained on the same D
– Bagging, stacking: training/validation partitions, i.i.d. subsamples S[i] of D
– Boosting: data sampled according to different distributions
Problem Definition
– Given: collection of multiple inducers, large data set or example stream
– Return: combined predictor (trained committee machine)
Solution Approaches
– Filtering: use weak inducers in cascade to filter examples for downstream ones
– Resampling: reuse data from D by subsampling (don’t need huge or “infinite” D)
– Reweighting: reuse x ∈ D, but measure error over weighted x
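
The reweighting variant can be sketched in AdaBoost style; the sketch below assumes labels in {-1, +1}, decision stumps as the weak inducer, and a fixed round count, none of which are prescribed by the slide.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

def boost_by_reweighting(X, y, rounds=10):
    """Each round trains a weak classifier on the current example weights,
    then up-weights misclassified examples so the next classifier focuses on
    them; the final predictor is a weighted vote of the round classifiers."""
    m = len(y)
    dist = np.full(m, 1.0 / m)            # distribution over training examples
    hyps, alphas = [], []
    for _ in range(rounds):
        h = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=dist)
        pred = h.predict(X)
        err = dist[pred != y].sum()       # weighted training error
        if err <= 0:                      # perfect on weighted data: keep it and stop
            hyps.append(h); alphas.append(1.0)
            break
        if err >= 0.5:                    # weak-learning assumption violated: stop
            break
        alpha = 0.5 * np.log((1 - err) / err)
        hyps.append(h); alphas.append(alpha)
        dist *= np.exp(-alpha * y * pred) # up-weight the examples this round got wrong
        dist /= dist.sum()
    def predict(X_new):
        scores = sum(a * h.predict(X_new) for a, h in zip(alphas, hyps))
        return np.sign(scores)            # ties map to 0
    return predict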

Mixture Models: Idea
Intuitive Idea
– Integrate knowledge from multiple experts (or data from multiple sensors)
  Collection of inducers organized into a committee machine (e.g., modular ANN)
  Dynamic structure: take the input signal into account
– References
  [Bishop, 1995] (Sections 2.7, 9.7)
  [Haykin, 1999] (Section 7.6)
Problem Definition
– Given: collection of inducers (“experts”) L, data set D
– Perform: supervised learning using inducers and self-organization of experts
– Return: committee machine with trained gating network (combiner inducer)
Solution Approach
– Let the combiner inducer be a generalized linear model (e.g., threshold gate)
– Activation functions: linear combination, vote, “smoothed” vote (softmax)

Mixture Models: Procedure
Algorithm Combiner-Mixture-Model (D, L, Activation, k)
  m ← D.size
  FOR j ← 1 TO k DO                        // initialization
    w[j] ← 1
  UNTIL the termination condition is met, DO
    FOR j ← 1 TO k DO
      P[j] ← L[j].Update-Inducer (D)       // single training step for L[j]
    FOR i ← 1 TO m DO
      Sum[i] ← 0
      FOR j ← 1 TO k DO Sum[i] += P[j](D[i])
      Net[i] ← Compute-Activation (Sum[i]) // compute g_j ≡ Net[i][j]
      FOR j ← 1 TO k DO w[j] ← Update-Weights (w[j], Net[i], D[i])
  RETURN (Make-Predictor (P, w))
Update-Weights: Single Training Step for Mixing Coefficients
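
A concrete Python rendering of the Combiner-Mixture-Model loop above, assuming linear experts, a softmax gating network, and a unit-variance Gaussian (squared-error) likelihood for a regression target; these model choices are assumptions for illustration, not fixed by the slide.

import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def train_mixture_of_experts(X, y, k=3, epochs=50, lr=0.05, seed=0):
    """k linear experts y_j = w_j . x plus a softmax gating network
    g = softmax(A x), trained together by stochastic gradient ascent on the
    mixture log-likelihood (the Update-Inducer / Update-Weights steps)."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.normal(scale=0.1, size=(k, n))    # expert weights, one row per expert
    A = np.zeros((k, n))                      # gating-network weights
    for _ in range(epochs):
        for i in range(m):
            x, t = X[i], y[i]
            g = softmax(A @ x)                # gating coefficients g_j(x)
            mu = W @ x                        # each expert's prediction
            lik = np.exp(-0.5 * (t - mu) ** 2)
            h = g * lik / (g * lik).sum() + 1e-12   # posterior credit per expert
            W += lr * (h * (t - mu))[:, None] * x   # expert update (single step)
            A += lr * (h - g)[:, None] * x          # mixing-coefficient update
    return W, A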

Mixture Models: Properties
Unspecified Functions
– Update-Inducer
  Single training step for each expert module
  e.g., ANN: one backprop cycle, aka epoch
– Compute-Activation
  Depends on ME architecture
  Idea: smoothing of “winner-take-all” (“hard” max)
  Softmax activation function (Gaussian mixture model)
Possible Modifications
– Batch (as opposed to online) updates: lift Update-Weights out of the outer FOR loop
– Classification learning (versus concept learning): multiple y_j values
– Arrange gating networks (combiner inducers) in a hierarchy (HME)
[Figure: mixture-of-experts architecture. Input x feeds two expert networks (outputs y_1, y_2) and a gating network whose coefficients g_1, g_2 weight the expert outputs]

Generalized Linear Models (GLIMs)
Recall: Perceptron (Linear Threshold Gate) Model
Generalization of the LTG Model [McCullagh and Nelder, 1989]
– Model parameters: connection weights as for LTG
– Representational power: depends on the transfer (activation) function
Activation Function
– Type of mixture model depends (in part) on this definition
– e.g., o(x) could be softmax (x · w) [Bridle, 1990]
  NB: softmax is computed across j = 1, 2, …, k (cf. “hard” max)
  Defines a (multinomial) pdf over experts [Jordan and Jacobs, 1995]
[Figure: perceptron / linear threshold gate. Inputs x_1, …, x_n with weights w_1, …, w_n, plus bias input x_0 = 1 with weight w_0, summed and passed through the activation function]
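
Written out, the softmax gate referred to here takes the standard form (one weight vector w_j per expert; the notation is assumed here, consistent with [Bridle, 1990] and [Jordan and Jacobs, 1995]):

  g_j(x) = exp(x · w_j) / Σ_{l=1..k} exp(x · w_l),  j = 1, 2, …, k

Each g_j lies in (0, 1) and the g_j sum to 1, so the gate defines a multinomial pdf over the k experts while smoothing the winner-take-all “hard” max.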

Hierarchical Mixture of Experts (HME): Idea
Hierarchical Model
– Compare: stacked generalization network
– Difference: trained in multiple passes
Dynamic Network of GLIMs
– All examples x and targets y = c(x) identical
[Figure: two-level HME. A top-level gating network combines the outputs y_1, y_2 of two subtrees via coefficients g_1, g_2; each subtree has its own gating network (coefficients g_11, g_12, g_21, g_22) over a pair of expert networks (outputs y_11, y_12, y_21, y_22), and every module receives the same input x]

Hierarchical Mixture of Experts (HME): Procedure
Algorithm Combiner-HME (D, L, Activation, Level, k, Classes)
  m ← D.size
  FOR j ← 1 TO k DO w[j] ← 1               // initialization
  UNTIL the termination condition is met DO
    IF Level > 1 THEN
      FOR j ← 1 TO k DO P[j] ← Combiner-HME (D, L[j], Activation, Level - 1, k, Classes)
    ELSE
      FOR j ← 1 TO k DO P[j] ← L[j].Update-Inducer (D)
    FOR i ← 1 TO m DO
      Sum[i] ← 0
      FOR j ← 1 TO k DO Sum[i] += P[j](D[i])
      Net[i] ← Compute-Activation (Sum[i])
      FOR l ← 1 TO Classes DO w[l] ← Update-Weights (w[l], Net[i], D[i])
  RETURN (Make-Predictor (P, w))

Hierarchical Mixture of Experts (HME): Properties
Advantages
– Benefits of ME: base case is a single level of expert and gating networks
– More combiner inducers ⇒ more capability to decompose complex problems
Views of HME
– Expresses a divide-and-conquer strategy
  Problem is distributed across subtrees “on the fly” by combiner inducers
  Duality: data fusion ↔ problem redistribution
  Recursive decomposition: until a good fit is found to the “local” structure of D
– Implements a soft decision tree
  Mixture of experts: 1-level decision tree (decision stump)
  Information preservation compared to a traditional (hard) decision tree
  Dynamics of HME improve on the greedy (high-commitment) strategy of decision tree induction

Training Methods for Hierarchical Mixture of Experts (HME)
Stochastic Gradient Ascent
– Maximize the log-likelihood function L(Θ) = lg P(D | Θ)
– Compute the gradient of L(Θ) with respect to the parameters
– Finds MAP values
  Expert network (leaf) weights w_ij
  Gating network (interior node) weights at lower level (a_ij), upper level (a_j)
Expectation-Maximization (EM) Algorithm
– Recall definition
  Goal: maximize the incomplete-data log-likelihood function L(Θ) = lg P(D | Θ)
  Estimation step: calculate E[unobserved variables | Θ], assuming current Θ
  Maximization step: update Θ to maximize E[lg P(D | Θ)], D ≡ all variables
– Using EM: estimate with gating networks, then adjust Θ ≡ {w_ij, a_ij, a_j}
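
For concreteness, in the one-level (ME) case the quantities named here take the following standard form, following Jordan and Jacobs; the HME case nests the same computation over the gating hierarchy. This is background added for the reader, not text from the slide.

  L(Θ) = Σ_{i=1..m} lg Σ_{j=1..k} g_j(x_i; a) · P(y_i | x_i; w_j)

  Estimation step (posterior credit for expert j on example i):
  h_j(i) = g_j(x_i; a) P(y_i | x_i; w_j) / Σ_{l=1..k} g_l(x_i; a) P(y_i | x_i; w_l)

  Maximization step: choose Θ = {w_j, a} to maximize
  Σ_{i} Σ_{j} h_j(i) [ lg g_j(x_i; a) + lg P(y_i | x_i; w_j) ]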

Methods for Combining Classifiers: Committee Machines
Framework
– Think of a collection of trained inducers as a committee of experts
– Each produces predictions given input (s(D_test), i.e., new x)
– Objective: combine predictions by vote (subsampled D_train), learned weighting function, or a more complex combiner inducer (trained using D_train or D_validation)
Types of Committee Machines
– Static structures: based only on the y coming out of local inducers
  Single-pass, same data or independent subsamples: WM, bagging, stacking
  Cascade training: AdaBoost
  Iterative reweighting: boosting by reweighting
– Dynamic structures: take x into account
  Mixture models (mixture of experts aka ME): one combiner (gating) level
  Hierarchical Mixtures of Experts (HME): multiple combiner (gating) levels
  Specialist-Moderator (SM) networks: partitions of x given to combiners

Terminology [1]: Single-Pass Combiners
Combining Classifiers
– Weak classifiers: not guaranteed to do better than random guessing
– Combiners: functions f: prediction vector × instance → prediction
Single-Pass Combiners
– Weighted Majority (WM)
  Weights the prediction of each inducer according to its training-set accuracy
  Mistake bound: maximum number of mistakes before converging to the correct h
  Incrementality: ability to update parameters without complete retraining
– Bootstrap Aggregating (aka Bagging)
  Takes a vote among multiple inducers trained on different samples of D
  Subsampling: drawing one sample from another (D′ ~ D)
  Unstable inducer: small change to D causes a large change in h
– Stacked Generalization (aka Stacking)
  Hierarchical combiner: can apply recursively to re-stack
  Trains the combiner inducer using a validation set

Terminology [2]: Static and Dynamic Mixtures
Committee Machines aka Combiners
Static Structures
– Ensemble averaging
  Single-pass, separately trained inducers, common input
  Individual outputs combined to get a scalar output (e.g., linear combination)
– Boosting the margin: separately trained inducers, different input distributions
  Filtering: feed examples to trained inducers (weak classifiers), pass on to the next classifier iff a conflict is encountered (consensus model)
  Resampling: aka subsampling (S[i] of fixed size m′ resampled from D)
  Reweighting: fixed-size S[i] containing weighted examples for the inducer
Dynamic Structures
– Mixture of experts: training in the combiner inducer (aka gating network)
– Hierarchical mixtures of experts: hierarchy of inducers, combiners
Mixture Model, aka Mixture of Experts (ME)
– Expert (classification) and gating (combiner) inducers (modules, “networks”)
– Hierarchical Mixtures of Experts (HME): multiple combiner (gating) levels

Summary Points
Committee Machines aka Combiners
Static Structures (Single-Pass)
– Ensemble averaging
  For improving weak (especially unstable) classifiers
  e.g., weighted majority, bagging, stacking
– Boosting the margin
  Improve the performance of any inducer: weight examples to emphasize errors
  Variants: filtering (aka consensus), resampling (aka subsampling), reweighting
Dynamic Structures (Multi-Pass)
– Mixture of experts: training in the combiner inducer (aka gating network)
– Hierarchical mixtures of experts: hierarchy of inducers, combiners
Mixture Model (aka Mixture of Experts)
– Estimation of mixture coefficients (i.e., weights)
– Hierarchical Mixtures of Experts (HME): multiple combiner (gating) levels
Next Topic: Reasoning under Uncertainty (Probabilistic KDD)