Machine Learning – Linear Classifiers and Boosting CS 271: Fall 2007 Instructor: Padhraic Smyth.

Topic 12: Machine Learning – Part 2: 2 CS 271, Fall 2007: Professor Padhraic Smyth Final Exam: Thursday 1:30, this classroom Same format as midterm Topics: –Search: , , (no CSPs) –Propositional Logic: all of Ch 7 except FC, BC, and DPLL –First-Order Logic all of Ch 8, and 9.1, 9.2 and 9.5 –Uncertainty and Bayesian Networks All of chapter , except for subsections on continuous variables –Machine Learning Decision trees and boosting: Perceptrons: Ch 20: pages –Plus all material in class slides and homeworks related to the topics above – but not including face detection material in today’s slides

Topic 12: Machine Learning – Part 2: 3 CS 271, Fall 2007: Professor Padhraic Smyth Outline Different types of learning problems Different types of learning algorithms Supervised learning –Decision trees –Naïve Bayes –Perceptrons, Multi-layer Neural Networks –Boosting Applications: learning to detect faces in images Reading for today’s lecture: –Chapter 18.1 to 18.4 (inclusive) plus pages

Topic 12: Machine Learning – Part 2: 4 CS 271, Fall 2007: Professor Padhraic Smyth Training Data for Supervised Learning

Topic 12: Machine Learning – Part 2: 5 CS 271, Fall 2007: Professor Padhraic Smyth Classification Problem with Overlap

Topic 12: Machine Learning – Part 2: 6 CS 271, Fall 2007: Professor Padhraic Smyth Inductive learning Let x represent the input vector of attributes –x_j is the jth component of the vector x – the value of the jth attribute, j = 1,…,d Let f(x) represent the value of the target variable for x –The implicit mapping from x to f(x) is unknown to us –We just have training data pairs, D = {x, f(x)}, available We want to learn a mapping from x to f, i.e., h(x; θ) is “close” to f(x) for all training data points x, where θ are the parameters of our predictor h(·) Examples: –h(x; θ) = sign(w_1 x_1 + w_2 x_2 + w_3) –h_k(x) = (x1 OR x2) AND (x3 OR NOT(x4))
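
A minimal Python sketch of the two example hypothesis classes above; the weight layout and the 0/1 encoding of the Boolean attributes are assumptions made for illustration, not course code:

```python
import numpy as np

def h_linear(x, w):
    """Linear threshold hypothesis: h(x; w) = sign(w_1*x_1 + w_2*x_2 + w_3)."""
    return np.sign(w[0] * x[0] + w[1] * x[1] + w[2])

def h_boolean(x):
    """Boolean hypothesis: (x1 OR x2) AND (x3 OR NOT(x4)), with x a 0/1 vector."""
    x1, x2, x3, x4 = bool(x[0]), bool(x[1]), bool(x[2]), bool(x[3])
    return (x1 or x2) and (x3 or not x4)
```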

Topic 12: Machine Learning – Part 2: 7 CS 271, Fall 2007: Professor Padhraic Smyth Decision Boundaries (figure: two decision regions, Decision Region 1 and Decision Region 2, separated by a decision boundary)

Topic 12: Machine Learning – Part 2: 8 CS 271, Fall 2007: Professor Padhraic Smyth Classification in Euclidean Space A classifier is a partition of the space x into disjoint decision regions –Each region has a label attached –Regions with the same label need not be contiguous –For a new test point, find what decision region it is in, and predict the corresponding label Decision boundaries = boundaries between decision regions –The “dual representation” of decision regions We can characterize a classifier by the equations for its decision boundaries Learning a classifier = searching for the decision boundaries that optimize our objective function

Topic 12: Machine Learning – Part 2: 9 CS 271, Fall 2007: Professor Padhraic Smyth Example: Decision Trees When applied to real-valued attributes, decision trees produce “axis-parallel” linear decision boundaries Each internal node is a binary threshold of the form x_j > t? –This converts each real-valued feature into a binary one –It requires evaluation of N-1 possible threshold locations for N data points, for each real-valued attribute, at each internal node

Topic 12: Machine Learning – Part 2: 10 CS 271, Fall 2007: Professor Padhraic Smyth Decision Tree Example (figure: Debt vs. Income scatterplot of the training data)

Topic 12: Machine Learning – Part 2: 11 CS 271, Fall 2007: Professor Padhraic Smyth Decision Tree Example (figure: threshold t1 on Income; tree so far: Income > t1, with one branch still undetermined)

Topic 12: Machine Learning – Part 2: 12 CS 271, Fall 2007: Professor Padhraic Smyth Decision Tree Example (figure: thresholds t1, t2; tree so far: Income > t1, Debt > t2, with one branch still undetermined)

Topic 12: Machine Learning – Part 2: 13 CS 271, Fall 2007: Professor Padhraic Smyth Decision Tree Example (figure: thresholds t1, t2, t3; tree: Income > t1, Debt > t2, Income > t3)

Topic 12: Machine Learning – Part 2: 14 CS 271, Fall 2007: Professor Padhraic Smyth Decision Tree Example (figure: thresholds t1, t2, t3; tree: Income > t1, Debt > t2, Income > t3) Note: tree boundaries are linear and axis-parallel

Topic 12: Machine Learning – Part 2: 15 CS 271, Fall 2007: Professor Padhraic Smyth A Simple Classifier: Minimum Distance Classifier Training –Separate training vectors by class –Compute the mean for each class, μ_k, k = 1,…,m Prediction –Compute the closest mean to a test vector x’ (using Euclidean distance) –Predict the corresponding class In the 2-class case, the decision boundary is the hyperplane that is halfway between the 2 means and is orthogonal to the line connecting them This is a very simple-minded classifier – it is easy to think of cases where it will not work very well
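
A minimal Python sketch of this classifier, assuming X is an N x d NumPy array of training vectors and y a NumPy array of class labels (the names are illustrative):

```python
import numpy as np

def train_min_distance(X, y):
    """Compute the mean vector mu_k for each class k from the training data."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict_min_distance(means, x_new):
    """Predict the class whose mean is closest to x_new in Euclidean distance."""
    return min(means, key=lambda c: np.linalg.norm(x_new - means[c]))
```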

Topic 12: Machine Learning – Part 2: 16 CS 271, Fall 2007: Professor Padhraic Smyth Minimum Distance Classifier

Topic 12: Machine Learning – Part 2: 17 CS 271, Fall 2007: Professor Padhraic Smyth Another Example: Nearest Neighbor Classifier The nearest-neighbor classifier –Given a test point x’, compute the distance between x’ and each input data point –Find the closest neighbor in the training data –Assign x’ the class label of this neighbor –(sort of generalizes minimum distance classifier to exemplars) If Euclidean distance is used as the distance measure (the most common choice), the nearest neighbor classifier results in piecewise linear decision boundaries Many extensions –e.g., kNN, vote based on k-nearest neighbors –k can be chosen by cross-validation
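
A short sketch of the (k-)nearest-neighbor rule under the same assumptions (NumPy arrays, Euclidean distance); k=1 gives the plain nearest-neighbor classifier described above:

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x_new, k=1):
    """Assign x_new the majority label among its k nearest training points."""
    dists = np.linalg.norm(X_train - x_new, axis=1)   # distance to every training point
    nearest = np.argsort(dists)[:k]                   # indices of the k closest
    return Counter(y_train[nearest].tolist()).most_common(1)[0][0]
```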

Topic 12: Machine Learning – Part 2: 18 CS 271, Fall 2007: Professor Padhraic Smyth Local Decision Boundaries (figure: Feature 1 vs. Feature 2, with a query point “?”) Where is the boundary? It consists of points that are equidistant between points of class 1 and class 2 Note: locally the boundary is linear

Topic 12: Machine Learning – Part 2: 19 CS 271, Fall 2007: Professor Padhraic Smyth Finding the Decision Boundaries (figure: Feature 1 vs. Feature 2, with a query point “?”)

Topic 12: Machine Learning – Part 2: 20 CS 271, Fall 2007: Professor Padhraic Smyth Finding the Decision Boundaries (figure: Feature 1 vs. Feature 2, with a query point “?”)

Topic 12: Machine Learning – Part 2: 21 CS 271, Fall 2007: Professor Padhraic Smyth Finding the Decision Boundaries (figure: Feature 1 vs. Feature 2, with a query point “?”)

Topic 12: Machine Learning – Part 2: 22 CS 271, Fall 2007: Professor Padhraic Smyth Overall Boundary = Piecewise Linear (figure: Feature 1 vs. Feature 2, with a query point “?”; Decision Region for Class 1 and Decision Region for Class 2, separated by a piecewise linear boundary)

Topic 12: Machine Learning – Part 2: 23 CS 271, Fall 2007: Professor Padhraic Smyth Nearest-Neighbor Boundaries on this data set?

Topic 12: Machine Learning – Part 2: 24 CS 271, Fall 2007: Professor Padhraic Smyth Linear Classifiers Linear classifier = single linear decision boundary (for the 2-class case) We can always represent a linear decision boundary by a linear equation: w_1 x_1 + w_2 x_2 + … + w_d x_d = Σ_j w_j x_j = w^T x = 0 In d dimensions, this defines a (d-1)-dimensional hyperplane –d=3, we get a plane; d=2, we get a line For prediction we simply see if Σ_j w_j x_j > 0 The w_j are the weights (parameters) –Learning consists of searching in the d-dimensional weight space for the set of weights (the linear boundary) that minimizes an error measure Note that a minimum distance classifier is a special (restricted) case of a linear classifier

Topic 12: Machine Learning – Part 2: 28 CS 271, Fall 2007: Professor Padhraic Smyth The Perceptron Classifier (pages in text) The perceptron classifier is just another name for a linear classifier for 2-class data, i.e., output(x) = sign( Σ_j w_j x_j ) Loosely motivated by a simple model of how neurons fire For mathematical convenience, class labels are +1 for one class and -1 for the other Two major types of algorithms for training perceptrons –Objective function = classification accuracy (“error correcting”) –Objective function = squared error (use gradient descent) –Gradient descent is generally faster and more efficient – but there is a problem!

Topic 12: Machine Learning – Part 2: 29 CS 271, Fall 2007: Professor Padhraic Smyth The Sigmoid Function The sigmoid function is defined as σ[f] = [ 2 / ( 1 + exp[-f] ) ] - 1 Derivative of the sigmoid: ∂σ[f]/∂f = 0.5 * ( σ[f] + 1 ) * ( 1 - σ[f] ) (figure: plot of σ(f) versus f)
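
The same sigmoid and its derivative written out in Python, as a direct transcription of the formulas above:

```python
import numpy as np

def sigma(f):
    """Sigmoid from this slide: maps any real f into the interval (-1, +1)."""
    return 2.0 / (1.0 + np.exp(-f)) - 1.0

def sigma_prime(f):
    """Derivative of the sigmoid, expressed in terms of sigma(f) itself."""
    s = sigma(f)
    return 0.5 * (s + 1.0) * (1.0 - s)
```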

Topic 12: Machine Learning – Part 2: 30 CS 271, Fall 2007: Professor Padhraic Smyth Two different types of perceptron output (figure: plots of the output o(f) versus f, where f(x) = f = the weighted sum of inputs and the y-axis is the perceptron output) Thresholded output takes values +1 or -1; sigmoid output σ(f) takes real values between -1 and +1 The sigmoid is in effect an approximation to the threshold function above, but has a gradient that we can use for learning

Topic 12: Machine Learning – Part 2: 31 CS 271, Fall 2007: Professor Padhraic Smyth Squared Error for Perceptron with Sigmoidal Output Squared error = E[w] = Σ_i [ σ(f[x(i)]) - t(i) ]^2 where x(i) is the ith input vector in the training data, i=1,…,N t(i) is the ith target value (-1 or 1) f[x(i)] = Σ_j w_j x_j(i) is the weighted sum of inputs σ(f[x(i)]) is the sigmoid of the weighted sum Note that everything is fixed (once we have the training data) except for the weights w So we want to minimize E[w] as a function of w

Topic 12: Machine Learning – Part 2: 32 CS 271, Fall 2007: Professor Padhraic Smyth Gradient Descent Learning of Weights Gradient Descent Rule: w_new = w_old - η ∇( E[w] ) where ∇( E[w] ) is the gradient of the error function E with respect to the weights, and η is the learning rate (small, positive) Notes: 1. This moves us downhill in the direction - ∇( E[w] ) (steepest downhill) 2. How far we go is determined by the value of η

Topic 12: Machine Learning – Part 2: 33 CS 271, Fall 2007: Professor Padhraic Smyth Gradient Descent Update Equation From basic calculus, for a perceptron with sigmoid output and the squared error objective function, the gradient for a single input x(i) has components ∂E/∂w_j = - ( t(i) – σ[f(i)] ) σ'[f(i)] x_j(i) Gradient descent weight update rule: w_j = w_j + η ( t(i) – σ[f(i)] ) σ'[f(i)] x_j(i) – can rewrite as: w_j = w_j + η * error * c * x_j(i), where error = t(i) – σ[f(i)] and c = σ'[f(i)]

Topic 12: Machine Learning – Part 2: 34 CS 271, Fall 2007: Professor Padhraic Smyth Pseudo-code for Perceptron Training Inputs: N input vectors (each with d features), N targets (class labels), learning rate η Outputs: a set of learned weights Initialize each w_j (e.g., randomly) While (termination condition not satisfied) for i = 1:N % loop over data points (an iteration) for j = 1:d % loop over weights deltawj = η ( t(i) – σ[f(i)] ) σ'[f(i)] x_j(i) w_j = w_j + deltawj end end calculate termination condition end
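
A runnable rendering of the pseudo-code above, assuming the inputs are N d-dimensional vectors with targets in {-1, +1}, the sigmoid of slide 29, a bias folded into a constant-1 feature, and a fixed number of passes as a placeholder termination condition:

```python
import numpy as np

def sigma(f):
    return 2.0 / (1.0 + np.exp(-f)) - 1.0

def sigma_prime(f):
    s = sigma(f)
    return 0.5 * (s + 1.0) * (1.0 - s)

def train_perceptron(X, t, eta=0.05, n_passes=100):
    """Incremental gradient descent on squared error for a sigmoid-output perceptron.
    X: N x d array of inputs; t: length-N array of +1/-1 targets; eta: learning rate."""
    N, d = X.shape
    w = np.random.uniform(-0.1, 0.1, size=d)      # initialize each w_j randomly
    for _ in range(n_passes):                      # placeholder termination condition
        for i in range(N):                         # loop over data points (one iteration)
            f = np.dot(w, X[i])                    # weighted sum of inputs
            # same update as the inner j-loop above, applied to all weights at once
            w += eta * (t[i] - sigma(f)) * sigma_prime(f) * X[i]
    return w
```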

Topic 12: Machine Learning – Part 2: 35 CS 271, Fall 2007: Professor Padhraic Smyth Comments on Perceptron Learning Iteration = one pass through all of the data Algorithm presented = incremental gradient descent –Weights are updated after visiting each input example –Alternatives Batch: update weights after each iteration (typically slower) Stochastic: randomly select examples and then do weight updates (see text, p. 742) Rate of convergence –E[w] is convex as a function of w, so no local minima –So convergence is guaranteed as long as learning rate is small enough But if we make it too small, learning will be *very* slow –But if learning rate is too large, we move further, but can overshoot the solution and oscillate, and not converge at all

Topic 12: Machine Learning – Part 2: 36 CS 271, Fall 2007: Professor Padhraic Smyth Multi-Layer Perceptrons (p in text) What if we took K perceptrons and trained them in parallel and then took a weighted sum of their sigmoidal outputs? –This is a multi-layer neural network with a single “hidden” layer (the outputs of the first set of perceptrons) –If we train them jointly in parallel, then intuitively different perceptrons could learn different parts of the solution –Mathematically, they define different local decision boundaries in the input space, giving us a more powerful model How would we train such a model? –Backpropagation algorithm = clever way to do gradient descent –Bad news: many local minima and many parameters; training is hard and slow –Neural networks generated much excitement in AI research in the late 1980’s and 1990’s, but now techniques like boosting and support vector machines are often preferred
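
A forward-pass sketch of such a model (one hidden layer of K sigmoidal perceptrons whose outputs are combined by a weighted sum); the matrix shapes are assumptions for illustration, and backpropagation training is not shown:

```python
import numpy as np

def sigma(f):
    return 2.0 / (1.0 + np.exp(-f)) - 1.0

def mlp_output(x, W_hidden, w_out):
    """One-hidden-layer network: W_hidden is K x d (one row of weights per
    hidden perceptron), w_out is a length-K vector of output weights."""
    hidden = sigma(W_hidden @ x)       # K sigmoidal hidden-unit outputs
    return np.dot(w_out, hidden)       # weighted sum of the hidden outputs
```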

Learning to Detect Faces A Large-Scale Application of Machine Learning (This material is not in the text: for further information see the paper by P. Viola and M. Jones, International Journal of Computer Vision, 2004)

Topic 12: Machine Learning – Part 2: 38 CS 271, Fall 2007: Professor Padhraic Smyth Viola-Jones Face Detection Algorithm Overview : –Viola Jones technique overview –Features –Integral Images –Feature Extraction –Weak Classifiers –Boosting and classifier evaluation –Cascade of boosted classifiers –Example Results

Topic 12: Machine Learning – Part 2: 39 CS 271, Fall 2007: Professor Padhraic Smyth Viola-Jones Technique Overview Three major contributions/phases of the algorithm: –Feature extraction –Learning using boosting and decision stumps –Multi-scale detection algorithm Feature extraction and feature evaluation –Rectangular features are used; with a new image representation (the integral image), their calculation is very fast Classifier learning uses a method called boosting –A combination of simple classifiers is very effective

Topic 12: Machine Learning – Part 2: 40 CS 271, Fall 2007: Professor Padhraic Smyth Features Four basic types. –They are easy to calculate. –The white areas are subtracted from the black ones. –A special representation of the sample called the integral image makes feature extraction faster.

Topic 12: Machine Learning – Part 2: 41 CS 271, Fall 2007: Professor Padhraic Smyth Integral images (summed area tables) A representation in which the sum of the pixel values inside any rectangle can be calculated with four accesses of the integral image.
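
A sketch of the integral image and the four-access rectangle sum; zero padding on the first row and column is an implementation choice, not something specified on the slide:

```python
import numpy as np

def integral_image(img):
    """Summed-area table ii, padded so that ii[r, c] = sum of img[:r, :c]."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def rect_sum(ii, top, left, height, width):
    """Sum of the pixels inside a rectangle, using four accesses of ii."""
    bottom, right = top + height, left + width
    return ii[bottom, right] - ii[top, right] - ii[bottom, left] + ii[top, left]
```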

Topic 12: Machine Learning – Part 2: 42 CS 271, Fall 2007: Professor Padhraic Smyth Fast Computation of Pixel Sums

Topic 12: Machine Learning – Part 2: 43 CS 271, Fall 2007: Professor Padhraic Smyth Feature Extraction Features are extracted from subwindows of a sample image. –The base size for a subwindow is 24 by 24 pixels. –Each of the four feature types is scaled and shifted across all possible combinations of position and size. In a 24 pixel by 24 pixel subwindow there are ~160,000 possible features to be calculated.

Topic 12: Machine Learning – Part 2: 44 CS 271, Fall 2007: Professor Padhraic Smyth Learning with many features We have 160,000 features – how can we learn a classifier with only a few hundred training examples without overfitting? Idea: –Learn a single very simple classifier (a “weak classifier”) –Classify the data –Look at where it makes errors –Reweight the data so that the inputs where we made errors get higher weight in the learning process –Now learn a 2nd simple classifier on the weighted data –Combine the 1st and 2nd classifiers and reweight the data according to where they make errors –Learn a 3rd classifier on the weighted data –… and so on until we learn T simple classifiers –Final classifier is the combination of all T classifiers –This procedure is called “Boosting” – it works very well in practice

Topic 12: Machine Learning – Part 2: 45 CS 271, Fall 2007: Professor Padhraic Smyth “Decision Stumps” Decision stumps = decision tree with only a single root node –Certainly a very weak learner! –Say the attributes are real-valued –The decision stump algorithm looks at all possible thresholds for each attribute –Selects the one with the max information gain –The resulting classifier is a simple threshold on a single feature Outputs a +1 if the attribute is above a certain threshold Outputs a -1 if the attribute is below the threshold –Note: we can restrict the search to the n-1 “midpoint” locations between a sorted list of attribute values for each feature, so the complexity is n log n per attribute –Note this is exactly equivalent to learning a perceptron with a single intercept term (so we could also learn these stumps via gradient descent and mean squared error)
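
A sketch of fitting a stump on a single real-valued attribute. Because the stumps here are used inside boosting, this version selects the threshold and polarity by weighted classification error rather than by the information-gain criterion mentioned above; the midpoint search is the same:

```python
import numpy as np

def train_stump(x, y, weights):
    """Fit a single-node stump on one attribute x, with labels y in {-1,+1}.
    Tries the n-1 midpoints between sorted attribute values and returns the
    (threshold, polarity, weighted error) with the lowest weighted error."""
    order = np.argsort(x)
    midpoints = (x[order][:-1] + x[order][1:]) / 2.0
    best_t, best_pol, best_err = None, +1, np.inf
    for t in midpoints:
        for pol in (+1, -1):
            pred = np.where(x > t, pol, -pol)      # +1 above threshold (or flipped)
            err = weights[pred != y].sum()         # weighted error of this stump
            if err < best_err:
                best_t, best_pol, best_err = t, pol, err
    return best_t, best_pol, best_err
```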

Topic 12: Machine Learning – Part 2: 46 CS 271, Fall 2007: Professor Padhraic Smyth Boosting Example

Topic 12: Machine Learning – Part 2: 47 CS 271, Fall 2007: Professor Padhraic Smyth First classifier

Topic 12: Machine Learning – Part 2: 48 CS 271, Fall 2007: Professor Padhraic Smyth First 2 classifiers

Topic 12: Machine Learning – Part 2: 49 CS 271, Fall 2007: Professor Padhraic Smyth First 3 classifiers

Topic 12: Machine Learning – Part 2: 50 CS 271, Fall 2007: Professor Padhraic Smyth Final Classifier learned by Boosting

Topic 12: Machine Learning – Part 2: 51 CS 271, Fall 2007: Professor Padhraic Smyth Final Classifier learned by Boosting

Topic 12: Machine Learning – Part 2: 52 CS 271, Fall 2007: Professor Padhraic Smyth Boosting with Decision Stumps Viola-Jones algorithm –With K attributes (e.g., K = 160,000) we have 160,000 different decision stumps to choose from –At each stage of boosting, given the reweighted data from the previous stage: Train all K (160,000) single-feature perceptrons Select the single best classifier at this stage Combine it with the other previously selected classifiers Reweight the data Learn all K classifiers again, select the best, combine, reweight Repeat until you have T classifiers selected –Very computationally intensive: learning K decision stumps T times, e.g., K = 160,000 and T = 1000
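
An AdaBoost-style sketch of this loop. The helper best_stump(X, y, w) is hypothetical: it stands for "train all K single-feature stumps on the weighted data and return the best one as a function h with h(X) in {-1,+1}". The exact reweighting and final threshold used by Viola-Jones differ in detail from this textbook form:

```python
import numpy as np

def boost(X, y, T, best_stump):
    """X: N x K matrix of feature values; y: length-N array of +1/-1 labels."""
    N = len(y)
    w = np.ones(N) / N                               # uniform weights to start
    stumps, alphas = [], []
    for _ in range(T):
        h = best_stump(X, y, w)                      # best of the K stumps this round
        pred = h(X)
        err = np.clip(np.sum(w[pred != y]), 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)        # weight given to this classifier
        w *= np.exp(-alpha * y * pred)               # increase weight on mistakes
        w /= w.sum()                                 # renormalize
        stumps.append(h)
        alphas.append(alpha)

    def final_classifier(X_new):
        """Weighted vote of the T selected stumps, compared to a threshold of zero."""
        votes = sum(a * h(X_new) for a, h in zip(alphas, stumps))
        return np.sign(votes)
    return final_classifier
```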

Topic 12: Machine Learning – Part 2: 53 CS 271, Fall 2007: Professor Padhraic Smyth How is classifier combining done? At each stage we select the best classifier on the current iteration and combine it with the set of classifiers learned so far How are the classifiers combined? –Take the weight*feature for each classifier, sum these up, and compare to a threshold (very simple) –Boosting algorithm automatically provides the appropriate weight for each classifier and the threshold –This version of boosting is known as the AdaBoost algorithm –Some nice mathematical theory shows that it is in fact a very powerful machine learning technique

Topic 12: Machine Learning – Part 2: 54 CS 271, Fall 2007: Professor Padhraic Smyth Reduction in Error as Boosting adds Classifiers

Topic 12: Machine Learning – Part 2: 55 CS 271, Fall 2007: Professor Padhraic Smyth Useful Features Learned by Boosting

Topic 12: Machine Learning – Part 2: 56 CS 271, Fall 2007: Professor Padhraic Smyth A Cascade of Classifiers

Topic 12: Machine Learning – Part 2: 57 CS 271, Fall 2007: Professor Padhraic Smyth Detection in Real Images Basic classifier operates on 24 x 24 subwindows Scaling: –Scale the detector (rather than the images) –Features can easily be evaluated at any scale –Scale by factors of 1.25 Location: –Move detector around the image (e.g., 1 pixel increments) Final Detections –A real face may result in multiple nearby detections –Postprocess detected subwindows to combine overlapping detections into a single detection
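
A sliding-window, multi-scale scan in outline. classify_window(x, y, size) is a hypothetical stand-in for the boosted (cascaded) classifier evaluated on a subwindow via the integral image; merging overlapping detections is left out:

```python
def detect_faces(image_width, image_height, classify_window,
                 base_size=24, scale_factor=1.25, step=1):
    """Scan all positions and scales; return (x, y, size) for windows the
    classifier accepts. The detector, not the image, is scaled."""
    detections = []
    size = base_size
    while size <= min(image_width, image_height):
        for y in range(0, image_height - size + 1, step):
            for x in range(0, image_width - size + 1, step):
                if classify_window(x, y, size):
                    detections.append((x, y, size))
        size = int(round(size * scale_factor))
    return detections
```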

Topic 12: Machine Learning – Part 2: 58 CS 271, Fall 2007: Professor Padhraic Smyth Training Examples of 24x24 images with faces

Topic 12: Machine Learning – Part 2: 59 CS 271, Fall 2007: Professor Padhraic Smyth Small set of 111 Training Images

Topic 12: Machine Learning – Part 2: 60 CS 271, Fall 2007: Professor Padhraic Smyth Sample results using the Viola-Jones Detector Notice detection at multiple scales

Topic 12: Machine Learning – Part 2: 61 CS 271, Fall 2007: Professor Padhraic Smyth More Detection Examples

Topic 12: Machine Learning – Part 2: 62 CS 271, Fall 2007: Professor Padhraic Smyth Practical implementation Details discussed in the Viola-Jones paper Training time = weeks (with 5k faces and 9.5k non-faces) Final detector has 38 layers in the cascade and 6060 features On a 700 MHz processor: –Can process a 384 x 288 image in seconds (in 2003 when the paper was written)

Topic 12: Machine Learning – Part 2: 63 CS 271, Fall 2007: Professor Padhraic Smyth Summary Learning –Given a training data set, a class of models, and an error function, this is essentially a search or optimization problem Different approaches to learning –Divide-and-conquer: decision trees –Global decision boundary learning: perceptrons –Constructing classifiers incrementally: boosting Learning to recognize faces –Viola-Jones algorithm: state-of-the-art face detector, entirely learned from data, using boosting+decision-stumps