Class #212 – Thursday, November 12

Class #212 – Thursday, November 12 (CMSC 471, Fall 2009)

Machine Learning III Chapter 20.5–20.8

Today’s class: support vector machines; neural networks; clustering (unsupervised learning)

Support Vector Machines These SVM slides were borrowed from Andrew Moore’s PowerPoint slides on SVMs. Andrew’s PowerPoint repository is here: http://www.cs.cmu.edu/~awm/tutorials. Comments and corrections gratefully received.

Linear Classifiers x → f → y_est, f(x, w, b) = sign(w · x − b) (each training point is labeled +1 or −1). How would you classify this data? Copyright © 2001, 2003, Andrew W. Moore

Linear Classifiers x → f → y_est, f(x, w, b) = sign(w · x − b). Any of these would be fine... but which is best? Copyright © 2001, 2003, Andrew W. Moore

Classifier Margin x → f → y_est, f(x, w, b) = sign(w · x − b). Define the margin of a linear classifier as the width that the boundary could be increased by before hitting a datapoint. Copyright © 2001, 2003, Andrew W. Moore
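
As a minimal sketch of this definition (hypothetical weights w, offset b, and data X, not from the slides): the distance from a point x to the boundary w · x − b = 0 is |w · x − b| / ‖w‖, and the closest point limits how far the boundary can be widened.

```python
import numpy as np

# Hypothetical linear classifier and data (illustration only).
w = np.array([1.0, 1.0])   # weight vector
b = 0.0                    # offset
X = np.array([[1.0, 2.0], [-2.0, -1.0], [0.5, 1.0]])

# Distance from each point to the boundary w.x - b = 0.
distances = np.abs(X @ w - b) / np.linalg.norm(w)

# The boundary can be widened by this much before hitting a datapoint.
print(distances.min())  # ~1.06
```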

Maximum Margin x → f → y_est, f(x, w, b) = sign(w · x − b). The maximum margin linear classifier is the linear classifier with the, um, maximum margin. This is the simplest kind of SVM (called an LSVM: Linear SVM). Copyright © 2001, 2003, Andrew W. Moore

Maximum Margin x → f → y_est, f(x, w, b) = sign(w · x − b). The maximum margin linear classifier is the linear classifier with the, um, maximum margin. This is the simplest kind of SVM (called an LSVM: Linear SVM). Support Vectors are those datapoints that the margin pushes up against. Copyright © 2001, 2003, Andrew W. Moore

Why Maximum Margin? Intuitively this feels safest: if we’ve made a small error in the location of the boundary (it’s been jolted in its perpendicular direction), this gives us the least chance of causing a misclassification. LOOCV (leave-one-out cross-validation) is easy, since the model is immune to removal of any non-support-vector datapoints. There’s some theory (using VC dimension) that is related to (but not the same as) the proposition that this is a good thing. Empirically it works very very well. f(x, w, b) = sign(w · x − b); the maximum margin linear classifier is the linear classifier with the, um, maximum margin; this is the simplest kind of SVM (called an LSVM); support vectors are those datapoints that the margin pushes up against. Copyright © 2001, 2003, Andrew W. Moore

Specifying a line and margin Plus-Plane “Predict Class = +1” zone Classifier Boundary Minus-Plane “Predict Class = -1” zone How do we represent this mathematically? …in m input dimensions? Copyright © 2001, 2003, Andrew W. Moore

Specifying a line and margin Plus-plane = { x : w · x + b = +1 }, Minus-plane = { x : w · x + b = −1 }, classifier boundary: w · x + b = 0. Classify as +1 if w · x + b ≥ 1; −1 if w · x + b ≤ −1; and “Universe explodes” if −1 < w · x + b < 1 (the rule makes no prediction inside the margin). Copyright © 2001, 2003, Andrew W. Moore
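
A minimal sketch of this three-way rule (hypothetical w and b; any point strictly inside the margin gets no prediction):

```python
import numpy as np

w = np.array([2.0, -1.0])  # hypothetical weight vector
b = 0.5                    # hypothetical offset

def classify(x):
    """+1 on or beyond the plus-plane, -1 on or beyond the minus-plane,
    and no prediction (the slide's 'universe explodes' case) in between."""
    score = np.dot(w, x) + b
    if score >= 1:
        return +1
    if score <= -1:
        return -1
    return None  # -1 < w.x + b < 1: inside the margin

print(classify(np.array([1.0, 0.0])))   # +1   (score = 2.5)
print(classify(np.array([-1.0, 0.0])))  # -1   (score = -1.5)
print(classify(np.array([0.0, 0.0])))   # None (score = 0.5)
```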

Computing the margin width M = Margin Width. How do we compute M in terms of w and b? Plus-plane = { x : w · x + b = +1 }, Minus-plane = { x : w · x + b = −1 }. Claim: the vector w is perpendicular to the plus-plane. Why? Copyright © 2001, 2003, Andrew W. Moore

Computing the margin width M = Margin Width. How do we compute M in terms of w and b? Plus-plane = { x : w · x + b = +1 }, Minus-plane = { x : w · x + b = −1 }. Claim: the vector w is perpendicular to the plus-plane. Why? Let u and v be two vectors on the plus-plane. What is w · (u − v)? And so of course the vector w is also perpendicular to the minus-plane. Copyright © 2001, 2003, Andrew W. Moore
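
The claim is easy to check numerically: any two points u, v on the plus-plane satisfy w · u + b = w · v + b = 1, so w · (u − v) = 0. A sketch with assumed w and b:

```python
import numpy as np

w = np.array([2.0, 1.0])   # assumed weight vector
b = -1.0                   # assumed offset

u = np.array([1.0, 0.0])   # w.u + b = 1, so u is on the plus-plane
v = np.array([0.0, 2.0])   # w.v + b = 1, so v is on the plus-plane

# w is orthogonal to every direction (u - v) lying within the plane.
print(np.dot(w, u - v))    # 0.0
```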

Computing the margin width M = Margin Width. How do we compute M in terms of w and b? Plus-plane = { x : w · x + b = +1 }, Minus-plane = { x : w · x + b = −1 }. The vector w is perpendicular to the plus-plane. Let x- be any point on the minus-plane, and let x+ be the closest plus-plane point to x- (each is any location in R^m, not necessarily a datapoint). Copyright © 2001, 2003, Andrew W. Moore

Computing the margin width M = Margin Width. Plus-plane = { x : w · x + b = +1 }, Minus-plane = { x : w · x + b = −1 }. The vector w is perpendicular to the plus-plane. Let x- be any point on the minus-plane and x+ the closest plus-plane point to x-. Claim: x+ = x- + λw for some value of λ. Why? Copyright © 2001, 2003, Andrew W. Moore

Computing the margin width M = Margin Width. Claim: x+ = x- + λw for some value of λ. Why? The line from x- to x+ is perpendicular to the planes, so to get from x- to x+, travel some distance in direction w. Copyright © 2001, 2003, Andrew W. Moore

Computing the margin width M = Margin Width. What we know: w · x+ + b = +1; w · x- + b = −1; x+ = x- + λw; |x+ − x-| = M. It’s now easy to get M in terms of w and b. Copyright © 2001, 2003, Andrew W. Moore

Computing the margin width M = Margin Width. What we know: w · x+ + b = +1; w · x- + b = −1; x+ = x- + λw; |x+ − x-| = M. Substituting: w · (x- + λw) + b = 1 ⇒ w · x- + b + λ w · w = 1 ⇒ −1 + λ w · w = 1 ⇒ λ = 2 / (w · w). Copyright © 2001, 2003, Andrew W. Moore

Computing the margin width M = Margin Width = 2 / √(w · w). What we know: w · x+ + b = +1; w · x- + b = −1; x+ = x- + λw; |x+ − x-| = M; λ = 2 / (w · w). So M = |x+ − x-| = |λw| = λ √(w · w) = 2 √(w · w) / (w · w) = 2 / √(w · w). Copyright © 2001, 2003, Andrew W. Moore
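
A one-line numeric check of the result M = 2 / √(w · w), with an assumed w:

```python
import numpy as np

w = np.array([3.0, 4.0])           # assumed weight vector; ||w|| = 5
M = 2.0 / np.sqrt(np.dot(w, w))
print(M)                           # 0.4 -- shrinking ||w|| widens the margin
```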

Learning the Maximum Margin Classifier M = Margin Width = 2 / √(w · w). Given a guess of w and b, we can: compute whether all data points are in the correct half-planes, and compute the width of the margin. So now we just need to write a program to search the space of w’s and b’s to find the widest margin that matches all the datapoints. How? Gradient descent? Simulated annealing? Matrix inversion? EM? Newton’s method? Copyright © 2001, 2003, Andrew W. Moore

Learning SVMs Trick #1: Just find the points that would be closest to the optimal separating plane (the “support vectors”) and work directly from those instances. Trick #2: Represent the task as a quadratic optimization problem, and use quadratic programming techniques to solve it. Trick #3 (the “kernel trick”): Instead of using the raw features, represent the data in a high-dimensional feature space constructed from a set of basis functions (polynomial and Gaussian combinations of the base features are the most common), then find a separating plane / SVM in that high-dimensional space. Voilà: a nonlinear classifier!
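
One way to see these tricks in action is scikit-learn’s SVC (a sketch, assuming scikit-learn is installed; the circular dataset is made up for illustration). fit() solves the quadratic program internally, and the kernel argument applies the kernel trick:

```python
import numpy as np
from sklearn import svm

# A dataset no linear boundary can separate: label = inside/outside a circle.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 > 1).astype(int)

# RBF kernel = Gaussian basis functions; fit() solves the quadratic program.
clf = svm.SVC(kernel="rbf", C=1.0)
clf.fit(X, y)

# Trick #1: only the support vectors define the decision boundary.
print(len(clf.support_), "support vectors out of", len(X), "points")
print(clf.predict([[0.0, 0.0], [2.0, 2.0]]))  # inside vs. outside the circle
```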

Common SVM basis functions z_k = (polynomial terms of x_k of degree 1 to q); z_k = (radial basis functions of x_k); z_k = (sigmoid functions of x_k). Copyright © 2001, 2003, Andrew W. Moore
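
In the dual formulation only dot products of the mapped points are needed, so each basis-function family corresponds to a kernel that can be evaluated without constructing z_k explicitly. A sketch of the three standard kernel forms (the hyperparameters q, sigma, kappa, and c are made-up defaults):

```python
import numpy as np

def poly_kernel(x, z, q=3):
    """Dot product in the space of polynomial terms up to degree q."""
    return (np.dot(x, z) + 1.0) ** q

def rbf_kernel(x, z, sigma=1.0):
    """Dot product in the (infinite-dimensional) radial-basis space."""
    return np.exp(-np.sum((x - z) ** 2) / (2.0 * sigma ** 2))

def sigmoid_kernel(x, z, kappa=1.0, c=-1.0):
    """Sigmoid (tanh) kernel."""
    return np.tanh(kappa * np.dot(x, z) + c)

x, z = np.array([1.0, 2.0]), np.array([0.5, -1.0])
print(poly_kernel(x, z), rbf_kernel(x, z), sigmoid_kernel(x, z))
```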

SVM Performance Anecdotally they work very very well indeed. Example: They are currently the best-known classifier on a well-studied hand-written-character recognition benchmark Another Example: Andrew knows several reliable people doing practical real-world work who claim that SVMs have saved them when their other favorite classifiers did poorly. There is a lot of excitement and religious fervor about SVMs as of 2001. Despite this, some practitioners (including your lecturer) are a little skeptical. Copyright © 2001, 2003, Andrew W. Moore

Unsupervised Learning: Clustering

Unsupervised Learning Learn without a “supervisor” who labels instances. Examples: clustering, scientific discovery, pattern discovery, associative learning. Clustering: given a set of instances without labels, partition them such that each instance is similar to the other instances in its partition (intra-cluster similarity) and dissimilar from the instances in other partitions (inter-cluster dissimilarity).
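
These two criteria can be made concrete by averaging pairwise distances: small within partitions, large across them. A minimal sketch (hypothetical data and labels):

```python
import numpy as np

def cluster_quality(X, labels):
    """Mean intra-cluster distance (want small) and
    mean inter-cluster distance (want large)."""
    intra, inter = [], []
    for i in range(len(X)):
        for j in range(i + 1, len(X)):
            d = np.linalg.norm(X[i] - X[j])
            (intra if labels[i] == labels[j] else inter).append(d)
    return np.mean(intra), np.mean(inter)

X = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.0], [5.1, 4.9]])
print(cluster_quality(X, [0, 0, 1, 1]))  # (small, large)
```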

Clustering Techniques Agglomerative clustering: single-link, complete-link, and average-link clustering. Partitional clustering: k-means clustering, spectral clustering.
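
As one concrete example from this list, a minimal k-means sketch (hypothetical two-blob data; a library implementation would normally be preferred):

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Minimal k-means: alternate nearest-centroid assignment
    and centroid recomputation until the centroids stop moving."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest centroid.
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute each centroid as the mean of its assigned points.
        new_centers = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
            for j in range(k)
        ])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers

# Two well-separated blobs; k-means should recover them.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.5, size=(50, 2)),
               rng.normal(5, 0.5, size=(50, 2))])
labels, centers = kmeans(X, k=2)
print(centers)  # one centroid near (0, 0), one near (5, 5)
```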