Sparse Kernel Machines

Sparse Kernel Machines Christopher M. Bishop, Pattern Recognition and Machine Learning

Outline Introduction to kernel methods Support vector machines (SVM) Relevance vector machines (RVM) Applications Conclusions

Supervised Learning In machine learning, problems in which the training data comprise examples of input vectors along with their corresponding target vectors are called supervised learning. Example pairs (x, t): (1, 60, pass), (2, 53, fail), (3, 77, pass), (4, 34, fail), ...; the learned function y(x) maps an input to an output.

Classification [Figure: a two-dimensional input space (x1, x2) divided by the decision boundary y = 0; points with t = +1 lie in the region y > 0 and points with t = -1 in the region y < 0]

Regression [Figure: a smooth curve fitted to training points (x, t), used to predict the target value at a new input x_new]

Linear Models Linear models for regression and classification take the form y(x) = w^T \phi(x), where w is the vector of model parameters and \phi(x) is a feature extraction applied to the input x.
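
To make this concrete, here is a minimal NumPy sketch of y(x) = w^T \phi(x); the polynomial feature map and the parameter values are illustrative choices, not taken from the slides:

```python
import numpy as np

def phi(x):
    """Illustrative feature map: x -> (1, x, x^2)."""
    return np.array([1.0, x, x**2])

w = np.array([0.5, -1.0, 2.0])  # model parameters (arbitrary example values)
x = 3.0
y = w @ phi(x)                  # y(x) = w^T phi(x)
print(y)                        # 0.5 - 3.0 + 18.0 = 15.5
```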

Problems with Feature Space Why feature extraction? Working in a high-dimensional feature space lets us express complex functions, but it brings two problems: a computational one (working with very large vectors) and the curse of dimensionality.

Kernel Methods (1) Kernel function: an inner product in some feature space, k(x, x') = \phi(x)^T \phi(x'), acting as a nonlinear similarity measure. Examples: polynomial k(x, x') = (x^T x' + c)^d; Gaussian k(x, x') = \exp(-\|x - x'\|^2 / 2\sigma^2).
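
A small NumPy sketch of the two example kernels, each computing a Gram matrix of pairwise similarities; the parameter values c, d, and sigma are illustrative defaults:

```python
import numpy as np

def polynomial_kernel(X1, X2, c=1.0, d=2):
    """k(x, x') = (x^T x' + c)^d for all pairs of rows."""
    return (X1 @ X2.T + c) ** d

def gaussian_kernel(X1, X2, sigma=1.0):
    """k(x, x') = exp(-||x - x'||^2 / (2 sigma^2)) for all pairs of rows."""
    sq_dists = (np.sum(X1**2, axis=1)[:, None]
                + np.sum(X2**2, axis=1)[None, :] - 2 * X1 @ X2.T)
    return np.exp(-sq_dists / (2 * sigma**2))

X = np.random.randn(5, 3)            # 5 points in 3 dimensions
K = gaussian_kernel(X, X)            # 5x5 Gram matrix
print(K.shape, np.allclose(K, K.T))  # (5, 5) True -- a Gram matrix is symmetric
```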

Kernel Methods (2) Many linear models can be reformulated using a "dual representation" in which the kernel functions arise naturally, so the model only requires inner products between data points (inputs).
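
As a concrete instance of a dual representation, here is a minimal kernel ridge regression sketch (the model and data are illustrative; the slides do not fix a specific example): the solution lives in the dual variables a = (K + \lambda I)^{-1} t, and prediction needs only kernel evaluations against the training inputs:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (30, 1))                       # training inputs
t = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(30)  # noisy targets

def gram(A, B, sigma=1.0):
    """Gaussian-kernel Gram matrix between rows of A and rows of B."""
    d = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-d / (2 * sigma**2))

lam = 0.1
a = np.linalg.solve(gram(X, X) + lam * np.eye(len(X)), t)  # dual variables

x_new = np.array([[0.5]])
y_new = gram(x_new, X) @ a  # prediction uses only inner products (kernels)
print(y_new)                # roughly sin(0.5)
```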

Kernel Methods (3) We can benefit from the kernel trick: choosing a kernel function is equivalent to choosing \phi, so there is no need to specify which features are being used, and we save computation by never explicitly mapping the data into feature space, instead evaluating the inner products directly in the input space.
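
A quick numerical check of this equivalence for the quadratic kernel k(x, z) = (x^T z)^2 in two dimensions, whose explicit feature map is \phi(x) = (x_1^2, \sqrt{2} x_1 x_2, x_2^2) (a standard textbook example, not one given on the slides):

```python
import numpy as np

def phi(x):
    """Explicit feature map realizing the 2-D quadratic kernel (x^T z)^2."""
    x1, x2 = x
    return np.array([x1**2, np.sqrt(2) * x1 * x2, x2**2])

x = np.array([1.0, 2.0])
z = np.array([3.0, -1.0])
print((x @ z) ** 2)     # kernel evaluated in the 2-D input space: 1.0
print(phi(x) @ phi(z))  # same inner product in the 3-D feature space: 1.0
```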

Kernel Methods (4) Kernel methods exploit information about the inner products between data items. We can construct kernels indirectly, by choosing a feature-space mapping \phi, or directly, by choosing a valid kernel function. A badly chosen kernel maps the data into a space with many irrelevant features, so we need some prior knowledge of the target.

Kernel Methods (5) Two basic modules make up a kernel method: a general-purpose learning model and a problem-specific kernel function.

Kernel Methods (6) Limitation: the kernel function k(x_n, x_m) must be evaluated for all pairs x_n, x_m of training points when making predictions for new data points. A sparse kernel machine instead makes predictions using only a subset of the training points.

Outline Introduction to kernel methods Support vector machines (SVM) Relevance vector machines (RVM) Applications Conclusions

Support Vector Machines (1) Support vector machines efficiently train linear machines in kernel-induced feature spaces, while respecting the insights of generalization theory and exploiting optimization theory. Generalization theory describes how to design learning machines (especially in kernel-induced feature spaces) so that they avoid overfitting when generalizing.

Support Vector Machines (2) To avoid overfitting, the SVM modifies the error function to a "regularized form" E(w) = E_D(w) + \lambda E_W(w), where the hyperparameter \lambda balances the trade-off between the two terms. The aim of E_W is to limit the estimated functions to smooth functions. As a side effect, the SVM obtains a sparse model.
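
A minimal sketch of the regularized form, assuming a sum-of-squares data term purely for illustration (the SVM itself uses margin-based error functions, introduced below); E_D, E_W, and lambda are the names from the slide:

```python
import numpy as np

def regularized_error(w, Phi, t, lam):
    """E(w) = E_D(w) + lam * E_W(w) with a sum-of-squares data term."""
    residuals = Phi @ w - t
    E_D = 0.5 * residuals @ residuals  # data-fit term
    E_W = 0.5 * w @ w                  # weight penalty favouring smooth functions
    return E_D + lam * E_W

Phi = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])  # toy design matrix
t = np.array([0.0, 1.0, 2.0])
w = np.array([0.0, 1.0])
print(regularized_error(w, Phi, t, lam=0.1))  # perfect fit, so only 0.1 * 0.5
```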

Support Vector Machines (3) Fig. 1 Architecture of SVM

SVM for Classification (1) The mechanism that prevents overfitting in classification is the "maximum margin classifier". The SVM is fundamentally a two-class classifier.

Maximum Margin Classifiers (1) The aim of classification is to find a (D-1)-dimensional hyperplane that separates the data in a D-dimensional space. [Figure: 2-D example]
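
For a linear SVM the margin width is 2/||w||, which we can read off a fitted model. A hedged scikit-learn sketch on separable toy data (the clusters and the large value of C, approximating a hard margin, are illustrative):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 0.5, (20, 2)),  # class -1 cluster
               rng.normal(2, 0.5, (20, 2))])  # class +1 cluster
y = np.array([-1] * 20 + [1] * 20)

clf = SVC(kernel="linear", C=1e6).fit(X, y)  # very large C ~ hard margin
w = clf.coef_[0]
print(2.0 / np.linalg.norm(w))    # margin width 2 / ||w||
print(len(clf.support_vectors_))  # only a few points define the boundary
```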

Maximum Margin Classifiers (2) [Figure: the margin is the perpendicular distance between the decision boundary and the closest data points, the support vectors]

Maximum Margin Classifiers (3) [Figure: two separating boundaries compared, one with a small margin and one with a large margin]

Maximum Margin Classifiers (4) Intuitively, the maximum-margin boundary is a "robust" solution: if we have made a small error in the location of the boundary, it gives us the least chance of causing a misclassification. The concept of maximum margin is usually justified using Vapnik's statistical learning theory, and empirically it works well.

SVM for Classification (2) After the optimization process we obtain the prediction model y(x) = \sum_{n=1}^{N} a_n t_n k(x, x_n) + b, where (x_n, t_n) are the N training data. We find that a_n is zero except for the support vectors, so the model is sparse.
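
A sketch of this sparsity with scikit-learn's SVC (dataset and hyperparameters are illustrative): `dual_coef_` stores the nonzero products a_n t_n and `support_` indexes the support vectors, so predictions depend on those points alone:

```python
from sklearn.datasets import make_moons
from sklearn.svm import SVC

X, t = make_moons(n_samples=200, noise=0.15, random_state=0)
clf = SVC(kernel="rbf", gamma=1.0, C=1.0).fit(X, t)

# y(x) = sum_n a_n t_n k(x, x_n) + b involves only the support vectors:
print(len(clf.support_), "of", len(X), "training points are support vectors")
print(clf.predict([[0.0, 0.5]]))  # class label for a new input
```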

SVM for Classification (3) Fig. 2 Data from two classes in two dimensions, showing contours of constant y(x) obtained from an SVM with a Gaussian kernel function

SVM for Classification (4) For overlapping class distributions, the SVM allows some of the training points to be misclassified via a soft-margin penalty, since exact separation would lead to poor generalization. In the original formulation a misclassification incurred an infinite penalty; with the soft margin, the penalty depends on the point's distance from the boundary.

SVM for Classification (5) For multiclass problems there are several methods for combining multiple two-class SVMs: one versus the rest (each class is classified against all the others, but the binary classifiers are then trained on different tasks) and one versus one (which requires more training time). Fig. 3 Problems in multiclass classification using multiple SVMs
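
A sketch of the two combination schemes using scikit-learn's wrappers (the synthetic four-class dataset is illustrative). For K classes, one versus the rest trains K binary SVMs while one versus one trains K(K-1)/2; note that SVC itself already applies the one-versus-one scheme internally for multiclass inputs:

```python
from sklearn.datasets import make_classification
from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier
from sklearn.svm import SVC

# Synthetic 4-class problem (illustrative).
X, y = make_classification(n_samples=200, n_features=10, n_informative=4,
                           n_classes=4, random_state=0)

ovr = OneVsRestClassifier(SVC(kernel="rbf")).fit(X, y)  # K binary SVMs
ovo = OneVsOneClassifier(SVC(kernel="rbf")).fit(X, y)   # K(K-1)/2 binary SVMs
print(len(ovr.estimators_), len(ovo.estimators_))       # 4 and 6 for K = 4
```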

SVM for Regression (1) For regression problems, the mechanism that prevents overfitting is the "ε-insensitive error function". [Figure: the quadratic error function compared with the ε-insensitive error function]
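
The two error functions side by side in NumPy (the value ε = 0.5 is arbitrary): the ε-insensitive error is zero inside the tube and grows linearly outside it, whereas the quadratic error penalizes every deviation:

```python
import numpy as np

def quadratic_error(y, t):
    return 0.5 * (y - t) ** 2

def eps_insensitive_error(y, t, eps=0.5):
    """Zero inside the eps-tube, linear outside it."""
    return np.maximum(0.0, np.abs(y - t) - eps)

r = np.linspace(-2, 2, 5)           # residuals y(x) - t
print(quadratic_error(r, 0.0))      # [2.  0.5 0.  0.5 2. ]
print(eps_insensitive_error(r, 0))  # [1.5 0.5 0.  0.5 1.5]
```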

SVM for Regression (2) Fig. 4 The ε-tube: points inside the tube incur no error, while points outside incur error |y(x) - t| - ε

SVM for Regression (3) After the optimization process we obtain the prediction model y(x) = \sum_{n=1}^{N} (a_n - \hat{a}_n) k(x, x_n) + b. The coefficients are zero except for the support vectors, so the model is sparse.
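
A sketch of sparse regression with scikit-learn's SVR (data and hyperparameters are illustrative): training points lying strictly inside the ε-tube receive zero coefficients and drop out of the prediction model:

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = np.sort(rng.uniform(-3, 3, (50, 1)), axis=0)
t = np.sin(X).ravel() + 0.1 * rng.standard_normal(50)

reg = SVR(kernel="rbf", C=1.0, epsilon=0.2).fit(X, t)
print(len(reg.support_), "of", len(X), "points are support vectors")
print(reg.predict([[0.5]]))  # roughly sin(0.5)
```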

SVM for Regression (4) Fig. 5 Regression results. Support vectors lie on the boundary of the tube or outside the tube

Disadvantages It is not sparse enough: the number of support vectors required typically grows linearly with the size of the training set. Predictions are not probabilistic: in regression the SVM outputs a point estimate, and in classification a hard binary decision. Estimating the error/margin trade-off parameters requires cross-validation, which wastes computation. Kernel functions must be positive definite, which limits the choice. Multiclass classification requires awkward combinations of binary classifiers.

Outline Introduction to kernel methods Support vector machines (SVM) Relevance vector machines (RVM) Applications Conclusions

Relevance Vector Machines (1) The relevance vector machine (RVM) is a Bayesian sparse kernel technique that shares many of the characteristics of the SVM whilst avoiding its principal limitations. The RVM is based on a Bayesian formulation and provides posterior probabilistic outputs, as well as having much sparser solutions than the SVM.

Relevance Vector Machines (2) The RVM mirrors the structure of the SVM and uses a Bayesian treatment to remove the limitations of the SVM; the kernel functions are simply treated as basis functions, rather than as dot products in some space. The prior expresses a degree of belief: before any data are received, we first guess the distribution of w.

Bayesian Inference Bayesian inference allows one to model uncertainty about the world and outcomes of interest by combining prior knowledge with observational evidence; the prior captures our beliefs about the situation before seeing the data.

Relevance Vector Machines (3) In the Bayesian framework, we use a prior distribution over w to avoid overfitting, p(w | \alpha) = N(w | 0, \alpha^{-1} I), where \alpha is a hyperparameter that controls the model parameters w.

Relevance Vector Machines (4) Goal: find the most probable \alpha^* and \beta^* in order to compute the predictive distribution over t_new for a new input x_new, i.e. p(t_new | x_new, X, t, \alpha^*, \beta^*). We obtain \alpha^* and \beta^* by maximizing the marginal likelihood p(t | X, \alpha, \beta) over the training data and their target values.

Relevance Vector Machines (5) The RVM uses "automatic relevance determination" (ARD) to achieve sparsity: each weight w_m is given its own prior precision \alpha_m, p(w | \alpha) = \prod_m N(w_m | 0, \alpha_m^{-1}). While maximizing the marginal likelihood to find \alpha_m^*, some \alpha_m are driven to infinity, which forces the corresponding w_m to zero; the training points that remain are the relevance vectors.
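
A minimal sketch of the ARD re-estimation loop for RVM regression, following the standard evidence-approximation updates (Bishop Sec. 7.2 / Tipping 2001); the kernel basis, iteration count, and pruning threshold are illustrative choices:

```python
import numpy as np

def rvm_regression(Phi, t, n_iter=200, alpha_cap=1e9):
    """Evidence re-estimation for RVM regression.
    Phi: N x M design matrix of basis functions; t: N targets."""
    N, M = Phi.shape
    alpha = np.ones(M)  # one precision hyperparameter per weight
    beta = 1.0          # noise precision
    for _ in range(n_iter):
        Sigma = np.linalg.inv(np.diag(alpha) + beta * Phi.T @ Phi)  # posterior cov
        m = beta * Sigma @ Phi.T @ t                                # posterior mean
        gamma = np.clip(1.0 - alpha * np.diag(Sigma), 0.0, 1.0)     # well-determinedness
        alpha = np.minimum(gamma / (m**2 + 1e-12), alpha_cap)       # alpha_m -> inf prunes w_m
        beta = max(N - gamma.sum(), 1e-6) / (np.sum((t - Phi @ m)**2) + 1e-12)
    return m, alpha < alpha_cap  # posterior mean weights, mask of surviving weights

# Gaussian basis functions centred on the training inputs (illustrative):
rng = np.random.default_rng(0)
X = np.linspace(-3, 3, 40)
t = np.sin(X) + 0.1 * rng.standard_normal(40)
Phi = np.exp(-(X[:, None] - X[None, :])**2 / 2.0)
m, relevant = rvm_regression(Phi, t)
print(relevant.sum(), "of", len(X), "basis functions kept (relevance vectors)")
```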

Comparisons - Regression [Figure: regression results from the RVM, with a one-standard-deviation band of the predictive distribution, and from the SVM]

Comparisons - Regression

Comparison - Classification [Figure: decision boundaries obtained by the SVM and the RVM on the same data]

Comparison - Classification

Comparisons The RVM is much sparser and makes probabilistic predictions. The RVM gives better generalization in regression; the SVM gives better generalization in classification. The RVM is computationally demanding during training.

Outline Introduction to kernel methods Support vector machines (SVM) Relevance vector machines (RVM) Applications Conclusions

Applications (1) SVM for face detection

Applications (2) Marti Hearst, "Support Vector Machines," 1998

Applications (3) In feature-matching-based object tracking, SVMs are used to detect false feature matches. Weiyu Zhu et al., "Tracking of Object with SVM Regression," 2001

Applications (4) Recovering 3D human poses with the RVM. A. Agarwal and B. Triggs, "3D Human Pose from Silhouettes by Relevance Vector Regression," 2004

Outline Introduction to kernel methods Support vector machines (SVM) Relevance vector machines (RVM) Applications Conclusions

Conclusions The SVM is a learning machine based on kernel methods and generalization theory, able to perform binary classification and real-valued function approximation tasks. The RVM has the same functional form as the SVM but provides probabilistic predictions and sparser solutions.

References
N. Cristianini and J. Shawe-Taylor, An Introduction to Support Vector Machines and Other Kernel-based Learning Methods, Cambridge University Press, 2000 (companion site: www.support-vector.net)
M. E. Tipping, "Sparse Bayesian Learning and the Relevance Vector Machine," Journal of Machine Learning Research, 2001

Underfitting and Overfitting [Figure: an underfitted model (too simple) and an overfitted model (too complex), evaluated on new data; adapted from http://www.dtreg.com/svm.htm]