MIRA, SVM, k-NN
Lirong Xia
Tue, April 15, 2014

Linear Classifiers (perceptrons)
- Inputs are feature values
- Each feature has a weight
- The sum is the activation
- If the activation is:
  - Positive: output +1
  - Negative: output -1
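
A minimal sketch of this rule in Python (the feature-dict representation and function names are illustrative, not from the slides):

```python
def activation(weights, features):
    # w . f(x): sum of weight * feature value over the active features
    return sum(weights.get(k, 0.0) * v for k, v in features.items())

def classify(weights, features):
    # output +1 if the activation is positive (here: non-negative), else -1
    return +1 if activation(weights, features) >= 0 else -1
```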

Classification: Weights
- Binary case: compare features to a weight vector
- Learning: figure out the weight vector from examples

Binary Decision Rule
- In the space of feature vectors:
  - Examples are points
  - Any weight vector is a hyperplane
  - One side corresponds to Y = +1, the other to Y = -1

Learning: Binary Perceptron
- Start with weights = 0
- For each training instance:
  - Classify with current weights
  - If correct (i.e. y = y*): no change!
  - If wrong: adjust the weight vector by adding or subtracting the feature vector (subtract if y* is -1)
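
A sketch of that training loop, reusing the classify helper above; the data format (a list of (features, label) pairs with labels in {+1, -1}) is an assumption:

```python
def train_binary_perceptron(data, passes=10):
    w = {}
    for _ in range(passes):
        for features, y_star in data:
            y = classify(w, features)           # classify with current weights
            if y != y_star:                     # if wrong:
                for k, v in features.items():   # add f(x) if y* = +1, subtract if y* = -1
                    w[k] = w.get(k, 0.0) + y_star * v
    return w
```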

Multiclass Decision Rule
- If we have multiple classes:
  - A weight vector w_y for each class y
  - Score (activation) of a class y: w_y · f(x)
  - Prediction: the highest score wins, y = argmax_y w_y · f(x)
- Binary = multiclass where the negative class has weight zero
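
As a sketch (class weights stored in a dict of dicts; the names are illustrative):

```python
def multiclass_predict(weights_by_class, features):
    # pick the class y whose weight vector w_y gives the highest score w_y . f(x)
    def score(y):
        return sum(weights_by_class[y].get(k, 0.0) * v for k, v in features.items())
    return max(weights_by_class, key=score)
```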

Learning: Multiclass Perceptron
- Start with all weights = 0
- Pick up training examples one by one
- Predict with current weights
- If correct: no change!
- If wrong: lower the score of the wrong answer, raise the score of the right answer
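
A sketch of the multiclass loop, reusing multiclass_predict from above (the data format is again an assumption):

```python
def train_multiclass_perceptron(data, classes, passes=10):
    w = {y: {} for y in classes}
    for _ in range(passes):
        for features, y_star in data:
            y = multiclass_predict(w, features)
            if y != y_star:
                for k, v in features.items():
                    w[y_star][k] = w[y_star].get(k, 0.0) + v   # raise the right answer
                    w[y][k] = w[y].get(k, 0.0) - v             # lower the wrong answer
    return w
```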

Examples: Perceptron Separable Case

Today
- Fixing the Perceptron: MIRA
- Support Vector Machines
- k-nearest neighbor (kNN)

Properties of Perceptrons
- Separability: some parameter setting gets the training set perfectly correct
- Convergence: if the training data are separable, the perceptron will eventually converge (binary case)
- Mistake bound: the maximum number of mistakes (binary case) is related to the margin, or degree of separability

Examples: Perceptron Non-Separable Case

Problems with the Perceptron
- Noise: if the data isn't separable, weights might thrash
  - Averaging weight vectors over time can help (averaged perceptron)
- Mediocre generalization: finds a "barely" separating solution
- Overtraining: test / held-out accuracy usually rises, then falls
  - Overtraining is a kind of overfitting

Fixing the Perceptron
- Idea: adjust the weight update to mitigate these effects
- MIRA*: choose an update size that fixes the current mistake...
- ...but minimizes the change to w
- The +1 margin in the update's constraint helps to generalize
*Margin Infused Relaxed Algorithm

Minimum Correcting Update
- The update adds τ f(x) to w_{y*} and subtracts τ f(x) from w_y, choosing the smallest change that fixes the mistake:
  min_{w'} ||w' - w||^2  subject to  w'_{y*} · f(x) ≥ w'_y · f(x) + 1
- The minimum is not at τ = 0, or we would not have made an error, so the minimum is where the equality holds:
  τ = ((w_y - w_{y*}) · f(x) + 1) / (2 f(x) · f(x))

Maximum Step Size
- In practice, it's also bad to make updates that are too large
  - The example may be labeled incorrectly
  - You may not have enough features
- Solution: cap the maximum possible value of τ with some constant C
  - Corresponds to an optimization that assumes non-separable data
  - Usually converges faster than the perceptron
  - Usually better, especially on noisy data
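
Putting the last two slides together, a hedged sketch of one MIRA update, using the same dict-of-dicts weights as the multiclass sketch above:

```python
def mira_update(w, features, y_star, y_pred, C=0.01):
    # step size tau = ((w_y - w_y*) . f + 1) / (2 f.f), capped at C
    if y_pred == y_star:
        return
    f_dot_f = sum(v * v for v in features.values())
    if not f_dot_f:
        return
    num = sum((w[y_pred].get(k, 0.0) - w[y_star].get(k, 0.0)) * v
              for k, v in features.items()) + 1.0
    tau = min(C, num / (2.0 * f_dot_f))
    for k, v in features.items():
        w[y_star][k] = w[y_star].get(k, 0.0) + tau * v   # raise the right class
        w[y_pred][k] = w[y_pred].get(k, 0.0) - tau * v   # lower the wrong class
```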

Outline
- Fixing the Perceptron: MIRA
- Support Vector Machines
- k-nearest neighbor (kNN)

Linear Separators
- Which of these linear separators is optimal?

Support Vector Machines
- Maximizing the margin: good according to intuition, theory, and practice
- Only support vectors matter; other training examples are ignorable
- Support vector machines (SVMs) find the separator with the maximum margin
- Basically, SVMs are MIRA where you optimize over all examples at once
(Slide shows the MIRA and SVM objectives side by side.)
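
For a concrete max-margin linear classifier, here is an illustrative scikit-learn snippet (scikit-learn and the toy data are assumptions, not part of the slides):

```python
from sklearn.svm import LinearSVC

X = [[2.0, 2.0], [2.0, 3.0], [0.0, 0.0], [0.0, 1.0]]   # toy, linearly separable data
y = [+1, +1, -1, -1]
clf = LinearSVC(C=1.0)        # C trades off margin size against training errors
clf.fit(X, y)
print(clf.predict([[1.8, 2.5], [0.2, 0.5]]))   # should label the first +1, the second -1
```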

Classification: Comparison
- Naive Bayes:
  - Builds a model of the training data
  - Gives prediction probabilities
  - Strong assumptions about feature independence
  - One pass through the data (counting)
- Perceptrons / MIRA:
  - Makes fewer assumptions about the data
  - Mistake-driven learning
  - Multiple passes through the data (prediction)
  - Often more accurate

Extension: Web Search
- Information retrieval: given information needs, produce information
  - Includes, e.g., web search, question answering, and classic IR
- Web search: not exactly classification, but rather ranking
- x = "Apple Computers"

Feature-Based Ranking
- x = "Apple Computers"

Perceptron for Ranking
- Input x, candidates y
- Many feature vectors: f(x,y)
- One weight vector: w
- Prediction: y = argmax_y w · f(x,y)
- Update (if wrong): w = w + f(x,y*) - f(x,y)
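
A sketch of this ranking perceptron; the helper feats(x, y), returning a feature dict, is hypothetical:

```python
def rank_predict(w, x, candidates, feats):
    # prediction: the candidate y with the highest score w . f(x, y)
    score = lambda y: sum(w.get(k, 0.0) * v for k, v in feats(x, y).items())
    return max(candidates, key=score)

def rank_update(w, x, y_star, y_pred, feats):
    # update (if wrong): w = w + f(x, y*) - f(x, y)
    if y_pred == y_star:
        return
    for k, v in feats(x, y_star).items():
        w[k] = w.get(k, 0.0) + v
    for k, v in feats(x, y_pred).items():
        w[k] = w.get(k, 0.0) - v
```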

Pacman Apprenticeship!
- Examples are states s
- Candidates are pairs (s,a)
- "Correct" actions: those taken by the expert
- Features defined over (s,a) pairs: f(s,a)
- Score of a q-state (s,a) given by: w · f(s,a)
- How is this VERY different from reinforcement learning?

Outline
- Fixing the Perceptron: MIRA
- Support Vector Machines
- k-nearest neighbor (kNN)

Case-Based Reasoning
- Similarity for classification
- Case-based reasoning: predict an instance's label using similar instances
- Nearest-neighbor classification:
  - 1-NN: copy the label of the most similar data point
  - k-NN: let the k nearest neighbors vote (have to devise a weighting scheme)
- Key issue: how to define similarity
- Trade-off:
  - Small k gives relevant neighbors
  - Large k gives smoother functions
(Figures: generated data; 1-NN decision boundary.)
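
A minimal k-NN sketch; the names, the Euclidean-distance choice, and the unweighted vote are illustrative assumptions:

```python
import math
from collections import Counter

def knn_classify(train, x, k=3):
    # train: list of (vector, label) pairs; vectors are lists of numbers
    dist = lambda a, b: math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    neighbors = sorted(train, key=lambda ex: dist(ex[0], x))[:k]   # k closest examples
    votes = Counter(label for _, label in neighbors)               # let them vote
    return votes.most_common(1)[0][0]
```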

Parametric / Non-parametric
- Parametric models:
  - Fixed set of parameters
  - More data means better settings
- Non-parametric models:
  - Complexity of the classifier increases with data
  - Better in the limit, often worse in the non-limit
- (k)NN is non-parametric

Nearest-Neighbor Classification
- Nearest neighbor for digits:
  - Take a new image
  - Compare it to all training images
  - Assign the label of the closest example
- Encoding: an image is a vector of pixel intensities
- What's the similarity function?
  - Dot product of two image vectors?
  - Usually normalize vectors so ||x|| = 1
  - min = 0 (when?), max = 1 (when?)

Basic Similarity
- Many similarities are based on feature dot products:
  K(x, x') = f(x) · f(x') = Σ_i f_i(x) f_i(x')
- If the features are just the pixels:
  K(x, x') = x · x' = Σ_i x_i x'_i
- Note: not all similarities are of this form
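
A sketch of the normalized dot-product similarity mentioned above, assuming images are flat lists of pixel intensities:

```python
import math

def cosine_similarity(x, y):
    # dot product of unit-normalized vectors: 0 when orthogonal, 1 when identical up to scale
    dot = sum(a * b for a, b in zip(x, y))
    nx = math.sqrt(sum(a * a for a in x))
    ny = math.sqrt(sum(b * b for b in y))
    return dot / (nx * ny) if nx and ny else 0.0
```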

Invariant Metrics
- Better distances use knowledge about vision
- Invariant metrics: similarities are invariant under certain transformations
  - Rotation, scaling, translation, stroke thickness, ...
- E.g.: 16 × 16 = 256 pixels; a point in 256-dimensional space
- Small similarity in R^256 (why?)
- How to incorporate invariance into similarities?
(This and the next few slides adapted from Xiao Hu, UIUC)

Invariant Metrics
- Each example is now a curve in R^256
- Rotation-invariant similarity: s' = max over rotations of s(r(x), r(x'))
- I.e., the highest similarity between points on the two images' rotation curves

Invariant Metrics
- Problems with s':
  - Hard to compute
  - Allows large transformations (e.g., 6 → 9)
- Tangent distance:
  - 1st-order approximation at the original points
  - Easy to compute
  - Models small rotations

Template Deformation
- Deformable templates:
  - An "ideal" version of each category
  - Best-fit to the image using min variance
  - Cost for high distortion of the template
  - Cost for image points being far from the distorted template
- Used in many commercial digit recognizers
- Examples from [Hastie 94]

A Tale of Two Approaches…
- Nearest-neighbor-like approaches:
  - Can use fancy similarity functions
  - Don't actually get to do explicit learning
- Perceptron-like approaches:
  - Explicit training to reduce empirical error
  - Can't use fancy similarity, only linear
  - Or can they? Let's find out!

Perceptron Weights
- What is the final value of a weight w_y of a perceptron? Can it be any real vector?
- No! It's built by adding up feature vectors
- Can reconstruct the weight vectors (the primal representation) from update counts (the dual representation):
  w_y = Σ_n α_{y,n} f(x_n)

Dual Perceptron
- How to classify a new example x?
  score(y, x) = w_y · f(x) = Σ_n α_{y,n} (f(x_n) · f(x)) = Σ_n α_{y,n} K(x_n, x)
- If someone tells us the value of K for each pair of examples, we never need to build the weight vectors!

Dual Perceptron
- Start with zero counts (alpha)
- Pick up training instances one by one
- Try to classify x_n
- If correct: no change!
- If wrong: lower the count of the wrong class (for this instance), raise the count of the right class (for this instance)
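
A hedged sketch of that loop, with alpha[y][n] counting the updates for class y on training example n and K a black-box similarity (kernel) function:

```python
def train_dual_perceptron(xs, ys, classes, K, passes=10):
    n = len(xs)
    alpha = {y: [0.0] * n for y in classes}
    for _ in range(passes):
        for i in range(n):
            # score each class using only the counts and pairwise similarities
            score = lambda y: sum(alpha[y][m] * K(xs[m], xs[i]) for m in range(n))
            y_pred = max(classes, key=score)
            if y_pred != ys[i]:
                alpha[ys[i]][i] += 1.0   # raise the count of the right class
                alpha[y_pred][i] -= 1.0  # lower the count of the wrong class
    return alpha
```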

Kernelized Perceptron
- If we had a black box (kernel) K(x, x') which told us the dot product of two examples x and x':
  - Could work entirely with the dual representation
  - No need to ever take the dot products explicitly (the "kernel trick")
- Like nearest neighbor: work with black-box similarities
- Downside: slow if many examples get nonzero alpha
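
Example kernels one might plug into the dual perceptron sketch above (illustrative; the specific kernels and parameters are not from the slides):

```python
import math

def linear_kernel(x, y):
    return sum(a * b for a, b in zip(x, y))

def poly_kernel(x, y, degree=2, c=1.0):
    # corresponds to an implicit feature space of monomials up to `degree`
    return (linear_kernel(x, y) + c) ** degree

def rbf_kernel(x, y, sigma=1.0):
    # Gaussian similarity: 1 for identical points, decaying with distance
    sq = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-sq / (2.0 * sigma ** 2))
```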