Linear Classifiers
Based on slides by William Cohen, Andrej Karpathy, Piyush Rai
Linear Classifiers
Let's simplify life by assuming:
Every instance is a vector of real numbers, x = (x1, …, xn). (Notation: boldface x is a vector.)
First we consider only two classes, y = +1 and y = -1.
A linear classifier is a vector w of the same dimension as x that is used to make this prediction:
ŷ = sign(w · x)
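A minimal NumPy sketch of this prediction rule (the weights and instance below are made-up illustrative values, not from the slides):

```python
import numpy as np

def predict(w, x):
    # Linear classifier: predict +1 or -1 from the sign of w . x
    return 1 if np.dot(w, x) >= 0 else -1

# Illustrative weights and instance (n = 3)
w = np.array([0.5, -1.0, 2.0])
x = np.array([1.0, 0.2, 0.3])
print(predict(w, x))  # +1, since w . x = 0.5 - 0.2 + 0.6 = 0.9 > 0
```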
Visually, x · w is proportional to the length of the projection of x onto w (it equals that length times ||w||).
In 2d the decision boundary is a line; in 3d it becomes a plane; in 4d, a hyperplane; and so on. This boundary, perpendicular to w, divides the vectors classified as positive (the side w points toward) from the vectors classified as negative (the side -w points toward).
[Figure: the weight vector w, its negative -w, and the separating hyperplane; images from Wolfram MathWorld and Mediaboost.com]
Notice that the separating hyperplane goes through the origin. If we don't want this, we can preprocess our examples by prepending a constant feature 1, so that x = (1, x1, …, xn); or, equivalently, we can predict with sign(w · x + b), where b = w0 is called the bias.
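A small NumPy check of this equivalence (the particular numbers are illustrative assumptions):

```python
import numpy as np

x = np.array([1.0, 0.2, 0.3])   # an instance
w = np.array([0.5, -1.0, 2.0])  # weights
b = -0.4                        # explicit bias term b = w0

# Preprocess: prepend a constant feature 1 to x and the bias to w
x_aug = np.concatenate(([1.0], x))
w_aug = np.concatenate(([b], w))

# The two formulations give the same score
assert np.isclose(np.dot(w, x) + b, np.dot(w_aug, x_aug))
```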
Back to Image Classification
A 32×32 RGB image is 32 × 32 × 3 = 3072 numbers in total, reshaped into a column vector x.
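A sketch of this flattening step, assuming the 32×32 RGB images of the CIFAR-10 running example from Karpathy's slides (the random image and weights are placeholders):

```python
import numpy as np

# A 32x32 RGB image: 32 * 32 * 3 = 3072 numbers (placeholder random pixels)
image = np.random.randint(0, 256, size=(32, 32, 3))

x = image.reshape(3072).astype(float)  # stretch the image into a column vector x

# A two-class linear classifier scores the image with one 3072-d weight vector
w = np.random.randn(3072)              # placeholder weights
print(np.sign(w @ x))                  # +1 or -1 prediction
```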
Interactive Web Demo:
Perceptron learning
Given instance xi, compute: ŷi = sign(wk · xi)
If mistake (ŷi ≠ yi): wk+1 = wk + yi xi (a runnable sketch follows the timeline below)
1957: The perceptron algorithm by Frank Rosenblatt
1960: Perceptron Mark 1 Computer – a hardware implementation
1969: Minsky & Papert's book shows perceptrons are limited to linearly separable data
1970s: learning methods for two-layer neural networks
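A minimal NumPy implementation of the mistake-driven update above (the function name, epoch cap, and toy dataset are assumptions for illustration):

```python
import numpy as np

def perceptron_train(X, y, epochs=100):
    # X: (n_samples, n_features), y: labels in {-1, +1}.
    # Prepend a constant feature of 1 so the bias w0 is learned too.
    X = np.hstack([np.ones((X.shape[0], 1)), X])
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        mistakes = 0
        for xi, yi in zip(X, y):
            if np.sign(w @ xi) != yi:  # misclassified (or on the boundary)?
                w += yi * xi           # Rosenblatt's update: w <- w + yi * xi
                mistakes += 1
        if mistakes == 0:              # a full pass with no mistakes: converged
            break
    return w

# Toy linearly separable dataset
X = np.array([[2.0, 1.0], [1.0, 3.0], [-1.0, -2.0], [-2.0, -1.0]])
y = np.array([1, 1, -1, -1])
w = perceptron_train(X, y)  # converges to w = [1., 2., 1.] on this data
```

Because the data above is linearly separable through the origin (after the bias trick), the loop stops after one correcting update; Minsky & Papert's point is that no such w exists when the classes are not linearly separable.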