1
Crash Course on Machine Learning Part III
Several slides from Luke Zettlemoyer, Carlos Guestrin, Derek Hoiem, and Ben Taskar
2
Logistic regression and Boosting
Logistic regression: minimize the loss function, with features x_j predefined; jointly optimize the parameters w0, w1, …, wn via gradient ascent.
Boosting: minimize the loss function, with weak learners h_t(x_i) defined dynamically to fit the data; the weights α_t are learned incrementally (a new one for each training pass).
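As a hedged sketch of the two objectives the slide contrasts (using the usual log loss for logistic regression and the exponential loss that AdaBoost greedily minimizes; notation assumed):

```latex
% Logistic regression: fixed features x_j; equivalently, maximize the conditional
% log-likelihood of the labels by gradient ascent on the weights.
\[ \min_{w}\ \sum_i \ln\bigl(1 + e^{-y_i\,(w_0 + \sum_j w_j x_j^{(i)})}\bigr) \]
% Boosting: weak learners h_t chosen on the fly, one weight alpha_t added per round.
\[ \min_{\{\alpha_t\}}\ \sum_i \exp\bigl(-y_i \sum_t \alpha_t h_t(x_i)\bigr) \]
```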
3
What you need to know about Boosting
Combine weak classifiers to get a very strong classifier.
Weak classifier: slightly better than random on the training data.
Resulting very strong classifier: can get zero training error.
AdaBoost algorithm.
Boosting vs. logistic regression: both are linear models, but boosting "learns" its features; similar loss functions; a single optimization (LR) vs. incrementally improving the classification (boosting).
Most popular application of boosting: boosted decision stumps! Very simple to implement, very effective classifier.
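A minimal sketch of AdaBoost with decision stumps; the function names and the brute-force threshold search are illustrative, not from the slides:

```python
import numpy as np

def fit_stump(X, y, w):
    """Pick the (feature, threshold, sign) stump with lowest weighted error."""
    best = None
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for sign in (+1, -1):
                pred = sign * np.where(X[:, j] > thr, 1, -1)
                err = np.sum(w[pred != y])          # weighted 0/1 error
                if best is None or err < best[0]:
                    best = (err, j, thr, sign)
    return best

def adaboost(X, y, rounds=50):
    """y must be +/-1. Returns a list of (alpha, feature, threshold, sign)."""
    n = len(y)
    w = np.full(n, 1.0 / n)                         # example weights
    ensemble = []
    for _ in range(rounds):
        err, j, thr, sign = fit_stump(X, y, w)
        err = np.clip(err, 1e-12, 1 - 1e-12)
        alpha = 0.5 * np.log((1 - err) / err)       # weight of this weak learner
        pred = sign * np.where(X[:, j] > thr, 1, -1)
        w *= np.exp(-alpha * y * pred)              # up-weight the mistakes
        w /= w.sum()
        ensemble.append((alpha, j, thr, sign))
    return ensemble

def predict(ensemble, X):
    score = sum(a * s * np.where(X[:, j] > t, 1, -1) for a, j, t, s in ensemble)
    return np.sign(score)
```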
4
Linear classifiers – Which line is better?
w·x = Σj w(j) x(j). The figure shows the training data with three candidate separators: (1) perceptron, (2) minimize weights, (3) max margin.
5
Pick the one with the largest margin!
Margin γ: measures the height of the w·x + b plane at each point, and increases with distance from the separator; the level sets run from w·x + b << 0 through w·x + b = 0 up to w·x + b >> 0. Max margin has two equivalent forms, developed on the following slides. (Recall w·x = Σj w(j) x(j).)
6
How many possible solutions?
w·x + b = 0. Any other ways of writing the same dividing line? w·x + b = 0, 2w·x + 2b = 0, 1000w·x + 1000b = 0, … Any constant scaling has the same intersection with the z = 0 plane, so it gives the same dividing line! Do we really want to maximize over γ, w, b?
7
Review: Normal to a plane
w·x + b = 0. Key terms: (w·xj)/||w|| is the (signed) length of the projection of xj onto w; w/||w|| is the unit vector normal to the plane (the direction of w).
8
Idea: constrained margin
Canonical hyperplanes: w·x + b = +1, w·x + b = 0 (the dividing line), and w·x + b = −1. Assume x+ lies on the positive line and x− on the negative line, with margin γ on each side of the central plane. Final result: we can maximize the constrained margin by minimizing ||w||²!
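A short reconstruction of the argument the slide summarizes, assuming x+ and x− are closest points that differ only along the normal direction (so x+ = x− + 2γ·w/||w||):

```latex
\[ w\cdot x^{+}+b=+1, \qquad w\cdot x^{-}+b=-1 \quad\Rightarrow\quad w\cdot(x^{+}-x^{-})=2 \]
% Substituting x^+ = x^- + 2*gamma*w/||w||:
\[ 2\gamma\,\frac{w\cdot w}{\lVert w\rVert}=2 \quad\Rightarrow\quad \gamma=\frac{1}{\lVert w\rVert} \]
% So maximizing gamma is the same as minimizing ||w||, hence minimizing ||w||^2.
```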
9
Max margin using canonical hyperplanes
The figure shows the canonical hyperplanes w·x + b = +1, w·x + b = 0, and w·x + b = −1, with margin γ and the points x+ and x−. The assumption of canonical hyperplanes (at +1 and −1) changes the objective and the constraints!
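A sketch of the resulting optimization problem in its standard canonical-hyperplane form (the exact notation on the slide may differ):

```latex
\[ \min_{w,b}\ \tfrac{1}{2}\lVert w\rVert^{2}
   \quad\text{subject to}\quad y_i\,(w\cdot x_i + b)\ \ge\ 1 \ \ \forall i \]
```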
10
Support vector machines (SVMs)
The canonical hyperplanes w·x + b = +1, w·x + b = 0, and w·x + b = −1 bound the margin.
Solve efficiently by quadratic programming (QP): well-studied solution algorithms; not simple gradient ascent, but close.
The hyperplane is defined by the support vectors: we could use them as a lower-dimensional basis to write down the line, although we haven't seen how yet. More on this later.
Support vectors: data points on the canonical lines. Non-support vectors: everything else; moving them will not change w.
11
What if the data is not linearly separable?
Add More Features!!! What about overfitting?
12
What if the data is still not linearly separable?
First idea: jointly minimize w·w + C·#(mistakes).
How do we trade off the two criteria? Pick C on a development / cross-validation set.
Problems with trading off #(mistakes) against w·w through the 0/1 loss: it is not a QP anymore, it doesn't distinguish near misses from really bad mistakes, and it is NP-hard to find the optimal solution!
13
Slack variables – Hinge loss
Minimize w·w + C Σj ξj subject to yj (w·xj + b) ≥ 1 − ξj and ξj ≥ 0, with the canonical hyperplanes w·x + b = +1, 0, −1 as before.
Slack penalty C > 0: C = ∞ means we have to separate the data; C = 0 means we ignore the data entirely; select C on a dev set, etc.
For each data point: if its margin ≥ 1, we don't care; if its margin < 1, we pay a linear penalty.
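A sketch of the soft-margin problem and its equivalent unconstrained hinge-loss form (standard formulations; notation assumed):

```latex
\[ \min_{w,b,\xi}\ \tfrac{1}{2}\lVert w\rVert^{2} + C\sum_j \xi_j
   \quad\text{s.t.}\quad y_j\,(w\cdot x_j + b)\ \ge\ 1-\xi_j,\ \ \xi_j\ge 0 \]
% Eliminating the slack variables gives the unconstrained hinge-loss form:
\[ \min_{w,b}\ \tfrac{1}{2}\lVert w\rVert^{2} + C\sum_j \max\bigl(0,\ 1-y_j\,(w\cdot x_j + b)\bigr) \]
```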
14
Side Note: Different Losses
Logistic regression: log loss. Boosting: exponential loss. SVM: hinge loss. Hinge loss can lead to sparse solutions! All of our new losses approximate the 0/1 loss.
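For reference, the losses written as functions of the signed margin z = y·f(x) (standard forms; the slide's plots are not reproduced here):

```latex
\[ \text{0/1: } \mathbf{1}[z\le 0], \qquad
   \text{log (LR): } \ln(1+e^{-z}), \qquad
   \text{exponential (boosting): } e^{-z}, \qquad
   \text{hinge (SVM): } \max(0,\ 1-z) \]
```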
15
What about multiple classes?
16
One against All
Learn 3 classifiers: + vs {0,−} with weights w+, − vs {0,+} with weights w−, and 0 vs {+,−} with weights w0.
Output for x: y = argmaxi wi·x.
Could we learn the lines in the figure this way? Yes, but it would require slack variables. Any other way? Any problems?
17
Learn 1 classifier: Multiclass SVM
Simultaneously learn 3 sets of weights w+, w0, w−. How do we guarantee the correct labels win? We need new constraints! Could we learn the lines in the figure? Currently there are no slack variables… For each of the j possible classes:
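A sketch of the constraints this asks for, assuming one weight vector per class and labels y ∈ {+, 0, −}:

```latex
% The correct class must beat every other class by a margin of 1.
\[ w_{y_i}\cdot x_i \ \ge\ w_{y'}\cdot x_i + 1 \qquad \forall i,\ \ \forall\, y'\ne y_i \]
```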
18
Learn 1 classifier: Multiclass SVM
Also, can introduce slack variables, as before:
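With slack, a sketch of the resulting problem (one ξ_i per example, as in the binary case; notation assumed):

```latex
\[ \min_{\{w_y\},\,\xi}\ \tfrac{1}{2}\sum_y \lVert w_y\rVert^{2} + C\sum_i \xi_i
   \quad\text{s.t.}\quad w_{y_i}\cdot x_i \ \ge\ w_{y'}\cdot x_i + 1 - \xi_i,\ \ \xi_i\ge 0
   \qquad \forall i,\ \forall\, y'\ne y_i \]
```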
19
What you need to know
Maximizing the margin. Derivation of the SVM formulation. Slack variables and hinge loss. Relationship between SVMs and logistic regression (0/1 loss, hinge loss, log loss). Tackling multiple classes: one against all, and multiclass SVMs.
20
Constrained optimization
Example: minimize x². With no constraint, x* = 0. With the constraint x ≥ −1, still x* = 0. With the constraint x ≥ 1, x* = 1. How do we solve with constraints? Lagrange multipliers!
21
Lagrange multipliers – Dual variables
Take the problem: minimize x² subject to x ≥ b. Rewrite the constraint as x − b ≥ 0, add a Lagrange multiplier α with the new constraint α ≥ 0, and introduce the Lagrangian (objective) L(x, α) = x² − α(x − b). We will solve minx maxα≥0 L(x, α). Why does this work at all? min is fighting max!
If x < b: (x − b) < 0, so maxα −α(x − b) = ∞, and min won't let that happen.
If x > b and α ≥ 0: (x − b) > 0, so maxα −α(x − b) = 0 with α* = 0; min is cool with 0, and L(x, α) = x² (the original objective).
If x = b: α can be anything, and L(x, α) = x² (the original objective).
Since min is on the outside, it can force max to behave, and the constraints will be satisfied!
22
Dual SVM derivation (1) – the linearly separable case
Original optimization problem: the separable-case max-margin QP. Rewrite the constraints and add one Lagrange multiplier per example to form the Lagrangian:
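The formulas the slide elides, in their standard form (one multiplier αi ≥ 0 per example):

```latex
\[ \min_{w,b}\ \tfrac{1}{2}\lVert w\rVert^{2}
   \quad\text{s.t.}\quad y_i\,(w\cdot x_i+b)-1\ \ge\ 0 \ \ \forall i \]
\[ L(w,b,\alpha)\ =\ \tfrac{1}{2}\lVert w\rVert^{2}\ -\ \sum_i \alpha_i\bigl[y_i\,(w\cdot x_i+b)-1\bigr],
   \qquad \alpha_i\ge 0 \]
```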
23
Dual SVM derivation (2) – the linearly separable case
We can solve for the optimal w, b as a function of α. Also, αk > 0 implies the corresponding constraint is tight. So, in the dual formulation we will solve for α directly; w and b are computed from α (if needed).
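Setting the derivatives of the Lagrangian to zero gives the standard conditions this step refers to (a sketch, not copied from the slide):

```latex
\[ \frac{\partial L}{\partial w}=0 \ \Rightarrow\ w=\sum_i \alpha_i y_i x_i,
   \qquad \frac{\partial L}{\partial b}=0 \ \Rightarrow\ \sum_i \alpha_i y_i = 0 \]
% Complementary slackness: a positive multiplier means the constraint is tight.
\[ \alpha_k>0 \ \Rightarrow\ y_k\,(w\cdot x_k+b)=1 \]
```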
24
Dual SVM interpretation: Sparsity
Support vectors: αj > 0; they lie on the canonical lines w·x + b = ±1. Non-support vectors: αj = 0; moving them will not change w. The final solution tends to be sparse: αj = 0 for most j, so we don't need to store those points to compute w or make predictions.
25
Dual SVM formulation – linearly separable
Lagrangian: substituting the optimality conditions back in (and some math we are skipping) produces the dual SVM. Notes: it is a max instead of a min; there is one α for each training example; the sums run over all training examples; the yi are scalars, and the xi enter only through dot products.
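The dual problem the slide refers to, in its standard form:

```latex
\[ \max_{\alpha}\ \sum_i \alpha_i \ -\ \tfrac{1}{2}\sum_{i,j}\alpha_i\alpha_j\,y_i y_j\,(x_i\cdot x_j)
   \quad\text{s.t.}\quad \alpha_i\ge 0\ \ \forall i, \qquad \sum_i \alpha_i y_i=0 \]
```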
26
Dual for the non-separable case – same basic story (we will skip details)
Primal: the soft-margin problem with slack. Solve for w, b, α. Dual: what changed? We added an upper bound of C on the αi! Intuitive explanation: without slack, αi → ∞ when constraints are violated (points misclassified); the upper bound of C limits the αi, so misclassifications are allowed.
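Concretely, the only change to the dual constraints in the non-separable case is the box constraint (standard result; the derivation is skipped, as on the slide):

```latex
\[ 0\ \le\ \alpha_i\ \le\ C \qquad \forall i \]
```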
27
Wait a minute: why did we learn about the dual SVM?
There are some quadratic programming algorithms that can solve the dual faster than the primal, at least for small datasets. But, more importantly, the dual enables the "kernel trick"! Another little detour…
28
Reminder: What if the data is not linearly separable?
Use features of features of features of features…. Feature space can get really large really quickly!
29
Higher order polynomials
m: number of input features; d: degree of the polynomial. The number of monomial terms grows fast with d: for d = 6 and m = 100 there are about 1.6 billion terms. (The slide's plot shows the number of monomial terms versus the number of input dimensions for d = 2 and d = 3.)
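A quick way to check the slide's count, assuming it counts monomials of degree exactly d in m variables, i.e. C(m + d − 1, d) (an illustrative snippet, not from the slides):

```python
from math import comb

# Monomials of degree exactly d in m variables: C(m + d - 1, d)
m, d = 100, 6
print(comb(m + d - 1, d))  # 1609344100 -- roughly 1.6 billion terms
```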
30
Dual formulation only depends on dot-products, not on w!
Remember: the examples x only appear inside dot products. First, we introduce features φ(x); next, we replace the dot product with a kernel. Why is this useful?
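The substitution, written out (a sketch; φ and K as defined above):

```latex
\[ x_i\cdot x_j \ \longrightarrow\ \phi(x_i)\cdot\phi(x_j) \ =\ K(x_i,\,x_j) \]
```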
31
Efficient dot-product of polynomials
Polynomials of degree exactly d: the feature map can be written out explicitly for d = 1 and d = 2, and for any d (we will skip the proof) the identity below holds. Cool! Taking a dot product and raising it to the d-th power gives the same result as mapping into the high-dimensional space and then taking the dot product there.
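The identity being used, sketched for d = 2 in two dimensions and then in general (standard polynomial-kernel fact; the slide's exact feature map may differ):

```latex
% d = 2 in two dimensions, with phi(u) = (u_1^2, sqrt(2) u_1 u_2, u_2^2):
\[ (u\cdot v)^2 = u_1^2v_1^2 + 2\,u_1v_1u_2v_2 + u_2^2v_2^2 = \phi(u)\cdot\phi(v) \]
% In general, for the degree-d polynomial feature map:
\[ \phi(u)\cdot\phi(v)\ =\ (u\cdot v)^d \]
```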
32
Finally: the “kernel trick”!
Never compute the features explicitly! Compute the dot products in closed form: constant-time high-dimensional dot products for many classes of features. But it takes O(n²) time in the size of the dataset to compute the objective; naïve implementations are slow, and there is much work on speeding this up.
33
Common kernels
Polynomials of degree exactly d. Polynomials of degree up to d. Gaussian kernels. Sigmoid. And many others: a very active area of research! The feature space for the Gaussian kernel is infinite!
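The usual closed forms for these kernels (assumed, since the slide's formulas did not survive extraction; σ, η, ν are kernel parameters):

```latex
\[ \text{Polynomial (degree exactly } d\text{): } K(u,v)=(u\cdot v)^d, \qquad
   \text{Polynomial (degree up to } d\text{): } K(u,v)=(u\cdot v+1)^d \]
\[ \text{Gaussian: } K(u,v)=\exp\Bigl(-\tfrac{\lVert u-v\rVert^2}{2\sigma^2}\Bigr), \qquad
   \text{Sigmoid: } K(u,v)=\tanh(\eta\,u\cdot v+\nu) \]
```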
34
Overfitting? Huge feature space with kernels: what about overfitting? Maximizing the margin leads to a sparse set of support vectors, and some interesting theory says that SVMs search for a simple hypothesis with a large margin, so they are often robust to overfitting. But everything overfits sometimes! You can control it by setting C, choosing a better kernel, or varying the parameters of the kernel (width of the Gaussian, etc.).
35
SVMs with kernels
Choose a set of features and a kernel function.
Solve the dual problem to get the support vectors and the αi. At classification time, if we needed to build φ(x) explicitly we would be in trouble! Instead compute f(x) = Σi αi yi K(xi, x) + b and classify as sign(f(x)). We only need to store the support vectors and the αi!
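A minimal sketch of classification with a kernelized SVM, storing only the support vectors and their αi (names here are illustrative; training is assumed to have happened elsewhere):

```python
import numpy as np

def rbf_kernel(u, v, sigma=1.0):
    return np.exp(-np.sum((u - v) ** 2) / (2 * sigma ** 2))

def svm_predict(x, support_vectors, support_labels, alphas, b, kernel=rbf_kernel):
    """f(x) = sum_i alpha_i * y_i * K(x_i, x) + b, classified by its sign."""
    score = sum(a * y * kernel(sv, x)
                for a, y, sv in zip(alphas, support_labels, support_vectors)) + b
    return np.sign(score)
```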
36
Reminder: Kernel regression
Instance-based learning needs three things.
A distance metric: Euclidean (and many more).
How many nearby neighbors to look at? All of them.
A weighting function: wi = exp(−D(xi, query)² / Kw²). Points near the query are weighted strongly, far points weakly. The parameter Kw is the kernel width; it is very important.
How to fit with the local points? Predict the weighted average of the outputs: predict = Σ wi yi / Σ wi.
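A short sketch of the weighted-average prediction described above (Gaussian weighting with kernel width Kw; function name illustrative):

```python
import numpy as np

def kernel_regression_predict(query, X, y, kw=1.0):
    """Weighted average of training outputs, weights w_i = exp(-D(x_i, query)^2 / Kw^2)."""
    d2 = np.sum((X - query) ** 2, axis=1)   # squared Euclidean distances to the query
    w = np.exp(-d2 / kw ** 2)
    return np.sum(w * y) / np.sum(w)
```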
37
Kernels in logistic regression
Define the weights in terms of the data points, then derive a simple gradient descent rule on the αi and b. Similar tricks work for all linear models: the perceptron, etc.
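A sketch of the reparameterization the slide describes: express w through the data points, so only kernel values are ever needed.

```latex
\[ w=\sum_i \alpha_i\,\phi(x_i)
   \quad\Longrightarrow\quad
   w\cdot\phi(x)+b\ =\ \sum_i \alpha_i\,K(x_i,\,x)+b \]
```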
38
SVMs v. Kernel Regression
SVMs: learn the weights αi (and possibly the bandwidth); often a sparse solution. Kernel regression: fixed "weights", learn the bandwidth; the solution may not be sparse, but it is much simpler to implement.
39
What’s the difference between SVMs and Logistic Regression?
Loss function: SVMs use the hinge loss, logistic regression uses the log loss. High-dimensional features with kernels: SVMs, yes! Logistic regression: actually, yes as well!
40
What's the difference between SVMs and Logistic Regression? (Revisited)
Loss function: hinge loss (SVMs) vs. log-loss (logistic regression).
Kernels: yes for both.
Sparse solution: often yes (SVMs) vs. almost always no (logistic regression).
Semantics of the learned model: a linear model from the "margin" (SVMs) vs. a probability distribution (logistic regression).