CS480/680: Intro to ML, Lecture 09: Kernels (23/02/2019, Yao-Liang Yu)
Announcements. Final exam: Wednesday, December 19, 4:00-6:30 PM, PAC 8. Project proposal due tonight; use the provided LaTeX template! A3 will be available tonight.
Outline: Feature map, Kernels, The Kernel Trick, Advanced
XOR revisited
Quadratic classifier (weights to be learned)
The power of lifting: a feature map $\varphi : \mathbb{R}^d \to \mathbb{R}^{d^2+d+1}$
Example
Does it work?
Curse of dimensionality? Computation now happens in the lifted space. But all we need is the dot product $\varphi(\mathbf{x})^\top \varphi(\mathbf{x}')$, and that is still computable in O(d)!
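The O(d) claim can be checked numerically. A minimal sketch for the quadratic kernel $k(\mathbf{x},\mathbf{x}') = (\mathbf{x}^\top\mathbf{x}')^2$: its explicit feature map lives in $\mathbb{R}^{d^2}$, yet the same dot product falls out of one ordinary inner product (function names here are illustrative):

```python
def phi(x):
    """Explicit quadratic feature map: all d^2 pairwise products x_j * x_k."""
    return [xj * xk for xj in x for xk in x]

def kernel(x, z):
    """The same quantity computed in O(d): square of the ordinary dot product."""
    return sum(xj * zj for xj, zj in zip(x, z)) ** 2

x, z = [1.0, 2.0, 3.0], [4.0, 5.0, 6.0]
explicit = sum(a * b for a, b in zip(phi(x), phi(z)))
print(explicit, kernel(x, z))  # both equal (1*4 + 2*5 + 3*6)^2 = 32^2 = 1024.0
```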
Feature transform. NN: learn $\varphi$ simultaneously with $\mathbf{w}$. Here: choose a nonlinear $\varphi$ so that $\varphi(\mathbf{x})^\top \varphi(\mathbf{x}') = f(\mathbf{x}^\top \mathbf{x}')$ for some $f : \mathbb{R} \to \mathbb{R}$, to save computation.
Outline: Feature map, Kernels, The Kernel Trick, Advanced
Reverse engineering. Start with some function $k$ such that there exists a feature transform $\varphi$ with $k(\mathbf{x}, \mathbf{x}') = \varphi(\mathbf{x})^\top \varphi(\mathbf{x}')$. As long as $k$ is efficiently computable, we don't care about the dimension of $\varphi$ (it could even be infinite!). Such a $k$ is called a (reproducing) kernel.
Examples: Polynomial kernel, Gaussian kernel, Laplace kernel, Matérn kernel
Verifying a kernel. For any $n$ and any $\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_n$, the kernel matrix $K$ with $K_{ij} = k(\mathbf{x}_i, \mathbf{x}_j)$ is symmetric and positive semidefinite ($K \succeq 0$). Symmetric: $K_{ij} = K_{ji}$. Positive semidefinite (PSD): $\alpha^\top K \alpha \ge 0$ for all $\alpha \in \mathbb{R}^n$.
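The two conditions can be spot-checked empirically. The sketch below tests symmetry exactly and the PSD condition on random vectors $\alpha$ (a Monte-Carlo check, not a proof; names and tolerances are illustrative):

```python
import math
import random

def rbf(x, z, sigma=1.0):
    """Gaussian kernel used as the test subject."""
    return math.exp(-sum((a - b) ** 2 for a, b in zip(x, z)) / (2 * sigma ** 2))

def check_kernel_matrix(kernel, points, trials=200):
    """Build K_ij = k(x_i, x_j); check symmetry, then a^T K a >= 0 on random a."""
    n = len(points)
    K = [[kernel(points[i], points[j]) for j in range(n)] for i in range(n)]
    symmetric = all(abs(K[i][j] - K[j][i]) < 1e-12
                    for i in range(n) for j in range(n))
    for _ in range(trials):
        a = [random.gauss(0, 1) for _ in range(n)]
        quad = sum(a[i] * K[i][j] * a[j] for i in range(n) for j in range(n))
        if quad < -1e-9:
            return False
    return symmetric
```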
Kernel calculus. If $k$ is a kernel, so is $\lambda k$ for any $\lambda \ge 0$. If $k_1$ and $k_2$ are kernels, so is $k_1 + k_2$ ($k_1$ with feature map $\varphi_1$, $k_2$ with $\varphi_2$; which feature map goes with $k_1 + k_2$?). If $k_1$ and $k_2$ are kernels, so is $k_1 k_2$.
Outline: Feature map, Kernels, The Kernel Trick, Advanced
Kernel SVM (dual): expressed entirely in terms of $\alpha$, while $\varphi$ remains implicit…
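The kernelized soft-margin dual takes the standard form below (a reconstruction of the slide's equation; the regularization constant $C$ and sign conventions may differ from the lecture's):

```latex
\max_{\alpha \in \mathbb{R}^n} \;
  \sum_{i=1}^n \alpha_i
  - \frac{1}{2} \sum_{i=1}^n \sum_{j=1}^n
    \alpha_i \alpha_j \, y_i y_j \, k(\mathbf{x}_i, \mathbf{x}_j)
\quad \text{s.t.} \quad
  0 \le \alpha_i \le C, \qquad \sum_{i=1}^n \alpha_i y_i = 0.
```

Note that $\varphi$ enters only through $k(\mathbf{x}_i, \mathbf{x}_j) = \varphi(\mathbf{x}_i)^\top \varphi(\mathbf{x}_j)$, so the dual never needs the lifted coordinates.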
Does it work? $k(\mathbf{x}, \mathbf{x}') = (\mathbf{x}^\top \mathbf{x}' + 1)^2$
Testing. Given a test sample $\mathbf{x}'$, how do we perform testing? Again, no explicit access to $\varphi$! The prediction uses only the kernel, the dual variables, and the training set.
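Concretely, prediction sums kernel evaluations against the training points, $f(\mathbf{x}') = \sum_i \alpha_i y_i k(\mathbf{x}_i, \mathbf{x}') + b$. A minimal sketch (all names are illustrative; `b` is the bias term):

```python
def predict(x_new, alphas, ys, xs, b, kernel):
    """Kernel SVM prediction: f(x') = sum_i alpha_i * y_i * k(x_i, x') + b.
    phi is never formed; only kernel evaluations against training points."""
    return sum(a * y * kernel(xi, x_new)
               for a, y, xi in zip(alphas, ys, xs)) + b

# Usage with a linear kernel on a toy training set:
linear = lambda x, z: sum(u * v for u, v in zip(x, z))
print(predict([3.0], [1.0, 1.0], [1.0, -1.0], [[1.0], [2.0]], 0.0, linear))
# 1*1*3 + 1*(-1)*6 + 0 = -3.0
```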
Tradeoff. Previously: training O(nd), testing O(d). Kernelized: training O(n²d), testing O(nd). It is nice to avoid explicit dependence on the feature dimension h (which could be infinite), but what if n is also large… (maybe later).
Learning the kernel (Lanckriet et al. '04): take a nonnegative combination of t pre-selected kernels, with the coefficients ζ learned simultaneously with the classifier.
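By the kernel calculus above, any nonnegative combination of kernels is again a kernel. A sketch with fixed coefficients (in multiple kernel learning the ζ would be optimized jointly, which is omitted here; names are illustrative):

```python
def combine_kernels(base_kernels, zetas):
    """Return k(x, z) = sum_t zeta_t * k_t(x, z) for nonnegative zeta_t."""
    if any(z < 0 for z in zetas):
        raise ValueError("coefficients must be nonnegative")
    def k(x, z):
        return sum(zeta * kt(x, z) for zeta, kt in zip(zetas, base_kernels))
    return k

# Usage: combine two copies of the linear kernel with weights 1 and 2.
linear = lambda x, z: sum(u * v for u, v in zip(x, z))
k = combine_kernels([linear, linear], [1.0, 2.0])
print(k([1.0], [2.0]))  # 1*2 + 2*2 = 6.0
```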
Logistic regression revisited: kernelize it. Representer Theorem (Wahba, Schölkopf, Herbrich, Smola, Dinuzzo, …): the optimal $\mathbf{w}$ has the form $\mathbf{w} = \sum_{i=1}^n \beta_i \varphi(\mathbf{x}_i)$.
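Plugging the representer form into logistic regression means optimizing over β instead of w, touching φ only through the kernel matrix. A minimal gradient-descent sketch (labels in {−1, +1}; the learning rate, step count, and lack of regularization are illustrative choices):

```python
import math

def fit_kernel_logreg(xs, ys, kernel, lr=0.1, steps=500):
    """Learn beta in f(x) = sum_i beta_i k(x_i, x) by gradient descent on
    the average logistic loss (1/n) sum_j log(1 + exp(-y_j f(x_j)))."""
    n = len(xs)
    K = [[kernel(xs[i], xs[j]) for j in range(n)] for i in range(n)]
    beta = [0.0] * n
    for _ in range(steps):
        # f(x_j) = sum_i beta_i K[i][j]
        f = [sum(beta[i] * K[i][j] for i in range(n)) for j in range(n)]
        # d loss / d beta_i = -(1/n) sum_j y_j K[i][j] / (1 + exp(y_j f_j))
        grad = [-sum(ys[j] * K[i][j] / (1 + math.exp(ys[j] * f[j]))
                     for j in range(n)) / n
                for i in range(n)]
        beta = [b - lr * g for b, g in zip(beta, grad)]
    return beta
```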
Outline: Feature map, Kernels, The Kernel Trick, Advanced
A closer look. What does it mean to use a kernel $k$? Testing: $f(x) = w^\top \varphi(x) = \sum_{i=1}^n \beta_i k(x_i, x)$. Take $k(x, x') = (x^\top x')^2$. Then, writing the training points as $z_i$, $f(x) = \sum_{i=1}^n \beta_i (x^\top z_i)^2 = \sum_{i=1}^n \beta_i \sum_{j=1}^d \sum_{k=1}^d x_j x_k z_{ji} z_{ki} = \sum_{j=1}^d \sum_{k=1}^d x_j x_k \left( \sum_{i=1}^n \beta_i z_{ji} z_{ki} \right) = \sum_{j=1}^d \sum_{k=1}^d \mu_{jk} x_j x_k$.
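The rearrangement above can be verified numerically: the kernel expansion is exactly the quadratic form with coefficients $\mu_{jk} = \sum_i \beta_i z_{ji} z_{ki}$. A sketch with arbitrary illustrative values:

```python
beta = [0.5, -1.0]
zs = [[1.0, 2.0], [3.0, 4.0]]   # zs[i][j] is z_ji, the j-th entry of point z_i
x = [0.5, -0.5]
n, d = len(zs), len(x)

# Left side: kernel expansion sum_i beta_i (x^T z_i)^2
lhs = sum(beta[i] * sum(x[j] * zs[i][j] for j in range(d)) ** 2
          for i in range(n))

# Right side: quadratic form with mu_jk = sum_i beta_i z_ji z_ki
mu = [[sum(beta[i] * zs[i][j] * zs[i][k] for i in range(n))
       for k in range(d)] for j in range(d)]
rhs = sum(mu[j][k] * x[j] * x[k] for j in range(d) for k in range(d))

print(abs(lhs - rhs) < 1e-12)  # True
```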
Reproducing Kernel Hilbert Space. Fix $x$: $k(\cdot, x) : \mathcal{X} \to \mathbb{R}$, $z \mapsto k(z, x)$. Vary $x$ in $\mathcal{X}$: $\{k(\cdot, x) : x \in \mathcal{X}\}$ is a set of functions from $\mathcal{X}$ to $\mathbb{R}$. Take linear combinations: $\{\sum_{i=1}^n \beta_i k(\cdot, x_i) : x_i \in \mathcal{X}\}$. Define the dot product: $\langle \sum_{i=1}^n \beta_i k(\cdot, x_i), \sum_{j=1}^m \gamma_j k(\cdot, z_j) \rangle = \sum_{i=1}^n \sum_{j=1}^m \beta_i \gamma_j k(x_i, z_j)$. Complete the space. Reproducing property: $\langle f, k(\cdot, x) \rangle = f(x)$.
Universal approximation (Micchelli, Xu, Zhang '06). Universal kernel: for any compact set $Z$, any continuous function $f : Z \to \mathbb{R}$, and any $\varepsilon > 0$, there exist $x_1, x_2, \ldots, x_n$ in $Z$ and $\alpha_1, \alpha_2, \ldots, \alpha_n$ in $\mathbb{R}$ such that $\sup_{z \in Z} \left| f(z) - \sum_{i=1}^n \alpha_i k(x_i, z) \right| \le \varepsilon$. Hence kernel methods with a universal kernel can approximate any continuous decision boundary. Example: the Gaussian kernel.
Kernel mean embedding (Smola, Song, Gretton, Schölkopf, …): map a distribution $P$ to $\mu_P = \mathbb{E}_{X \sim P}[\varphi(X)]$, the mean of the feature map of some kernel. Characteristic kernel: the mapping $P \mapsto \mu_P$ is 1-1, so $\mu_P$ completely preserves the information in the distribution $P$. Lots of applications.
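With samples instead of distributions, distances between mean embeddings can be estimated purely via kernel evaluations. A sketch of the (biased, V-statistic) estimator of the squared MMD $\|\mu_P - \mu_Q\|^2$ for scalar samples with a Gaussian kernel (an illustrative application, not part of the slide):

```python
import math

def gaussian(x, z, sigma=1.0):
    """Gaussian kernel on scalars."""
    return math.exp(-((x - z) ** 2) / (2 * sigma ** 2))

def mmd_sq(xs, ys, kernel=gaussian):
    """Biased estimate of ||mu_P - mu_Q||^2 from samples xs ~ P, ys ~ Q:
    mean k(x,x') - 2 mean k(x,y) + mean k(y,y')."""
    def avg(us, vs):
        return sum(kernel(u, v) for u in us for v in vs) / (len(us) * len(vs))
    return avg(xs, xs) - 2 * avg(xs, ys) + avg(ys, ys)

# Identical samples give 0; well-separated samples give a large value.
print(mmd_sq([0.0, 0.1], [0.0, 0.1]))  # 0.0
print(mmd_sq([0.0, 0.1], [5.0, 5.1]) > 0.1)  # True
```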
Questions?