1
ENEE698A Graduate Seminar
Reproducing Kernel Hilbert Space (RKHS), Regularization Theory, and Kernel Methods
Shaohua (Kevin) Zhou
Center for Automation Research
Department of Electrical and Computer Engineering
University of Maryland, College Park
2
Overview
Reproducing Kernel Hilbert Space (RKHS)
– From $\mathbb{R}^N$ to RKHS
Regularization Theory with RKHS
– Regularization Network (RN)
– Support Vector Regression (SVR)
– Support Vector Classification (SVC)
Kernel Methods
– Kernel Principal Component Analysis (KPCA)
– More examples
3
Vector Space $\mathbb{R}^N$
Positive definite matrix $S = [s_i(j)]$
– $S = [s_1, s_2, \dots, s_N]$
– Eigensystem: $S = \sum_{n=1}^{N} \lambda_n \phi_n \phi_n^T$
Inner product $\langle f, g \rangle = f^T S^{-1} g$
– $\langle f, g \rangle = \sum_n \lambda_n^{-1} f^T \phi_n \phi_n^T g = \sum_n \lambda_n^{-1} (f, \phi_n)(g, \phi_n)$
– $(u, v) = u^T v$, the regular inner product
Two properties:
– $\langle s_i, s_j \rangle = s_i^T S^{-1} s_j = s_i^T e_j = s_i(j)$
– $\langle s_i, f \rangle = s_i^T S^{-1} f = e_i^T f = f(i)$, with $f = [f(1), f(2), \dots, f(N)]^T$
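As a quick numeric check of the two properties, here is a minimal numpy sketch (the matrix $S$ and vector $f$ below are arbitrary illustrative examples, not from the slides):

```python
import numpy as np

# An arbitrary positive definite matrix S on R^3 (A A^T + I is always
# positive definite), playing the role of the "kernel matrix".
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
S = A @ A.T + np.eye(3)

f = rng.standard_normal(3)                        # f = [f(1), f(2), f(3)]^T
inner = lambda u, v: u @ np.linalg.solve(S, v)    # <u, v> = u^T S^{-1} v

s1, s2 = S[:, 0], S[:, 1]                         # columns s_1 and s_2 of S
print(np.isclose(inner(s1, s2), s1[1]))           # <s_i, s_j> = s_i(j)
print(np.isclose(inner(s2, f), f[1]))             # <s_i, f> = f(i)
```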
4
Reproducing Kernel Hilbert Space (RKHS)
Positive kernel function $k_x(\cdot) = k(x, \cdot)$
– Mercer's theorem
– Eigensystem: $k(x, y) = \sum_{n=1}^{\infty} \lambda_n \phi_n(x) \phi_n(y)$ with $\sum_{n=1}^{\infty} \lambda_n^2 < \infty$
Inner product $\langle \cdot, \cdot \rangle_H$
– $\langle f, g \rangle_H = \sum_n \lambda_n^{-1} (f, \phi_n)(g, \phi_n)$
– $(u, v) = \int u(y) v(y)\,dy$, the regular inner product
Two properties:
– $\langle k_x, k_y \rangle_H = k(x, y)$
– $\langle k_x, f \rangle_H = f(x)$, the reproducing property
5
More on RKHS
Let $f(y)$ be an element of the RKHS
– $f(y) = \sum_{n=1}^{\infty} a_n \phi_n(y)$
– $(f, \phi_n) = a_n$
– $\langle f, f \rangle_H = \sum_{n=1}^{\infty} \lambda_n^{-1} a_n^2$
One particular function $f(y)$
– $f(y) = \sum_{i=1}^{n} c_i k(y, x_i)$
– Is $f(y)$ in the RKHS?
– $\langle f, f \rangle_H = \sum_{i=1}^{n} \sum_{j=1}^{n} c_i c_j k(x_i, x_j) = c^T K c$, with $c = [c_1, c_2, \dots, c_n]^T$ and $K = [k(x_i, x_j)]$ the Gram matrix
6
More on RKHS
Nonlinear mapping $\Phi: \mathbb{R}^N \to \mathbb{R}^\infty$
– $\Phi(x) = [\lambda_1^{1/2} \phi_1(x), \dots, \lambda_n^{1/2} \phi_n(x), \dots]^T$
Regular inner product in the feature space $\mathbb{R}^\infty$
– $(\Phi(x), \Phi(y)) = \Phi(x)^T \Phi(y) = \sum_{n=1}^{\infty} \lambda_n^{1/2} \phi_n(x)\, \lambda_n^{1/2} \phi_n(y) = k(x, y) = \langle k_x, k_y \rangle_H$
7
Kernel Choices
Gaussian kernel or RBF kernel
– $k(x, y) = \exp(-\sigma^{-2} \|x - y\|^2)$
Polynomial kernel
– $k(x, y) = ((x, y) + d)^p$
Construction rules
– Covariance function of a Gaussian process
– $k(x, y) = \int g(x, z) g(z, y)\,dz$
– $k(x, y) = c$, $c > 0$
– $k(x, y) = k_1(x, y) + k_2(x, y)$
– $k(x, y) = k_1(x, y) \cdot k_2(x, y)$
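A minimal sketch of these kernels in Python (numpy assumed; the parameter names sigma, d, and p follow the formulas above):

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    # k(x, y) = exp(-||x - y||^2 / sigma^2)
    return np.exp(-np.sum((x - y) ** 2) / sigma ** 2)

def polynomial_kernel(x, y, d=1.0, p=2):
    # k(x, y) = ((x, y) + d)^p
    return (np.dot(x, y) + d) ** p

def sum_kernel(x, y):
    # Construction rule: the sum of two valid kernels is a valid kernel;
    # the same holds for their product.
    return gaussian_kernel(x, y) + polynomial_kernel(x, y)
```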
8
Regularization Theory
Regularization task
– $\min_{f \in H} J(f) = \sum_{i=1}^{n} L(y_i, f(x_i)) + \lambda \langle f, f \rangle_H$, where $L$ is the loss function and $\langle f, f \rangle_H$ is a stabilizer
Optimal solution
– $f(x) = \sum_{i=1}^{n} c_i k(x, x_i) = [k(x, x_1), \dots, k(x, x_n)]\,c$
– $\{h_i(x) = k(x, x_i);\ i = 1, \dots, n\}$ are basis functions
– The optimal coefficients $\{c_i;\ i = 1, \dots, n\}$ depend on the loss function $L$ and on $\lambda$
9
Regularization Network (RN)
RN assumes a quadratic loss function
– $\min_{f \in H} J(f) = \sum_{i=1}^{n} (y_i - f(x_i))^2 + \lambda \langle f, f \rangle_H$
Find $\{c_i\}$
– $[f(x_1), f(x_2), \dots, f(x_n)]^T = Kc$
– $J(f) = (y - Kc)^T (y - Kc) + \lambda c^T K c$
– $c = (K + \lambda I)^{-1} y$
Practical considerations
– An intercept term: $f(x) = \sum_{i=1}^{n} c_i k(x, x_i) + b$
– Too many coefficients → support vector regression (SVR)
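The closed-form solution makes RN a few lines of numpy. Below is an illustrative sketch on synthetic 1-D data; the rbf_gram helper and the data are assumptions for the example, not from the slides:

```python
import numpy as np

def rbf_gram(X, Y, sigma=1.0):
    # Gram matrix K[i, j] = exp(-||X_i - Y_j||^2 / sigma^2)
    d2 = np.sum((X[:, None, :] - Y[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / sigma ** 2)

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (50, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(50)

lam = 1e-2                                     # regularization weight lambda
K = rbf_gram(X, X)
c = np.linalg.solve(K + lam * np.eye(50), y)   # c = (K + lambda I)^{-1} y

X_new = np.linspace(-3, 3, 7)[:, None]
f_new = rbf_gram(X_new, X) @ c                 # f(x) = sum_i c_i k(x, x_i)
```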
10
Support Vector Regression (SVR)
SVR assumes an $\varepsilon$-insensitive loss function
– $\min_{f \in H} J(f) = \sum_{i=1}^{n} |y_i - f(x_i)|_\varepsilon + \lambda \langle f, f \rangle_H$, with $|x|_\varepsilon = \max(0, |x| - \varepsilon)$
Primal problem
– $\min J(f, \xi, \xi^*) = \sum_{i=1}^{n} (\xi_i + \xi_i^*) + \lambda \langle f, f \rangle_H$
– s.t. (1) $f(x_i) - y_i \le \varepsilon + \xi_i$; (2) $y_i - f(x_i) \le \varepsilon + \xi_i^*$; (3) $\xi_i \ge 0$; (4) $\xi_i^* \ge 0$
– Quadratic programming (QP)
Dual problem
– $x_i$ is called a support vector (SV) if its Lagrange multiplier is nonzero
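In practice the QP is rarely coded by hand. An illustrative sketch, assuming scikit-learn is available (its parameter C plays the role of the inverse regularization weight, roughly $1/\lambda$):

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (100, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(100)

# epsilon is the insensitivity width; C plays the role of 1/lambda.
model = SVR(kernel="rbf", C=10.0, epsilon=0.1).fit(X, y)

# Only samples with nonzero Lagrange multipliers are kept as support vectors.
print(f"{len(model.support_)} of {len(X)} samples are support vectors")
```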
11
Support Vector Classification (SVC)
SVC assumes a soft-margin (hinge) loss function
– $\min_{f \in H} J(f) = \sum_{i=1}^{n} |1 - y_i f(x_i)|_+ + \lambda \langle f, f \rangle_H$, with $|x|_+ = \max(0, x)$
– Determine the label of $x$ as $\mathrm{sgn}\big(\sum_i c_i y_i k(x, x_i) + b\big)$
Primal problem
– $\min J(f, \xi) = \sum_{i=1}^{n} \xi_i + \lambda \langle f, f \rangle_H$
– s.t. (1) $1 - y_i f(x_i) \le \xi_i$; (2) $\xi_i \ge 0$
– Quadratic programming (QP)
Dual problem
– $x_i$ is called a support vector (SV) if its Lagrange multiplier is nonzero
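A sketch of the label rule above, assuming scikit-learn: SVC's dual_coef_ attribute stores the products $c_i y_i$ for the support vectors, so the decision function can be reassembled by hand and checked against predict:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 2))
y = np.sign(X[:, 0] * X[:, 1])              # a nonlinearly separable labeling

gamma = 0.5                                  # RBF kernel k(x,y) = exp(-gamma ||x-y||^2)
model = SVC(kernel="rbf", C=1.0, gamma=gamma).fit(X, y)

# Reassemble f(x) = sum_i c_i y_i k(x, x_i) + b from the support vectors;
# dual_coef_ already holds the products c_i y_i.
x_new = np.array([[0.3, -1.2]])
k = np.exp(-gamma * np.sum((model.support_vectors_ - x_new) ** 2, axis=1))
f = model.dual_coef_[0] @ k + model.intercept_[0]
print(np.sign(f) == model.predict(x_new)[0])   # label = sgn(f(x))
```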
12
Kernel Methods
General strategy of kernel methods
– Nonlinear mapping $\Phi: \mathbb{R}^N \to \mathbb{R}^\infty$ embedded in the kernel function
– Linear learning methods employing geometry / linear algebra
– Kernel trick: cast all computations as dot products
13
Gram Matrix
Gram matrix (dot-product matrix, kernel matrix)
– Covariance matrix of a Gaussian process evaluated at any finite sample
– Combines the information of the data and the kernel
– Contains all the information the learning algorithm needs
– $K = [k(x_i, x_j)] = [\Phi(x_i)^T \Phi(x_j)] = \Phi^T \Phi$, where $\Phi = [\Phi(x_1), \Phi(x_2), \dots, \Phi(x_n)]$
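The identity $K = \Phi^T \Phi$ can be checked directly for the degree-2 polynomial kernel, whose feature map is finite-dimensional. A sketch (the explicit map phi below is the standard expansion of $((x, y) + 1)^2$ in $\mathbb{R}^2$, written out for illustration):

```python
import numpy as np

def phi(x):
    # Explicit feature map of k(x, y) = ((x, y) + 1)^2 for x in R^2.
    x1, x2 = x
    return np.array([x1**2, x2**2, np.sqrt(2)*x1*x2,
                     np.sqrt(2)*x1, np.sqrt(2)*x2, 1.0])

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 2))

K_kernel = (X @ X.T + 1.0) ** 2              # K = [k(x_i, x_j)]
Phi = np.stack([phi(x) for x in X], axis=1)  # Phi = [phi(x_1), ..., phi(x_n)]
K_feature = Phi.T @ Phi                      # K = Phi^T Phi

print(np.allclose(K_kernel, K_feature))      # True
```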
14
Geometry in the RKHS
Distance in the RKHS
– $(\Phi(x) - \Phi(y))^T (\Phi(x) - \Phi(y)) = \Phi(x)^T \Phi(x) + \Phi(y)^T \Phi(y) - 2\Phi(x)^T \Phi(y) = k(x, x) + k(y, y) - 2k(x, y)$
Distance to the center
– $\Phi_0 = \sum_{i=1}^{n} \Phi(x_i)/n = \Phi \mathbf{1}/n$
– $(\Phi(x) - \Phi_0)^T (\Phi(x) - \Phi_0) = \Phi(x)^T \Phi(x) + \Phi_0^T \Phi_0 - 2\Phi(x)^T \Phi_0 = k(x, x) + \mathbf{1}^T \Phi^T \Phi \mathbf{1}/n^2 - 2\Phi(x)^T \Phi \mathbf{1}/n = k(x, x) + \mathbf{1}^T K \mathbf{1}/n^2 - 2 g(x)^T \mathbf{1}/n$
– $g(x) = \Phi^T \Phi(x) = [k(x, x_1), \dots, k(x, x_n)]^T$
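A sketch of the distance-to-center formula, computed purely from kernel evaluations (the RBF kernel and random data are illustrative assumptions):

```python
import numpy as np

def rbf_gram(X, Y, sigma=1.0):
    d2 = np.sum((X[:, None, :] - Y[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / sigma ** 2)

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 2))
x = rng.standard_normal((1, 2))

n = len(X)
K = rbf_gram(X, X)
g = rbf_gram(X, x)[:, 0]           # g(x) = [k(x, x_1), ..., k(x, x_n)]^T

# ||Phi(x) - Phi_0||^2 = k(x,x) + 1^T K 1 / n^2 - 2 g(x)^T 1 / n
dist2 = rbf_gram(x, x)[0, 0] + K.sum() / n**2 - 2 * g.sum() / n
```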
15
Geometry in the RKHS
Centered distance in the RKHS
– $(\Phi(x) - \Phi_0)^T (\Phi(y) - \Phi_0) = \Phi(x)^T \Phi(y) + \Phi_0^T \Phi_0 - \Phi(x)^T \Phi_0 - \Phi(y)^T \Phi_0 = k(x, y) + \mathbf{1}^T K \mathbf{1}/n^2 - g(x)^T \mathbf{1}/n - g(y)^T \mathbf{1}/n$
Centered Gram matrix
– $\hat{K} = [\Phi(x_1) - \Phi_0, \dots, \Phi(x_n) - \Phi_0]^T [\Phi(x_1) - \Phi_0, \dots, \Phi(x_n) - \Phi_0] = [\Phi - \Phi \mathbf{1}\mathbf{1}^T/n]^T [\Phi - \Phi \mathbf{1}\mathbf{1}^T/n] = [\Phi Q]^T [\Phi Q] = Q^T K Q$
– $Q = I_n - \mathbf{1}\mathbf{1}^T/n$
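A sketch that builds $\hat{K} = Q^T K Q$ and checks one entry against the centered-distance formula above (RBF kernel assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 2))
n = len(X)

d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
K = np.exp(-d2)                          # RBF Gram matrix, sigma = 1

Q = np.eye(n) - np.ones((n, n)) / n      # Q = I_n - 1 1^T / n
K_hat = Q.T @ K @ Q                      # centered Gram matrix

# Entry (i, j) should equal k(x_i,x_j) + 1^T K 1/n^2 - g(x_i)^T 1/n - g(x_j)^T 1/n,
# where g(x_i) is the i-th column of K.
i, j = 3, 7
check = K[i, j] + K.sum() / n**2 - K[:, i].sum() / n - K[:, j].sum() / n
print(np.isclose(K_hat[i, j], check))    # True
```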
16
Kernel Principal Component Analysis (KPCA)
Kernel PCA
– Mean $\Phi_0 = \sum_{i=1}^{n} \Phi(x_i)/n = \Phi \mathbf{1}/n$
– Covariance matrix $C = n^{-1} [\Phi(x_1) - \Phi_0, \dots, \Phi(x_n) - \Phi_0][\Phi(x_1) - \Phi_0, \dots, \Phi(x_n) - \Phi_0]^T = n^{-1} [\Phi Q][\Phi Q]^T = n^{-1} \Psi \Psi^T$, with $\Psi = \Phi Q$
Eigensystem of $C$
– The 'reciprocal' matrix: $\Psi^T \Psi u = \hat{K} u = \mu u$
– $n^{-1} \Psi \Psi^T \Psi u = n^{-1} \mu \Psi u$, so $Cv = n^{-1} \mu v$ with $v = \Psi u$
– Normalization: $v^T v = u^T \hat{K} u = \mu u^T u = \mu$, so $\tilde{v} = \Psi u \mu^{-1/2}$
17
Kernel Principal Component Analysis (KPCA)
Eigen-projection
– $(\Phi(x) - \Phi_0)^T \tilde{v} = (\Phi(x) - \Phi_0)^T \Phi Q u \mu^{-1/2} = \Phi(x)^T \Phi Q u \mu^{-1/2} - \mathbf{1}^T \Phi^T \Phi Q u \mu^{-1/2}/n = g(x)^T Q u \mu^{-1/2} - \mathbf{1}^T K Q u \mu^{-1/2}/n$
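Putting the last three slides together, a minimal KPCA sketch from a precomputed Gram matrix (illustrative, not the presenter's code). For the training points themselves, the projection formula above stacks into $\hat{K} U \mu^{-1/2}$, which is what the last line computes:

```python
import numpy as np

def kpca(K, n_components=2):
    """Kernel PCA from a precomputed Gram matrix K, following the slides:
    center K, take the top eigenvectors u of K_hat, and scale by mu^{-1/2}."""
    n = K.shape[0]
    Q = np.eye(n) - np.ones((n, n)) / n
    K_hat = Q @ K @ Q                          # centered Gram matrix
    mu, U = np.linalg.eigh(K_hat)              # eigenvalues in ascending order
    mu = mu[::-1][:n_components]               # top eigenvalues
    U = U[:, ::-1][:, :n_components]           # matching eigenvectors u
    # Row i holds the KPCA features of training point x_i.
    return K_hat @ U / np.sqrt(mu)

rng = np.random.default_rng(0)
X = rng.standard_normal((30, 2))
d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
K = np.exp(-d2)                                # RBF Gram matrix, sigma = 1
features = kpca(K)                             # shape (30, 2)
```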
18
Kernel Principal Component Analysis (KPCA)
[Figure: contour plots of PCA features]
19
More Examples of Kernel Methods
Examples
– Kernel Fisher Discriminant Analysis (KFDA)
– Kernel K-Means Clustering
– Spectral Clustering and Graph Cutting
– Kernel …
– Kernel Independent Component Analysis (KICA)?
20
Summary of Kernel Methods
Pros and cons
– Nonlinear embedding
– Linear algorithm
– Large storage requirement
– Computational inefficiency
Important issues
– Kernel selection and design