1
LMS Algorithm in a Reproducing Kernel Hilbert Space
Weifeng Liu, P. P. Pokharel, J. C. Principe
Computational NeuroEngineering Laboratory, University of Florida
Acknowledgment: This work was partially supported by NSF grants ECS-0300340 and ECS-0601271.
2
Outline
Introduction
Least mean square algorithm (easy)
Reproducing kernel Hilbert space (tricky)
Convergence and regularization analysis (important)
Learning from error models (interesting)
3
Introduction
Puskal (2006) – Kernel LMS
Kivinen, Smola (2004) – Online learning with kernels (more like leaky LMS)
Moody, Platt (1990s) – Resource-allocating networks (growing and pruning)
4
LMS (1960, Widrow and Hoff)
Given a sequence of examples $\{(u_i, d_i)\}$ from $U \times \mathbb{R}$, where $U$ is a compact subset of $\mathbb{R}^L$.
The model is assumed: $d_i = w^T u_i + v_i$.
The cost function: $J(w) = \sum_i (d_i - w^T u_i)^2$.
5
LMS
The LMS algorithm: start from $w_0 = 0$ and update
$e_i = d_i - w_{i-1}^T u_i, \qquad w_i = w_{i-1} + \eta\, e_i u_i$.  (1)
The weight after $n$ iterations:
$w_n = \eta \sum_{i=1}^{n} e_i u_i$.  (2)
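As a rough sketch (not from the slides), the update (1)-(2) in NumPy; the names lms and eta are illustrative:

    import numpy as np

    def lms(U, d, eta=0.2):
        """Least mean square: w_i = w_{i-1} + eta * e_i * u_i, starting from w_0 = 0.

        U : (N, L) array of inputs; d : (N,) array of desired outputs.
        Returns the final weight and the sequence of a priori errors.
        """
        N, L = U.shape
        w = np.zeros(L)                 # w_0 = 0
        errors = np.empty(N)
        for i in range(N):
            e = d[i] - w @ U[i]         # a priori error e_i = d_i - w_{i-1}^T u_i
            w = w + eta * e * U[i]      # gradient-descent update (1)
            errors[i] = e
        return w, errors                # w_n = eta * sum_i e_i u_i, as in (2)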
6
Reproducing kernel Hilbert space
A continuous, symmetric, positive-definite kernel $\kappa: U \times U \to \mathbb{R}$, a mapping $\Phi: u \mapsto \kappa(u, \cdot)$, and an inner product $\langle \cdot, \cdot \rangle_H$.
$H$ is the closure of the span of all $\Phi(u)$.
Reproducing property: $\langle f, \kappa(u, \cdot) \rangle_H = f(u)$.
Kernel trick: $\langle \Phi(u), \Phi(u') \rangle_H = \kappa(u, u')$.
The induced norm: $\|f\|_H = \sqrt{\langle f, f \rangle_H}$.
7
RKHS
Kernel trick:
– An inner product in the feature space
– A similarity measure you need
Mercer's theorem: $\kappa(u, u') = \sum_i \lambda_i \phi_i(u) \phi_i(u')$, with $\lambda_i \ge 0$.
8
Common kernels
Gaussian kernel: $\kappa(u, u') = \exp\big(-\|u - u'\|^2 / (2\sigma^2)\big)$
Polynomial kernel: $\kappa(u, u') = (u^T u' + 1)^p$
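As a minimal illustration of these two kernels (the defaults sigma=1 and p=3 are mine, not from the slides):

    import numpy as np

    def gaussian_kernel(u, v, sigma=1.0):
        """Gaussian (RBF) kernel: exp(-||u - v||^2 / (2 sigma^2))."""
        return np.exp(-np.sum((u - v) ** 2) / (2.0 * sigma ** 2))

    def polynomial_kernel(u, v, p=3):
        """Polynomial kernel: (u^T v + 1)^p."""
        return (u @ v + 1.0) ** p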
9
Kernel LMS
Transform the input $u_i$ to $\Phi(u_i)$; assume $\Phi(u_i) \in \mathbb{R}^M$.
The model is assumed: $d_i = \Omega^T \Phi(u_i) + v_i$.
The cost function: $J(\Omega) = \sum_i (d_i - \Omega^T \Phi(u_i))^2$.
10
Kernel LMS
The KLMS algorithm: start from $\Omega_0 = 0$ and update
$e_i = d_i - \Omega_{i-1}^T \Phi(u_i), \qquad \Omega_i = \Omega_{i-1} + \eta\, e_i \Phi(u_i)$.  (3)
The weight after $n$ iterations:
$\Omega_n = \eta \sum_{i=1}^{n} e_i \Phi(u_i)$.  (4)
11
Kernel LMS
By the kernel trick, the error can be computed without ever forming $\Phi$ explicitly:
$e_i = d_i - \eta \sum_{j=1}^{i-1} e_j \kappa(u_j, u_i)$.  (5)
12
Kernel LMS
After the learning, the input-output relation is:
$f_n(u) = \Omega_n^T \Phi(u) = \eta \sum_{i=1}^{n} e_i \kappa(u_i, u)$.  (6)
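A minimal sketch combining (5) and (6): KLMS never forms Φ, it just stores the centers and the scaled errors (all names here are illustrative):

    def klms(U, d, kernel, eta=0.2):
        """Kernel LMS via the kernel trick: coefficients a_i = eta * e_i."""
        centers, coeffs = [], []
        for u, di in zip(U, d):
            # prediction of the current network, f_{i-1}(u_i), as in (5)
            f = sum(a * kernel(c, u) for c, a in zip(centers, coeffs))
            e = di - f                      # a priori error e_i
            centers.append(u)               # grow the network by one unit
            coeffs.append(eta * e)

        def predict(u):                     # input-output relation (6)
            return sum(a * kernel(c, u) for c, a in zip(centers, coeffs))

        return predict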
13
KLMS vs. RBF
KLMS: $f(u) = \eta \sum_{i=1}^{N} e_i \kappa(u_i, u)$.  (7)
RBF: $f(u) = \sum_{i=1}^{N} \alpha_i \kappa(u_i, u)$, where $\alpha$ satisfies $G\alpha = d$.  (8)
$G$ is the Gram matrix: $G(i,j) = \kappa(u_i, u_j)$.
RBF needs regularization. Does KLMS need regularization?
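A sketch of the RBF solve in (8), with the regularized variant used later anticipated (lam is my name for λ):

    import numpy as np

    def rbf_fit(U, d, kernel, lam=0.0):
        """RBF network: solve (G + lam*I) alpha = d.

        lam = 0 reproduces (8); with a near-singular Gram matrix this
        solve is numerically ill-conditioned, hence the regularization.
        """
        N = len(U)
        G = np.array([[kernel(U[i], U[j]) for j in range(N)] for i in range(N)])
        alpha = np.linalg.solve(G + lam * np.eye(N), d)
        return alpha, G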
14
KLMS vs. LMS
Kernel LMS is nothing but LMS in the feature space, a very high-dimensional reproducing kernel Hilbert space ($M > N$).
The eigenvalue spread is awful. Does it converge?
15
Example: MG signal prediction
Time-embedding dimension: 10. Learning rate: 0.2.
500 training points, 100 test points.
Additive Gaussian noise with variance 0.04.
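A sketch of how such training pairs could be built from a scalar time series; mg below stands for a Mackey-Glass series, which is assumed rather than generated here:

    import numpy as np

    def time_embed(x, L=10):
        """Build (u_i, d_i) pairs: u_i = x[i:i+L] (embedding), d_i = x[i+L] (next sample)."""
        U = np.array([x[i:i + L] for i in range(len(x) - L)])
        d = np.array(x[L:])
        return U, d

    # Illustrative use, adding noise of variance 0.04 as in the experiment:
    # U, d = time_embed(mg + np.sqrt(0.04) * np.random.randn(len(mg)), L=10)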
16
Example: MG signal prediction

MSE       Linear LMS   KLMS     RBF (λ=0)   RBF (λ=0.1)   RBF (λ=1)   RBF (λ=10)
training  0.021        0.0060   0           0.0026        0.0036      0.010
test      0.026        0.0066   0.019       0.0041        0.0050      0.014
17
Complexity comparison

             RBF            KLMS     LMS
Computation  O(N³)          O(N²)    O(L)
Memory       O(N² + N·L)    O(N·L)   O(L)
18
The asymptotic analysis of convergence (small-step-size theory)
Denote the correlation matrix $R_\Phi = E[\Phi(u)\Phi(u)^T]$, with eigendecomposition $R_\Phi = P \Lambda P^T$ and eigenvalues $\varsigma_k$.
The correlation matrix is singular, since $M > N$.
Assume a sufficiently small step size $\eta$ and the usual independence assumptions of small-step-size theory.
19
The asymptotic analysis of convergence (small-step-size theory)
Denote the weight-error vector $\varepsilon_n = \Omega_n - \Omega^*$ and its rotated version $\tilde{\varepsilon}_n = P^T \varepsilon_n$; mode by mode we have
$E[\tilde{\varepsilon}_n(k)] = (1 - \eta\,\varsigma_k)^n\, \tilde{\varepsilon}_0(k)$.
20
The weight stays at its initial value in the zero-eigenvalue directions
If $\varsigma_k = 0$, we have $E[\tilde{\varepsilon}_n(k)] = \tilde{\varepsilon}_0(k)$ for all $n$: the weight never moves along those directions.
21
The zero-eigenvalue directions do not affect the MSE
Denote the excess MSE $J_{ex}(n) = \sum_k \varsigma_k\, E[|\tilde{\varepsilon}_n(k)|^2]$; directions with $\varsigma_k = 0$ contribute nothing.
It does not care about the null space! It only focuses on the data space!
22
The minimum-norm initialization
The initialization $\Omega_0 = 0$ gives the minimum-norm solution possible: the updates (3) only ever add components inside the span of the data $\{\Phi(u_i)\}$, so the null-space component stays zero.
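A one-step argument in LaTeX for why staying in the data span minimizes the norm (my paraphrase of the slide's claim):

    % From (4), the KLMS weight lives in the data span:
    \Omega_n = \eta \sum_{i=1}^{n} e_i \Phi(u_i) \in \operatorname{span}\{\Phi(u_i)\}.
    % Any other consistent solution decomposes into span and null-space parts,
    % and the null-space part only adds norm:
    \Omega = \Omega_{\parallel} + \Omega_{\perp}, \qquad
    \|\Omega\|^2 = \|\Omega_{\parallel}\|^2 + \|\Omega_{\perp}\|^2
    \ge \|\Omega_{\parallel}\|^2 .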
23
Minimum norm solution
24
Learning is Ill-posed
25
Over-learning
26
Regularization technique
Learning from finite data is ill-posed; a priori information, such as smoothness, is needed.
The norm of the function, which indicates the 'slope' of the linear operator, is constrained.
In statistical learning theory, the norm is associated with the confidence of uniform convergence!
27
Regularized RBF
The cost function: $J(f) = \sum_{i=1}^{N} (d_i - f(u_i))^2 + \lambda \|f\|_H^2$,
or equivalently, writing $f = \sum_i \alpha_i \kappa(u_i, \cdot)$:
$J(\alpha) = \|d - G\alpha\|^2 + \lambda\, \alpha^T G \alpha$.
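Setting the gradient of the second form to zero recovers the solution quoted two slides later; a short derivation (the cancellation of G is exact when G is invertible):

    \nabla_\alpha J = -2G(d - G\alpha) + 2\lambda G\alpha = 0
    \;\Rightarrow\; G\big((G + \lambda I)\alpha - d\big) = 0
    \;\Rightarrow\; \alpha = (G + \lambda I)^{-1} d .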
28
KLMS as a learning algorithm
The model $d_i = \Omega^T \Phi(u_i) + v_i$, with noise $v_i$.
The following inequalities hold, bounding the norm of the KLMS solution in terms of the true weight and the noise.
The proof… ($H^\infty$ robustness + triangle inequality + matrix transformation + derivative + …)
29
The solution of regularized RBF is $\alpha = (G + \lambda I)^{-1} d$.
The source of the ill-posedness is the inversion of the matrix $(G + \lambda I)$: with $\lambda = 0$ and a nearly singular Gram matrix, small perturbations of $d$ are hugely amplified.
The numerical analysis is governed by the condition number of $(G + \lambda I)$.
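A minimal check of that sensitivity with NumPy (the helper name sensitivity is mine):

    import numpy as np

    def sensitivity(G, lam):
        """Condition number of (G + lam*I): the worst-case factor by which
        relative errors in d are amplified in alpha. lam = 0 on a
        near-singular Gram matrix makes this huge, i.e. ill-posed."""
        return np.linalg.cond(G + lam * np.eye(G.shape[0]))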
30
The solution of KLMS is $f_n = \eta \sum_{i=1}^{n} e_i \kappa(u_i, \cdot)$, i.e. $\alpha_i = \eta\, e_i$.
By the inequality above, the norm of the KLMS solution stays bounded, so no explicit regularization is needed.
31
Example: MG signal prediction

Weight   KLMS    RBF (λ=0)   RBF (λ=0.1)   RBF (λ=1)   RBF (λ=10)
norm     0.520   4.8e+31     0.90          1.37        0.231
32
Conclusion
The LMS algorithm can be readily used in an RKHS to derive nonlinear algorithms.
From the machine learning view, the LMS method is a simple way to obtain a regularized solution.
33
Demo
35
LMS learning model
An event happens, and a decision is made. If the decision is correct, nothing happens. If an error is incurred, a correction is made to the original model.
If we do things right, everything is fine and life goes on. If we do something wrong, lessons are drawn and our abilities are honed.
36
Would we over-learn?
If we try to model the real world mathematically, what dimension is appropriate? Are we likely to over-learn?
Are we using the LMS algorithm?
Why is it good to remember the past? Why is it bad to be a perfectionist?
37
"If you shut your door to all errors, truth will be shut out."---Rabindranath Tagore