Channel/System Identification Using the Total Least Mean Squares (TLMS) Algorithm
By Viput Subharngkasen

Outline
- Introduction/Motivation
- Review of the TLMS algorithm
- Results
- Discussion
- Conclusion

Introduction/Motivation
The TLMS algorithm is an unsupervised-learning adaptive linear combiner based on total least mean squares: it minimizes a Rayleigh quotient, which amounts to extracting the minor component of the training sequences. In an adaptive linear combiner, we use only input and output data to compute the transfer function of the analyzed channel/system. But what happens when the input and output data we obtain are corrupted by interference, or when we cannot access the true input and output ports and can observe them only partially? How can we counter these problems?
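To make the problem concrete, here is a minimal Python sketch (my own illustration, not from the slides) of the errors-in-variables setup just described: a 5-tap FIR channel, with taps matching the ideal transfer function in the Results section, whose observed input and output are both corrupted by white Gaussian interference. The SNR value and the noise model are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

h_true = np.array([-0.3, -0.9, 0.8, -0.7, 0.6])   # ideal transfer function
n_taps = len(h_true)
n_samples = 30_000                                # sample count used in the talk
snr_db = 10.0                                     # assumed interference level

u = rng.standard_normal(n_samples)                # clean channel input
d = np.convolve(u, h_true)[:n_samples]            # clean channel output

def add_noise(x, snr_db):
    """Add white Gaussian interference at the given SNR (in dB)."""
    noise_power = np.mean(x**2) * 10.0 ** (-snr_db / 10.0)
    return x + np.sqrt(noise_power) * rng.standard_normal(len(x))

u_obs = add_noise(u, snr_db)                      # interference on the input end
d_obs = add_noise(d, snr_db)                      # interference on the output end
```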

Introduction/Motivation (cont.)
We use the TLMS algorithm to solve this problem and compare it with the well-known LMS algorithm. If interference exists only at the output of the analyzed system, the LMS algorithm can obtain the optimal solution. However, if interference occurs at both the input and the output, the LMS algorithm fails to reach the optimal solution. The TLMS algorithm, on the other hand, can still recover the desired solution when interference exists at both ends.

TLMS algorithm
We have an n-dimensional input vector X(k) and the desired output d(k). Let Z(k) be the augmented vector of X(k) and d(k):

Z(k) = [X^T(k) | d(k)]^T

Assume that interference, denoted ΔX(k) and Δd(k), exists at both the input and the output, and let W(k) be the corresponding set of adjustable weights. The estimated output y(k) is a linear combination of the input and the weight vector:

y(k) = X^T(k) W(k)

To find the optimal solution, we minimize the effect of the interference:

min E{ ||[ΔX^T(k) | Δd(k)]||² }

where ||·|| is the Euclidean norm.

TLMS algorithm (cont.)
The minimization can be achieved through

min E{ | X̃^T(k) W(k) − d̃(k) |² }

which is equivalent, up to a scaling of the weights, to

min E{ | Z̃^T(k) W̃(k) |² } subject to ||W̃(k)||² = c

where X̃(k) = X(k) + ΔX(k), d̃(k) = d(k) + Δd(k), Z̃(k) = Z(k) + ΔZ(k), the augmented weight vector is W̃(k) = [W^T(k) | w_{n+1}]^T, and c is any positive constant. Let R = E{Z̃(k) Z̃^T(k)}; the problem then becomes

min W̃^T(k) R W̃(k) subject to ||W̃(k)||² = c

By a Lagrange-multiplier argument, the optimum is the eigenvector corresponding to the smallest eigenvalue of the augmented correlation matrix R.
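As a sanity check of this eigenvector characterization, here is a hedged batch-mode sketch (my own, continuing from the earlier snippet, not from the slides): estimate R from the noisy data, take the eigenvector of its smallest eigenvalue, and rescale it so its last component equals -1, which turns the augmented weight vector back into a transfer-function estimate.

```python
# Build the regressor matrix: row k holds [u_obs(k), ..., u_obs(k - n_taps + 1)].
X = np.stack([np.roll(u_obs, i) for i in range(n_taps)], axis=1)
X[:n_taps - 1, :] = 0.0                    # zero out wrapped-around samples
Z = np.hstack([X, d_obs[:, None]])         # augmented data vectors Z~(k)

R = (Z.T @ Z) / len(Z)                     # augmented correlation matrix R
eigvals, eigvecs = np.linalg.eigh(R)       # eigh returns ascending eigenvalues
v = eigvecs[:, 0]                          # eigenvector of the smallest eigenvalue

w_tls = -v[:-1] / v[-1]                    # rescale so the last entry equals -1
print(np.round(w_tls, 4))                  # should lie close to h_true
```

An ordinary least-squares fit on the same noisy data, which is what LMS converges toward, is biased toward zero by the input noise; that matches the shrunken LMS weights in the Results table.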

TLMS Algorithm (cont.)
The search equation for the eigenvector corresponding to the smallest eigenvalue is

W̃(k+1) = W̃(k) + μ ( W̃(k) − ||W̃(k)||² R W̃(k) )

where k is the iteration number and μ is a positive constant that controls the rate of convergence and the stability of the algorithm. Replacing R with the instantaneous data, the recursion can also be rewritten as

y(k) = Z̃^T(k) W̃(k)
W̃(k+1) = W̃(k) + μ ( W̃(k) − ||W̃(k)||² y(k) Z̃(k) )

This last form is the one I used for the simulations in this study.
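Below is a runnable sketch of this stochastic recursion, again continuing from the earlier snippets. The step size mu, the initialization, and the number of passes over the data are my assumptions; the slides specify only the recursion itself.

```python
mu = 2e-4                                  # assumed small positive step size
w = np.zeros(n_taps + 1)                   # augmented weight vector W~(k)
w[-1] = -1.0                               # assumed initialization

for _ in range(3):                         # a few passes help convergence here
    for z in Z:
        y = z @ w                          # y(k) = Z~^T(k) W~(k)
        # W~(k+1) = W~(k) + mu * (W~(k) - ||W~(k)||^2 y(k) Z~(k))
        w += mu * (w - (w @ w) * y * z)

w_tlms = -w[:-1] / w[-1]                   # same rescaling as the batch version
print(np.round(w_tlms, 4))                 # should approach the batch TLS estimate
```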

Results
(Error-convergence plots comparing the TLMS and LMS algorithms; not reproduced in this transcript.)

Results (cont.)
The ideal transfer function = [-0.3, -0.9, 0.8, -0.7, 0.6]

SNR | Weights from TLMS | Weights from LMS
10 dB | [-0.3003, -0.9016, 0.8091, -0.7046, 0.6180] | [-0.2270, -0.6757, 0.6033, -0.5262, 0.4583]
5 dB | [-0.3054, -0.9014, 0.7969, -0.7044, 0.5966] | [-0.1977, -0.5550, 0.4932, -0.4351, 0.3714]
2 dB | [-0.2924, -0.9080, 0.8053, -0.7016, 0.6054] | [-0.1538, -0.5097, 0.4336, -0.3777, 0.3291]
0 dB | [-0.3148, -0.8988, 0.7875, -0.6937, 0.6152] | [-0.1678, -0.4230, 0.3912, -0.3481, 0.3209]
-2 dB | [-0.2983, -0.8934, 0.8041, -0.6942, 0.5920] | [-0.1384, -0.3882, 0.3428, -0.3088, 0.2382]
-5 dB | [-0.2966, -0.8950, 0.7934, -0.7026, 0.5984] | [-0.0869, -0.3227, 0.2699, -0.2470, 0.2173]

Discussion
From the results, the TLMS algorithm performs better than the LMS algorithm. In the error plot, after 30,000 samples the TLMS algorithm's error has converged to its minimum, whereas the LMS algorithm's error cannot be minimized even when a large number of samples is used. The error from the TLMS algorithm averages around -15 dB, which is negligible, while the error from the LMS algorithm remains around 0-5 dB. In the plot, the simulation errors still fluctuate because of the Gaussian interference and the algorithm's continual self-adaptation.

Discussion (cont.)
The table compares the transfer functions computed by the TLMS and LMS algorithms. At high SNR both algorithms perform well, although TLMS is slightly better. At low SNR the LMS algorithm cannot find the solution, whereas the TLMS algorithm finds a solution very close to the system's true transfer function. Moreover, the TLMS algorithm also shows a smaller variance in the computed error (see the report).

Conclusion
The TLMS algorithm is an unsupervised-learning adaptive linear combiner based on total least mean squares: it minimizes a Rayleigh quotient and thereby extracts the minor component of the training sequences. The simulation results show that the TLMS algorithm outperforms the LMS algorithm when interference exists at both the input and the output. The advantages of the TLMS algorithm are its ability to operate in a system corrupted with noise and the stability of its results.