SVD: Physical Interpretation and Applications
2018-11-30
Adeel Razi
Wireless Communications Lab (WCL), Dept. of Electrical Engineering & Telecommunications, University of New South Wales
"Anyone who has never made a mistake has never tried anything new" -- Einstein
The plan today
Singular Value Decomposition: basic intuition
Formal definition
SVD and multiuser MIMO
Other applications
Geometric analysis of linear transformations
We want to know what a linear transformation A does. We need some simple and comprehensible representation of the matrix of A. Let's look at what A does to some vectors. Since A(λv) = λA(v), it is enough to look at vectors v of unit length.
The geometry of linear transformations
A linear (non-singular) transformation A always takes hyper-spheres to hyper-ellipses.
The geometry of linear transformations
Thus, one good way to understand what A does is to find which vectors are mapped to the main axes of the ellipsoid.
Geometric analysis of linear transformations
If we are lucky: A = VΛVᵀ, with V orthogonal (true if A is symmetric). The eigenvectors of A are the axes of the ellipse.
Symmetric matrix: eigen decomposition
In this case A is just a scaling matrix. The eigen decomposition A = VΛVᵀ tells us which orthogonal axes it scales (the eigenvectors v₁, v₂) and by how much (the eigenvalues λ₁, λ₂).
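The eigen decomposition of a symmetric matrix can be checked numerically. A minimal sketch with NumPy, using a hypothetical 2×2 symmetric matrix chosen for illustration:

```python
import numpy as np

# A hypothetical symmetric matrix: it scales along orthogonal eigen-axes.
A = np.array([[3.0, 1.0],
              [1.0, 3.0]])

lam, V = np.linalg.eigh(A)   # eigenvalues in ascending order, eigenvectors in columns
A_rec = V @ np.diag(lam) @ V.T   # reconstruct A = V Lambda V^T

print(lam)                    # [2. 4.]
print(np.allclose(A, A_rec))  # True
```

The columns of V are the orthogonal axes being scaled; λ₁ = 2 and λ₂ = 4 are the scale factors.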
General linear transformations: SVD
In general A will also contain rotations, not just scales: A maps the orthonormal vectors v₁, v₂ to the ellipse axes σ₁u₁, σ₂u₂.
General linear transformations: SVD
A takes one orthonormal basis (v₁, v₂) to another orthonormal set of directions (u₁, u₂), scaled by σ₁, σ₂.
SVD more formally
SVD exists for any matrix. Formal definition: for square matrices A ∈ ℝⁿˣⁿ, there exist orthogonal matrices U, V ∈ ℝⁿˣⁿ and a diagonal matrix Σ, such that all the diagonal values σᵢ of Σ are non-negative and A = UΣVᵀ.
SVD more formally
The diagonal values of Σ (σ₁, …, σₙ) are called the singular values. It is customary to sort them: σ₁ ≥ σ₂ ≥ … ≥ σₙ. The columns of U (u₁, …, uₙ) are called the left singular vectors; they are the axes of the ellipsoid. The columns of V (v₁, …, vₙ) are called the right singular vectors; they are the pre-images of the axes of the ellipsoid.
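The definition above maps directly onto NumPy. A minimal sketch, using a hypothetical 2×2 matrix; note that `np.linalg.svd` already returns the singular values sorted in descending order:

```python
import numpy as np

# A hypothetical matrix mixing rotation and scaling.
A = np.array([[2.0, 1.0],
              [0.0, 2.0]])

U, s, Vt = np.linalg.svd(A)   # s is sorted: s[0] >= s[1] >= ... >= 0

# A = U Sigma V^T, with U and V orthogonal:
print(np.allclose(U @ np.diag(s) @ Vt, A))   # True
print(np.allclose(U.T @ U, np.eye(2)))       # True
print(np.allclose(Vt @ Vt.T, np.eye(2)))     # True
```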
Singular values and eigenvalues: the relation
For an m×n matrix A of rank r there exists a factorization (Singular Value Decomposition = SVD) A = UΣVᵀ, where U is m×m, Σ is m×n, and V is n×n. The columns of U are orthogonal eigenvectors of AAᵀ. The columns of V are orthogonal eigenvectors of AᵀA. The nonzero eigenvalues λ₁, …, λᵣ of AAᵀ are also the nonzero eigenvalues of AᵀA, and the singular values are σᵢ = √λᵢ.
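The relation σᵢ² = λᵢ(AᵀA) is easy to verify numerically. A small sketch with a hypothetical random 3×2 matrix:

```python
import numpy as np

# Hypothetical rectangular matrix (m=3, n=2) for illustration.
A = np.random.default_rng(0).normal(size=(3, 2))

U, s, Vt = np.linalg.svd(A)

# Eigenvalues of A^T A equal the squared singular values (sorted descending):
eig_AtA = np.sort(np.linalg.eigvalsh(A.T @ A))[::-1]
print(np.allclose(eig_AtA, s**2))   # True
```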
Matrix rank
The rank of A is the number of non-zero singular values among σ₁, σ₂, …, σₙ.
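Counting non-zero singular values is exactly how numerical rank is computed in practice. A minimal sketch with a hypothetical rank-1 matrix (its second column is twice the first):

```python
import numpy as np

# Hypothetical rank-1 matrix: column 2 = 2 * column 1.
A = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [3.0, 6.0]])

s = np.linalg.svd(A, compute_uv=False)
rank = int(np.sum(s > 1e-10))    # count singular values above a tolerance

print(rank)                       # 1
print(np.linalg.matrix_rank(A))   # 1, same idea internally
```

A tolerance is needed because floating-point round-off rarely produces exact zeros.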
Beamforming for multiuser MIMO systems
MIMO systems are not always transparent to the user, and a smart antenna alone is not suitable for solving the Non-Line-of-Sight (NLOS) problem. The best solution: Singular Value Decomposition (SVD).
[Figure: Tx antennas 1-4 transmitting through clusters 1 and 2 to Rx antennas 1-4; desired beams vs. undesired spread beams causing interference.]
SVD and MIMO
A MIMO channel can be modeled as a complex matrix H. To achieve the MIMO channel capacity via the SVD, channel state information (CSI) is needed at the transmitter.
Beamforming for MIMO systems
The goal of beamforming is to diagonalize the channel matrix, so that performance approaches the Shannon capacity.
Beamforming for MIMO systems
Let the channel matrix be H. We apply pre-processing at the transmitter and post-processing at the receiver.
Beamforming for MIMO systems
By applying the SVD, H can be decomposed as H = UΣVᴴ. With array processing V at the transmitter and Uᴴ at the receiver, the data streams arrive orthogonally, without interference between streams.
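The diagonalization can be sketched numerically: pre-multiplying by V at the transmitter and by Uᴴ at the receiver turns the channel into independent scalar gains σᵢ. A minimal sketch with a hypothetical random 2×2 complex channel (no noise, for clarity):

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical 2x2 complex MIMO channel matrix.
H = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))

U, s, Vh = np.linalg.svd(H)
x = np.array([1.0 + 0j, -1.0 + 0j])   # two data streams

# Tx precoding with V, channel H, Rx combining with U^H:
y = U.conj().T @ (H @ (Vh.conj().T @ x))

# Each stream arrives scaled by its own sigma_i, with no cross-talk:
print(np.allclose(y, s * x))   # True
```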
Matrix inverse and solving linear systems
Matrix inverse via the SVD: A⁻¹ = (UΣVᵀ)⁻¹ = VΣ⁻¹Uᵀ. So, to solve Ax = b, compute x = VΣ⁻¹Uᵀb.
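A minimal sketch of solving Ax = b through the SVD, with a hypothetical invertible 2×2 system. Note that x = V Σ⁻¹ Uᵀ b needs only an elementwise division by the singular values:

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [1.0, 3.0]])
b = np.array([4.0, 5.0])

U, s, Vt = np.linalg.svd(A)
# x = V Sigma^{-1} U^T b; dividing by s applies Sigma^{-1} elementwise.
x = Vt.T @ ((U.T @ b) / s)

print(np.allclose(A @ x, b))   # True
```

In practice `np.linalg.solve` is preferred for well-conditioned square systems; the SVD route generalizes to the rectangular and rank-deficient cases below.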
Solving least-squares systems
We tried to solve Ax = b when A was rectangular, seeking solutions in the least-squares sense: minimize ‖Ax − b‖².
Solving least-squares systems
When A is full-rank, AᵀAx = Aᵀb (the normal equations). So x = (AᵀA)⁻¹Aᵀb, where AᵀA = VΣᵀΣVᵀ = V·diag(σ₁², …, σₙ²)·Vᵀ.
Solving least-squares systems
Substituting the SVD into the normal equations:
x = (AᵀA)⁻¹Aᵀb = V·diag(1/σ₁², …, 1/σₙ²)·Vᵀ · VΣᵀUᵀb = V·diag(1/σ₁, …, 1/σₙ)·Uᵀb.
Pseudoinverse
The matrix we found is called the pseudoinverse of A. Definition using the reduced SVD: A⁺ = V·diag(1/σ₁, …, 1/σₙ)·Uᵀ. If some of the σᵢ are zero, put zero instead of 1/σᵢ.
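The definition translates directly into code. A minimal sketch, with a hypothetical tall (overdetermined) matrix, comparing against NumPy's built-in `pinv`:

```python
import numpy as np

# Hypothetical tall matrix: 3 equations, 2 unknowns.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
b = np.array([1.0, 2.0, 2.0])

# A+ = V Sigma+ U^T: invert the nonzero singular values, keep zeros as zeros.
U, s, Vt = np.linalg.svd(A, full_matrices=False)   # reduced SVD
s_inv = np.where(s > 1e-10, 1.0 / s, 0.0)
A_pinv = Vt.T @ np.diag(s_inv) @ U.T

x = A_pinv @ b   # the least-squares solution
print(np.allclose(A_pinv, np.linalg.pinv(A)))   # True
```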
Pseudoinverse
The pseudoinverse A⁺ exists for any matrix A. Its properties: if A is m×n then A⁺ is n×m. It acts a little like a real inverse: AA⁺A = A, A⁺AA⁺ = A⁺, and if A is invertible then A⁺ = A⁻¹.
Solving least-squares systems
When A is not full-rank, AᵀA is singular and there are multiple solutions to the normal equations, so there are multiple least-squares solutions. The SVD approach still works! In this case x = A⁺b finds the minimal-norm solution.
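The minimal-norm property can be checked on a hypothetical rank-deficient example: here every x with x₁ + x₂ = 2 is a least-squares solution, and the pseudoinverse picks the shortest one, (1, 1):

```python
import numpy as np

# Rank-deficient A: both rows are multiples of (1, 1).
A = np.array([[1.0, 1.0],
              [2.0, 2.0]])
b = np.array([2.0, 4.0])

x = np.linalg.pinv(A) @ b       # SVD-based pseudoinverse solution

print(x)                         # [1. 1.], the minimal-norm solution
print(np.allclose(A @ x, b))     # True
# Any x = (1+t, 1-t) also solves Ax = b, but has larger norm for t != 0.
```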
Thank you!
"The important thing is not to stop questioning" -- Einstein
Backup Slides
Principal components
Eigenvectors that correspond to big eigenvalues are the directions in which the data has strong components (= large variance). If the eigenvalues are all more or less the same, there is no preferable direction. Note: the eigenvalues of a covariance matrix are always non-negative. Think why…
Principal components
If there is no preferable direction, the covariance matrix S looks like a scaled identity: λ₁ ≈ λ₂, and any vector is an eigenvector. If there is a clear preferable direction, λ₂ is close to zero, much smaller than λ₁.
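The two regimes can be illustrated on synthetic data. A sketch, assuming hypothetical 2-D points stretched along the direction (1, 1) with small noise, so one eigenvalue of the covariance matrix dominates:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical 2-D data with a strong component along (1, 1).
t = rng.normal(size=500)
X = np.column_stack([t + 0.1 * rng.normal(size=500),
                     t + 0.1 * rng.normal(size=500)])

S = np.cov(X.T)                 # 2x2 covariance matrix
lam, V = np.linalg.eigh(S)      # eigenvalues ascending, always >= 0

# One dominant direction: lam[1] >> lam[0]
print(lam[1] / lam[0] > 10)     # True
```

With isotropic data (e.g. `X = rng.normal(size=(500, 2))`) the two eigenvalues would instead be comparable.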
SVD and MIMO
Consider a 2×2 MIMO case. Every input is a vector in the 2-dimensional space spanned by (T1, T2). The matrix H maps the basis vectors (T1, T2) into a new set of vectors (R1, R2), so the output is a vector in the space of (R1, R2). The new vectors might not be orthogonal, or they might not be independent of each other.
SVD and MIMO
If H is a full-rank matrix, R1 = h11·T1 + h12·T2 and R2 = h21·T1 + h22·T2 are linearly independent (and can be taken as new basis vectors).
SVD and MIMO: Example
Example: take h11 = h12 = h21 = h22 = 1, so R1 = R2 = T1 + T2. The right singular vectors are v₁ ≈ 0.7·T1 + 0.7·T2 and v₂ ≈ 0.7·T1 − 0.7·T2. Only v₁ is passed by the channel, so the received signal will only lie on the line along R1 = R2.
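The degenerate channel in this example can be sketched directly: the all-ones matrix has rank 1, one singular value is zero, and the direction v₂ ≈ (0.7, −0.7) is nulled by the channel:

```python
import numpy as np

# The slide's degenerate channel: both rows identical.
H = np.array([[1.0, 1.0],
              [1.0, 1.0]])

U, s, Vt = np.linalg.svd(H)
print(s)                            # [2. 0.]: rank 1, one usable spatial stream

# v1 ~ (0.7, 0.7) carries all the energy; v2 ~ (0.7, -0.7) is nulled:
print(np.allclose(H @ Vt[1], 0))    # True
```

So with this channel only one data stream can be sent, along v₁.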