SVD: Physical Interpretation and Applications
Adeel Razi
Wireless Communications Lab (WCL), Dept. of Electrical Engineering & Telecommunications, University of New South Wales
"Anyone who has never made a mistake has never tried anything new." -- Einstein
The plan today
- Singular Value Decomposition: basic intuition
- Formal definition
- SVD and multiuser MIMO
- Other applications
Geometric analysis of linear transformations
We want to know what a linear transformation A does, so we need some simple and comprehensible representation of the matrix of A. Let's look at what A does to some vectors. Since A(αv) = αA(v), it's enough to look at vectors v of unit length.
The geometry of linear transformations
A linear (non-singular) transform A always takes hyper-spheres to hyper-ellipses.
The geometry of linear transformations
Thus, one good way to understand what A does is to find which vectors are mapped to the "main axes" of the ellipsoid.
Geometric analysis of linear transformations
If we are lucky: A = VΛVᵀ, with V orthogonal (true if A is symmetric). The eigenvectors of A are the axes of the ellipse.
Symmetric matrix: eigen decomposition
In this case A is just a scaling matrix. The eigen decomposition A = VΛVᵀ tells us which orthogonal axes it scales (the eigenvectors v₁, v₂) and by how much (the eigenvalues λ₁, λ₂).
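As an aside (not in the original deck), a minimal numpy sketch of this eigen decomposition on an illustrative symmetric matrix:

```python
import numpy as np

# A symmetric matrix: its eigen decomposition A = V @ diag(lam) @ V.T
# has orthogonal V, so A only scales along orthogonal axes.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

lam, V = np.linalg.eigh(A)   # eigenvalues ascending, V orthogonal
assert np.allclose(V @ np.diag(lam) @ V.T, A)
print(lam)                   # scale factors along the eigenvector axes: [1. 3.]
```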
General linear transformations: SVD
In general A will also contain rotations, not just scales.
General linear transformations: SVD
[Figure: the unit circle with orthonormal vectors v₁, v₂ is mapped by A to an ellipse whose orthonormal axes u₁, u₂ are scaled by σ₁, σ₂.]
SVD more formally
SVD exists for any matrix. Formal definition: for square matrices A ∈ ℝⁿˣⁿ, there exist orthogonal matrices U, V ∈ ℝⁿˣⁿ and a diagonal matrix Σ, such that all the diagonal values σᵢ of Σ are non-negative and A = UΣVᵀ.
SVD more formally
The diagonal values of Σ (σ₁, …, σₙ) are called the singular values. It is customary to sort them: σ₁ ≥ σ₂ ≥ … ≥ σₙ. The columns of U (u₁, …, uₙ) are called the left singular vectors; they are the axes of the ellipsoid. The columns of V (v₁, …, vₙ) are called the right singular vectors; they are the pre-images of the axes of the ellipsoid.
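A small numpy sketch, added here for illustration on a made-up matrix, checking the properties above:

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])

U, s, Vt = np.linalg.svd(A)   # s holds the singular values, sorted descending
assert np.allclose(U @ np.diag(s) @ Vt, A)   # A = U Sigma V^T
assert np.allclose(U.T @ U, np.eye(2))       # U orthogonal
assert np.allclose(Vt @ Vt.T, np.eye(2))     # V orthogonal
print(s)                                     # non-negative: s[0] >= s[1] >= 0
```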
Singular values and eigenvalues: relation
For an m × n matrix A of rank r there exists a factorization (Singular Value Decomposition = SVD) A = UΣVᵀ, where U is m × m, Σ is m × n, and V is n × n. The columns of U are orthogonal eigenvectors of AAᵀ; the columns of V are orthogonal eigenvectors of AᵀA. The eigenvalues λ₁, …, λᵣ of AAᵀ are also the eigenvalues of AᵀA, and the singular values are σᵢ = √λᵢ.
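A quick numerical check of this relation, added for illustration on a random test matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(4, 3))

U, s, Vt = np.linalg.svd(A)

# The eigenvalues of A^T A equal the squared singular values of A.
lam = np.linalg.eigvalsh(A.T @ A)    # ascending order
assert np.allclose(np.sort(s**2), lam)
```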
Matrix rank
The rank of A is the number of non-zero singular values.
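An illustrative sketch (made-up rank-deficient matrix): in floating point, exact zeros are rare, so the numerical rank counts singular values above a small tolerance.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0],   # row 2 = 2 * row 1, so A has rank 2
              [0.0, 1.0]])

s = np.linalg.svd(A, compute_uv=False)
rank = int(np.sum(s > 1e-10 * s[0]))
print(rank, np.linalg.matrix_rank(A))   # both 2
```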
Beamforming for Multiuser MIMO System
MIMO systems are not always transparent to the user, and a smart antenna alone is not suitable for solving the Non-Line-of-Sight (NLOS) problem. The best solution: Singular Value Decomposition (SVD). [Figure: Tx antennas 1-4 and Rx antennas 1-4 communicating through clusters 1 and 2, showing desired beams, undesired spread beams, and interference.]
SVD and MIMO
The MIMO channel can be modeled as a complex matrix H. The MIMO channel capacity is determined by H, and CSI is needed at the Tx. [Figure: MIMO channel and its SVD.]
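The capacity expression itself was a figure in the original slide. The sketch below, added for illustration, computes a capacity from the singular values, assuming a random Rayleigh-like channel and equal power per eigen-channel (water-filling would do better):

```python
import numpy as np

rng = np.random.default_rng(2)
H = (rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))) / np.sqrt(2)
s = np.linalg.svd(H, compute_uv=False)

snr = 10.0                  # total SNR (linear), split equally across streams
n_streams = len(s)
capacity = np.sum(np.log2(1 + (snr / n_streams) * s**2))
print(f"{capacity:.2f} bits/s/Hz")
```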
Beamforming for MIMO System
The goal of beamforming is to diagonalize the channel matrix; performance then approaches the Shannon capacity.
Beamforming for MIMO System
Let the channel matrix be H, decomposed as H = UΣVᴴ. Pre-processing at the transmitter multiplies the data streams by V; post-processing at the receiver multiplies the received vector by Uᴴ.
Beamforming for MIMO System
By applying the SVD, H can be decomposed as H = UΣVᴴ. With array processing V at the transmitter and Uᴴ at the receiver, the effective channel UᴴHV = Σ is diagonal, so the data streams arrive orthogonally, without interference between streams.
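An illustrative numpy sketch (random channel assumed) showing that this pre/post-processing diagonalizes the channel:

```python
import numpy as np

# SVD beamforming: precode with V at the Tx, combine with U^H at the Rx.
# The effective channel U^H @ H @ V is the diagonal Sigma, so the
# streams do not interfere.
rng = np.random.default_rng(3)
H = (rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))) / np.sqrt(2)
U, s, Vh = np.linalg.svd(H)

effective = U.conj().T @ H @ Vh.conj().T
assert np.allclose(effective, np.diag(s))
```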
Matrix inverse and solving linear systems
Matrix inverse: A⁻¹ = (UΣVᵀ)⁻¹ = VΣ⁻¹Uᵀ. So, to solve Ax = b: x = VΣ⁻¹Uᵀb.
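A minimal sketch of solving a linear system this way (illustrative matrix, numpy assumed):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
b = np.array([1.0, 2.0])

U, s, Vt = np.linalg.svd(A)
x = Vt.T @ ((U.T @ b) / s)   # x = V Sigma^{-1} U^T b
assert np.allclose(A @ x, b)
```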
Solving least-squares systems
We tried to solve Ax = b when A was rectangular (more equations than unknowns), seeking solutions in the least-squares sense: minimize ‖Ax − b‖².
Solving least-squares systems
When A is full-rank, the solution satisfies the normal equations AᵀAx = Aᵀb. Using the SVD: AᵀA = (UΣVᵀ)ᵀ(UΣVᵀ) = VΣᵀΣVᵀ = V diag(σ₁², …, σₙ²) Vᵀ.
Solving least-squares systems
Substituting in the normal equations: x = (AᵀA)⁻¹Aᵀb = V diag(1/σ₁², …, 1/σₙ²) Vᵀ VΣᵀUᵀb = V diag(1/σ₁, …, 1/σₙ) Uᵀb.
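For illustration, the same recipe in numpy on a made-up overdetermined system, checked against np.linalg.lstsq:

```python
import numpy as np

# Overdetermined system: 4 equations, 2 unknowns.
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
b = np.array([0.1, 0.9, 2.1, 2.9])

U, s, Vt = np.linalg.svd(A, full_matrices=False)   # reduced SVD
x = Vt.T @ ((U.T @ b) / s)                         # V diag(1/sigma) U^T b
assert np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0])
```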
Pseudoinverse
The matrix we found is called the pseudoinverse of A. Definition using the reduced SVD: A⁺ = V diag(1/σ₁, …, 1/σₙ) Uᵀ. If some of the σᵢ are zero, put zero instead of 1/σᵢ.
Pseudo-inverse
The pseudoinverse A⁺ exists for any matrix A. Its properties: if A is m × n then A⁺ is n × m, and it acts a little like a real inverse: AA⁺A = A and A⁺AA⁺ = A⁺.
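These properties can be checked numerically; numpy's pinv computes the pseudoinverse via the SVD. An illustrative sketch:

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.normal(size=(5, 3))
A_pinv = np.linalg.pinv(A)     # computed via the SVD internally

print(A_pinv.shape)            # (3, 5): n x m for an m x n input
assert np.allclose(A @ A_pinv @ A, A)              # A A+ A = A
assert np.allclose(A_pinv @ A @ A_pinv, A_pinv)    # A+ A A+ = A+
```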
Solving least-squares systems
When A is not full-rank, AᵀA is singular and there are multiple solutions to the normal equations, and thus multiple least-squares solutions. The SVD approach still works! In this case it finds the minimal-norm solution x = A⁺b.
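An illustrative sketch of the rank-deficient case (made-up matrix), showing that the pseudoinverse picks the minimal-norm solution:

```python
import numpy as np

# Rank-deficient A: column 2 = 2 * column 1, so A^T A is singular.
A = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [3.0, 6.0]])
b = np.array([1.0, 2.0, 3.0])

x = np.linalg.pinv(A) @ b              # minimal-norm least-squares solution
x_other = x + np.array([2.0, -1.0])    # [2, -1] spans the null space of A
assert np.allclose(A @ x, A @ x_other)                # same residual
assert np.linalg.norm(x) < np.linalg.norm(x_other)    # but x has smaller norm
```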
Thank you! "The important thing is not to stop questioning." -- Einstein
Backup Slides
Principal components
Eigenvectors that correspond to big eigenvalues are the directions in which the data has strong components (= large variance). If the eigenvalues are more or less the same, there is no preferable direction. Note: the eigenvalues are always non-negative. Think why…
Principal components
If there is no preferable direction, S looks like λI: any vector is an eigenvector. If there is a clear preferable direction, S looks like diag(λ₁, λ₂) where λ₂ is close to zero, much smaller than λ₁.
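A small PCA sketch along these lines, added for illustration with synthetic data:

```python
import numpy as np

# PCA sketch: eigenvectors of the covariance matrix S with large
# eigenvalues are the directions of large variance.
rng = np.random.default_rng(0)
data = rng.normal(size=(500, 2)) * np.array([5.0, 0.5])   # elongated cloud

S = np.cov(data, rowvar=False)
lam, V = np.linalg.eigh(S)   # eigenvalues ascending, always non-negative here
print(lam)                   # one eigenvalue much larger than the other
print(V[:, -1])              # principal direction, roughly [1, 0] (up to sign)
```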
SVD and MIMO
Consider a 2×2 MIMO case. Every input is a vector in the 2-dimensional space (T1, T2). The matrix H rotates the basis vectors (T1, T2) into a new set of vectors (R1, R2), so the output is a vector in the new space of (R1, R2). The new rotated vectors might not be orthogonal, or they might not be independent of each other.
SVD and MIMO
If H is a full-rank matrix, R1 and R2 will be orthogonal (and can be termed the new basis vectors):
h₁₁T1 + h₁₂T2 = R1
h₂₁T1 + h₂₂T2 = R2
SVD and MIMO: Example
Example: T1 + T2 = R1 = R2, so both received components lie along the same direction. The right singular vectors are v1 = 0.7·T1 + 0.7·T2 and v2 = 0.7·T1 − 0.7·T2 (0.7 ≈ 1/√2). The received signal will only lie on this line.
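A numpy sketch of this example, assuming the rank-1 channel H = [[1, 1], [1, 1]] implied by the slide:

```python
import numpy as np

# Rank-1 2x2 channel: both receive antennas see T1 + T2.
H = np.array([[1.0, 1.0],
              [1.0, 1.0]])

U, s, Vt = np.linalg.svd(H)
print(s)       # [2. 0.]: one non-zero singular value, so only one
               # spatial stream survives
print(Vt[0])   # ~[0.7, 0.7] (up to sign): the only direction H passes
```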