1
Estimation Techniques for High Resolution and Multi-Dimensional Array Signal Processing
EMS Group – Fh IIS and TU IL: Electronic Measurements and Signal Processing Group (EMS)
LASP – UnB: Laboratory of Array Signal Processing
Prof. João Paulo C. Lustosa da Costa
2
Content of the intensive course (1)
Introduction to multi-channel systems
Mathematical background
High resolution array signal processing
Model order selection
Beamforming
Direction of arrival (DOA) estimation
Signal reconstruction via pseudo inverse
Prewhitening
Independent component analysis (ICA) for instantaneous mixtures
ICA for convolutive mixtures
4
Introduction to multichannel systems (1)
Standard (Matrix) Array Signal Processing
Four gains: array gain, diversity gain, spatial multiplexing gain and interference reduction gain.
[Figure: TX and RX arrays with three antennas each]
Array gain: 3 for each side.
Diversity gain: the same information is sent over each path.
Spatial multiplexing gain: different information is sent over each path.
6
Introduction to multichannel systems (3)
Standard (Matrix) Array Signal Processing
Four gains: array gain, diversity gain, spatial multiplexing gain and interference reduction gain.
[Figure: TX and RX arrays in the presence of an interferer, illustrating the interference reduction gain]
8
Introduction to multichannel systems (5)
MIMO Channel Model
Direction of Departure (DOD): transmit array, 1-D or 2-D
Direction of Arrival (DOA): receive array, 1-D or 2-D
Frequency: delay
Time: Doppler shift
9
Introduction to multichannel systems (6)
Multi-dimensional array signal processing
The dimensions depend on the type of application:
MIMO: the received data has two spatial dimensions, frequency and time; the channel has four spatial dimensions, frequency and time.
Microphone array: the received data has one spatial dimension and time; after a time-frequency analysis: space, time and frequency.
EEG (similar to the microphone array).
Psychometrics, chemistry, food industry.
10
Introduction to multichannel systems (7)
Multi-dimensional array signal processing
Advantages: increased identifiability, separation without imposing additional constraints, and improved accuracy (tensor gain).
[Figure: RX is a 3 x 3 Uniform Rectangular Array (URA), elements indexed by (m1, m2), with n snapshots]
Stacking the URA data into a 9 x 3 matrix: the maximum rank is 3, so at most 3 sources can be resolved.
11
Introduction to multichannel systems (8)
Multi-dimensional array signal processing
Advantages: increased identifiability, separation without imposing additional constraints, and improved accuracy (tensor gain).
[Figure: RX is a 3 x 3 Uniform Rectangular Array (URA), elements indexed by (m1, m2), with n snapshots]
Keeping the data as a 3 x 3 x 3 tensor: the maximum rank is 5, so up to 5 sources can be resolved.
J. B. Kruskal, "Rank, decomposition, and uniqueness for 3-way and N-way arrays," Multiway Data Analysis, pages 7–18, 1989.
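To make the identifiability argument concrete, the following Python sketch (not part of the original slides; the factor matrices are random placeholders) builds a 3 x 3 x 3 tensor from five rank-one components and shows that any matrix unfolding of it has rank at most 3. A plain matrix model therefore saturates at three sources, while the CP (tensor) rank can still be five under Kruskal's condition.

```python
# Minimal sketch (assumption: random factor matrices stand in for real array
# steering matrices). A 3 x 3 x 3 tensor built from 5 rank-1 terms has matrix
# unfoldings of rank at most 3, but its CP rank can still be 5.
import numpy as np

rng = np.random.default_rng(0)
d = 5                                    # number of sources / rank-1 components
A, B, C = (rng.standard_normal((3, d)) for _ in range(3))

# T[i, j, k] = sum_r A[i, r] * B[j, r] * C[k, r]
T = np.einsum('ir,jr,kr->ijk', A, B, C)

# Mode-1 unfolding: a 3 x 9 matrix, so its rank cannot exceed 3.
unfolding = T.reshape(3, 9)
print('rank of the unfolding:', np.linalg.matrix_rank(unfolding))   # prints 3
```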
12
Introduction to multichannel systems (9)
Multi-dimensional array signal processing
Advantages: increased identifiability, separation without imposing additional constraints, and improved accuracy (tensor gain).
For the matrix model, unrealistic assumptions such as orthogonality (PCA) or statistical independence (ICA) must be made.
For the tensor model, the separation is unique up to scaling and permutation ambiguities.
[Figure: the data tensor written as a sum of rank-one terms]
13
Introduction to multichannel systems (10)
Multi-dimensional array signal processing
Advantages: increased identifiability, separation without imposing additional constraints, and improved accuracy (tensor gain).
Array interpolation due to array imperfections.
Applications of tensor-based techniques:
Estimation of the number of sources d, also known as model order selection: multi-dimensional schemes give better accuracy.
Prewhitening schemes: multi-dimensional schemes give better accuracy and lower complexity.
Parameter estimation: drastic reduction of the computational complexity, since multidimensional searches are decomposed into several one-dimensional searches.
14
Content of the intensive course (1)
Introduction to multi-channel systems
Mathematical background
High resolution array signal processing
Model order selection
Beamforming
Direction of arrival (DOA) estimation
Signal reconstruction via pseudo inverse
Prewhitening
Independent component analysis (ICA) for instantaneous mixtures
ICA for convolutive mixtures
15
Mathematical Background: Stationary Processes (1)
Stochastic (or random) process: the evolution of a statistical phenomenon according to probabilistic laws.
Before the process starts, it is not possible to say exactly how it will evolve; there is an infinite number of possible realizations of the process.
Strictly stationary: the statistical properties are invariant to a time shift.
If the probability density function (PDF) f(x) is known, all the moments can be computed. In practice, the PDF is not known; therefore, in most cases only the first and second moments can be estimated from samples.
16
Mathematical Background: Stationary Processes (2)
Statistical functions
The first moment, also known as the mean-value function: μ = E{u(n)}, where E{ } stands for the expected-value operator (statistical expectation) and u(n) is the sample at the n-th time instant.
The autocorrelation function: r(l) = E{u(n) u*(n − l)}.
The autocovariance function: c(l) = E{(u(n) − μ)(u(n − l) − μ)*}.
Note that all three functions are assumed constant over time; therefore, they do not depend on n.
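As an illustration (not from the slides), the following Python sketch estimates the mean, the autocorrelation and the autocovariance from N samples of a placeholder stationary signal and checks the relation c(l) = r(l) − |μ|² used on the next slide.

```python
# Sketch: sample estimates of the mean, autocorrelation and autocovariance.
# The test signal (constant mean plus white noise) is only a placeholder.
import numpy as np

rng = np.random.default_rng(1)
N = 100_000
mu = 1.0 + 0.5j                                     # true (complex) mean
v = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
u = mu + v                                          # placeholder stationary signal

mu_hat = u.mean()                                   # estimate of the mean

def r_hat(l):                                       # autocorrelation estimate r(l)
    return np.mean(u[l:] * np.conj(u[:N - l]))

def c_hat(l):                                       # autocovariance estimate c(l)
    return np.mean((u[l:] - mu_hat) * np.conj(u[:N - l] - mu_hat))

l = 3
# Relation c(l) = r(l) - |mu|^2 holds up to estimation error.
print(abs(c_hat(l) - (r_hat(l) - abs(mu_hat) ** 2)) < 1e-3)            # True
```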
17
Mathematical Background: Stationary Processes (3)
Relation between the statistical functions
The relation between the mean-value, autocorrelation and autocovariance functions is c(l) = r(l) − |μ|².
Proof: c(l) = E{(u(n) − μ)(u(n − l) − μ)*} = E{u(n) u*(n − l)} − μ E{u*(n − l)} − μ* E{u(n)} + |μ|² = r(l) − |μ|² − |μ|² + |μ|² = r(l) − |μ|².
If the mean is zero, the autocovariance and the autocorrelation functions are equal.
18
Mathematical Background: Stationary Processes (4)
Wide-sense stationarity and the mean estimate
If the mean-value function is constant over time and the autocorrelation function depends only on the lag l (not on n), then the process is wide-sense stationary, or stationary to the second order.
In practice, only a limited number of samples is available; therefore, the mean, the autocovariance and the autocorrelation must be estimated.
Estimate of the mean: μ̂ = (1/N) Σ u(n), with the sum taken over the N available samples.
20
Mathematical Background: Stationary Processes (5)
Correlation matrix in single-channel systems
The L by 1 observation vector: u(n) = [u(n), u(n − 1), ..., u(n − L + 1)]^T.
Correlation matrix: R = E{u(n) u^H(n)}.
The main diagonal always contains real-valued elements (each diagonal entry equals r(0)).
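A small Python sketch (assumed snapshot model with placeholder white-noise data, not from the slides) showing how the sample correlation matrix is formed from the observation vectors; it also confirms that the matrix is Hermitian and that its main diagonal is real-valued.

```python
# Sketch: sample correlation matrix R_hat = (1/N) * sum_n u(n) u(n)^H formed
# from N snapshots of an L by 1 observation vector (white-noise placeholder).
import numpy as np

rng = np.random.default_rng(2)
L, N = 4, 10_000
U = (rng.standard_normal((L, N)) + 1j * rng.standard_normal((L, N))) / np.sqrt(2)

R_hat = U @ U.conj().T / N                  # L x L sample correlation matrix
print(np.allclose(R_hat, R_hat.conj().T))   # Hermitian: True
print(np.allclose(np.diag(R_hat).imag, 0))  # main diagonal is real-valued: True
```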
21
Mathematical Background: Stationary Processes (6)
Correlation matrix in single-channel systems
The correlation matrix is Hermitian, i.e. R^H = R.
Proof: R^H = (E{u(n) u^H(n)})^H = E{(u(n) u^H(n))^H} = E{u(n) u^H(n)} = R.
22
Mathematical Background: Stationary Processes (7)
Correlation matrix in single-channel systems
The correlation matrix is Hermitian, i.e. R^H = R.
As a consequence of the Hermitian property, r(−l) = r*(l).
24
Mathematical Background: Stationary Processes (8)
Correlation matrix in single-channel systems
The correlation matrix is Hermitian, i.e. R^H = R.
The correlation matrix is Toeplitz, i.e. all elements on the main diagonal are equal, as are the elements on each diagonal parallel to the main diagonal.
Important: wide-sense stationarity implies that R is Toeplitz, and R being Toeplitz implies wide-sense stationarity.
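For a wide-sense stationary process the whole matrix is determined by the lags r(0), ..., r(L − 1). The sketch below (with arbitrary placeholder lag values, not from the slides) builds such a Hermitian Toeplitz correlation matrix with scipy.

```python
# Sketch: building a Hermitian Toeplitz correlation matrix from the lags
# r(0), r(1), ..., r(L-1). The lag values here are arbitrary placeholders.
import numpy as np
from scipy.linalg import toeplitz

r = np.array([2.0, 0.8 + 0.3j, 0.1 - 0.2j])   # r(0), r(1), r(2)
R = toeplitz(np.conj(r), r)                    # first column r*(l), first row r(l)

print(R)
print(np.allclose(R, R.conj().T))              # Hermitian: True
```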
25
Mathematical Background: Stationary Processes (9)
Correlation matrix in single-channel systems
The correlation matrix is always nonnegative definite and almost always positive definite.
Define the scalar y = x^H u(n), where x is a constant vector. Then E{|y|²} = E{x^H u(n) u^H(n) x} = x^H R x ≥ 0.
If x^H R x ≥ 0 for every x, R is nonnegative definite (positive semidefinite); if x^H R x > 0 for every nonzero x, R is positive definite.
26
Mathematical Background: Stationary Processes (9)
Example of a correlation matrix
We consider the data model u(n) = α exp(jωn) + v(n), where u(n) is the received signal, α exp(jωn) is the complex sinusoid of interest and v(n) is the zero-mean i.i.d. noise component with variance σ²_v. Note: i.i.d. stands for independent and identically distributed.
Computing the autocorrelation function: r(l) = E{u(n) u*(n − l)} = |α|² exp(jωl) + σ²_v δ(l).
27
Mathematical Background: Stationary Processes (10)
Example of a correlation matrix (continued)
The L by L correlation matrix R is built from the lags r(0), r(1), ..., r(L − 1) computed on the previous slide; it is Hermitian and Toeplitz.
28
Mathematical Background: Stationary Processes (11)
Example of a correlation matrix
Parameter estimation: given the noise variance σ²_v and computing r(0), estimate |α|²; given the estimate of |α|² and computing r(l), estimate ω.
Assuming the noise-free case (σ²_v = 0):
- All rows and all columns are linearly dependent.
- R has rank 1.
- Only one eigenvalue is nonzero.
- The model order is one.
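A Python sketch of this example (with assumed values for α, ω, the noise variance and L, which are not given in the extracted text): it forms the correlation matrix of the complex sinusoid plus white noise, recovers |α|² from r(0) and ω from r(1), and shows the single dominant eigenvalue.

```python
# Sketch of the sinusoid-plus-noise example, u(n) = alpha*exp(j*omega*n) + v(n),
# with r(l) = |alpha|^2 * exp(j*omega*l) + sigma2 * delta(l).
# alpha, omega, sigma2 and L below are assumed example values.
import numpy as np
from scipy.linalg import toeplitz

alpha, omega, sigma2, L = 2.0 * np.exp(1j * 0.7), 0.9, 0.5, 4

lags = np.arange(L)
r = np.abs(alpha) ** 2 * np.exp(1j * omega * lags)      # sinusoid part of r(l)
r[0] += sigma2                                          # noise adds only at lag zero
R = toeplitz(np.conj(r), r)                             # Hermitian Toeplitz correlation matrix

# Parameter estimation from the lags:
alpha2_hat = (r[0] - sigma2).real                       # |alpha|^2 from r(0) and sigma2
omega_hat = np.angle(r[1] / alpha2_hat)                 # omega from the phase of r(1)
print(alpha2_hat, omega_hat)                            # approximately 4.0 and 0.9

# One dominant (signal) eigenvalue, the rest equal to sigma2;
# with sigma2 = 0 the matrix would be rank one (model order one).
print(np.linalg.eigvalsh(R))                            # approximately [0.5, 0.5, 0.5, 16.5]
```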
29
Mathematical Background: Complex Variables (1)
The complex variables
Example of application: the complex variable z of the z-transform.
Consider z = x + jy, where x = Re[z] and y = Im[z], and a function f such that f(z) = u(x, y) + j v(x, y).
The derivative of f(z) at z0 is defined as the limit of [f(z) − f(z0)] / (z − z0) as z approaches z0; for the derivative to exist, the limit must be independent of the way z approaches z0.
The difference quotient can then be written in terms of u, v, x and y.
30
Mathematical Background: Complex Variables (2)
The complex variables
The derivative of f(z) can be rewritten in terms of the partial derivatives of u and v with respect to x and y.
Case 1: z approaches z0 along the real axis (Δy = 0), so the derivative becomes ∂u/∂x + j ∂v/∂x.
31
Mathematical Background: Complex Variables (3)
The complex variables
Case 2: z approaches z0 along the imaginary axis (Δx = 0), so the derivative becomes ∂v/∂y − j ∂u/∂y.
For the derivative to be independent of the approach path, both cases must give the same result. Therefore ∂u/∂x = ∂v/∂y and ∂v/∂x = −∂u/∂y.
These equations are known as the Cauchy-Riemann equations. The Cauchy-Riemann conditions are necessary and sufficient for the existence of the derivative.
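The Cauchy-Riemann conditions can be checked numerically with finite differences. The sketch below is illustrative and not part of the slides; it uses f(z) = z² as an analytic function and f(z) = conj(z) as a counterexample.

```python
# Sketch: finite-difference check of the Cauchy-Riemann equations
# du/dx = dv/dy and dv/dx = -du/dy at a test point z0.
import numpy as np

def cauchy_riemann_residual(f, z0, h=1e-6):
    u = lambda x, y: f(x + 1j * y).real
    v = lambda x, y: f(x + 1j * y).imag
    x0, y0 = z0.real, z0.imag
    du_dx = (u(x0 + h, y0) - u(x0 - h, y0)) / (2 * h)
    du_dy = (u(x0, y0 + h) - u(x0, y0 - h)) / (2 * h)
    dv_dx = (v(x0 + h, y0) - v(x0 - h, y0)) / (2 * h)
    dv_dy = (v(x0, y0 + h) - v(x0, y0 - h)) / (2 * h)
    return abs(du_dx - dv_dy) + abs(dv_dx + du_dy)

z0 = 1.3 + 0.4j
print(cauchy_riemann_residual(lambda z: z ** 2, z0))        # ~0: analytic
print(cauchy_riemann_residual(lambda z: np.conj(z), z0))    # ~2: not analytic
```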
32
Mathematical Background: Complex Variables (4)
Optimization problem: differentiate a cost function with respect to a complex-valued vector w.
Each element of the vector: w_k = a_k + j b_k, with real part a_k and imaginary part b_k.
The complex derivatives with respect to w_k and w_k* are expressed as combinations of the real derivatives with respect to a_k and b_k.
Properties of the complex vector differentiation are listed next.
33
Mathematical Background: Complex Variables (5)
Complex derivatives of vectors in terms of real derivatives, and properties of the complex vector differentiation: for a constant vector c and a Hermitian matrix R, the derivative of c^H w with respect to w* is the zero vector, the derivative of w^H c with respect to w* is c, and the derivative of w^H R w with respect to w* is R w.
35
Mathematical Background: Complex Variables (6)
Example 1: derivatives that evaluate to a zero matrix and a zero vector.
Example 2: a further application of the vector differentiation rules.
Example 3: setting the derivative of the mean-squared error cost with respect to w* to zero yields the Wiener-Hopf equations R w = p, where p is the cross-correlation vector.
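As a numerical illustration of Example 3 (with small placeholder statistics that are not taken from the slides), the Wiener-Hopf equations R w = p can be solved directly once R and p are available.

```python
# Sketch: solving the Wiener-Hopf equations R w_o = p for the optimal filter.
# R and p below are small, arbitrary placeholder statistics.
import numpy as np

R = np.array([[2.0, 0.5 + 0.2j],
              [0.5 - 0.2j, 1.5]])          # correlation matrix of the input
p = np.array([1.0 + 0.3j, 0.4 - 0.1j])     # cross-correlation with the desired signal

w_opt = np.linalg.solve(R, p)              # optimal (Wiener) weight vector
print(w_opt)
```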
36
Mathematical Background: Complex Variables (7)
Relation between the gradient and the vector differentiation: for a real-valued cost function, the gradient with respect to the weight vector is proportional to the derivative of the cost with respect to the conjugate vector w*.
37
Mathematical Background: Lagrange Multipliers (1)
The method of Lagrange multipliers solves constrained optimization problems.
Optimization with a single equality constraint: f(w) is a quadratic function of a vector w, minimized subject to an equality constraint; rewriting the problem in this form is known as the primal equation.
Physical interpretation: w is the vector of complex weights, s is the steering vector, and f(w) is the mean-squared value of the beamformer output.
The method of Lagrange multipliers transforms the constrained problem above into an unconstrained problem by introducing the Lagrange multipliers.
38
Mathematical Background: Lagrange Multipliers (2)
Method of Lagrange multipliers: using f(w) and the constraint c(w), build a new function h(w) in which the constraint, weighted by the Lagrange multiplier λ, is added to the cost f(w).
To minimize the function h(w), set its derivative with respect to w* equal to zero; the resulting equation is known as the adjoint equation.
39
Mathematical Background: Lagrange Multipliers (3)
Method of Lagrange multipliers: observing the adjoint equation, the solution occurs where the contour lines of the cost function and the constraint curve are tangent (parallel) to each other, i.e., where their derivatives are proportional.
40
Mathematical Background: Lagrange Multipliers (4)
Method of Lagrange multipliers: example.
Real-valued case: minimize the surface area of a 2000 m³ oil reservoir (tank) without a top.
The area to be minimized and the restriction (the fixed volume) are combined through a Lagrange multiplier.
41
Mathematical Background: Lagrange Multipliers (5)
Method of Lagrange multipliers: example.
The stationarity conditions are combined with the restriction (the fixed volume) to find the optimal dimensions of the tank.
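The exact geometry used on the slide is not recoverable from the extracted text, so the sketch below assumes a rectangular open-top tank with base x by y and height z, and solves the same kind of constrained problem numerically with scipy instead of by hand.

```python
# Hedged sketch of the reservoir example: minimize the surface area of an
# open-top rectangular tank subject to a fixed volume of 2000 m^3.
# (The rectangular shape is an assumption; the slide's figure may differ.)
from scipy.optimize import minimize

def area(v):
    x, y, z = v
    return x * y + 2 * x * z + 2 * y * z        # base plus four side walls

volume_constraint = {'type': 'eq', 'fun': lambda v: v[0] * v[1] * v[2] - 2000.0}

res = minimize(area, x0=[10.0, 10.0, 10.0], method='SLSQP',
               constraints=[volume_constraint], bounds=[(1e-3, None)] * 3)
print(res.x)   # approximately x = y = 2*z (x, y ~ 15.87 m, z ~ 7.94 m)
```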
42
Mathematical Background: Lagrange Multipliers (6)
Example: find the vector w that minimizes the quadratic function f(w) = w^H R w, given the constraint w^H s = 1.
The adjoint equation is obtained by setting the derivative of h(w) with respect to w* to zero; applying the previous vector differentiation rules gives R w proportional to the steering vector s.
43
Mathematical Background: Lagrange Multipliers (7)
Replacing the optimal w^H in the primal equation determines the Lagrange multiplier; replacing it back into the equation for the optimal w gives w_opt = R⁻¹ s / (s^H R⁻¹ s).
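Assuming this is the minimum-variance beamforming problem described above (minimize w^H R w subject to w^H s = 1), the closed-form solution can be evaluated directly; the correlation matrix and steering vector below are placeholders, not values from the slides.

```python
# Sketch: constrained minimum of w^H R w subject to w^H s = 1,
# w_opt = R^{-1} s / (s^H R^{-1} s). R and s are placeholder values.
import numpy as np

M = 4
n = np.arange(M)
s = np.exp(1j * np.pi * np.sin(np.deg2rad(20.0)) * n)   # steering vector (assumed ULA)
rng = np.random.default_rng(3)
A = (rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))) / np.sqrt(2)
R = A @ A.conj().T + np.eye(M)                           # some positive definite matrix

Rinv_s = np.linalg.solve(R, s)
w_opt = Rinv_s / (s.conj() @ Rinv_s)                     # optimal weight vector

print(np.isclose(w_opt.conj() @ s, 1.0))                 # constraint satisfied: True
```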
44
Mathematical Background: Linear Algebra (1)
Eigenvalue Decomposition (EVD)
Comparison to the Fourier Transform (FT): for the Fourier Transform, the data is projected onto complex exponential functions (oscillations), and each vector of the FT maps to a certain frequency. The vectors of the FT do not take the structure of the data into account.
The Karhunen-Loeve Transform (KLT), also known as the Hotelling Transform or Eigenvector Transform, takes the data structure into account.
Definition of an eigenvector: R v = λ v, where λ is the eigenvalue and v is the eigenvector.
45
Mathematical Background: Linear Algebra (2)
Eigenvalue Decomposition (EVD)
To find the eigenvalues, solve the characteristic equation det(R − λI) = 0.
Once the eigenvalues are computed, the eigenvectors are obtained from the relation (R − λI) v = 0, inserting one eigenvalue at a time.
Once the eigenvalues and eigenvectors are computed, R can be written as R = Q Λ Q^H, where the columns of Q are the eigenvectors and Λ is the diagonal matrix of eigenvalues.
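A short numpy sketch of the procedure (the matrix used in the slides' worked example is not recoverable from the text, so an arbitrary Hermitian matrix is used here): it computes the eigenvalues and eigenvectors and verifies the decomposition R = Q Λ Q^H.

```python
# Sketch: eigenvalue decomposition of a Hermitian matrix with numpy.
# The example matrix is an arbitrary placeholder.
import numpy as np

R = np.array([[3.0, 1.0 + 1.0j],
              [1.0 - 1.0j, 2.0]])

eigvals, Q = np.linalg.eigh(R)                   # eigh: EVD for Hermitian matrices
Lam = np.diag(eigvals)

print(eigvals)                                   # real eigenvalues: [1. 4.]
print(np.allclose(R, Q @ Lam @ Q.conj().T))      # R = Q * Lambda * Q^H: True
print(np.allclose(Q.conj().T @ Q, np.eye(2)))    # eigenvectors are orthonormal: True
```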
46
Mathematical Background: Linear Algebra (3)
Eigenvalue Decomposition (EVD): Example
Compute the eigenvalues and eigenvectors of the matrix given on the slide; the eigenvalues are found from the characteristic equation det(R − λI) = 0.
47
Mathematical Background: Linear Algebra (4)
Eigenvalue Decomposition (EVD): Example
Compute the eigenvectors from (R − λI) v = 0 for each eigenvalue; each eigenvector is normalized to unit norm. The same procedure is repeated for the next eigenvector.
48
Mathematical Background: Linear Algebra (5)
Eigenvalue Decomposition (EVD): Example Compute the eigenvectors
49
Mathematical Background: Linear Algebra (6)
Eigenvalue Decomposition (EVD): Example What is the physical meaning of the EVD?
55
Mathematical Background: Linear Algebra (11)
Eigenvalue Decomposition (EVD): Example
Eigenvalues of the white noise correlation matrix: one eigenvalue (the noise variance) with multiplicity M, and M different eigenvectors.
Eigenvalues of a complex sinusoid correlation matrix: one eigenvalue equal to M and all the other eigenvalues equal to zero.
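Both statements are easy to verify numerically; the sketch below (with assumed values of M, the noise variance and the frequency) computes the eigenvalues of a white-noise correlation matrix and of the rank-one correlation matrix of a unit-amplitude complex sinusoid.

```python
# Sketch: eigenvalues of a white-noise correlation matrix (sigma^2 * I) and of
# a unit-amplitude complex sinusoid correlation matrix (s s^H, rank one).
# M, sigma2 and omega are assumed example values.
import numpy as np

M, sigma2, omega = 5, 0.5, 0.8
s = np.exp(1j * omega * np.arange(M))

R_noise = sigma2 * np.eye(M)
R_sin = np.outer(s, s.conj())

print(np.linalg.eigvalsh(R_noise))   # sigma2 repeated M times
print(np.linalg.eigvalsh(R_sin))     # one eigenvalue equal to M, the rest zero
```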
56
Mathematical Background: Linear Algebra (12)
Eigenvalue Decomposition (EVD): Properties
The eigenvalues of the correlation matrix R raised to the power k are the eigenvalues of R raised to the power k, i.e. R^k v = λ^k v.
57
Mathematical Background: Linear Algebra (13)
Eigenvalue Decomposition (EVD): Properties
The eigenvectors are linearly independent: a linear combination of the eigenvectors equal to zero has only the trivial solution, i.e. all the combining constants are zero. Since there is no other solution, the eigenvectors are linearly independent.
58
Mathematical Background: Linear Algebra (14)
Eigenvalue Decomposition (EVD): Other properties
The eigenvalues of the correlation matrix are real and nonnegative.
The eigenvectors are orthogonal to each other.
The eigenvectors diagonalize the correlation matrix: Q^H R Q = Λ.
A projection matrix onto the subspace spanned by a set of eigenvectors can be built from those eigenvectors.
The trace of the correlation matrix is equal to the sum of the eigenvalues.
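These properties can be checked numerically. The sketch below uses a placeholder sample correlation matrix and verifies that the eigenvalues are real and nonnegative, that the eigenvectors diagonalize R, and that the trace equals the sum of the eigenvalues.

```python
# Sketch: numerical check of the EVD properties of a correlation matrix.
# The data used to form R is a random placeholder.
import numpy as np

rng = np.random.default_rng(4)
U = rng.standard_normal((4, 1000)) + 1j * rng.standard_normal((4, 1000))
R = U @ U.conj().T / 1000                        # sample correlation matrix

eigvals, Q = np.linalg.eigh(R)

print(np.all(eigvals >= 0))                               # real and nonnegative: True
print(np.allclose(Q.conj().T @ R @ Q, np.diag(eigvals)))  # diagonalization: True
print(np.isclose(np.trace(R).real, eigvals.sum()))        # trace = sum of eigenvalues: True
```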
59
Mathematical Background: Linear Algebra (15)
Singular Value Decomposition (SVD)
Example using the EVD: for an M by N matrix A, form R = A A^H and R' = A^H A.
If A is full rank, then the rank of A is min{M, N}.
The EVD of R = A A^H provides the left singular vectors, and the EVD of R' = A^H A provides the right singular vectors; the nonzero eigenvalues of both are the squared singular values of A.
60
Mathematical Background: Linear Algebra (16)
Singular Value Decomposition (SVD)
Therefore, we can represent the SVD as A = U S V^H, where U contains the left singular vectors, V contains the right singular vectors, and S is a diagonal matrix with the singular values.
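The relation between the SVD of A and the EVDs of A A^H and A^H A can be verified with numpy; the matrix below is an arbitrary placeholder.

```python
# Sketch: A = U S V^H; the left singular vectors are eigenvectors of A A^H,
# the right singular vectors are eigenvectors of A^H A, and the squared
# singular values are the nonzero eigenvalues of both. A is a placeholder.
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((4, 3)) + 1j * rng.standard_normal((4, 3))

U, svals, Vh = np.linalg.svd(A, full_matrices=False)

print(np.allclose(A, U @ np.diag(svals) @ Vh))                    # A = U S V^H: True
print(np.allclose(np.sort(svals ** 2),
                  np.sort(np.linalg.eigvalsh(A.conj().T @ A))))   # squared singular values: True
```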