Estimation Techniques for High Resolution and Multi-Dimensional Array Signal Processing – EMS Group – Fh IIS and TU IL (Electronic Measurements and Signal Processing Group).

Presentation transcript:

Estimation Techniques for High Resolution and Multi-Dimensional Array Signal Processing EMS Group – Fh IIS and TU IL Electronic Measurements and Signal Processing Group (EMS) LASP – UnB Laboratory of Array Signal Processing Prof. João Paulo C. Lustosa da Costa joaopaulo.dacosta@ene.unb.br

Content of the intensive course (1) Introduction to multi-channel systems; Mathematical background; High resolution array signal processing; Model order selection; Beamforming; Direction of arrival (DOA) estimation; Signal reconstruction via pseudo inverse; Prewhitening; Independent component analysis (ICA) for instantaneous mixtures; ICA for convolutive mixtures

Introduction to multichannel systems (1) Standard (Matrix) Array Signal Processing Four gains: array gain, diversity gain, spatial multiplexing gain and interference reduction gain RX TX Array gain: 3 for each side Diversity gain: same information for each path Spatial multiplexing gain: different information for each path 4

Introduction to multichannel systems (2) Standard (Matrix) Array Signal Processing Four gains: array gain, diversity gain, spatial multiplexing gain and interference reduction gain RX TX Array gain: 3 for each side Diversity gain: same information for each path Spatial multiplexing gain: different information for each path 5

Introduction to multichannel systems (3) Standard (Matrix) Array Signal Processing Four gains: array gain, diversity gain, spatial multiplexing gain and interference reduction gain RX TX Interferer 6

Introduction to multichannel systems (4) Standard (Matrix) Array Signal Processing Four gains: array gain, diversity gain, spatial multiplexing gain and interference reduction gain RX TX Interferer 7

Introduction to multichannel systems (5) MIMO Channel Model: Direction of Departure (DOD) – transmit array: 1-D or 2-D; Direction of Arrival (DOA) – receive array: 1-D or 2-D; frequency (delay domain); time (Doppler shift domain).

Introduction to multichannel systems (6) Multi-dimensional array signal processing: the dimensions depend on the type of application. MIMO – received data: two spatial dimensions, frequency and time; channel: four spatial dimensions, frequency and time. Microphone array – received data: one spatial dimension and time; after time-frequency analysis: space, time and frequency. EEG (similar to the microphone array). Other application fields: psychometrics, chemistry, food industry.

Introduction to multichannel systems (7) Multi-dimensional array signal processing Advantages: increased identifiability, separation without imposing additional constraints and improved accuracy (tensor gain). RX: Uniform Rectangular Array (URA) with elements indexed by (m1, m2), m1 = 1, 2, 3 and m2 = 1, 2, 3, and snapshots n = 1, 2, 3. Stacking the two spatial dimensions gives a 9 x 3 matrix: its maximum rank is 3, so at most 3 sources can be resolved.

Introduction to multichannel systems (8) Multi-dimensional array signal processing Advantages: increased identifiability, separation without imposing additional constraints and improved accuracy (tensor gain). RX: Uniform Rectangular Array (URA) with elements indexed by m1 = 1, 2, 3 and m2 = 1, 2, 3, and snapshots n = 1, 2, 3. Keeping the dimensions separate gives a 3 x 3 x 3 tensor: its maximum rank is 5, so up to 5 sources can be resolved. J. B. Kruskal, Rank, decomposition, and uniqueness for 3-way and N-way arrays. Multiway Data Analysis, pages 7–18, 1989.
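
A minimal numpy sketch of this identifiability argument (not from the slides; the random factor matrices merely stand in for the per-source contributions): a 3 x 3 x 3 tensor can carry 4 rank-one components, yet its 9 x 3 matrix unfolding is capped at rank 3.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4  # number of sources, larger than any single dimension (3)

# Synthetic 3 x 3 x 3 tensor built as a sum of d rank-one terms (CP/PARAFAC model),
# mimicking d sources seen by a 3 x 3 URA over 3 snapshots.
A = rng.standard_normal((3, d))  # mode-1 factors (first spatial dimension)
B = rng.standard_normal((3, d))  # mode-2 factors (second spatial dimension)
C = rng.standard_normal((3, d))  # mode-3 factors (snapshots)
T = np.einsum('ir,jr,kr->ijk', A, B, C)

# Matrix model: stack both spatial dimensions into one -> 9 x 3 matrix.
M = T.reshape(9, 3)
print(np.linalg.matrix_rank(M))   # 3: the matrix model cannot "see" more than 3 sources

# Tensor model: the Khatri-Rao structured factorization (A kr B) C^T reproduces the
# same 9 x 3 unfolding exactly with d = 4 components, i.e. the tensor is not limited to rank 3.
KR = np.einsum('ir,jr->ijr', A, B).reshape(9, d)  # column-wise Khatri-Rao product of A and B
print(np.allclose(M, KR @ C.T))   # True
```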

Introduction to multichannel systems (9) Multi-dimensional array signal processing Advantages: increased identifiability, separation without imposing additional constraints and improved accuracy (tensor gain). For the matrix model, unrealistic assumptions such as orthogonality (PCA) or independence (ICA) of the components must be imposed. For the tensor model, the decomposition into a sum of rank-one terms is unique up to scaling and permutation ambiguities.

Introduction to multichannel systems (10) Multi-dimensional array signal processing Advantages: increased identifiability, separation without imposing additional constraints and improved accuracy (tensor gain). Applications of tensor-based techniques: array interpolation to compensate for array imperfections; estimation of the number of sources d, also known as model order selection (multi-dimensional schemes give better accuracy); prewhitening schemes (multi-dimensional schemes give better accuracy and lower complexity); parameter estimation with a drastic reduction of computational complexity, since multidimensional searches are decomposed into several one-dimensional searches.

Content of the intensive course (1) Introduction to multi-channel systems; Mathematical background; High resolution array signal processing; Model order selection; Beamforming; Direction of arrival (DOA) estimation; Signal reconstruction via pseudo inverse; Prewhitening; Independent component analysis (ICA) for instantaneous mixtures; ICA for convolutive mixtures

Mathematical Background: Stationary Processes (1) Stochastic (or random) process: the evolution of a statistical phenomenon according to probabilistic laws. Before the process starts, it is not possible to define exactly how it will evolve; there is an infinite number of possible realizations of the process. Strictly stationary: the statistical properties are invariant to time shifts. If the Probability Density Function (PDF) f(x) is known, all the moments can be computed. In practice, the PDF is not known; therefore, in most cases, only the first and second moments can be estimated from samples.

Mathematical Background: Stationary Processes (2) Statistical functions. The first moment, also known as the mean-value function: μ = E{u(n)}, where E{ } stands for the expected-value operator (statistical expectation) and u(n) is the sample at the n-th instant. The autocorrelation function: r(l) = E{u(n) u*(n − l)}. The autocovariance function: c(l) = E{(u(n) − μ)(u(n − l) − μ)*}. Note that all three functions are assumed constant with time; therefore, they do not depend on n, only on the lag l.

Mathematical Background: Stationary Processes (3) Relation between the statistical functions. The relation between the mean-value, autocorrelation and autocovariance functions is given by c(l) = r(l) − |μ|² (a short proof is written out below). If the mean is zero, the autocovariance and the autocorrelation functions are equal.
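
The proof referenced on the slide is not reproduced in the transcript; it follows directly from the definitions above:

```latex
\begin{aligned}
c(l) &= E\{(u(n)-\mu)(u(n-l)-\mu)^{*}\} \\
     &= E\{u(n)u^{*}(n-l)\} - \mu\,E\{u^{*}(n-l)\} - \mu^{*}E\{u(n)\} + |\mu|^{2} \\
     &= r(l) - |\mu|^{2} - |\mu|^{2} + |\mu|^{2} \;=\; r(l) - |\mu|^{2}.
\end{aligned}
```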

Mathematical Background: Stationary Processes (4) Wide-sense stationarity and mean estimate. Note that if the mean is constant, μ(n) = μ for all n, and the autocorrelation depends only on the lag, E{u(n) u*(n − l)} = r(l), then the process is wide-sense stationary, or stationary to the second order. In practice, only a limited number of samples is available; therefore, the mean, the autocovariance and the autocorrelation are estimated. Estimate of the mean: the sample average (1/N) Σ u(n) over the N available samples.

Mathematical Background: Stationary Processes (5) Correlation matrix in single channel systems. The L-by-1 observation vector stacks the L most recent samples, u(n) = [u(n), u(n − 1), …, u(n − L + 1)]^T. The correlation matrix is R = E{u(n) u^H(n)}, whose (i, k) entry is r(k − i). The main diagonal always contains real-valued elements, all equal to r(0).
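
A minimal numpy sketch (not from the slides) of how this correlation matrix could be estimated from a finite record; the function name and the white-noise test signal are illustrative choices:

```python
import numpy as np

def correlation_matrix(u, L):
    """Estimate R = E{u(n) u^H(n)} from a 1-D complex record u,
    using L-by-1 observation vectors u(n) = [u(n), ..., u(n-L+1)]^T."""
    N = len(u)
    R = np.zeros((L, L), dtype=complex)
    count = 0
    for n in range(L - 1, N):
        un = u[n - L + 1:n + 1][::-1].reshape(L, 1)  # [u(n), u(n-1), ..., u(n-L+1)]^T
        R += un @ un.conj().T
        count += 1
    return R / count

rng = np.random.default_rng(1)
N, L = 10_000, 4
u = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)  # unit-power white noise
R = correlation_matrix(u, L)

print(np.allclose(R, R.conj().T))          # Hermitian (exactly, by construction)
print(np.round(np.real(np.diag(R)), 2))    # main diagonal: real values, all close to r(0) = 1
```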

Mathematical Background: Stationary Processes (6) Correlation matrix in single channel systems. The correlation matrix is Hermitian, i.e. R^H = R. Proof: r(−l) = E{u(n − l) u*(n)} = (E{u(n) u*(n − l)})* = r*(l), so entries mirrored across the main diagonal are complex conjugates of each other.

Mathematical Background: Stationary Processes (7) Correlation matrix in single channel systems. The correlation matrix is Hermitian, i.e. R^H = R. As a consequence of the Hermitian property, r(−l) = r*(l): the L values r(0), r(1), …, r(L − 1) are enough to fill the entire matrix.

Mathematical Background: Stationary Processes (8) Correlation matrix in single channel systems. The correlation matrix is Hermitian and, in addition, Toeplitz, i.e. the elements of the main diagonal are equal, as are the elements of each diagonal parallel to the main diagonal. Important: wide-sense stationary ⇒ R is Toeplitz, and R is Toeplitz ⇒ wide-sense stationary.

Mathematical Background: Stationary Processes (9) Correlation matrix in single channel systems. The correlation matrix is always nonnegative definite and almost always positive definite. Define the scalar y = x^H u(n), where x is a constant L-by-1 vector; then E{|y|²} = x^H R x ≥ 0, so R is nonnegative definite (positive semidefinite). If, in addition, x^H R x > 0 for every x ≠ 0, R is positive definite (the derivation is written out below).
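
The derivation behind this statement, expanded from the definition of R:

```latex
E\{|y|^{2}\} \;=\; E\{y\,y^{*}\}
\;=\; E\{\mathbf{x}^{H}\mathbf{u}(n)\,\mathbf{u}^{H}(n)\,\mathbf{x}\}
\;=\; \mathbf{x}^{H}\,E\{\mathbf{u}(n)\,\mathbf{u}^{H}(n)\}\,\mathbf{x}
\;=\; \mathbf{x}^{H}\mathbf{R}\,\mathbf{x} \;\ge\; 0 ,
```

with strict inequality for every x ≠ 0 exactly when R is positive definite.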

Mathematical Background: Stationary Processes (9) Example of correlation matrix. We consider the data model u(n) = α exp(jωn) + v(n), where u(n) is the received signal, α exp(jωn) is the (complex exponential) signal of interest and v(n) is the zero-mean i.i.d. noise component with variance σ_v². Note: i.i.d. stands for independent and identically distributed. Computing the autocorrelation function gives r(l) = E{u(n) u*(n − l)} = |α|² exp(jωl) + σ_v² δ(l).

Mathematical Background: Stationary Processes (10) Example of correlation matrix. Collecting the values r(l) into the L-by-L matrix gives R = |α|² s(ω) s(ω)^H + σ_v² I, where s(ω) = [1, exp(−jω), …, exp(−jω(L − 1))]^T.

Mathematical Background: Stationary Processes (11) Example of correlation matrix. Parameter estimation: given the noise variance σ_v² and computing r(0) = |α|² + σ_v², estimate |α|². Given the estimate of |α|² and computing r(l) for l ≠ 0, estimate ω. Assuming the noise-free case (σ_v² = 0): all rows and all columns of R are linearly dependent; R has rank 1; only one eigenvalue is nonzero; the model order is one.
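
A hedged numerical illustration of this example; the particular values of α, ω and σ_v below are mine, not the slides':

```python
import numpy as np

rng = np.random.default_rng(2)
N, L = 50_000, 4
alpha, omega, sigma_v = 2.0, 0.8, 0.5        # illustrative values, not from the slides

n = np.arange(N)
v = sigma_v * (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
u = alpha * np.exp(1j * omega * n) + v       # data model u(n) = alpha e^{j w n} + v(n)

# Theoretical correlation matrix: R = |alpha|^2 s(w) s(w)^H + sigma_v^2 I
s = np.exp(-1j * omega * np.arange(L)).reshape(L, 1)
R = abs(alpha) ** 2 * (s @ s.conj().T) + sigma_v ** 2 * np.eye(L)

eigvals = np.linalg.eigvalsh(R)
print(np.round(eigvals, 3))
# One dominant eigenvalue L*|alpha|^2 + sigma_v^2 = 16.25; the remaining ones equal sigma_v^2 = 0.25
# -> a single signal eigenvalue above the noise floor, i.e. model order one.

# r(0) = |alpha|^2 + sigma_v^2, so |alpha|^2 can be estimated from the data record as:
r0_hat = np.mean(np.abs(u) ** 2)
print(round(r0_hat - sigma_v ** 2, 2))       # close to |alpha|^2 = 4
```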

Mathematical Background: Complex Variables (1) Complex variables. Example of application: the complex variable z of the z-transform. Consider z = x + jy, where x = Re[z] and y = Im[z], and a function f such that f(z) = u(x, y) + j v(x, y). The derivative of f(z) at z0 is defined as the limit of [f(z0 + Δz) − f(z0)] / Δz as Δz → 0; for the derivative to exist, this limit must be independent of the way z approaches z0. Writing the limit in terms of u, v, x and y leads to the two cases below.

Mathematical Background: Complex Variables (2) Complex variables. The derivative of f(z) can be rewritten according to the direction of approach. Case 1: Δz approaches zero along the real axis (Δy = 0), which gives f′(z) = ∂u/∂x + j ∂v/∂x.

Mathematical Background: Complex Variables (3) Complex variables. Case 2: Δz approaches zero along the imaginary axis (Δx = 0), which gives f′(z) = ∂v/∂y − j ∂u/∂y. For the derivative to be independent of the direction of approach, both cases must give the same result; therefore ∂u/∂x = ∂v/∂y and ∂u/∂y = −∂v/∂x. These equations are known as the Cauchy-Riemann equations; together with continuity of the partial derivatives, they are necessary and sufficient for the existence of the derivative.

Mathematical Background: Complex Variables (4) Optimization problem: differentiate a cost function with respect to a complex-valued vector w. Each element of the vector is w_k = x_k + j y_k, with real part x_k and imaginary part y_k. The complex derivatives are written in terms of the real derivatives with respect to x_k and y_k, which leads to the properties of complex vector differentiation collected on the next slide.

Mathematical Background: Complex Variables (5) Complex derivatives of vectors in terms of real derivatives, and properties of complex vector differentiation (one common set of rules is written out below).
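
The formulas on this slide and the previous one are not reproduced in the transcript; under the usual convention for derivatives with respect to a complex vector (a common choice, not necessarily the slides' exact notation), they read:

```latex
\frac{\partial}{\partial \mathbf{w}} = \frac{1}{2}\!\left(\frac{\partial}{\partial \mathbf{x}} - j\,\frac{\partial}{\partial \mathbf{y}}\right),
\qquad
\frac{\partial}{\partial \mathbf{w}^{*}} = \frac{1}{2}\!\left(\frac{\partial}{\partial \mathbf{x}} + j\,\frac{\partial}{\partial \mathbf{y}}\right),
\qquad\text{with}\qquad
\frac{\partial\,(\mathbf{a}^{H}\mathbf{w})}{\partial \mathbf{w}^{*}} = \mathbf{0},
\quad
\frac{\partial\,(\mathbf{w}^{H}\mathbf{a})}{\partial \mathbf{w}^{*}} = \mathbf{a},
\quad
\frac{\partial\,(\mathbf{w}^{H}\mathbf{R}\,\mathbf{w})}{\partial \mathbf{w}^{*}} = \mathbf{R}\,\mathbf{w}.
```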

Mathematical Background: Complex Variables (6) Example 1 (the resulting derivatives are a zero matrix and a zero vector), Example 2, and Example 3: minimizing the mean-square error cost leads to the Wiener-Hopf equations, R w = p.
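
As a sanity check of the Wiener-Hopf result R w = p, here is a numpy sketch in which the "true" filter w_o, the input statistics and the noise level are all invented for illustration (none of these values are from the slides):

```python
import numpy as np

rng = np.random.default_rng(3)
N, L = 20_000, 3
w_o = np.array([0.5 - 0.2j, -0.3 + 0.1j, 0.8 + 0.0j])   # "true" filter, illustrative values

# Complex white input; each row of U is one input vector u(n)^T
U = (rng.standard_normal((N, L)) + 1j * rng.standard_normal((N, L))) / np.sqrt(2)
noise = 0.1 * (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
d = U @ w_o.conj() + noise                 # desired response d(n) = w_o^H u(n) + noise

# Sample estimates of R = E{u(n) u^H(n)} and p = E{u(n) d^*(n)}
R = (U.T @ U.conj()) / N
p = (U.T @ d.conj()) / N

w = np.linalg.solve(R, p)                  # Wiener-Hopf equations: R w = p
print(np.round(w, 2))                      # close to the "true" filter w_o
```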

Mathematical Background: Complex Variables (7) Relation between the gradient and complex vector differentiation: with the convention above, the gradient of a real-valued cost function with respect to w is proportional to the derivative with respect to w*, i.e. ∇_w f = 2 ∂f/∂w*.

Mathematical Background: Lagrange Multipliers (1) The method of Lagrange multipliers solves constrained optimization problems. Optimization with a single equality constraint: f(w) is a quadratic function of a vector w, to be minimized subject to an equality constraint; the constraint rewritten in the form c(w) = 0 is known as the primal equation. Physical interpretation: w is the vector of complex weights, s contains the steering vectors, and f(w) is the mean-squared value of the beamformer output. The method of Lagrange multipliers transforms the constrained problem above into an unconstrained problem by introducing the Lagrange multipliers.

Mathematical Background: Lagrange Multipliers (2) Method of Lagrange multipliers: using f(w) and c(w) to build a new function h(w) = f(w) + λ c(w), where λ is the Lagrange multiplier. To minimize the function h(w), we set its derivative with respect to w* equal to zero; the resulting condition is known as the adjoint equation.

Mathematical Background: Lagrange Multipliers (3) Method of Lagrange multipliers: observing the adjoint equation, the solution occurs where the constraint curve is tangent to a contour line of the cost function, i.e., where the two gradients are parallel (proportional through λ).

Mathematical Background: Lagrange Multipliers (4) Method of Lagrange multipliers: example. Real-valued case: minimize the surface area of a 2000 m³ oil reservoir (tank) without a top; the area to be minimized and the volume restriction define the constrained problem.

Mathematical Background: Lagrange Multipliers (5) Method of Lagrange Multipliers: Example. Using the restriction to find the solution:
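
The slide's own area and volume expressions are not in the transcript; assuming, purely for illustration, a cylindrical open-top tank of radius r and height h, the Lagrange steps would look like this:

```latex
\begin{aligned}
&\text{minimize } A(r,h) = \pi r^{2} + 2\pi r h
 \quad\text{subject to}\quad V(r,h) = \pi r^{2} h = 2000\ \text{m}^{3},\\[2pt]
&\mathcal{L}(r,h,\lambda) = \pi r^{2} + 2\pi r h + \lambda\,(\pi r^{2} h - 2000),\\[2pt]
&\frac{\partial \mathcal{L}}{\partial r} = 2\pi r + 2\pi h + 2\pi\lambda r h = 0,
 \qquad
 \frac{\partial \mathcal{L}}{\partial h} = 2\pi r + \lambda \pi r^{2} = 0
 \;\Rightarrow\; \lambda = -\tfrac{2}{r},\\[2pt]
&\Rightarrow\; 2\pi r - 2\pi h = 0 \;\Rightarrow\; r = h,
 \qquad \pi r^{3} = 2000 \;\Rightarrow\; r = h = \left(\tfrac{2000}{\pi}\right)^{1/3} \approx 8.6\ \text{m}.
\end{aligned}
```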

Mathematical Background: Lagrange Multipliers (6) Example: find the vector w that minimizes a quadratic cost function subject to a linear equality constraint. The adjoint equation is obtained by setting the derivative of h(w) with respect to w* to zero and applying the previous vector differentiation rules.

Mathematical Background: Lagrange Multipliers (7) Replacing the optimal w^H in the primal equation (the constraint) yields the value of λ; replacing λ back into the equation for the optimal w gives the final constrained solution.
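
The slides' exact cost and constraint are not in the transcript. A common instance of this problem in beamforming is min over w of w^H R w subject to w^H s = 1 (the MVDR beamformer), whose closed-form solution follows exactly the Lagrange steps above; a numpy sketch under that assumption, with an invented array geometry:

```python
import numpy as np

M = 6                                      # number of sensors (illustrative)

def steering(theta, M):
    """Steering vector of a uniform linear array with half-wavelength spacing."""
    return np.exp(1j * np.pi * np.arange(M) * np.sin(theta))

a_s = steering(0.0, M)                     # desired source direction (assumed)
a_i = steering(0.6, M)                     # interferer direction (assumed)

# Interference-plus-noise covariance: strong rank-one interferer plus white noise
R = 10.0 * np.outer(a_i, a_i.conj()) + 0.1 * np.eye(M)

# Lagrange solution of  min_w w^H R w  subject to  w^H s = 1:
# the adjoint equation R w + lambda s = 0 gives w = c R^{-1} s, with c fixed by the constraint.
Rinv_s = np.linalg.solve(R, a_s)
w = Rinv_s / (a_s.conj() @ Rinv_s)

print(abs(w.conj() @ a_s))                 # 1.0 -> distortionless response towards the source
print(abs(w.conj() @ a_i))                 # ~0  -> the interferer is strongly attenuated
```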

Mathematical Background: Linear Algebra (1) Eigenvalue Decomposition (EVD). Comparison with the Fourier Transform (FT): for the FT, the data is projected onto complex exponential functions (oscillations); each vector of the FT maps a certain frequency; the vectors of the FT do not take the structure of the data into account. The Karhunen-Loève Transform (KLT), also known as the Hotelling Transform or Eigenvector Transform, does take the data structure into account. Definition of an eigenvector: R v = λ v, where λ is the eigenvalue and v is the eigenvector.

Mathematical Background: Linear Algebra (2) Eigenvalue Decomposition (EVD). To find the eigenvalues, solve the characteristic equation det(R − λI) = 0. Once the eigenvalues are computed, the eigenvectors are obtained from the relation (R − λ_i I) v_i = 0, substituting each eigenvalue in turn. Once the eigenvalues and the eigenvectors are computed, the matrix can be written as R = Q Λ Q^H, where the columns of Q are the eigenvectors and Λ is the diagonal matrix of eigenvalues.

Mathematical Background: Linear Algebra (3) Eigenvalue Decomposition (EVD): Example Compute the eigenvalues and eigenvectors of the following matrix Computing the eigenvalues

Mathematical Background: Linear Algebra (4) Eigenvalue Decomposition (EVD): Example. Compute the eigenvectors. Each eigenvector is normalized to unit norm. The same procedure is repeated for the next eigenvector.

Mathematical Background: Linear Algebra (5) Eigenvalue Decomposition (EVD): Example Compute the eigenvectors
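
The matrix used in the slides' worked example is not in the transcript; the small Hermitian stand-in below goes through the same steps numerically (eigenvalues, unit-norm eigenvectors, and the defining relation R v = λ v):

```python
import numpy as np

# Stand-in 2 x 2 Hermitian matrix (the slide's own matrix is not in the transcript)
R = np.array([[2.0, 1.0 - 1.0j],
              [1.0 + 1.0j, 3.0]])

lam, Q = np.linalg.eigh(R)     # eigenvalues (ascending) and unit-norm eigenvectors
print(np.round(lam, 3))        # [1. 4.] -> real eigenvalues, as expected for a Hermitian matrix

for k in range(2):
    v = Q[:, k]
    print(np.allclose(R @ v, lam[k] * v))   # True: R v = lambda v
    print(round(float(np.linalg.norm(v)), 3))  # 1.0 -> each eigenvector has unit norm
```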

Mathematical Background: Linear Algebra (6)–(10) Eigenvalue Decomposition (EVD): Example. What is the physical meaning of the EVD? (These slides are figure-only: they illustrate that the eigenvectors of the correlation matrix point along the principal axes of the data scatter, while the eigenvalues measure the spread, i.e. the power, along each of those axes.)

Mathematical Background: Linear Algebra (11) Eigenvalue Decomposition (EVD): Example. Eigenvalues of the white-noise correlation matrix σ²I: one eigenvalue (σ²) with multiplicity M, and M different (orthonormal) eigenvectors. Eigenvalues of a complex-sinusoid (rank-one) correlation matrix: one nonzero eigenvalue, equal to M times the sinusoid power, and all the other eigenvalues equal to zero.

Mathematical Background: Linear Algebra (12) Eigenvalue Decomposition (EVD): Properties. The eigenvalues of the k-th power of the correlation matrix, R^k, are the eigenvalues of R raised to the power k, with the same eigenvectors.
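
The proof is a one-line induction on the defining relation R v = λ v:

```latex
\mathbf{R}^{k}\mathbf{v} = \mathbf{R}^{k-1}(\mathbf{R}\mathbf{v}) = \lambda\,\mathbf{R}^{k-1}\mathbf{v} = \cdots = \lambda^{k}\,\mathbf{v}.
```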

Mathematical Background: Linear Algebra (13) Eigenvalue Decomposition (EVD): Properties. Eigenvectors corresponding to distinct eigenvalues are linearly independent: setting a linear combination of them equal to zero admits only the trivial solution in which all the constants are zero, and since there is no other solution, the eigenvectors are linearly independent.

Mathematical Background: Linear Algebra (14) Eigenvalue Decomposition (EVD): Other properties. The eigenvalues of the correlation matrix are real and nonnegative. The eigenvectors (of distinct eigenvalues) are orthogonal to each other. The eigenvectors diagonalize the correlation matrix: Q^H R Q = Λ. The matrix can be expanded as R = Σ_i λ_i q_i q_i^H, where each q_i q_i^H is a rank-one projection matrix onto the corresponding eigenvector. The trace of the correlation matrix is equal to the sum of the eigenvalues.
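
A quick numpy check of these properties on the rank-one-plus-noise correlation matrix from the earlier example (the numerical values are illustrative, not from the slides):

```python
import numpy as np

# Correlation matrix of the "complex sinusoid + white noise" example (illustrative values)
L, alpha2, sigma2, omega = 4, 4.0, 0.25, 0.8
s = np.exp(-1j * omega * np.arange(L)).reshape(L, 1)
R = alpha2 * (s @ s.conj().T) + sigma2 * np.eye(L)

lam, Q = np.linalg.eigh(R)

print(np.all(lam > 0))                                  # real, nonnegative (here positive) eigenvalues
print(np.allclose(Q.conj().T @ Q, np.eye(L)))           # eigenvectors are orthonormal
print(np.allclose(Q.conj().T @ R @ Q, np.diag(lam)))    # eigenvectors diagonalize R
print(np.isclose(np.trace(R).real, lam.sum()))          # trace equals the sum of the eigenvalues
```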

Mathematical Background: Linear Algebra (15) Singular Value Decomposition (SVD). Example using the EVD: for a matrix A of size M x N, form R = A A^H and R' = A^H A. If A is full rank, then the rank of A is min{M, N}. The EVD of R is given by R = U Λ U^H and the EVD of R' is given by R' = V Λ' V^H; the nonzero eigenvalues of R and R' coincide.

Mathematical Background: Linear Algebra (16) Singular Value Decomposition (SVD). Therefore, we can represent the SVD as A = U S V^H, where the columns of U are the left singular vectors, the columns of V are the right singular vectors, and S is a diagonal matrix containing the singular values (the nonnegative square roots of the eigenvalues of A A^H).
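
A short numpy check of the EVD/SVD relation described on the last two slides; the test matrix below is random and purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)
M, N = 5, 3
A = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))

U, sv, Vh = np.linalg.svd(A, full_matrices=False)

# Eigenvalues of R = A A^H and R' = A^H A: the nonzero ones coincide
# and equal the squared singular values of A.
lam_R = np.linalg.eigvalsh(A @ A.conj().T)     # M values (M - N of them are ~ 0)
lam_Rp = np.linalg.eigvalsh(A.conj().T @ A)    # N values

print(np.allclose(np.sort(sv**2), np.sort(lam_Rp)))          # True
print(np.allclose(np.sort(lam_R)[-N:], np.sort(lam_Rp)))     # nonzero eigenvalues coincide
print(np.allclose(A, (U * sv) @ Vh))                          # A = U S V^H
```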