Estimation Techniques for High Resolution and Multi-Dimensional Array Signal Processing
EMS Group – Fh IIS and TU IL, Electronic Measurements and Signal Processing Group (EMS)
LASP – UnB, Laboratory of Array Signal Processing
Prof. João Paulo C. Lustosa da Costa, joaopaulo.dacosta@ene.unb.br

Content of the intensive course (2)
Multidimensional Array Signal Processing
- Tensor operators
- Tensor decompositions
- Parallel factor analysis (PARAFAC)
- Kruskal’s rank
- Tensor models
- Higher order singular value decomposition (HOSVD)
- Merging dimensions
- Least squares Khatri-Rao factorization (LSKRF)
- Tensor based techniques
- Tensor based model order selection

Multi-dimensional array signal processing (1)
Term “tensor”:
- Mathematics, 1846: W. Voigt → abstract definition (multilinear mapping)
- Physics, 1915: M. Grossmann and A. Einstein → description of covariant and contravariant indices
Intuitive description: “A tensor of order p is a collection of elements that are referenced by p indices.” Scalars, vectors and matrices are tensors of order 0, 1 and 2, followed by 3rd order, 4th order and higher order tensors, also called “multi-way arrays”.

Multi-dimensional array signal processing (2)
Advantages: increased identifiability, separation without imposing additional constraints, and improved accuracy (tensor gain).
RX: Uniform Rectangular Array (URA) with 3 x 3 elements (indices m1 and m2) observed over 3 snapshots (index n). Stored as a 9 x 3 matrix, the maximum rank is 3, so at most 3 sources can be resolved.

Multi-dimensional array signal processing (3)
Advantages: increased identifiability, separation without imposing additional constraints, and improved accuracy (tensor gain).
RX: Uniform Rectangular Array (URA). Keeping the same data as a 3 x 3 x 3 tensor, the maximum rank is 5, so at most 5 sources can be resolved.
J. B. Kruskal, “Rank, decomposition, and uniqueness for 3-way and N-way arrays,” Multiway Data Analysis, pages 7–18, 1989.

Multi-dimensional array signal processing (4)
Advantages: increased identifiability, separation without imposing additional constraints, and improved accuracy (tensor gain).
For the matrix model, unrealistic assumptions such as orthogonality (PCA) or statistical independence (ICA) must be imposed to separate the components. For the tensor model, the separation into a sum of rank-one terms is unique up to scaling and permutation ambiguities.

Multi-dimensional array signal processing (5)
Advantages: increased identifiability, separation without imposing additional constraints, and improved accuracy (tensor gain).
Applications of tensor based techniques:
- Array interpolation to compensate for array imperfections
- Estimation of the number of sources d, also known as model order selection: multi-dimensional schemes give better accuracy
- Prewhitening schemes: multi-dimensional schemes give better accuracy and lower complexity
- Parameter estimation: drastic reduction of the computational complexity, since multidimensional searches are decomposed into several one-dimensional searches

Multi-dimensional array signal processing (6)
Advantage: insights due to the tensor notation. There is an analogy between the scalar and the matrix representation: matrix equations are more compact, which eases insights and manipulations. Example: the DFT. The data is the same for all representations, but the matrix representation is more compact. For tensors, the representation is even more compact, since more dimensions are represented in the same equation.
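As an illustration of this compactness argument (the equations on the original slide did not survive the transcript), the DFT can be written element by element or as a single matrix equation; the notation below is generic, not necessarily the slide’s own:

```latex
\[
  X_k = \sum_{n=0}^{N-1} x_n \, e^{-\mathrm{j} 2 \pi k n / N},
  \qquad k = 0, \dots, N-1
  \quad \text{(scalar: one equation per output sample)}
\]
\[
  \mathbf{x}_{\mathrm{F}} = \mathbf{F}\,\mathbf{x},
  \qquad [\mathbf{F}]_{k+1,\,n+1} = e^{-\mathrm{j} 2 \pi k n / N}
  \quad \text{(matrix: all $N$ equations collected at once)}
\]
```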

Tensor operators (1)
For two tensors, the following operations are defined:
- Outer product: the order of the resulting tensor is equal to the sum of the orders of both tensors.
- Inner product along the n-th dimension: the n-th dimension of both tensors is suppressed (contracted).
- Higher order norm: the square root of the sum of the squared magnitudes of all elements.
- Concatenation along the n-th dimension: all dimensions of both tensors should be equal, except the n-th dimension.
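A minimal numpy sketch of these operators; the tensor sizes and variable names are illustrative assumptions, not taken from the slides:

```python
import numpy as np

A = np.random.randn(3, 4, 5)   # a third order tensor
B = np.random.randn(3, 4, 5)   # same size, for the inner product and concatenation

# Outer product: the orders add up (3 + 3 = 6th order result)
outer = np.multiply.outer(A, B)
print(outer.shape)             # (3, 4, 5, 3, 4, 5)

# Inner product along the n-th dimension (here n = 3): that dimension is contracted
inner = np.tensordot(A, B, axes=([2], [2]))
print(inner.shape)             # (3, 4, 3, 4)

# Higher order (Frobenius-type) norm: root of the sum of squared entries
norm = np.sqrt(np.sum(A ** 2))

# Concatenation along the n-th dimension: all other dimensions must match
C = np.concatenate((A, B), axis=2)
print(C.shape)                 # (3, 4, 10)
```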

Tensor operators (2)
Example of the inner product between two tensors: the sample covariance tensor of a 3rd order tensor, compared with the sample covariance matrix computed from the unfoldings of the tensor. Note that with the tensor notation the multidimensional structure is already explicit, while the matrix notation does not show the natural multidimensional structure of the data. Moreover, the sample covariance matrix is an unfolding (multimode matrix) of the sample covariance tensor.

Tensor operators (3)
Example of the outer product: the outer product of two vectors is a rank-one matrix, and the outer product of three vectors is a rank-one tensor.
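A small numpy illustration of these rank-one objects (vector sizes chosen arbitrarily):

```python
import numpy as np

a, b, c = np.random.randn(3), np.random.randn(4), np.random.randn(5)

# Outer product of two vectors: a rank-one 3 x 4 matrix
M = np.outer(a, b)
print(np.linalg.matrix_rank(M))    # 1

# Outer product of three vectors: a rank-one 3 x 4 x 5 tensor
T = np.einsum('i,j,k->ijk', a, b, c)
print(T.shape)                     # (3, 4, 5)
```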

Tensor operators (4)
Unfoldings of a tensor: the “1-mode vectors”, “2-mode vectors” and “3-mode vectors”. The number of unfoldings is equal to the order of the tensor.

Tensor operators (5)
Unfolding of a third order tensor.
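A minimal sketch of an r-mode unfolding in numpy; the column ordering below (remaining modes flattened in their original order) is one possible convention, and other references permute the columns differently:

```python
import numpy as np

def unfolding(T, mode):
    """r-mode unfolding: the mode-r fibers of T become the columns of a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

T = np.random.randn(3, 4, 5)
print(unfolding(T, 0).shape)   # (3, 20): 1-mode unfolding
print(unfolding(T, 1).shape)   # (4, 15): 2-mode unfolding
print(unfolding(T, 2).shape)   # (5, 12): 3-mode unfolding
```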

Tensor operators (6)
Unfolding of a tensor. RX: Uniform Rectangular Array (URA) with indices m1, m2 and n. Practitioners without knowledge of tensors usually use only a single unfolding, i.e. a single projection of the tensor. However, by exploiting all unfoldings, the tensor gain is achieved.

Tensor operators (7)
n-mode product: given a tensor and a matrix, the mode-n product can be written in tensor, scalar and matrix representations; i.e., all the n-mode vectors of the tensor are multiplied from the left by the matrix. Illustrative examples: the 1-mode, 2-mode and 3-mode products of a third order tensor with matrices.
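A sketch of the mode-n product built on the unfolding above (same column-ordering assumption); the fold step simply undoes the unfolding:

```python
import numpy as np

def unfolding(T, mode):
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def folding(M, mode, shape):
    """Inverse of the unfolding: reshape the matrix back into a tensor."""
    full = [shape[mode]] + [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(M.reshape(full), 0, mode)

def mode_n_product(T, U, mode):
    """Multiply every mode-n vector of T from the left by the matrix U."""
    new_shape = list(T.shape)
    new_shape[mode] = U.shape[0]
    return folding(U @ unfolding(T, mode), mode, tuple(new_shape))

T = np.random.randn(3, 4, 5)
U = np.random.randn(7, 4)
print(mode_n_product(T, U, 1).shape)   # (3, 7, 5)
```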

Tensor operators (8)
Khatri-Rao product: the column-wise Kronecker product of two matrices with the same number of columns, i.e. the i-th column of the result is the Kronecker product of the i-th columns of the two matrices.
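A minimal numpy sketch of this column-wise Kronecker product:

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Kronecker product; A and B must have the same number of columns."""
    assert A.shape[1] == B.shape[1]
    return np.vstack([np.kron(A[:, i], B[:, i]) for i in range(A.shape[1])]).T

A = np.random.randn(4, 3)
B = np.random.randn(5, 3)
print(khatri_rao(A, B).shape)   # (20, 3)
```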

Parallel Factor Analysis (1)
Unfolding of a rank-one tensor: the outer product of three vectors is a rank-one tensor, and its unfoldings can be written directly in terms of Kronecker products of those vectors (example on the slide).

Parallel Factor Analysis (2)
Unfolding of a rank-one tensor: similar expressions hold for the other unfoldings.

Parallel Factor Analysis (3)
Unfolding of a rank-d tensor: a rank-d tensor can be decomposed into a sum of d rank-one tensors, and its unfoldings can be written as the product of one factor matrix with the transposed Khatri-Rao product of the other two.
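A numpy sketch of this relation, reusing the unfolding and Khatri-Rao helpers from the earlier snippets. Note that the ordering of the Khatri-Rao factors depends on the unfolding convention; the ordering below matches the C-ordered unfolding used here and may differ from the slides:

```python
import numpy as np

def unfolding(T, mode):
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(A, B):
    return np.vstack([np.kron(A[:, i], B[:, i]) for i in range(A.shape[1])]).T

d = 2
A, B, C = np.random.randn(3, d), np.random.randn(4, d), np.random.randn(5, d)

# Rank-d tensor as a sum of d rank-one (outer product) terms
T = np.einsum('ir,jr,kr->ijk', A, B, C)

# The 1-mode unfolding factors into A times a transposed Khatri-Rao product
err = np.linalg.norm(unfolding(T, 0) - A @ khatri_rao(B, C).T)
print(err)   # numerically zero
```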

Parallel Factor Analysis (4)
Parallel Factor Analysis (PARAFAC): given the tensor and the model order d, we want to estimate the factor matrices. Applications of the PARAFAC model: wireless communications (detection, estimation), harmonic retrieval, RADAR, exploratory data analysis, source separation, spectroscopy, pattern recognition, biomedical signals, chemistry, psychometrics, …

Parallel Factor Analysis (5)
Alternating Least Squares (ALS): ALS is a simple technique to estimate the factor matrices by exploiting the unfolding equations of the tensor. Each unfolding equation is solved for one factor matrix via a pseudo-inverse while the other factor matrices are kept fixed, and the updates are alternated. Note that ALS has to be initialized with at least two factor matrices, for instance B and C; these can be initialized with random elements. A good initialization leads to fast convergence.
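A minimal PARAFAC-ALS sketch under the conventions of the previous snippets (random initialization of B and C, a fixed number of iterations; a practical implementation would also monitor the reconstruction error and stop on convergence):

```python
import numpy as np

def unfolding(T, mode):
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(A, B):
    return np.vstack([np.kron(A[:, i], B[:, i]) for i in range(A.shape[1])]).T

def parafac_als(T, d, n_iter=100):
    """Estimate the factor matrices of a 3rd order rank-d tensor via ALS."""
    _, J, K = T.shape
    B = np.random.randn(J, d)      # initialize two factor matrices randomly
    C = np.random.randn(K, d)
    for _ in range(n_iter):
        # Solve each unfolding equation for one factor using the pseudo-inverse
        A = unfolding(T, 0) @ np.linalg.pinv(khatri_rao(B, C).T)
        B = unfolding(T, 1) @ np.linalg.pinv(khatri_rao(A, C).T)
        C = unfolding(T, 2) @ np.linalg.pinv(khatri_rao(A, B).T)
    return A, B, C

# Usage: build a rank-2 tensor and recover factors up to scaling and permutation
A0, B0, C0 = np.random.randn(6, 2), np.random.randn(7, 2), np.random.randn(8, 2)
T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = parafac_als(T, d=2)
print(np.linalg.norm(T - np.einsum('ir,jr,kr->ijk', A, B, C)))   # small
```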

Kruskal’s rank
Uniqueness of the PARAFAC decomposition: the Kruskal-rank (k-rank) of a matrix is the largest number k such that every set of k columns is linearly independent. The PARAFAC decomposition is unique (up to the ambiguities below) if
k_A + k_B + k_C >= 2d + 2,
where d is the tensor rank, A, B and C are the factor matrices and k_A, k_B, k_C are their Kruskal-ranks.
Scaling and permutation ambiguities: the columns of the factor matrices can be rescaled, with compensating scalings in the other factor matrices, and jointly permuted without changing the tensor.
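A brute-force k-rank check, just to make the definition concrete (exponential in the number of columns, so only reasonable for small matrices):

```python
import numpy as np
from itertools import combinations

def kruskal_rank(A, tol=1e-10):
    """Largest k such that every set of k columns of A is linearly independent."""
    n_cols = A.shape[1]
    k = 0
    for size in range(1, n_cols + 1):
        if all(np.linalg.matrix_rank(A[:, list(cols)], tol=tol) == size
               for cols in combinations(range(n_cols), size)):
            k = size
        else:
            break
    return k

A = np.random.randn(4, 3)
print(kruskal_rank(A))   # 3 with probability one for a random matrix
```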

Tensor models (1)
Scalar representation of the harmonic retrieval data model, together with the mapping between the spatial frequencies and the directions of arrival (azimuth and elevation).
F. Roemer, Advances in subspace-based parameter estimation: Tensor-ESPRIT-type methods and non-circular sources, Master’s thesis, TU Ilmenau, 2006.

Tensor models (2)
Matrix representation of the data model, and tensor representation using n-mode products with the identity tensor, which is defined as a tensor with 1’s on the superdiagonal and 0’s everywhere else. An equivalent representation writes the tensor as a sum of outer products of the columns of the factor matrices.
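A numpy sketch of this construction: an identity tensor with 1’s on the superdiagonal, combined with mode-n products, reproduces the same rank-d tensor as the sum of outer products (the compact tensordot-based mode-n product below is equivalent to the unfolding-based version shown earlier):

```python
import numpy as np

def mode_n_product(T, U, mode):
    # Multiply every mode-n vector of T from the left by U.
    return np.moveaxis(np.tensordot(U, T, axes=(1, mode)), 0, mode)

d = 2
A, B, C = np.random.randn(3, d), np.random.randn(4, d), np.random.randn(5, d)

# Identity tensor: 1's on the superdiagonal, 0's everywhere else
I3 = np.zeros((d, d, d))
I3[np.arange(d), np.arange(d), np.arange(d)] = 1.0

# X = I x1 A x2 B x3 C equals the sum of d rank-one (outer product) terms
X = mode_n_product(mode_n_product(mode_n_product(I3, A, 0), B, 1), C, 2)
print(np.linalg.norm(X - np.einsum('ir,jr,kr->ijk', A, B, C)))   # numerically zero
```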

Tensor models (3)

Tensor models (4)

HOSVD (1)
Complete HOSVD, economic HOSVD, and low rank approximation.

HOSVD (2)
The HOSVD is computed by applying one SVD to each unfolding, which requires R+1 SVDs for an (R+1)-th order tensor. Example: the HOSVD of a 3rd order tensor. For the approximation to rank d, d columns of the singular-vector matrices are selected for r = 1, …, R+1, together with the corresponding part of the core tensor. The HOSVD allows denoising in all dimensions of the tensor.
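A minimal truncated-HOSVD sketch for a 3rd order tensor (one SVD per unfolding, then projection onto the d leading left singular vectors of every mode), using the same unfolding convention as before:

```python
import numpy as np

def unfolding(T, mode):
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_n_product(T, U, mode):
    return np.moveaxis(np.tensordot(U, T, axes=(1, mode)), 0, mode)

def truncated_hosvd(T, d):
    """One SVD per unfolding; keep the d leading left singular vectors per mode."""
    U = [np.linalg.svd(unfolding(T, r), full_matrices=False)[0][:, :d]
         for r in range(T.ndim)]
    S = T
    for r, Ur in enumerate(U):            # core tensor via projections
        S = mode_n_product(S, Ur.conj().T, r)
    return S, U

T = np.random.randn(6, 7, 8)
S, U = truncated_hosvd(T, d=2)            # core S is 2 x 2 x 2

# Low rank approximation: expand the core with the factor matrices again
T_hat = S
for r, Ur in enumerate(U):
    T_hat = mode_n_product(T_hat, Ur, r)
print(np.linalg.norm(T - T_hat) / np.linalg.norm(T))
```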

HOSVD versus PARAFAC-ALS
HOSVD: the core tensor is full, i.e. it is seldom diagonal; direct and easy to compute via SVDs; the factor matrices are tall or square; used for denoising or data compression.
PARAFAC-ALS: the core tensor is diagonal; the factor matrices are computed iteratively; the factor matrices can be fat, i.e. the underdetermined case; the tensor rank and the natural sources of the data are revealed.

Merging dimensions
Without merging, the data model keeps a separate factor for each dimension. By stacking dimensions, the corresponding factors are merged into a single factor via the Khatri-Rao (column-wise Kronecker) product; in both cases the same model assumptions hold.

Least Squares Khatri-Rao Factorization
Given the merged (Khatri-Rao structured) vector, we want to recover the two original factors. Reshaping each merged column into a matrix, the result should be a rank-one matrix, so an SVD-based rank-one approximation can be applied; the two factors are then obtained from the dominant singular vectors, up to a scalar ambiguity.
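A numpy sketch of this per-column procedure; the reshape orientation below matches the Khatri-Rao convention of the earlier snippets and is an assumption:

```python
import numpy as np

def khatri_rao(A, B):
    return np.vstack([np.kron(A[:, i], B[:, i]) for i in range(A.shape[1])]).T

def lskrf(X, M, N):
    """Recover A (M rows) and B (N rows) from X ≈ khatri_rao(A, B), column by column."""
    d = X.shape[1]
    A_hat, B_hat = np.zeros((M, d)), np.zeros((N, d))
    for i in range(d):
        # Each merged column reshapes into a (nearly) rank-one M x N matrix a b^T
        Y = X[:, i].reshape(M, N)
        u, s, vt = np.linalg.svd(Y)
        A_hat[:, i] = np.sqrt(s[0]) * u[:, 0]     # the scaling split is arbitrary
        B_hat[:, i] = np.sqrt(s[0]) * vt[0, :]
    return A_hat, B_hat

A, B = np.random.randn(4, 3), np.random.randn(5, 3)
A_hat, B_hat = lskrf(khatri_rao(A, B), 4, 5)
print(np.linalg.norm(khatri_rao(A_hat, B_hat) - khatri_rao(A, B)))   # ~0
```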

Tensor based Model Order Selection (1)
Model order estimation for tensors: global eigenvalues.
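The slide only names the global eigenvalues; in the multi-dimensional model order selection literature associated with this group they are commonly defined as the product, across all modes, of the eigenvalues of the r-mode sample covariance matrices. The sketch below follows that assumed definition:

```python
import numpy as np

def unfolding(T, mode):
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def global_eigenvalues(T, p):
    """Assumed definition: product over the modes of the p leading r-mode eigenvalues."""
    lam = []
    for r in range(T.ndim):
        Xr = unfolding(T, r)
        Rr = Xr @ Xr.conj().T / Xr.shape[1]          # r-mode sample covariance matrix
        ev = np.sort(np.linalg.eigvalsh(Rr))[::-1]   # descending order
        lam.append(ev[:p])
    return np.prod(np.stack(lam), axis=0)

# Example: a rank-2 tensor in noise shows a clear gap after the 2nd global eigenvalue
A, B, C = np.random.randn(6, 2), np.random.randn(7, 2), np.random.randn(8, 2)
T = np.einsum('ir,jr,kr->ijk', A, B, C) + 0.01 * np.random.randn(6, 7, 8)
print(global_eigenvalues(T, p=4))
```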

Tensor based Model Order Selection (2)
Comparison between the global eigenvalues and the eigenvalues of a single unfolding.

Tensor based Model Order Selection (3)
Comparison between tensor based and matrix based model order selection schemes.