Optimum Passive Beamforming in Relation to Active-Passive Data Fusion

Optimum Passive Beamforming in Relation to Active-Passive Data Fusion Bryan A. Yocom Final Project Report EE381K-14 – MDDSP The University of Texas at Austin May 01, 2008

What is Data Fusion? Combining information from multiple sensors to perform signal processing better than any single sensor could alone. Active-Passive Data Fusion: Active Sonar – gives good range estimates; Passive Sonar – gives good bearing estimates and information about spectral content. Image from http://www.atlantic.drdc-rddc.gc.ca/factsheets/22_UDF_e.shtml

Passive Beamforming A form of spatial filtering. Narrowband delay-and-sum beamformer: planar wavefront, linear array of 2N+1 elements. Sampled array output: x_n = a(θ)s_n + v_n. Conventional weights equal the steering vector: w(θ) = a(θ); the beam response versus angle is the array pattern. Beamformer output: y_n = w^H(θ)x_n. Direction-of-arrival estimation: precision limited by the length of the array.
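The delay-and-sum DOA scan above can be sketched as follows; the 11-element (N = 5) half-wavelength-spaced array, the 20-degree source bearing, and the noise level are illustrative assumptions, not values from the presentation.

```python
import numpy as np

def steering_vector(theta, n_elems, d_over_lambda=0.5):
    """Plane-wave steering vector a(theta) for a uniform linear array of
    2N+1 elements with spacing d expressed in wavelengths."""
    n = np.arange(-(n_elems // 2), n_elems // 2 + 1)
    return np.exp(2j * np.pi * d_over_lambda * n * np.sin(theta))

def delay_and_sum(x, theta, d_over_lambda=0.5):
    """Narrowband beamformer output y_n = w^H(theta) x_n with w = a / (2N+1)."""
    a = steering_vector(theta, x.shape[0], d_over_lambda)
    w = a / x.shape[0]
    return w.conj() @ x

# Simulate snapshots x_n = a(theta)s_n + v_n for a source at 20 degrees.
rng = np.random.default_rng(0)
theta_src = np.deg2rad(20.0)
n_elems, n_snaps = 11, 200
s = rng.standard_normal(n_snaps) + 1j * rng.standard_normal(n_snaps)
v = 0.1 * (rng.standard_normal((n_elems, n_snaps))
           + 1j * rng.standard_normal((n_elems, n_snaps)))
x = np.outer(steering_vector(theta_src, n_elems), s) + v

# Scan beams over bearing and take the peak output power as the DOA estimate.
grid = np.deg2rad(np.linspace(-90, 90, 361))
power = [np.mean(np.abs(delay_and_sum(x, th)) ** 2) for th in grid]
theta_hat = grid[np.argmax(power)]
```

The precision of theta_hat is set by the mainlobe width, which shrinks as the array gets longer, matching the slide's remark about array length.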

The Goal Given prior information about the location of a contact, design a passive sonar beamformer that minimizes the error in direction-of-arrival (DOA) estimation while also providing a low-entropy (accurate and precise) measurement. How? Use the prior information.

Passive Beamforming & Data Fusion Assume a data fusion framework has collected prior information about the state of a contact via active sonar measurements and previous passive sonar measurements. The prior information is represented as a one-dimensional continuous random variable, Φ, with probability density function (PDF) p(Φ). The information provided by a passive horizontal line array measurement can be represented in terms of a likelihood function [Bell et al., 2000].

Bayesian Updates The posterior PDF, its differential entropy, the entropy improvement, the expected entropy improvement, and the expected error in the DOA estimate (equations given on the slide).
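A discretized sketch of these quantities: the posterior is likelihood times prior, renormalized, and the entropy improvement is the prior's differential entropy minus the posterior's. The Gaussian prior and the peaked stand-in likelihood are illustrative assumptions, not the horizontal-line-array likelihood itself.

```python
import numpy as np

# Bearing grid (radians) and an assumed Gaussian prior p(phi) over DOA.
phi = np.linspace(-np.pi / 2, np.pi / 2, 1001)
dphi = phi[1] - phi[0]

def normalize(p):
    return p / (np.sum(p) * dphi)

prior = normalize(np.exp(-0.5 * ((phi - 0.30) / 0.20) ** 2))

# Hypothetical likelihood peaked near the measured bearing.
lik = np.exp(-0.5 * ((phi - 0.35) / 0.05) ** 2)

# Bayes update: posterior ∝ likelihood × prior.
post = normalize(lik * prior)

def diff_entropy(p):
    """Differential entropy h = -∫ p(phi) ln p(phi) dphi on a uniform grid."""
    q = np.where(p > 0, p, 1.0)  # p ln p -> 0 as p -> 0
    return -np.sum(p * np.log(q)) * dphi

# Positive when the measurement sharpened the bearing estimate.
entropy_improvement = diff_entropy(prior) - diff_entropy(post)
```

Averaging this improvement over the predicted measurement distribution would give the expected entropy improvement named on the slide.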

Adaptive Beamforming The most common form is the Minimum Variance Distortionless Response (MVDR) beamformer (aka the Capon beamformer) [Capon, 1969]. Given the cross-spectral matrix Rx and replica vector a(θ), minimize w^H Rx w subject to w^H a(θ) = 1. Direction-of-arrival estimation is much more precise, but sensitive to mismatch (especially at high SNR). Rx is commonly "diagonally loaded" (Rx + εI) to make MVDR more robust.
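A minimal sketch of the diagonally loaded MVDR weights, using the closed form w = R^-1 a / (a^H R^-1 a) that solves the constrained minimization; the array geometry, the source/interferer scene, and the loading level are illustrative assumptions.

```python
import numpy as np

def mvdr_weights(R, a, eps=1e-2):
    """MVDR/Capon weights w = R^-1 a / (a^H R^-1 a), with diagonal loading
    R -> R + eps*(tr(R)/N)*I for robustness (this loading rule is one
    common choice, assumed here)."""
    N = R.shape[0]
    Rl = R + eps * (np.trace(R).real / N) * np.eye(N)
    Ri_a = np.linalg.solve(Rl, a)
    return Ri_a / (a.conj() @ Ri_a)

# Toy scene: 11-element half-wavelength array, source at 20 deg (power 10),
# interferer at -40 deg (power 5), unit-variance sensor noise.
n = np.arange(-5, 6)
a = lambda th: np.exp(2j * np.pi * 0.5 * n * np.sin(th))
a_sig, a_int = a(np.deg2rad(20.0)), a(np.deg2rad(-40.0))
R = (10 * np.outer(a_sig, a_sig.conj())
     + 5 * np.outer(a_int, a_int.conj())
     + np.eye(11))

w = mvdr_weights(R, a_sig)
```

The weights keep unit (distortionless) response in the look direction while placing a deep null on the interferer; without loading, the same weights become fragile when a(θ) is slightly wrong, which is the mismatch sensitivity the next slide illustrates.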

Sensitivity to mismatch A look-direction mismatch of only 2 degrees degrades performance [Li et al., 2003]. With limited computational resources, how can we solve this problem?

Cued Beams [Yudichak et al., 2007] Steer (adaptive) beams more densely in areas of high prior probability. Previously, cued beams were steered within a certain number of standard deviations from the mean of an assumed Gaussian prior PDF. Improvements were seen, but a need still exists to fully cover bearing and to generalize to any type of prior PDF.

Generalized Cued Beams Goal: generalize cued beams to any type of prior PDF, i.e., non-Gaussian. Given the prior PDF p(Φ), the cumulative distribution function (CDF) is F(φ) = ∫_{φ′≤φ} p(φ′) dφ′. By a change of variables (switching the abscissa and ordinate), we obtain Φ(F). If it is assumed that Φ(F) can be solved for (which is always the case for a discrete PDF), we can define the steered angle of the nth beam accordingly.
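The inverse-CDF beam placement can be sketched as follows. The half-offset quantile convention theta_n = Φ((n + 1/2)/N_beams) is an assumption, since the slide's exact formula is in an equation image; the bimodal prior is likewise illustrative.

```python
import numpy as np

def cued_beam_angles(phi, prior_pdf, n_beams):
    """Steer the nth beam at Phi(F) evaluated at equally spaced quantiles,
    theta_n = Phi((n + 1/2) / n_beams), so beams cluster where the prior
    PDF is large but every quantile of bearing receives a beam."""
    cdf = np.cumsum(prior_pdf)
    cdf = cdf / cdf[-1]                  # normalize F(phi) to [0, 1]
    u = (np.arange(n_beams) + 0.5) / n_beams
    return np.interp(u, cdf, phi)        # invert F by swapping the axes

# Example: a bimodal (non-Gaussian) prior over bearing, in radians.
phi = np.linspace(-np.pi / 2, np.pi / 2, 2001)
pdf = (np.exp(-0.5 * ((phi - 0.5) / 0.10) ** 2)
       + 0.5 * np.exp(-0.5 * ((phi + 0.8) / 0.15) ** 2))
beams = cued_beam_angles(phi, pdf, 16)
```

Most of the 16 beams land inside the two probability modes, with almost none in the low-probability gap between them, which is exactly the cueing behavior the slide describes.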

Robust Capon Beamformer [Li et al., 2003] Use a Robust Capon Beamformer (RCB) instead of the standard, diagonally loaded, MVDR beamformer. The RCB is essentially a more robust derivation of the MVDR beamformer for cases when the look direction is not precisely known. Assign an uncertainty set (matrix B) to the look direction; B is an N x L matrix. The solution to the optimization problem is somewhat involved: it uses the Lagrange multiplier methodology, requires an eigendecomposition of (B^H R^-1 B) – slightly more complex than MVDR – and finds the root of a non-trivial equation (e.g., via the Newton-Raphson method).

Robust Capon Beamformer (RCB) Assign a different uncertainty set to each beam based on its distance from the two adjacent beams – essentially, vary the beamwidth of each beam. Goal: full azimuthal coverage. Although the finely spaced beams alone will not cover every bearing, all directions will be covered by at least one beam; if a contact is detected, the data fusion framework will trigger the cued beams to be steered in that direction.
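The per-beam uncertainty sizing can be sketched as follows; sizing each beam's spherical uncertainty radius by the squared steering-vector distance to its two adjacent beams is an assumed reading of the slide, and the 11-element half-wavelength array is likewise illustrative.

```python
import numpy as np

def steering(theta, n_elems=11, d_over_lambda=0.5):
    """Plane-wave replica vector for a uniform linear array."""
    n = np.arange(n_elems) - (n_elems - 1) / 2
    return np.exp(2j * np.pi * d_over_lambda * n * np.sin(theta))

def per_beam_epsilon(beam_angles, n_elems=11):
    """Uncertainty radius eps_n for each beam's RCB set, sized from the
    squared steering-vector distance to the two adjacent beams: widely
    spaced beams get large uncertainty sets (wide beams), densely cued
    beams get small ones (narrow beams)."""
    a = np.array([steering(t, n_elems) for t in beam_angles])
    eps = np.empty(len(beam_angles))
    for i in range(len(beam_angles)):
        nbrs = [j for j in (i - 1, i + 1) if 0 <= j < len(beam_angles)]
        eps[i] = max(np.sum(np.abs(a[i] - a[j]) ** 2) for j in nbrs)
    return eps

# Dense beams near 0.0-0.1 rad (high prior probability), sparse elsewhere.
eps = per_beam_epsilon(np.array([-1.0, -0.1, 0.0, 0.1, 1.0]))
```

The outer, widely spaced beams receive much larger radii than the densely cued center beam, giving the full-coverage / variable-beamwidth behavior described above.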

Cued Beams with RCB (figure): the beams' maximum response axes overlaid on the prior probability – wide beams in areas of low prior probability, narrow beams in areas of high prior probability.

Results – Entropy Improvement

Results – Expected DOA Error

Challenges Different amounts of noise are present in each RCB beam because the beamwidths differ; this needs to be accounted for by somehow weighting the beams. Wider beams also lessen the beamformer's ability to adapt to interferers. The γ term in the likelihood function is SNR-dependent: its value essentially controls how strongly peaks in the beamformer output are emphasized, and RCB seems to be especially sensitive to it. With a proper choice of beam weightings and γ, RCB could outperform ABF.

Beamformers used in a Bayesian Tracker (time permitting)

Questions?