Optimum Passive Beamforming in Relation to Active-Passive Data Fusion


Optimum Passive Beamforming in Relation to Active-Passive Data Fusion
Bryan A. Yocom
Literature Survey Report, EE381K-14 – MDDSP
The University of Texas at Austin
March 4, 2008

What is Data Fusion?
- Combining information from multiple sensors to better perform signal processing
- Active-passive data fusion:
  - Active sonar: good range estimates
  - Passive sonar: good bearing estimates
Image from http://www.atlantic.drdc-rddc.gc.ca/factsheets/22_UDF_e.shtml

Passive Beamforming
- A form of spatial filtering
- Narrowband delay-and-sum beamformer: planar wavefront, linear array
- Suppose the array has 2N+1 elements
- Sampled array output: x_n = a(θ)s_n + v_n
- Steering vector: w(θ)
- Beamformer output: y_n = w^H(θ) x_n
- Direction-of-arrival estimation: precision limited by the length of the array
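As a concrete illustration (not from the slides), here is a minimal NumPy sketch of the narrowband delay-and-sum beamformer for a uniform linear array. The half-wavelength element spacing and the function names are assumptions made for the example:

```python
import numpy as np

def steering_vector(theta, n_elements, d_over_lambda=0.5):
    """Plane-wave steering vector a(theta) for a uniform linear array.

    theta: direction of arrival in radians (broadside = 0).
    d_over_lambda: element spacing in wavelengths (assumed half-wavelength).
    """
    n = np.arange(n_elements)
    return np.exp(-2j * np.pi * d_over_lambda * n * np.sin(theta))

def delay_and_sum(X, theta, d_over_lambda=0.5):
    """Delay-and-sum beamformer output y_n = w^H(theta) x_n.

    X: (n_elements, n_snapshots) array of complex snapshots.
    The 1/n_elements scaling makes the look-direction response unity.
    """
    n_elements = X.shape[0]
    w = steering_vector(theta, n_elements, d_over_lambda) / n_elements
    return w.conj() @ X
```

Scanning `theta` over a grid and plotting the mean output power gives the conventional beam response, whose mainlobe width (and hence bearing precision) shrinks as the array gets longer.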

Adaptive Beamforming
- Most common form is the Minimum Variance Distortionless Response (MVDR) beamformer, also known as the Capon beamformer [Capon, 1969]
- Given the cross-spectral matrix R_x and replica vector a(θ), minimize w^H R_x w subject to w^H a(θ) = 1:
  w = R_x^{-1} a(θ) / (a^H(θ) R_x^{-1} a(θ))
- Direction-of-arrival estimation: much more precise, but very sensitive to mismatch
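The constrained minimization above has the well-known closed-form solution w = R^{-1}a / (a^H R^{-1} a), and the corresponding Capon power estimate is P(θ) = 1 / (a^H R^{-1} a). A short NumPy sketch (function names are mine, not from the literature):

```python
import numpy as np

def mvdr_weights(R, a):
    """MVDR (Capon) weights: w = R^{-1} a / (a^H R^{-1} a).

    Minimizes w^H R w subject to the distortionless constraint w^H a = 1.
    """
    Ri_a = np.linalg.solve(R, a)          # R^{-1} a without forming R^{-1}
    return Ri_a / (a.conj() @ Ri_a)

def capon_spectrum(R, a):
    """Capon power estimate P(theta) = 1 / (a^H R^{-1} a)."""
    return 1.0 / np.real(a.conj() @ np.linalg.solve(R, a))
```

The sharp peak of `capon_spectrum` over a bearing grid is what gives MVDR its precision; the same sharpness is why a small mismatch between the replica vector and the true steering vector can null out the signal of interest.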

Cued Beams [Yudichak et al., 2007]
- Need to account for the mismatch sensitivity of adaptive beamforming (ABF)
- Steer (adaptive) beams more densely in areas where the prior probability density function (PDF) is large
- Cued beams are steered within a certain number of standard deviations of the mean of a Gaussian prior PDF
- Use the beamformer output as a likelihood function
- Use Bayes' rule to generate a posterior PDF
- Needed improvements:
  - Bearing must be fully covered
  - Using the beamformer output as a likelihood function is ad hoc
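The Bayesian update at the heart of this scheme can be sketched in a few lines of NumPy. This is an illustration under my own simplifying assumptions (a discrete bearing grid, uniform cueing within ±kσ of the prior mean), not the paper's implementation; as the slide notes, treating beam power as the likelihood is itself an ad hoc step:

```python
import numpy as np

def cued_beam_angles(mean, sigma, n_beams, k=3.0):
    """Cue n_beams adaptive beams within +/- k standard deviations
    of the mean of a Gaussian prior PDF on bearing (radians)."""
    return np.linspace(mean - k * sigma, mean + k * sigma, n_beams)

def bayes_update(prior, likelihood):
    """Posterior over a bearing grid: p(theta | y) ∝ L(y | theta) p(theta).

    prior, likelihood: nonnegative arrays over the same bearing grid;
    here the likelihood would be the (ad hoc) beamformer output power.
    """
    post = likelihood * prior
    return post / post.sum()
```

In use, each ping's beam powers update the prior into a posterior, which then cues the beam placement for the next update.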

Bayesian Beamformer [Bell et al., 2000]
- Also assumes an a priori PDF
- The beamformer is a linear combination of adaptive MVDR beamformers, weighted by the posterior probability density function p(θ|X)
- Computationally efficient: O(MVDR)
- The likelihood function they derive assumes Gaussian random processes and is therefore less ad hoc than using the beamformer output
- Their likelihood function is difficult to extend to other classes of beamformers
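The posterior-weighted combination can be sketched as follows. This is a hedged reading of the structure described on the slide (w = Σ_i p(θ_i|X) w_MVDR(θ_i) over a grid of candidate bearings), with a supplied posterior rather than the Gaussian likelihood derived in the paper; the function name is mine:

```python
import numpy as np

def bayesian_beamformer_weights(R, steering_vectors, posterior):
    """Linear combination of MVDR beamformers weighted by the posterior.

    R: (n, n) cross-spectral matrix.
    steering_vectors: (n, n_thetas) replica vectors on a bearing grid.
    posterior: (n_thetas,) values of p(theta_i | X), summing to 1.
    """
    w = np.zeros(R.shape[0], dtype=complex)
    for a, p in zip(steering_vectors.T, posterior):
        Ri_a = np.linalg.solve(R, a)
        w += p * Ri_a / (a.conj() @ Ri_a)   # p(theta_i|X) * w_MVDR(theta_i)
    return w
```

When the posterior collapses onto a single bearing this reduces to the ordinary MVDR beamformer, which is one way to see why the cost stays on the order of MVDR itself.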

Robust Capon Beamformer [Li et al., 2003]
- A natural extension of the Capon beamformer
- Directly addresses steering-vector uncertainty by assuming an ellipsoidal uncertainty set:
  minimize a^H R^{-1} a subject to (a - a0)^H C^{-1} (a - a0) ≤ 1
- Computationally efficient: O(MVDR)
- When used with cued beams, its use could guarantee that bearing is fully covered
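For the spherical special case C = εI, the constraint becomes ||a - a0||^2 ≤ ε and the Lagrangian stationarity condition gives â = a0 - (I + λR)^{-1} a0, where λ ≥ 0 is chosen so that ||(I + λR)^{-1} a0||^2 = ε (the constraint is active whenever ε < ||a0||^2). The sketch below finds λ by bisection on that monotonically decreasing function rather than by the Newton iteration used by Li et al.; it is an illustration of the idea, not the paper's algorithm:

```python
import numpy as np

def robust_capon_steering(R, a0, eps, lam_hi=1e8, iters=100):
    """Spherical-uncertainty robust Capon: minimize a^H R^{-1} a
    subject to ||a - a0||^2 <= eps, assuming eps < ||a0||^2 so the
    constraint is active at the optimum."""
    n = len(a0)
    I = np.eye(n)
    def g(lam):
        # g(lam) = ||(I + lam R)^{-1} a0||^2, monotone decreasing in lam
        return np.linalg.norm(np.linalg.solve(I + lam * R, a0)) ** 2
    lo, hi = 0.0, lam_hi
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if g(mid) > eps:
            lo = mid           # constraint still violated: increase lam
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    return a0 - np.linalg.solve(I + lam * R, a0)   # corrected steering vector
```

The corrected vector â can then be fed to a standard MVDR weight computation, which is why the overall cost remains on the order of MVDR.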

Questions?