Chapter 6 BEST Linear Unbiased Estimator (BLUE)


Chapter 6 BEST Linear Unbiased Estimator (BLUE) Shin won jae

Contents
1. Introduction
2. Best Linear Unbiased Estimator
   - Definition
   - Finding the BLUE in the scalar case
   - Finding the BLUE for a scalar linear observation
   - Applicability of BLUE
   - Vector parameter case
3. TOA Measurement Model
   - Linearization of the TOA model
   - TOA vs. TDOA
   - Conversion to the TDOA model
   - Applying BLUE to the linearized TDOA model
   - Applying the TDOA result to a simple geometry

Introduction
Motivation for BLUE: except for the Linear Model case, the optimal MVU estimator might:
1. not even exist
2. be difficult or impossible to find
So we resort to a sub-optimal estimate; BLUE is one such sub-optimal estimate.
Idea for BLUE:
1. Restrict the estimate to be linear in the data x
2. Restrict the estimate to be unbiased
3. Find the best such estimate (i.e., the one with minimum variance)
Advantage of BLUE: needs only the 1st and 2nd moments of the PDF
Disadvantages of BLUE:
1. Sub-optimal
2. Sometimes totally inappropriate

Definition of BLUE (Scalar Case)
Observed data: x = [x[0], x[1], ..., x[N−1]]ᵀ
PDF p(x; θ) depends on the unknown parameter θ
BLUE constrained to be linear in the data: θ̂ = Σₙ aₙx[n] = aᵀx
Choose the aₙ to give:
1. an unbiased estimator
2. then minimize the variance

Finding the BLUE (Scalar Case)
1. Constrain the estimator to be linear: θ̂ = aᵀx
2. Constrain it to be unbiased: E[θ̂] = aᵀE[x] = θ for all θ
Q: When can we meet both of these constraints?
A: Only for certain observation models (e.g., linear observations)

Finding the BLUE for a Scalar Linear Observation
Consider the scalar-parameter linear observation: x[n] = θh[n] + w[n], i.e., x = θh + w
For the unbiased condition we need: E[θ̂] = aᵀE[x] = θaᵀh = θ  ⇒  aᵀh = 1
Now, given that this constraint is met, we need to minimize the variance.
Given that C is the covariance matrix of x, we have: var(θ̂) = aᵀCa

Finding the BLUE for a Scalar Linear Observation (cont.)
Goal: minimize aᵀCa subject to aᵀh = 1
Constrained optimization (Appendix 6A): use Lagrange multipliers
Minimize J = aᵀCa + λ(aᵀh − 1)
Set ∂J/∂a = 2Ca + λh = 0; solving and applying the constraint gives:
a = C⁻¹h / (hᵀC⁻¹h)  ⇒  θ̂ = hᵀC⁻¹x / (hᵀC⁻¹h),  var(θ̂) = 1 / (hᵀC⁻¹h)
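As a sanity check on these formulas, here is a minimal numerical sketch (not from the slides; the data values are hypothetical) of the scalar BLUE θ̂ = hᵀC⁻¹x / (hᵀC⁻¹h):

```python
import numpy as np

# Scalar linear observation model: x = theta*h + w, noise covariance C.
# BLUE weights (the Lagrangian solution above): a = C^{-1}h / (h^T C^{-1} h),
# giving theta_hat = a^T x and var(theta_hat) = 1 / (h^T C^{-1} h).
def blue_scalar(x, h, C):
    Cinv_h = np.linalg.solve(C, h)   # C^{-1} h without forming an explicit inverse
    denom = h @ Cinv_h               # h^T C^{-1} h
    theta_hat = (Cinv_h @ x) / denom
    var = 1.0 / denom
    return theta_hat, var

# Hypothetical example: DC level (h = 1) in independent noise of unequal variances.
h = np.ones(4)
C = np.diag([1.0, 2.0, 4.0, 8.0])
x = np.array([3.1, 2.8, 3.4, 2.5])
theta_hat, var = blue_scalar(x, h, C)
```

For this DC-level case the BLUE reduces to the inverse-variance weighted average: low-noise samples get more weight, and the unbiasedness constraint aᵀh = 1 forces the weights to sum to one.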

Applicability of BLUE
We just derived the BLUE under the following:
1. Linear observations, but with no constraint on the noise PDF
2. No knowledge of the noise PDF other than its mean and covariance
What does this tell us? BLUE is applicable to linear observations.
But the noise need not be Gaussian (as was assumed in the Ch. 4 Linear Model)!
And all we need are the 1st and 2nd moments of the PDF!
And we'll see in the example that we can often linearize a nonlinear model!

Vector Parameter Case: Gauss-Markov Theorem
If the data can be modeled as linear observations in noise: x = Hθ + w
  H: known matrix
  w: known mean (zero) and covariance C (the PDF is otherwise arbitrary and unknown)
Then the BLUE is: θ̂ = (HᵀC⁻¹H)⁻¹HᵀC⁻¹x
and its covariance is: C_θ̂ = (HᵀC⁻¹H)⁻¹
Note: if the noise is Gaussian, then the BLUE is the MVUE.
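A minimal sketch of the Gauss-Markov BLUE (the H and θ below are hypothetical). With C = I it reduces to ordinary least squares, and with noise-free data it recovers θ exactly:

```python
import numpy as np

# Gauss-Markov model: x = H theta + w, E[w] = 0, cov(w) = C (PDF otherwise arbitrary).
# BLUE: theta_hat = (H^T C^{-1} H)^{-1} H^T C^{-1} x, cov(theta_hat) = (H^T C^{-1} H)^{-1}.
def blue_vector(x, H, C):
    Cinv_H = np.linalg.solve(C, H)
    A = H.T @ Cinv_H                 # H^T C^{-1} H
    theta_hat = np.linalg.solve(A, Cinv_H.T @ x)
    cov = np.linalg.inv(A)
    return theta_hat, cov

# Hypothetical check: random observation matrix, no noise added,
# so the BLUE should return theta_true (to numerical precision).
rng = np.random.default_rng(0)
H = rng.standard_normal((20, 2))
theta_true = np.array([1.0, -2.0])
x = H @ theta_true
theta_hat, cov = blue_vector(x, H, np.eye(20))
```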

Ex. 6.3 TDOA-Based Emitter Location
Assume that the i-th Rx can measure its TOA τᵢ.
Then, from the set of TOAs, compute the TDOAs;
then, from the set of TDOAs, estimate the location.
We won't worry about "how" the receivers measure the TOAs. Also, there are TDOA systems that never actually estimate TOAs!

TOA Measurement Model
Assume measurements of TOAs at N receivers (only 3 shown above): τ₁, τ₂, ..., τ_N
TOA measurement model: τᵢ = T₀ + Rᵢ/c + εᵢ
  T₀: time the signal is emitted
  Rᵢ = √((xₛ − xᵢ)² + (yₛ − yᵢ)²): range from the Tx to the i-th Rx
  c: speed of propagation (for EM waves: c ≈ 3×10⁸ m/s)
There are measurement errors: the noise εᵢ is zero-mean with variance σ², independent across receivers (but its PDF is unknown; the variance is determined by the estimator used to estimate the TOAs).
This is a nonlinear model: the Rᵢ depend nonlinearly on the location (xₛ, yₛ).

Linearization of TOA Model
So we linearize the model so we can apply BLUE:
Assume some rough location estimate (xₙ, yₙ) is available.
Now use a truncated Taylor series to linearize Rᵢ about the known estimate (xₙ, yₙ):
τᵢ ≈ T₀ + Rᵢ(xₙ, yₙ)/c + [∂Rᵢ/∂xₛ · δxₛ + ∂Rᵢ/∂yₛ · δyₛ]/c + εᵢ
Three unknown parameters to estimate: δxₛ, δyₛ, T₀
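The Taylor-series linearization can be sketched as follows (receiver coordinates and the function name are hypothetical). Since ∂Rᵢ/∂xₛ = (xₛ − xᵢ)/Rᵢ, each row of the linearized observation matrix is [(xₙ − xᵢ)/(cRᵢ), (yₙ − yᵢ)/(cRᵢ), 1], the last column multiplying T₀:

```python
import numpy as np

C_PROP = 3e8  # propagation speed (m/s) for EM waves

# TOA model: tau_i = T0 + R_i(xs, ys)/c, with R_i = sqrt((xs-xi)^2 + (ys-yi)^2).
# Linearizing about a rough estimate (xn, yn) leaves the unknowns (dxs, dys, T0).
def toa_jacobian(xn, yn, rx):
    """One row per receiver: [dtau/dxs, dtau/dys, dtau/dT0] evaluated at (xn, yn)."""
    dx = xn - rx[:, 0]
    dy = yn - rx[:, 1]
    R = np.hypot(dx, dy)
    return np.column_stack([dx / (C_PROP * R), dy / (C_PROP * R), np.ones(len(rx))])

# Hypothetical geometry: three receivers, rough estimate at (300, 400).
rx = np.array([[0.0, 0.0], [1000.0, 0.0], [0.0, 1000.0]])
H = toa_jacobian(300.0, 400.0, rx)
```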

TOA Model vs. TDOA Model
Two options now:
1. Use the TOAs to estimate 3 parameters: δxₛ, δyₛ, T₀
2. Use the TDOAs to estimate 2 parameters: δxₛ, δyₛ
Generally, the fewer parameters the better, everything else being the same.
But here "everything else" is not the same: the two options have different noise models (Option 1 has independent noise; Option 2 has correlated noise).
In practice we'd explore both options and see which is best.

Conversion to TDOA Model
Differencing against a reference receiver gives N−1 TDOAs rather than N TOAs: Δτᵢ = τᵢ₊₁ − τ₁, i = 1, ..., N−1
This removes the unknown emission time T₀, but the differencing correlates the noise.
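A sketch of the conversion as a differencing matrix D applied to the TOA vector. With independent TOA errors of variance σ², the TDOA covariance becomes D(σ²I)Dᵀ = σ²(I + 11ᵀ), which is no longer diagonal (this is the correlated-noise model of Option 2; the helper name is hypothetical):

```python
import numpy as np

# Difference N TOAs into N-1 TDOAs, all taken relative to receiver 0:
# tdoa[i] = toa[i+1] - toa[0].
def tdoa_transform(N):
    D = np.zeros((N - 1, N))
    D[:, 0] = -1.0
    D[np.arange(N - 1), np.arange(1, N)] = 1.0
    return D

N, sigma2 = 4, 0.5
D = tdoa_transform(N)
C_toa = sigma2 * np.eye(N)      # independent TOA errors
C_tdoa = D @ C_toa @ D.T        # correlated TDOA errors: sigma2 * (I + 1 1^T)
```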

Apply BLUE to TDOA Linearized Model
Things we can now do:
1. Explore the estimation error covariance for different Tx/Rx geometries: plot error ellipses
2. Analytically explore simple geometries to find trends: see the next chart (more details in the book)
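Putting the pieces together, here is an end-to-end sketch under an entirely hypothetical geometry: linearize the TOAs about a rough guess, difference away T₀, then apply the Gauss-Markov BLUE with the correlated TDOA covariance. One such step is a Gauss-Newton refinement of the position guess:

```python
import numpy as np

C_PROP = 3e8  # propagation speed (m/s)

def blue_tdoa(toas, rx, guess, sigma2):
    """One BLUE/Gauss-Newton step on the linearized TDOA model (sketch)."""
    N = len(toas)
    dx = guess[0] - rx[:, 0]
    dy = guess[1] - rx[:, 1]
    R = np.hypot(dx, dy)
    # Position part of the linearized TOA observation matrix (T0 column omitted:
    # it is removed by the differencing below).
    H_toa = np.column_stack([dx / (C_PROP * R), dy / (C_PROP * R)])
    D = np.hstack([-np.ones((N - 1, 1)), np.eye(N - 1)])  # TOA -> TDOA differencing
    H = D @ H_toa
    z = D @ (toas - R / C_PROP)      # TDOA residuals about the guess (T0 cancels)
    C = sigma2 * (D @ D.T)           # correlated TDOA covariance sigma2*(I + 1 1^T)
    Cinv_H = np.linalg.solve(C, H)
    delta = np.linalg.solve(H.T @ Cinv_H, Cinv_H.T @ z)   # BLUE of (dxs, dys)
    return guess + delta

# Hypothetical setup: four corner receivers, noise-free TOAs with T0 = 0,
# guess 100 m off in each coordinate. sigma2 scales the covariance but,
# being a uniform scaling of C, does not change this estimate.
rx = np.array([[0.0, 0.0], [2000.0, 0.0], [0.0, 2000.0], [2000.0, 2000.0]])
true_pos = np.array([700.0, 900.0])
toas = np.hypot(*(true_pos - rx).T) / C_PROP
est = blue_tdoa(toas, rx, np.array([600.0, 800.0]), sigma2=1e-18)
```

One step does not recover the position exactly (the model was linearized), but it shrinks the error substantially; iterating re-linearizes about the improved estimate.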

Apply TDOA Result to Simple Geometry