12.540 Principles of the Global Positioning System Lecture 10 Prof. Thomas Herring Room 54-820A; 253-5941

Estimation: Introduction
Quick review of Homework #1. The class paper title and outline will be due Monday, April 1.
Overview:
–Basic concepts in estimation
–Models: mathematical and statistical
–Statistical concepts

Basic concepts
Basic problem: We measure range and phase data that are related to the positions of the ground receiver, the satellites, and other quantities. How do we determine the “best” position for the receiver and the other quantities? What do we mean by the “best” estimate? Inferring parameters from measurements is estimation.

Basic estimation
Two styles of estimation are appropriate for geodetic-type measurements:
–Parametric estimation, where the quantities to be estimated are the unknown variables in equations that express the observables
–Condition estimation, where conditions can be formulated among the observations. This style is rarely used; its most common application is leveling, where the sum of the height differences around closed circuits must be zero.

Basics of parametric estimation
All parametric estimation methods can be broken into a few main steps:
–Observation equations: equations that relate the parameters to be estimated to the observed quantities (observables); this is the mathematical model. Example: the relationship between pseudorange and receiver position, satellite position, clocks, and atmospheric and ionospheric delays.
–Stochastic model: a statistical description of the random fluctuations in the measurements, and possibly in the parameters.
–Inversion: determination of the parameter values from the mathematical model, consistent with the statistical model.

Observation model
The observation model is the set of equations relating the observables to the parameters of the model:
–Observable = function(parameters)
–Observables should not appear on the right-hand side of the equation.
Often the function is non-linear, and the most common method is linearization of the function using a Taylor series expansion. Sometimes log linearization is used for f = a·b·c, i.e., products of parameters.
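In symbols (a reconstruction; the slide's own equation did not survive the transcript), the model and the log-linearized product form are:

$$ y = f(\mathbf{x}) + v, \qquad \ln f = \ln a + \ln b + \ln c \quad \text{for } f = a\,b\,c $$

Taking logarithms turns a product of parameters into a sum that is linear in ln a, ln b, and ln c.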

Taylor series expansion
In the most common Taylor series approach, the estimation is made using the difference between the observations and the expected values computed from a priori values of the parameters. The estimation returns adjustments to the a priori parameter values.
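The expansion itself was shown as an image on the slide; the standard first-order form it describes is:

$$ y \approx f(\mathbf{x}_0) + \left.\frac{\partial f}{\partial \mathbf{x}}\right|_{\mathbf{x}_0} \Delta\mathbf{x}, \qquad \Delta y = y_{\mathrm{obs}} - f(\mathbf{x}_0) $$

where x₀ contains the a priori parameter values and Δx is the adjustment returned by the estimator.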

Linearization
Since the linearization is only an approximation, the estimation should be iterated until the adjustments to the parameter values converge to zero. For GPS estimation, the convergence rate is typically 100–1000:1 (i.e., a 1-meter error in the a priori coordinates results in 1–10 mm of non-linearity error). A minimal iteration sketch follows.
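As a sketch of this iteration (not from the original slides), here is a minimal Gauss-Newton loop in Python; the model function f, the partial-derivative function jac, and the tolerance are illustrative assumptions:

```python
import numpy as np

def gauss_newton(f, jac, y_obs, x0, tol=1e-8, max_iter=20):
    """Iterated linearized least squares (Gauss-Newton sketch).

    f     : model function, f(x) -> predicted observables
    jac   : partial derivatives of f w.r.t. the parameters, jac(x) -> A
    y_obs : observed values
    x0    : a priori parameter values
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        dy = y_obs - f(x)              # prefit residuals (observed minus computed)
        A = jac(x)                     # linearized observation matrix
        dx, *_ = np.linalg.lstsq(A, dy, rcond=None)  # parameter adjustments
        x = x + dx                     # update the a priori values
        if np.linalg.norm(dx) < tol:   # iterate until adjustments are ~zero
            break
    return x
```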

Estimation
(We will return to the statistical model shortly.) The most common estimation method is “least squares,” in which the parameter estimates are the values that minimize the sum of the squares of the differences between the observations and the modeled values based on the parameter estimates.
–For linear estimation problems, there is a direct matrix formulation for the solution.
–For non-linear problems: linearization, or a search technique in which the parameter space is searched for the minimum value. Take care with search methods that a local minimum is not mistaken for the global one (not treated in this course).

Least squares estimation
Originally formulated by Gauss. Basic equations: Δy is the vector of observations; A is the linear matrix relating parameters to observables; Δx is the vector of parameters; v is the residual vector.
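Reconstructed in the notation just defined (the slide's equations were an image), the standard equations are:

$$ \Delta\mathbf{y} = \mathbf{A}\,\Delta\mathbf{x} + \mathbf{v}, \qquad \widehat{\Delta\mathbf{x}} = \left(\mathbf{A}^{T}\mathbf{A}\right)^{-1}\mathbf{A}^{T}\,\Delta\mathbf{y} $$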

Weighted Least Squares
In standard least squares, nothing is assumed about the residuals v except that they are zero mean. One often sees weighted least squares, in which a weight matrix W is assigned to the residuals. Residuals with larger elements in W are given more weight.
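With weight matrix W, the standard weighted least squares solution is:

$$ \widehat{\Delta\mathbf{x}} = \left(\mathbf{A}^{T}\mathbf{W}\mathbf{A}\right)^{-1}\mathbf{A}^{T}\mathbf{W}\,\Delta\mathbf{y} $$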

Statistical approach to least squares
If the weight matrix used in weighted least squares is the inverse of the covariance matrix of the residuals, then weighted least squares is a maximum likelihood estimator for Gaussian distributed random errors. This latter form of least squares is the most statistically rigorous version. Sometimes the weights are chosen empirically.
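In symbols, with C the covariance matrix of the residuals (the covariance of the estimates is a standard result added here for completeness):

$$ \mathbf{W} = \mathbf{C}^{-1}, \qquad \widehat{\Delta\mathbf{x}} = \left(\mathbf{A}^{T}\mathbf{C}^{-1}\mathbf{A}\right)^{-1}\mathbf{A}^{T}\mathbf{C}^{-1}\,\Delta\mathbf{y}, \qquad \mathbf{C}_{\hat{x}} = \left(\mathbf{A}^{T}\mathbf{C}^{-1}\mathbf{A}\right)^{-1} $$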

Review of statistics
Random errors in measurements are expressed with probability density functions, which give the probability of a value falling between x and x+dx. Integrating the probability density function gives the probability of a value falling within a finite interval. Given a large enough sample of the random variable, the density function can be deduced from a histogram of residuals.
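In symbols:

$$ P(x < X < x + dx) = f(x)\,dx, \qquad P(a < X < b) = \int_{a}^{b} f(x)\,dx $$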

Example of random variables
The code for these plots is histograms.m (the plots themselves are not reproduced in the transcript; a Python stand-in follows).
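histograms.m is a MATLAB script not included in the transcript; a minimal Python equivalent of the kind of experiment it runs might look like the following (sample sizes and bin counts are illustrative):

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

# Draw Gaussian samples of increasing size; with more samples the
# histogram approaches the underlying probability density function.
fig, axes = plt.subplots(1, 3, figsize=(12, 3))
for ax, n in zip(axes, (100, 1000, 100000)):
    x = rng.standard_normal(n)
    ax.hist(x, bins=50, density=True)
    ax.set_title(f"N = {n}")
plt.show()
```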

Histograms of random variables (figure slide; plots not reproduced in the transcript)

Histograms with more samples (figure slide; plots not reproduced in the transcript)

Characterization of random variables
When the probability distribution is known, the following statistical descriptions are used for a random variable x with density function f(x). The square root of the variance is called the standard deviation.
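The descriptions themselves were shown as equations on the slide; the standard forms are:

$$ \langle x \rangle = \int_{-\infty}^{\infty} x\,f(x)\,dx, \qquad \sigma^{2} = \left\langle (x - \langle x \rangle)^{2} \right\rangle = \int_{-\infty}^{\infty} (x - \langle x \rangle)^{2}\,f(x)\,dx $$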

Theorems for expectations
For linear operations, the following theorems are used:
–For a constant: ⟨c⟩ = c
–Linear operator: ⟨cx⟩ = c⟨x⟩
–Summation: ⟨x + y⟩ = ⟨x⟩ + ⟨y⟩
Covariance: the relationship between random variables, where f_xy(x,y) is the joint probability distribution.
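In symbols, the covariance is:

$$ \sigma_{xy} = \left\langle (x - \langle x \rangle)(y - \langle y \rangle) \right\rangle = \iint (x - \langle x \rangle)(y - \langle y \rangle)\,f_{xy}(x,y)\,dx\,dy $$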

Estimation of moments
Expectation and variance are the first and second moments of a probability distribution. As N goes to infinity, the sample expressions approach their expectations. (Note the N−1 in the form that uses the sample mean.)
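The sample-moment expressions the slide refers to are the standard ones:

$$ \hat{\mu} = \frac{1}{N}\sum_{i=1}^{N} x_i, \qquad \hat{\sigma}^{2} = \frac{1}{N-1}\sum_{i=1}^{N} \left(x_i - \hat{\mu}\right)^{2} $$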

Probability distributions
While there are many probability distributions, only a couple are commonly used:
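The distributions listed on the slide did not survive the transcript; from the next slide they are the Gaussian and chi-squared distributions, whose standard density functions are:

$$ f(x) = \frac{1}{\sigma\sqrt{2\pi}}\,e^{-(x-\mu)^{2}/2\sigma^{2}}, \qquad f(\chi^{2}; r) = \frac{(\chi^{2})^{r/2-1}\,e^{-\chi^{2}/2}}{2^{r/2}\,\Gamma(r/2)} $$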

Probability distributions
The chi-squared distribution is the distribution of the sum of the squares of r Gaussian random variables with expectation 0 and variance 1. With the probability density function known, the probability of events occurring can be determined. For a Gaussian distribution in 1-D: P(|x| < 1σ) = 0.68; P(|x| < 2σ) = 0.955; P(|x| < 3σ) = 0.997. Conceptually, people think of standard deviations in terms of the probability of events occurring (i.e., 68% of values should fall within 1 sigma). These values can be checked numerically, as sketched below.
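A small check using only the Python standard library; P(|x| < nσ) for a zero-mean Gaussian equals erf(n/√2):

```python
import math

# P(|x| < n*sigma) for a zero-mean Gaussian is erf(n / sqrt(2))
for n in (1, 2, 3):
    p = math.erf(n / math.sqrt(2))
    print(f"P(|x| < {n} sigma) = {p:.4f}")
# Prints approximately 0.6827, 0.9545, 0.9973
```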

Central Limit Theorem
Why is the Gaussian distribution so common? “The distribution of the sum of a large number of independent, identically distributed random variables is approximately Gaussian.” When the random errors in measurements are made up of many small contributing random errors, their sum will be approximately Gaussian. Any linear operation on a Gaussian distribution generates another Gaussian. This is not the case for other distributions, for which the distribution of a sum must be derived by convolving the two density functions. A quick numerical illustration follows.
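A numerical illustration of the theorem (not part of the original slides): summing uniform random variables, which are far from Gaussian individually, yields an approximately Gaussian histogram.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)

# Sum 12 independent uniform [0, 1) variables per sample; by the central
# limit theorem the sums are approximately Gaussian (mean 6, variance 1,
# since each uniform variable has variance 1/12).
sums = rng.random((100000, 12)).sum(axis=1)
plt.hist(sums, bins=60, density=True)
plt.title("Sum of 12 uniform random variables")
plt.show()
```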

Summary
–Examined simple least squares and weighted least squares
–Examined probability distributions
–Next we pose estimation in a statistical framework
Many books and web articles discuss least squares.