12.215 Modern Navigation Thomas Herring

Modern Navigation Thomas Herring

10/21/ Modern Navigation L12-2: Review of last class

Map Projections:
–Why projections are needed
–Types of map projections
  - Classification by type of projection
  - Classification by characteristics of projection
–Mathematics of map projections

10/21/ Modern Navigation L12-3: Today’s Class

Basic Statistics
–Statistical description and parameters
  - Probability distributions
  - Descriptions: expectations, variances, moments
  - Covariances
  - Estimates of statistical parameters
  - Propagation of variances
–Methods for determining the statistical parameters of quantities derived from other statistical variables

10/21/ Modern Navigation L12-4: Basic Statistics

Concept behind statistical description: some processes involve so many small physical effects that a deterministic model is not possible.

For example: given a die, and knowing its original orientation before throwing, the lateral and rotational forces during the throw, and the characteristics of the surface it falls on, it should be possible to calculate its orientation at the end of the throw. In practice, any small deviations in the forces and interactions make the result unpredictable, and so we find that each face comes up 1/6th of the time: a probabilistic description.

In this case, any small imperfections in the die can make one face come up more often, so that each probability is no longer 1/6th (but all outcomes must still sum to a probability of 1).

10/21/ Modern Navigation L12-5: Probability descriptions

For discrete processes we can assign probabilities to specific events occurring, but for continuous random variables this is not possible.

For continuous random variables, the most common description is a probability density function f(x): f(x)dx gives the probability that the random variable takes a value between x and x+dx.

To find the probability of the random variable taking a value between x1 and x2, the density function is integrated between these two values.

Probability density functions can be derived analytically for variables that are functions of other random variables with known probability density functions. Alternatively, if the number of samples is large, a histogram can be used to determine the density function (normally by fitting to a known class of density functions).

10/21/ Modern Navigation L12-6: Example of random variables (figure)

10/21/ Modern Navigation L12-7: Histograms of random variables (figure)

10/21/ Modern Navigation L12-8: Histograms with more samples (figure)
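The behaviour these figure slides illustrate can be sketched numerically (an illustration, not from the original slides; standard-library Python only): as the number of samples grows, the fraction of values landing in a histogram bin converges to the integral of the density over that bin.

```python
import math
import random

random.seed(42)

def gauss_prob(lo, hi):
    """P(lo <= x < hi) for a standard Gaussian, via the error function."""
    phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return phi(hi) - phi(lo)

def bin_fraction(samples, lo, hi):
    """Empirical fraction of samples in [lo, hi) -- one histogram bin."""
    return sum(lo <= s < hi for s in samples) / len(samples)

expected = gauss_prob(0.0, 0.5)  # integral of the density over the bin
small = [random.gauss(0, 1) for _ in range(100)]
large = [random.gauss(0, 1) for _ in range(100_000)]

print(abs(bin_fraction(small, 0.0, 0.5) - expected))  # noisy estimate
print(abs(bin_fraction(large, 0.0, 0.5) - expected))  # much closer
```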

10/21/ Modern Navigation L12-9: Characterization of Random Variables

When the probability distribution is known, the following statistical descriptions are used for a random variable x with density function f(x): the expectation (mean), the variance, and higher moments. The square root of the variance is called the standard deviation.
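The standard definitions being referred to, writing ⟨·⟩ for expectation, are:

```latex
\langle x \rangle = \int_{-\infty}^{\infty} x \, f(x)\, dx ,
\qquad
\sigma_x^2 = \langle (x - \langle x \rangle)^2 \rangle
           = \langle x^2 \rangle - \langle x \rangle^2
```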

10/21/ Modern Navigation L12-10: Theorems for expectations

For linear operations, the following theorems are used:
–For a constant c: <c> = c
–Linear operator: <cx> = c<x>
–Summation: <x+y> = <x> + <y>

Covariance describes the relationship between two random variables; f_xy(x,y) is their joint probability distribution.
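In symbols, the theorems and the standard covariance definition are:

```latex
\langle c \rangle = c, \qquad
\langle c\,x \rangle = c\,\langle x \rangle, \qquad
\langle x + y \rangle = \langle x \rangle + \langle y \rangle,
\qquad
\sigma_{xy} = \langle (x - \mu_x)(y - \mu_y) \rangle
            = \iint (x - \mu_x)(y - \mu_y)\, f_{xy}(x, y)\, dx\, dy
```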

10/21/ Modern Navigation L12-11: Estimation of moments

The expectation and variance are the first and second moments of a probability distribution. They are estimated from N samples by the sample mean and sample variance. As N goes to infinity, these expressions approach their expectations. (Note the N-1 in the form that uses the estimated mean.)
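The estimators in question are the standard sample mean and (unbiased) sample variance:

```latex
\hat{\mu} = \frac{1}{N} \sum_{i=1}^{N} x_i ,
\qquad
\hat{\sigma}^2 = \frac{1}{N-1} \sum_{i=1}^{N} \left( x_i - \hat{\mu} \right)^2
```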

10/21/ Modern Navigation L12-12: Probability distributions

While there are many probability distributions, only a couple are commonly used: the Gaussian (normal) distribution and the chi-squared distribution.
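The standard density functions for these two distributions are:

```latex
f(x) = \frac{1}{\sigma\sqrt{2\pi}} \exp\!\left( -\frac{(x-\mu)^2}{2\sigma^2} \right)
\quad \text{(Gaussian)},
\qquad
f(x) = \frac{x^{r/2 - 1}\, e^{-x/2}}{2^{r/2}\, \Gamma(r/2)}, \; x \ge 0
\quad (\chi^2_r)
```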

10/21/ Modern Navigation L12-13: Probability distributions

The chi-squared distribution is the distribution of the sum of the squares of r Gaussian random variables, each with expectation 0 and variance 1.

With the probability density function known, the probability of events occurring can be determined. For the Gaussian distribution in 1-D: P(|x| < 1σ) = 0.683; P(|x| < 2σ) = 0.955; P(|x| < 3σ) = 0.997.

Conceptually, people think of standard deviations in terms of the probability of events occurring (i.e., 68% of values should fall within 1-sigma).
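These probabilities follow directly from the error function, P(|x| < kσ) = erf(k/√2); a quick check (standard-library Python):

```python
import math

def prob_within(k):
    """P(|x| < k*sigma) for a Gaussian random variable: erf(k / sqrt(2))."""
    return math.erf(k / math.sqrt(2.0))

for k in (1, 2, 3):
    print(f"P(|x| < {k} sigma) = {prob_within(k):.3f}")
```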

10/21/ Modern Navigation L12-14: Central Limit Theorem

Why is the Gaussian distribution so common? “The distribution of the sum of a large number of independent, identically distributed random variables is approximately Gaussian.”

When the random errors in measurements are made up of many small contributing random errors, their sum will be Gaussian.

Any linear operation on a Gaussian distribution generates another Gaussian. This is not the case for other distributions: the density of a sum is the convolution of the two density functions, which in general is not of the same form.
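The theorem is easy to see numerically (a sketch, standard library only, not from the slides): each sum of 12 uniform [0,1) variables has mean 6 and variance 12 × 1/12 = 1, and the distribution of the sums is already close to Gaussian.

```python
import random
import statistics

random.seed(1)

# Each uniform [0,1) variable has mean 1/2 and variance 1/12,
# so the sum of 12 of them has mean 6 and variance 1.
sums = [sum(random.random() for _ in range(12)) for _ in range(50_000)]

mean = statistics.fmean(sums)
std = statistics.stdev(sums)
# For a Gaussian, about 68% of values fall within one standard deviation.
within_1sigma = sum(abs(s - mean) < std for s in sums) / len(sums)

print(mean, std, within_1sigma)
```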

10/21/ Modern Navigation L12-15: Covariance matrices

For large systems of random variables (such as GPS range measurements, position estimates, etc.), the variances and covariances are arranged in a matrix called the variance-covariance matrix (or simply covariance matrix). This is the matrix used in the multivariate Gaussian probability density function.
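For an n-vector x with mean μ and covariance matrix V, the standard multivariate Gaussian density is:

```latex
f(\mathbf{x}) = \frac{1}{(2\pi)^{n/2} \lvert \mathbf{V} \rvert^{1/2}}
\exp\!\left( -\tfrac{1}{2} (\mathbf{x} - \boldsymbol{\mu})^{T} \mathbf{V}^{-1} (\mathbf{x} - \boldsymbol{\mu}) \right)
```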

10/21/ Modern Navigation L12-16: Properties of covariance matrices

–Covariance matrices are symmetric.
–All of the diagonal elements are positive and usually non-zero, since these are variances.
–The off-diagonal elements must satisfy at least -1 ≤ σij/(σiσj) ≤ 1, where σi and σj are the standard deviations (square roots of the diagonal elements).
–The matrix has all positive (or zero) eigenvalues.
–The matrix is positive semi-definite (i.e., for all vectors x, x^T V x ≥ 0).
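These properties can be checked directly. The sketch below (pure Python, with hypothetical example values) verifies symmetry, the correlation bound, and the eigenvalue condition for a 2×2 covariance matrix, using the closed-form eigenvalues of a symmetric 2×2 matrix.

```python
import math

def check_covariance_2x2(C):
    """Check symmetry, positive diagonal, the correlation bound, and
    non-negative eigenvalues for a 2x2 covariance matrix C."""
    (a, b), (c, d) = C
    assert b == c, "must be symmetric"
    assert a > 0 and d > 0, "diagonal elements (variances) must be positive"
    rho = b / math.sqrt(a * d)
    assert -1.0 <= rho <= 1.0, "correlation out of range"
    # Eigenvalues of a symmetric 2x2 matrix from trace and determinant
    tr, det = a + d, a * d - b * c
    disc = math.sqrt(tr * tr - 4.0 * det)
    eigs = ((tr - disc) / 2.0, (tr + disc) / 2.0)
    assert all(e >= 0 for e in eigs), "not positive semi-definite"
    return rho, eigs

rho, eigs = check_covariance_2x2([[4.0, 1.2], [1.2, 1.0]])
print(rho, eigs)
```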

10/21/ Modern Navigation L12-17: Propagation of variance-covariance matrices

Given the covariance matrix of a set of random variables, the properties of expected values can be used to determine the covariance matrix of any linear combination of the measurements.
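The propagation rule itself is the standard result: for any linear combination y = Ax,

```latex
\mathbf{y} = \mathbf{A}\mathbf{x}
\quad \Rightarrow \quad
\mathbf{V}_{yy} = \mathbf{A}\, \mathbf{V}_{xx}\, \mathbf{A}^{T}
```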

10/21/ Modern Navigation L12-18: Applications

Propagation of variances is used extensively to “predict” the statistical character of quantities derived by linear operations on random variables.

Because of the Central Limit Theorem, we know that if the random variables used in the linear relationships are Gaussian, then the resultant random variables will also be Gaussian.

Example: the variance of the sum or difference of two random variables.

10/21/ Modern Navigation L12-19: Application

Notice here that, depending on σ12 (its sign and magnitude), the sum or difference of two random variables can be either very well determined (small variance) or poorly determined (large variance).

If σ12 is zero (uncorrelated in general, and independent in the Gaussian case, where all higher moments are also zero), then the sum and difference have the same variance.

Even if the covariance is not known, an upper limit can be placed on the variance of the sum or difference (since the value of σ12 is bounded).

We return to covariance matrices after covering estimation.
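As a concrete sketch (hypothetical values, standard library only): for y = x1 ± x2, propagation of variances gives Var(y) = σ1² + σ2² ± 2σ12, so positive correlation inflates the variance of the sum and shrinks the variance of the difference.

```python
def var_sum_diff(var1, var2, cov12):
    """Propagate variances through y = x1 + x2 and y = x1 - x2:
    Var = var1 + var2 +/- 2*cov12 (the A V A^T rule with A = [1, +/-1])."""
    return var1 + var2 + 2.0 * cov12, var1 + var2 - 2.0 * cov12

sigma1 = sigma2 = 1.0
for rho in (-0.9, 0.0, 0.9):
    cov = rho * sigma1 * sigma2  # covariance is bounded by sigma1*sigma2
    v_sum, v_diff = var_sum_diff(sigma1**2, sigma2**2, cov)
    print(f"rho={rho:+.1f}: Var(sum)={v_sum:.2f}, Var(diff)={v_diff:.2f}")
```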

10/21/ Modern Navigation L12-20: Summary

Basic Statistics
–Statistical description and parameters
  - Probability distributions
  - Descriptions: expectations, variances, moments
  - Covariances
  - Estimates of statistical parameters
  - Propagation of variances
–Methods for determining the statistical parameters of quantities derived from other statistical variables