Basic statistical concepts and least-squares. Sat05_61.ppt, 2005-11-28.

Presentation transcript:

1 Basic statistical concepts and least-squares. Sat05_61.ppt, 2005-11-28.
1. Statistical concepts
2. Distributions; the normal distribution
3. Linearizing
4. Least-squares: the overdetermined problem; the underdetermined problem

2 Histogram. Describes the distribution of repeated observations, at different times or places! Example: the distribution of global 1° mean gravity anomalies.

3 Statistics: distributions describe not only random events! We use statistical descriptions for "deterministic" quantities as well as for random quantities. Deterministic quantities may look as if they have a normal distribution!

4 "Event": the basic concept. A measured distance, temperature, gravity, … Mapping: a stochastic variable. In mathematics: a functional on H, a function space, possibly a Hilbert space. Gravity acceleration in a point P: a mapping from the space of all possible gravity potentials to the real axis.

5 Probability density, f(x): what is the probability P that the value lies in a specific interval?
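The defining relation the slide points to, restored in standard notation (the original formula did not survive the transcript):

```latex
P(a \le x \le b) = \int_a^b f(x)\,dx , \qquad \int_{-\infty}^{\infty} f(x)\,dx = 1 .
```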

6 Mean and variance; the expectation operator E:
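The standard definitions behind the slide, for a density f(x):

```latex
E(x) = \int_{-\infty}^{\infty} x\, f(x)\, dx = \mu , \qquad
\sigma^2 = E\big((x-\mu)^2\big) = \int_{-\infty}^{\infty} (x-\mu)^2 f(x)\, dx .
```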

7 Variance-covariance in a space of several dimensions: mean values and variances:
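The corresponding definitions in several dimensions, restored here: the mean vector and the variance-covariance matrix,

```latex
\mu = E(X), \qquad
\Sigma_{XX} = E\big((X-\mu)(X-\mu)^{\mathsf T}\big), \qquad
\sigma_{ij} = E\big((x_i-\mu_i)(x_j-\mu_j)\big).
```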

8 Correlation and covariance propagation. The correlation between two quantities is their covariance normalized by the standard deviations; correlation = 0 means uncorrelated (for jointly normal quantities: independent). Covariance propagation follows from the linearity of E.
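The correlation coefficient the slide refers to:

```latex
\rho_{xy} = \frac{\sigma_{xy}}{\sigma_x \sigma_y}, \qquad -1 \le \rho_{xy} \le 1 .
```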

9 Mean value and variance of a vector. If X is a random vector, A0 a fixed vector of dimension n, and A an n x m matrix, then the mean and covariance propagate as shown below. The inverse P of a variance-covariance matrix is generally denoted the weight matrix.
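The propagation rules, in the notation of the slide:

```latex
Y = A_0 + A X \;\Rightarrow\; E(Y) = A_0 + A\,E(X), \qquad
\Sigma_{YY} = A\, \Sigma_{XX} A^{\mathsf T}, \qquad P = \Sigma^{-1}.
```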

10 Distribution of the sum of 2 numbers: an example. Here n = 2 and m = 1. We regard the sum of two observations. What is the variance if we instead regard the difference between the two observations? A worked version follows below.
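A worked version, assuming the two observations have variances σ₁², σ₂² and covariance σ₁₂:

```latex
A = (1 \;\; 1):\quad \sigma^2_{\mathrm{sum}} = \sigma_1^2 + \sigma_2^2 + 2\sigma_{12}, \qquad
A = (1 \;\, {-1}):\quad \sigma^2_{\mathrm{diff}} = \sigma_1^2 + \sigma_2^2 - 2\sigma_{12}.
```

For independent observations of equal variance σ₀², both the sum and the difference thus have variance 2σ₀².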

11 Normal distribution. A 1-dimensional quantity has a normal distribution if its density is the Gaussian below. A vector of simultaneously normally distributed quantities follows the n-dimensional normal distribution.
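The densities, restored:

```latex
f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}, \qquad
f(X) = \frac{1}{(2\pi)^{n/2} (\det \Sigma)^{1/2}}\,
       e^{-\frac{1}{2}(X-\mu)^{\mathsf T} \Sigma^{-1} (X-\mu)} .
```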

12 Covariance propagation in several dimensions: if X is n-dimensional and normally distributed and D is a matrix, then Z = DX is also normally distributed, with E(Z) = D E(X) and covariance D Σ_XX Dᵀ.

13 Estimates of mean, variance, etc.
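The standard sample estimators, restored since the slide's formulas did not survive the transcript:

```latex
\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i, \qquad
s^2 = \frac{1}{n-1}\sum_{i=1}^{n} (x_i - \bar{x})^2 .
```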

14 Covariance function. If the covariance COV(x,y) is a function of x and y, we have a covariance function. It may be a function of the time difference only (stationary), or of the spherical distance ψ on the unit sphere (isotropic).

15 Normally distributed data and results. If the data are normally distributed, then the results are also normally distributed, if they are linearly related! We must therefore linearize: a Taylor development with only 0th- and 1st-order terms. Advantage: we may interpret the error distributions.

16 Distributions in infinite-dimensional spaces. V(P) is an element of a separable Hilbert space: normally distributed, with the sum of the variances finite!

17 Stochastic process. What is the probability P that the event is located in a specific interval? Example: what is the probability that gravity in Buddinge lies between -20 and 20 mgal and that gravity in Rockefeller lies in the same interval?

18 Stochastic process in Hilbert space. What are the mean value and variance of "the evaluation functional"?

19 Covariance function of a stationary time series. The covariance function depends only on |x-y|. The variances of the spectral components are called the "power spectrum".

20 Covariance function of the gravity potential. Suppose the coefficients X_ij are normally distributed with the same variance for constant "i".

21 Isotropic covariance function for the gravity potential.

22 Linearizing: why? We want to find the best estimate X of m quantities from n observations L. Normally distributed data imply a normally distributed result, if there is a linear relationship. If n > m there exists an optimal method for estimating X: the method of least-squares.

23 Linearizing: Taylor development. If the relationship is non-linear: the starting value (an initial guess) for X is called X1. Taylor development with 0th- and 1st-order terms, after rearranging the terms:
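The linearization in formula form, with y the reduced observation and A the design matrix:

```latex
L = F(X) \approx F(X_1) + A\,(X - X_1), \quad
A_{ij} = \left.\frac{\partial F_i}{\partial X_j}\right|_{X_1}
\;\Rightarrow\; y := L - F(X_1) = A\,\Delta X .
```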

24 Covariance matrix of the linearized quantities. If the measurements are independently normally distributed with a given variance-covariance matrix, then the result y is normally distributed with the propagated variance-covariance matrix.

25 Linearizing the distance equation, based on approximate coordinates.
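A standard reconstruction of the partials: for the distance s between a receiver at X and a satellite at X^sat,

```latex
s = \|X^{sat} - X\| \;\Rightarrow\;
\frac{\partial s}{\partial X_i} = \frac{X_i - X_i^{sat}}{s}, \qquad
ds = \sum_{i=1}^{3} \frac{X_i^1 - X_i^{sat}}{s^1}\, dX_i ,
```

with s¹ the distance computed from the approximate coordinates X¹.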

26 In matrix form: if there are 3 equations with 3 unknowns!

27 Numerical example. If (X11, X12, X13) = (… m, … m, … m) and the satellite is at (…, …, …), with computed distance … m and measured distance … m, then ((…)dX1 + (…)dX2 + (…)dX3)/… = (…), or: … dX1 + … dX2 + … dX3 = -2.7 (the numeric values did not survive the transcript).
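A small sketch of the same computation. The coordinates below are hypothetical stand-ins, since the slide's numbers were lost; the measured distance is chosen so that the right-hand side matches the slide's -2.7:

```python
import numpy as np

# Hypothetical approximate receiver coordinates and satellite position (metres).
X1 = np.array([3_513_649.0, 778_956.0, 5_248_831.0])
X_sat = np.array([15_097_045.0, 12_911_553.0, 18_522_818.0])

s_computed = np.linalg.norm(X_sat - X1)   # distance from approximate coordinates
s_measured = s_computed - 2.7             # stand-in measured distance

a = (X1 - X_sat) / s_computed             # row of the design matrix
y = s_measured - s_computed               # reduced observation
print("observation equation: %.4f dX1 + %.4f dX2 + %.4f dX3 = %.1f" % (*a, y))
```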

28 Linearizing in physical geodesy, based on T = W - U. In function spaces the normal potential may be regarded as the 0th-order term of a Taylor development. We may differentiate in a metric space (the Fréchet derivative).

29 Method of least-squares: the over-determined problem. More observations than parameters or quantities to be estimated. Example: static GPS observations, where we stay at the same place and want the coordinates of one or more points. We now suppose that the unknowns are m linearly independent quantities!

30 Least-squares = adjustment. Observation equations y = AX + e; we want the solution that minimizes the weighted sum of squared residuals. Differentiation gives the normal equations (below).
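The minimization and its solution, restored in the usual notation (P the weight matrix):

```latex
\min_X\; (y - AX)^{\mathsf T} P\, (y - AX)
\;\Rightarrow\; A^{\mathsf T} P A\, \hat{X} = A^{\mathsf T} P\, y
\;\Rightarrow\; \hat{X} = (A^{\mathsf T} P A)^{-1} A^{\mathsf T} P\, y .
```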

31 Method of least-squares: variance-covariance of the estimate (below).
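Applying the covariance propagation of slide 9 to the estimator gives the standard result:

```latex
\Sigma_{\hat{X}\hat{X}} = (A^{\mathsf T} P A)^{-1}, \qquad P = \Sigma_{yy}^{-1} .
```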

32 Method of least-squares: a linear problem. Gravity observations at stations H, G and I, with standard deviations of the order ±0.02 to ±0.03 mgal (the table of observed values did not survive the transcript).

33 Observation equations.

34 Method of least-squares, over-determined problem: compute the variance-covariance matrix.
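A minimal numerical sketch of the whole over-determined recipe (design matrix, weights, estimate, covariance); the observation values are hypothetical, chosen only to make the example run:

```python
import numpy as np

# Hypothetical example: two unknown gravity values estimated from three
# observations (two absolute values and one difference), weighted by the
# inverse observation variances.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, -1.0]])                    # design matrix (3 obs, 2 unknowns)
y = np.array([981_636.10, 981_636.50, -0.38])  # observations (mgal), hypothetical
sigma = np.array([0.02, 0.03, 0.02])           # std. deviations (mgal), hypothetical

P = np.diag(1.0 / sigma**2)               # weight matrix = inverse covariance
N = A.T @ P @ A                           # normal-equation matrix
X_hat = np.linalg.solve(N, A.T @ P @ y)   # least-squares estimate
Sigma_X = np.linalg.inv(N)                # variance-covariance of the estimate

print("estimate:", X_hat)
print("std. deviations:", np.sqrt(np.diag(Sigma_X)))
```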

35 Method of least-squares. Optimal if the observations are normally distributed and the relationship is linear! It works anyway if they are not normally distributed! And the linear approximation may be improved by iteration, with the last result used as the new Taylor point. Example: a GPS receiver at start-up.

36 Method of least-squares: the under-determined problem. We have fewer observations than parameters: the gravity field, the magnetic field, a global temperature or pressure distribution. We choose a finite-dimensional sub-space with dimension equal to or smaller than the number of observations. Two possibilities (which may be combined): we want the "smoothest" solution (minimum norm), or we want the solution that agrees as well as possible with the data, considering the noise in the data.

37 Method of least-squares, under-determined problem. Initially we look for a finite-dimensional space so that the solution at a variable point P_i becomes a linear combination of the observations y_j. If the data are a realization of a stochastic process, we want the "interpolation error" minimized.

38 Method of least-squares, under-determined problem: the covariances, the predictor obtained by differentiation, and the error variance are given below.
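The formulas the slide refers to, in standard collocation notation: with C_yy the covariance matrix of the observations and C_Py the cross-covariances between the prediction point P and the observations,

```latex
\hat{s}(P) = C_{Py}\, C_{yy}^{-1}\, y , \qquad
\sigma_P^2 = C_{PP} - C_{Py}\, C_{yy}^{-1}\, C_{yP} .
```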

39 Method of least-squares: gravity prediction. Example covariances: COV(0 km) = 100 mgal², COV(10 km) = 60 mgal², COV(8 km) = 80 mgal², COV(4 km) = 90 mgal². Observed anomalies: 6 mgal at P and 10 mgal at Q; R is the prediction point; the triangle distances are 10 km, 8 km and 4 km.

40 Method of least-squares, gravity prediction, continued: compute the error estimate for the anomaly in R.
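A sketch of the computation with the slide's numbers, assuming the distances pair up as P-Q = 10 km, P-R = 8 km, Q-R = 4 km (as the figure on slide 39 suggests):

```python
import numpy as np

# Covariance function values from the slide: COV(0 km) = 100 mgal^2,
# COV(10 km) = 60, COV(8 km) = 80, COV(4 km) = 90.
C_yy = np.array([[100.0, 60.0],    # Cov(P,P), Cov(P,Q)
                 [60.0, 100.0]])   # Cov(Q,P), Cov(Q,Q)
C_Ry = np.array([80.0, 90.0])      # Cov(R,P), Cov(R,Q), assumed pairing
y = np.array([6.0, 10.0])          # observed anomalies at P and Q (mgal)

w = np.linalg.solve(C_yy, C_Ry)    # prediction weights
s_R = w @ y                        # predicted anomaly at R
var_R = 100.0 - w @ C_Ry           # error variance: C_RR - C_Ry C_yy^-1 C_yR

print(f"anomaly at R: {s_R:.2f} mgal")               # 9.00 mgal
print(f"error std. dev.: {np.sqrt(var_R):.2f} mgal")  # about 2.90 mgal
```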

41 Least-squares collocation, also called optimal linear estimation. For the gravity field the name has its origin in the solution of differential equations, where initial values are maintained. The functional-analytic version is due to Krarup (1969). Kriging, where the variogram is used, is closely connected to collocation.

42 Least-squares collocation. We need covariances, but we only have one Earth. Rotating the Earth around its gravity centre gives (conceptually) a new Earth, so the covariance function is supposed to depend only on the spherical distance and the distances from the centre. For each distance interval one finds the pairs of points, forms the products of the associated observations, and accumulates them. The covariance estimate is the mean value of the product sum.
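A minimal sketch of the product-sum procedure just described, assuming the pairwise spherical distances and observation products have already been formed:

```python
import numpy as np

def empirical_cov(psi, prod, dpsi, n_bins):
    """Empirical covariance function: mean product of observation pairs,
    accumulated in spherical-distance intervals of width dpsi."""
    cov = np.full(n_bins, np.nan)
    for k in range(n_bins):
        in_bin = (psi >= k * dpsi) & (psi < (k + 1) * dpsi)
        if in_bin.any():
            cov[k] = prod[in_bin].mean()   # mean value of the product sum
    return cov

# psi: spherical distance for each pair; prod: g_i * g_j for the same pair
# (illustrative values only).
psi = np.array([0.0, 0.5, 1.2, 1.4, 2.1])
prod = np.array([100.0, 88.0, 61.0, 58.0, 30.0])
print(empirical_cov(psi, prod, dpsi=1.0, n_bins=3))
```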

43 Covariance function for gravity anomalies, r = R.

44 Covariance function for gravity anomalies. Different models for the degree variances (the power spectrum): Kaula, 1959 (but gravity gets infinite variance); Tscherning & Rapp, 1974 (variance finite).