Optimal solutions to geodetic inverse problems in statistical and numerical aspects. Jianqing Cai, Geodätisches Institut, Universität Stuttgart.

0 Optimal solutions to geodetic inverse problems in statistical and numerical aspects. Jianqing Cai, Geodätisches Institut, Universität Stuttgart. Session 6: Theoretische Geodäsie, Geodätische Woche, –07. Oktober 2010, Messe Köln.

1 Main Topics
1. Geodetic data analysis
2. Fixed effects (G-M, Ridge, BLE, α-weighted homBLE)
3. Regularizations
4. Mixed model (combination method)
5. Conclusions and further studies

Geodetic data analysis

Numerical aspect: inverse and ill-posed problems. Applied mathematicians are often more interested in existence, uniqueness and construction given an infinite number of noise-free data, and in stability given data contaminated by a deterministic disturbance. Approach: Tykhonov-Phillips regularization and numerical methods such as the L-curve (Hansen 1992) or the C_p-plot (Mallows 1973).

Statistical aspect: estimation and inference. The data are modeled as stochastic, so standard statistical concepts, questions and considerations such as bias, variance, mean square error, identifiability, consistency, efficiency and various forms of optimality can be applied; ill-posedness is related to the rank defect, etc. Both biased and unbiased estimations are in use.

Many of these methods were developed independently in separate disciplines. Some clear differences in terminology, philosophy and numerical implementation remain, due to tradition and a lack of interdisciplinary communication.

2. The special linear Gauss-Markov model

Special Gauss-Markov model {y, Aξ, Σ_y = σ²P⁻¹}:
1st moments: E{y} = Aξ
2nd moments: D{y} = Σ_y = σ²P⁻¹

4 Best Linear Estimation (BLE) (C. R. Rao, 1972)

For the GGM model (y, Aξ, Σ_y = σ²P⁻¹), let L′y be an estimator of ξ. The mean square error (MSE) of L′y is

MSE(L′y) = E[(L′y − ξ)(L′y − ξ)′] = σ²L′P⁻¹L + (L′A − I)ξξ′(A′L − I),

which consists of two parts, variance and bias. It is not directly suitable as a criterion for minimization, since it involves both unknowns ξ and σ². Three possibilities:
1. Consider ξ as a random variable with a priori mean dispersion E(ξξ′) = σ²V;
2. Choose an a priori value of σ⁻¹ξ, say γ, and substitute σ²V = σ²γγ′ for ξξ′;
3. Replace ξξ′ by a chosen matrix V, where the choice of V represents the relative weights; V is n.n.d. and of rank greater than one.

Minimizing the resulting criterion yields the BLE of ξ.
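To make the decomposition concrete, here is a minimal numerical sketch (not from the slides; NumPy, with invented data and a unit weight matrix assumed) that evaluates MSE(L′y) = σ²L′P⁻¹L + (L′A − I)ξξ′(A′L − I) for the ordinary least-squares choice L′ = (A′A)⁻¹A′, whose bias part vanishes:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 20, 3
A = rng.normal(size=(n, m))          # design matrix (hypothetical example)
xi = np.array([1.0, -2.0, 0.5])      # "true" parameter vector, assumed known here
sigma2 = 0.25
P_inv = np.eye(n)                    # unit weights, i.e. Sigma_y = sigma2 * I

# least-squares estimator L'y with L' = (A'A)^{-1} A'
Lt = np.linalg.solve(A.T @ A, A.T)

# MSE(L'y) = sigma2 * L' P^{-1} L  +  (L'A - I) xi xi' (A'L - I)
variance_part = sigma2 * Lt @ P_inv @ Lt.T
B = Lt @ A - np.eye(m)               # bias factor; zero for an unbiased estimator
bias_part = B @ np.outer(xi, xi) @ B.T
mse = variance_part + bias_part
```

For least squares the bias factor vanishes and the MSE reduces to the dispersion σ²(A′A)⁻¹; a biased estimator trades a nonzero bias part for a smaller variance part.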

5 The open problem of evaluating the regularization parameter

Ever since Tykhonov (1963) and Phillips (1962) introduced the hybrid minimum norm approximation solution (HAPS) of a linear improperly posed problem, the evaluation of the regularization factor λ has remained an open problem. In most applications of Tykhonov-Phillips type regularization the weighting factor λ is determined by heuristic methods, such as the L-curve (Hansen 1992) or the C_p-plot (Mallows 1973). In the literature optimization techniques have also been applied: α-weighted hybrid minimum variance-minimum bias estimation (hom α-BLE).

6 α-weighted S-homBLE and A-optimal design of the regularization parameter λ

According to Grafarend and Schaffrin (1993), updated by Cai (2004), a homogeneously linear α-weighted hybrid minimum variance-minimum bias estimator (α,S-homBLE) is based upon the weighted sum of two norms, namely the variance norm and the S-weighted bias norm.

7 Theorem 1: α,S-homBLE, also called the ridge estimator

For the linear Gauss-Markov model, the α,S-homBLE is given together with its dispersion matrix, bias vector and mean square error matrix.
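Theorem 1's quantities can be sketched numerically. The helper below is an illustration under stated assumptions, not the slide's exact formulas: it takes S = I and P = I by default, treats α directly as the regularization weight, solves (A′PA + αS⁻¹)ξ̂ = A′Py, and reports the dispersion matrix, bias vector and MSE matrix of the resulting ridge-type estimator:

```python
import numpy as np

def homble(A, y, alpha, sigma2, xi_true, P=None, S=None):
    """Ridge-type estimator with dispersion, bias and MSE (illustrative sketch)."""
    n, m = A.shape
    P = np.eye(n) if P is None else P
    S = np.eye(m) if S is None else S
    N = A.T @ P @ A + alpha * np.linalg.inv(S)
    N_inv = np.linalg.inv(N)
    xi_hat = N_inv @ A.T @ P @ y
    # dispersion D{xi_hat} = sigma2 * N^{-1} A'PA N^{-1}
    dispersion = sigma2 * N_inv @ A.T @ P @ A @ N_inv
    # bias vector beta = (N^{-1} A'PA - I) xi = -alpha * N^{-1} S^{-1} xi
    bias = -alpha * N_inv @ np.linalg.inv(S) @ xi_true
    mse_matrix = dispersion + np.outer(bias, bias)
    return xi_hat, dispersion, bias, mse_matrix

# invented example data
rng = np.random.default_rng(1)
A = rng.normal(size=(15, 3))
xi = np.array([2.0, -1.0, 0.5])
y = A @ xi + 0.1 * rng.normal(size=15)
xi_hat, D, b, M = homble(A, y, alpha=0.5, sigma2=0.01, xi_true=xi)
```

Setting α = 0 recovers the unbiased least-squares estimator, so the bias vector vanishes and the MSE matrix equals the dispersion.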

8 Figure 1. The relationship between the variance, the squared bias and the weighting factor α: the variance term decreases as α increases, while the squared bias increases with α.

9 The geodetic inverse problem

Exact or strict multicollinearity means det(A′A) = 0 (a rank defect); weak multicollinearity means det(A′A) ≈ 0. Use the condition number for diagnostics:

κ = λ_max(A′A) / λ_min(A′A).

The weight factor α can alternatively be determined by the A-optimal design. Here we focus on the third case, the most meaningful one: "minimize the trace of the Mean Square Error matrix".
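A condition-number diagnostic of this kind takes only a few lines; the matrices below are invented purely for illustration, with one well-conditioned design and one with nearly proportional columns (weak multicollinearity):

```python
import numpy as np

def condition_number(A):
    """Condition number of the normal matrix A'A (ratio of extreme singular values)."""
    s = np.linalg.svd(A.T @ A, compute_uv=False)
    return s[0] / s[-1]

A_good = np.array([[1.0, 0.0],
                   [0.0, 1.0],
                   [1.0, 1.0]])               # well-separated columns

x = np.array([1.0, 2.0, 3.0])
A_bad = np.column_stack([x, 2.0 * x + 1e-5])  # second column nearly proportional to the first
```

A large κ signals ill-conditioned normal equations and motivates regularization of the inverse problem.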

10 Theorem 2: A-optimal design of α

Let the average hybrid α-weighted variance-bias norm of the (α,S-homBLE) with respect to the linear Gauss-Markov model be given. The A-optimal α then follows by A-optimal design in the sense of minimizing this norm.
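The A-optimal criterion can be mimicked numerically. The sketch below (assumed setup: S = I, P = I, true ξ and σ² treated as known for illustration) scans a grid of weights α and keeps the one that minimizes the trace of the MSE matrix; in the slides the minimizer is obtained in closed form, so the grid search only illustrates the criterion:

```python
import numpy as np

def trace_mse(alpha, A, xi, sigma2):
    """tr MSE of the ridge-type estimator at weight alpha (S = I, P = I)."""
    m = A.shape[1]
    N_inv = np.linalg.inv(A.T @ A + alpha * np.eye(m))
    dispersion = sigma2 * N_inv @ A.T @ A @ N_inv
    bias = -alpha * N_inv @ xi
    return np.trace(dispersion) + bias @ bias

# a deliberately ill-conditioned design, where regularization pays off
A = np.array([[1.0, 1.001],
              [1.0, 0.999],
              [1.0, 1.000]])
xi = np.array([1.0, 1.0])
sigma2 = 1.0

alphas = np.logspace(-6, 2, 400)
traces = [trace_mse(a, A, xi, sigma2) for a in alphas]
alpha_opt = alphas[int(np.argmin(traces))]
```

For this near-collinear design the trace of the MSE at the optimal α is orders of magnitude below its value at α ≈ 0, i.e. the biased estimator clearly beats plain least squares in the MSE sense.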

11 Tykhonov-Phillips regularization and ridge regression

Tykhonov-Phillips regularization is defined as the solution to the problem

min_ξ { ||Aξ − y||² + λ ||ξ||² },

which yields the normal equations system

(A′A + λI) ξ̂ = A′y.

Ridge regression (Hoerl and Kennard, 1970a, b) is defined as biased estimation for nonorthogonal problems with a Lagrangian function

F = (y − Aξ)′(y − Aξ) + k (ξ′ξ − c),

or an equivalent statement, which yields the normal equations system

(A′A + kI) ξ̂ = A′y.
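The equivalence between the variational problem and its normal equations is easy to check numerically (invented data; NumPy): the solution of (A′A + λI)ξ̂ = A′y makes the gradient of the Tykhonov objective vanish, while differing from the unregularized least-squares solution:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(10, 3))
y = rng.normal(size=10)
lam = 0.5

# normal equations of Tykhonov-Phillips regularization / ridge regression
xi_hat = np.linalg.solve(A.T @ A + lam * np.eye(3), A.T @ y)

# gradient of ||A xi - y||^2 + lam * ||xi||^2 at xi_hat
grad = 2.0 * A.T @ (A @ xi_hat - y) + 2.0 * lam * xi_hat
```

With λ = k the two normal equations systems above are identical, which is the formal bridge between the numerical and the statistical viewpoint.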

12 Generalized Tykhonov-Phillips regularization

What is the solution of the generalized regularization problem

min_ξ { ||Aξ − y||² + α ||ξ − ξ₀||² } ?

Rewriting the objective function yields the solution

ξ̂ = (A′A + αI)⁻¹ (A′y + αξ₀).
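A quick numerical sanity check of the generalized solution (invented data): as α → 0 it approaches the unregularized least-squares solution, and as α → ∞ it is pulled onto the prior ξ₀:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(12, 3))
y = rng.normal(size=12)
xi0 = np.array([1.0, 2.0, 3.0])      # a priori parameter values (made up)

def gen_tykhonov(alpha):
    """Solve (A'A + alpha*I) xi = A'y + alpha*xi0."""
    m = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(m),
                           A.T @ y + alpha * xi0)
```

The two limits show how one formula interpolates between pure least squares and pure a priori information.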

13 Comparison of the determination of the regularization factor λ by A-optimal design and the ridge parameter k in ridge regression

Hoerl, Kennard and Baldwin (1975) suggested that if A′A = I_m, then a minimum mean square error (MSE) is obtained with the ridge parameter for the multiple linear regression model

k = m σ̂² / (ξ̂′ξ̂).

This is just the special case of our general solution by A-optimal design of Corollary 3 under unit weight P and A′A = I_m.
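The Hoerl-Kennard-Baldwin parameter can be computed directly from a least-squares fit. The sketch below uses invented data, estimates σ² from the residuals, and shows that k grows with the noise level (the function name and data are assumptions for illustration):

```python
import numpy as np

def hkb_ridge_parameter(A, y):
    """Hoerl-Kennard-Baldwin (1975) ridge parameter k = m * sigma_hat^2 / (xi_hat' xi_hat)."""
    n, m = A.shape
    xi_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ xi_hat
    sigma2_hat = resid @ resid / (n - m)     # unbiased variance estimate
    return m * sigma2_hat / (xi_hat @ xi_hat)

rng = np.random.default_rng(3)
A = rng.normal(size=(30, 3))
xi = np.array([1.0, -2.0, 0.5])
noise = rng.normal(size=30)
k_noisy = hkb_ridge_parameter(A, A @ xi + 1.0 * noise)
k_clean = hkb_ridge_parameter(A, A @ xi + 1e-6 * noise)
```

With nearly noise-free data the suggested ridge parameter is essentially zero, i.e. almost no regularization is called for; noisy data yield a substantially larger k.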

14 3. Mixed model (combination method)

Theil & Goldberger (1961) and Theil (1963), Toutenburg (1982) and Rao & Toutenburg (1999): the mixed estimator with additional information as stochastic linear restrictions.

Linear Gauss-Markov model: y = Aξ + e.
Additional information (stochastic linear restriction): r = Rξ + v, with its dispersion matrix.
Combining both gives the mixed model. Comparing the BLUUE estimator of the original G-M model with the BLUUE estimator of the mixed model through their dispersion matrices shows that the use of stochastic restrictions leads to a gain in efficiency.
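The claimed gain in efficiency can be verified numerically. In this sketch (standard Theil-Goldberger form with assumed weights, not the slide's exact notation) the prior information r = Rξ + v with weight matrix W enters the normal equations additively, and the mixed-model dispersion is never larger than the Gauss-Markov one:

```python
import numpy as np

rng = np.random.default_rng(4)
n, m = 15, 3
A = rng.normal(size=(n, m))
sigma2 = 1.0
xi = np.array([0.5, 1.5, -1.0])
y = A @ xi + rng.normal(size=n)

# stochastic linear restriction: direct pseudo-observations r of xi with weight W
R = np.eye(m)
W = 4.0 * np.eye(m)
r = xi + 0.5 * rng.normal(size=m)

N_gm  = A.T @ A                     # G-M normal matrix (P = I assumed)
N_mix = N_gm + R.T @ W @ R          # mixed-model normal matrix

xi_mix = np.linalg.solve(N_mix, A.T @ y + R.T @ W @ r)

D_gm  = sigma2 * np.linalg.inv(N_gm)    # dispersion of BLUUE in the G-M model
D_mix = sigma2 * np.linalg.inv(N_mix)   # dispersion of BLUUE in the mixed model
```

Since N_mix dominates N_gm in the Loewner order, D_gm − D_mix is positive semidefinite: every linear function of ξ is estimated at least as precisely in the mixed model.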

15 The light constraint solutions (Reigber, 1989)

Light constraint model; BLUUE estimator of the light constraint model; light constraint with a priori information, with its dispersion matrix; the objective function of the light constraint solution. Comparing the estimators of the mixed models and the light constraint models shows that the objective function is the same as that of the estimate with weighting parameters!

16 Combination and regularization methods

In order to solve the PGP by spectral-domain stabilization, we use a priori information in terms of spherical harmonic coefficients. Augmenting the minimization of the squared residuals r = Aξ − y by a parameter component ξ − ξ₀ yields the normal equations system.

- The parameter α denotes the regularization parameter. It balances the residual norm ||Aξ − y|| against the (reduced) parameter norm ||ξ − ξ₀||.
- This accounts for both regularization and combination, which is just the so-called generalized Tykhonov regularization in the case of α = 1!

When ξ₀ = 0, i.e. the a priori information consists of null pseudo-observables, this is the case of regularization. Data combination in the spectral domain is achieved by incorporating non-trivial a priori information ξ₀ ≠ 0, yielding the mixed estimator with additional information as stochastic linear restrictions.
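The identity between the combination solution and a mixed model with pseudo-observables can be checked directly (invented data; NumPy): appending √α-weighted pseudo-observations ξ₀ to the design matrix reproduces the generalized-Tykhonov normal equations, because A_aug′A_aug = A′A + αI and A_aug′y_aug = A′y + αξ₀:

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.normal(size=(10, 3))
y = rng.normal(size=10)
xi0 = np.array([0.5, -1.0, 2.0])     # a priori coefficients (made up)
alpha = 0.7
m = A.shape[1]

# combination via generalized Tykhonov: (A'A + alpha*I) xi = A'y + alpha*xi0
xi_comb = np.linalg.solve(A.T @ A + alpha * np.eye(m),
                          A.T @ y + alpha * xi0)

# same solution from pseudo-observations xi0 with weight alpha
A_aug = np.vstack([A, np.sqrt(alpha) * np.eye(m)])
y_aug = np.concatenate([y, np.sqrt(alpha) * xi0])
xi_pseudo, *_ = np.linalg.lstsq(A_aug, y_aug, rcond=None)
```

Setting ξ₀ = 0 reduces both formulations to plain ridge/Tykhonov regularization, which is exactly the dichotomy described above.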

17 Biased and unbiased estimations in different aspects

Numerical analysis: standard Tykhonov-Phillips regularization; generalized Tykhonov-Phillips regularization.
Statistical aspects: ridge regression, BLE and the light constraint solution (biased); the BLUUE estimator of the mixed model (unbiased).

This answers the relationship between these biased and unbiased solutions and estimators in the numerical and statistical aspects.

18 4. Conclusions and further studies

- Development of a rigorous approach to minimum MSE adjustment in a Gauss-Markov model, i.e. the α-weighted S-homBLE;
- Derivation of a new method of determining the optimal regularization parameter λ in uniform Tykhonov-Phillips regularization (α-weighted S-homBLE) by A-optimal design in the general case;
- It was therefore possible to translate the previous results for the α-weighted S-homBLE to the case of Tykhonov-Phillips regularization with remarkable success;
- The optimal ridge parameter k in ridge regression as developed by Hoerl and Kennard in the 1970s is just a special case of our general solution by A-optimal design;
- The approach accounts for both regularization and combination, which is just the so-called generalized Tykhonov regularization!

19 In order to develop and promote the generality of inversion methods, it is necessary to study this kind of problem from the following aspects:
1) Statistical or deterministic regularization;
2) Ridge estimation;
3) Best linear estimation;
4) Mixed model;
5) Biased or unbiased estimations;
6) The criterion in the derivation of the inversion solution: the mean square error (MSE) of the estimates, i.e. Gauss's second approach instead of Gauss's first approach;
7) Optimal solution.

20 Historical remark

Laplace (1810) distinguishes between errors of observations and errors of estimates, and points out that a theory of estimation should be based on a measure of deviation between the estimate and the true value. Gauss (1823) finally accepted Laplace's criticism and indicated that if he were to rewrite the 1809 proof (LS), he would use the expected mean square error as the optimality criterion. This means that estimation theory should be based on minimization of the error of estimation!

25 Thank you!