Multi-Processing Least Squares Collocation: Applications to Gravity Field Analysis. Kaas. E., B. Sørensen, C. C. Tscherning, M. Veicherts.

Introduction
LSC is used for gravity field modeling, including the determination of parameters and the estimation of errors. The quantity to be modeled is the so-called anomalous potential T.

Remove-Restore
An Earth Gravity Model (EGM) is removed and later restored. Changing the order of summation in the spherical-harmonic series makes it possible to compute the sum for each order on a separate processor.
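
The reordering idea can be sketched as follows. This is a minimal illustration, not the actual EGM2008 evaluation: the `term` function is a toy stand-in for the real coefficient-times-Legendre-function product, and the degree limit `NMAX` is chosen small for readability.

```python
# Sketch: reordering the double sum over degree n and order m so that
# each order m becomes an independent parallel task.
# NOTE: term() is a hypothetical placeholder, not the EGM2008 kernel.
import math
from concurrent.futures import ThreadPoolExecutor

NMAX = 60  # maximum degree (EGM2008 goes to degree 2190)

def term(n, m, colat):
    # Placeholder for C_nm * P_nm(cos colat) * (cos/sin)(m * lon) terms.
    return math.cos(n * colat) * math.sin((m + 1) * colat) / (n + 1)

def order_sum(m, colat):
    # Inner sum over degree n for one fixed order m: one parallel task.
    return sum(term(n, m, colat) for n in range(m, NMAX + 1))

def potential_parallel(colat):
    # Outer sum over orders m; each order is evaluated concurrently.
    with ThreadPoolExecutor() as pool:
        return sum(pool.map(lambda m: order_sum(m, colat),
                            range(NMAX + 1)))

def potential_serial(colat):
    # Plain double sum, for comparison.
    return sum(term(n, m, colat)
               for m in range(NMAX + 1)
               for n in range(m, NMAX + 1))
```

Because each order's inner sum touches disjoint terms, the per-order tasks need no synchronization, which is what makes the multiprocessing of the summation straightforward.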

Timing Results
[Table: summation times (s) for EGM2008 at 26 points at different latitudes, including the overhead of reading the coefficients, as a function of the maximum degree N and the number of processors.]

Covariance computation, C, C_P
Computation of the N*(N+1)/2 covariances. [Table: time in seconds as a function of the number of processors and of the block-size k*k.]
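
Filling the N*(N+1)/2 upper-triangular covariance entries in k-by-k blocks can be sketched as below. The covariance function here is a toy isotropic stand-in (an assumption for illustration), not the actual LSC covariance model; the point is only that each block is an independent task.

```python
# Sketch: computing the upper triangle of a covariance matrix in
# k-by-k blocks, one independent task per block.
# NOTE: cov() is a hypothetical toy covariance, not the LSC model.
import math
from concurrent.futures import ThreadPoolExecutor

def cov(xi, xj):
    return math.exp(-abs(xi - xj))  # toy isotropic covariance

def fill_block(C, x, i0, j0, k):
    n = len(x)
    for i in range(i0, min(i0 + k, n)):
        # Restrict to j >= i so only the upper triangle is filled.
        for j in range(max(j0, i), min(j0 + k, n)):
            C[i][j] = cov(x[i], x[j])

def covariance_matrix(x, k=4):
    n = len(x)
    C = [[0.0] * n for _ in range(n)]
    # Enumerate the blocks of the upper triangle.
    blocks = [(i0, j0)
              for i0 in range(0, n, k)
              for j0 in range(i0, n, k)]
    # Blocks write to disjoint entries, so no locking is needed.
    with ThreadPoolExecutor() as pool:
        list(pool.map(lambda blk: fill_block(C, x, blk[0], blk[1], k),
                      blocks))
    return C
```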

Solution of Equations
The upper triangular part is divided into blocks C_ij, collected in "chunks".

Cholesky reduction
Row-wise: the inner sum runs over a block k; the outer sum runs over all m/b blocks in a column of blocks of size b.
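
A minimal pure-Python sketch of this row-wise reduction with a blocked accumulation is shown below. This is not the authors' Fortran/OMP code; it only illustrates splitting the accumulation into an inner sum over one block and an outer sum over the blocks in a column, for a column-block size b.

```python
# Sketch: row-wise Cholesky factorization A = U^T U, with the
# accumulated sum split into blocks of size b (illustrative only).
import math

def cholesky_upper(A, b=2):
    m = len(A)
    U = [[0.0] * m for _ in range(m)]
    for i in range(m):
        for j in range(i, m):
            # Accumulate sum_{k < i} U[k][i] * U[k][j]:
            s = 0.0
            for k0 in range(0, i, b):            # outer sum over blocks
                s += sum(U[k][i] * U[k][j]       # inner sum over one block
                         for k in range(k0, min(k0 + b, i)))
            if j == i:
                U[i][i] = math.sqrt(A[i][i] - s)
            else:
                U[i][j] = (A[i][j] - s) / U[i][i]
    return U
```

The block sums for a fixed (i, j) are independent of each other, which is what allows them to be distributed over processors in the OMP version.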

Timing of Cholesky reduction: OMP
[Table: time as a function of the maximum degree N, the number of processors, and the block-size k.]

The time depends on the block-size and on the number of working processors.

MPI timing ???

Conclusion
Standard software for Cholesky reduction cannot be used in the general setting, where parameters are also unknowns: the summation in the reduction must be changed from positive to negative accumulation. The use of OMP and MPI makes LSC feasible even for very large numbers of observations; both the covariance computation and the Cholesky reduction become much faster.