Mathematical/Numerical optimization. What are the effects of including correlated observation errors on the minimization? How does it affect the Hessian?

Mathematical/Numerical optimization

What are the effects of including correlated observation errors on the minimization? How do they affect the Hessian conditioning?
– They affect the eigenspectrum: correlated errors effectively make the observations very accurate in some directions (see the sketch after this list).
– It would be interesting to identify the eigenvectors corresponding to these very accurate observations.
– How do the eigenvalues and eigenvectors of the Hessian change when we account for correlated observations?
– Sensitivity of the scales in B with respect to those in R.
– Special case where the correlation does not decrease in space or time (e.g. the diurnal cycle): how does it affect the above? How does it affect the statistics in ensemble methods?
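
As a rough illustration of the conditioning question, the sketch below builds a toy 3D-Var Hessian B^{-1} + H^T R^{-1} H on a 1-D grid and compares its eigenvalue range with a diagonal R and with a correlated R. The exponential correlation model, the grid sizes and length scales, and all names are illustrative assumptions, not taken from any operational system.

```python
import numpy as np

def exp_cov(n, sigma, length_scale):
    """Covariance with exponential correlation on a 1-D unit-spaced grid."""
    x = np.arange(n)
    return sigma**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / length_scale)

n_state, n_obs = 100, 40
B = exp_cov(n_state, sigma=1.0, length_scale=5.0)    # background error covariance
H = np.zeros((n_obs, n_state))                       # observe every other grid point
H[np.arange(n_obs), np.arange(0, 2 * n_obs, 2)] = 1.0

sigma_o = 0.5
R_diag = sigma_o**2 * np.eye(n_obs)                          # uncorrelated obs errors
R_corr = exp_cov(n_obs, sigma=sigma_o, length_scale=3.0)     # correlated obs errors

def var_hessian(B, R, H):
    """Hessian of the 3D-Var cost function: B^{-1} + H^T R^{-1} H."""
    return np.linalg.inv(B) + H.T @ np.linalg.solve(R, H)

for name, R in (("diagonal R", R_diag), ("correlated R", R_corr)):
    eig = np.linalg.eigvalsh(var_hessian(B, R, H))
    print(f"{name}: eigenvalue range [{eig.min():.2e}, {eig.max():.2e}], "
          f"condition number {eig.max() / eig.min():.2e}")
```

In this toy setup the correlated R widens the Hessian spectrum and worsens the condition number; how far that generalizes depends on the correlation structure in R, which is exactly the question raised above.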

How should we regularize R to improve the numerical behaviour of the problem? Isn't it dangerous to fiddle with the statistics just to improve the numerical aspects?
– It is a matter of balance between accuracy and computing time.
– Should we be that confident in the diagnosed R anyway?
– It is probably not too harmful to bump up the standard deviations if they are quite small.
– There is already a literature on covariance regularization (e.g. in finance); maybe we should look into it.
– Should we use the raw estimates or try to fit a correlation function?
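
As one hedged sketch of what "regularizing R" could mean in practice, the functions below implement two simple options: linear shrinkage of the diagnosed correlations toward the identity, and inflation of the diagnosed standard deviations (the "bump up the std dev" option). The shrinkage coefficient, inflation factor and the toy diagnosed matrix R_hat are illustrative assumptions, and this is only one choice from the covariance-regularization literature mentioned above.

```python
import numpy as np

def shrink_correlations(R_hat, alpha):
    """Linear shrinkage of the diagnosed correlations toward the identity:
    alpha = 0 returns R_hat unchanged, alpha = 1 returns a diagonal R."""
    std = np.sqrt(np.diag(R_hat))
    corr = R_hat / np.outer(std, std)
    corr_reg = (1.0 - alpha) * corr + alpha * np.eye(len(std))
    return np.outer(std, std) * corr_reg

def inflate_std(R_hat, factor):
    """Bump up the diagnosed standard deviations by a common factor. This
    leaves the correlations (and cond(R)) unchanged, but it scales down
    R^{-1} and so damps the observation term in the Hessian."""
    return factor**2 * R_hat

# Toy diagnosed R_hat with slowly decaying (Gaussian) correlations: nearly
# singular, and modest shrinkage already improves its conditioning a lot.
n = 30
x = np.arange(n)
R_hat = 0.25 * np.exp(-0.5 * ((x[:, None] - x[None, :]) / 6.0) ** 2)
for alpha in (0.0, 0.1, 0.3):
    print(f"alpha = {alpha}: cond = {np.linalg.cond(shrink_correlations(R_hat, alpha)):.2e}")
```

Shrinkage keeps the matrix symmetric positive definite whenever the diagnosed R_hat is at least positive semi-definite, and even a small coefficient can cut the condition number by many orders of magnitude when the diagnosed correlations decay slowly.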

What preconditioning techniques should we use? Do we need to completely rethink the whole preconditioning?
– The second-level preconditioning should still apply.
– The answers are probably very different depending on the correlation structures in R.
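
The sketch below illustrates both levels on the same kind of toy problem: first-level preconditioning by the control-variable transform x = B^{1/2} v, and a spectral limited-memory preconditioner built from the leading eigenpairs of the preconditioned Hessian as one possible form of second-level preconditioning. In practice those pairs would be Ritz approximations from previous Lanczos/CG iterations; here they come from a direct eigendecomposition, and all sizes and names are illustrative assumptions.

```python
import numpy as np

def exp_cov(n, sigma, length_scale):
    """Covariance with exponential correlation on a 1-D unit-spaced grid."""
    x = np.arange(n)
    return sigma**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / length_scale)

n_state, n_obs = 100, 40
B = exp_cov(n_state, 1.0, 5.0)                       # background error covariance
R = exp_cov(n_obs, 0.5, 3.0)                         # correlated observation errors
H = np.zeros((n_obs, n_state))                       # observe every other grid point
H[np.arange(n_obs), np.arange(0, 2 * n_obs, 2)] = 1.0

# First-level preconditioning: change of variable x = B^{1/2} v, so the
# Hessian in v-space is A = I + B^{T/2} H^T R^{-1} H B^{1/2}.
Bhalf = np.linalg.cholesky(B)
A = np.eye(n_state) + Bhalf.T @ H.T @ np.linalg.solve(R, H @ Bhalf)

# Second-level (spectral, limited-memory) preconditioner from the k leading
# eigenpairs of A: P = I + sum_i (1/lambda_i - 1) u_i u_i^T, which maps the
# k largest eigenvalues of A to one and leaves the others untouched.
k = 10
lam, U = np.linalg.eigh(A)                           # ascending eigenvalues
lam_k, U_k = lam[-k:], U[:, -k:]                     # k largest pairs
P = np.eye(n_state) + U_k @ np.diag(1.0 / lam_k - 1.0) @ U_k.T

print("condition number of A:   ", np.linalg.cond(A))
print("condition number of P A: ", np.linalg.cond(P @ A))
```

The deflation formula itself does not depend on whether R is diagonal or correlated, which is consistent with the comment that second-level preconditioning should still apply; what may change with a correlated R is the shape of the spectrum and hence how many leading pairs are worth capturing.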