Using General Inverse Theory to Calibrate the Island Recharge Problem in EXCEL

What I hope to do with this talk…
– Provide enough general theory so folks will be better prepared to tackle the manuals for automated calibration programs such as PEST
– Demonstrate a simple tool to help illustrate some of these ideas

Why is it called inverse modeling? The forward model – solve for heads given R and K. Linearized, the problem takes the form $Ax = b$, where:
– $A$ is called the sensitivity matrix ($m \times n$)
– $x$ is the parameter (or upgrade) vector ($n \times 1$)
– $b$ is the observation (or residual) vector ($m \times 1$)
– $m$ is the number of observations
– $n$ is the number of parameters

Why is it called inverse modeling? The inverse model – solve for K and R given a set of measured heads: $x = A^{-1}b$.

So what is the sensitivity matrix?
– Sensitivities are the derivatives of the simulated heads (the solution of the groundwater flow equation) with respect to the parameters
– PEST and UCODE approximate these derivatives using finite differences
  – No different than how we have been approximating the groundwater flow equation itself
  – Called perturbation sensitivities

So what is the sensitivity matrix?
– Both PEST and UCODE approximate the derivatives using forward and central differences (a sketch of both schemes follows below)
– PEST provides a few more options, including a best fit or the slope of a polynomial fitted to the three points $p(1-\mathrm{derinc})$, $p$, and $p(1+\mathrm{derinc})$
[Figure: calculated head versus parameter value at the three perturbed points]
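A minimal Python sketch of perturbation sensitivities, assuming a callable forward model; the names `model` and `perturbation_sensitivities` are hypothetical, and `derinc` here simply echoes PEST's relative-increment variable:

```python
import numpy as np

def perturbation_sensitivities(model, p, derinc=0.01, central=True):
    # model(p) returns the m simulated observations for the n parameters in p
    # (for the island problem, p might be [R, K]); assumes nonzero parameters
    p = np.asarray(p, dtype=float)
    h0 = np.asarray(model(p))
    A = np.zeros((h0.size, p.size))
    for j in range(p.size):
        dp = derinc * p[j]                 # relative perturbation of parameter j
        up = p.copy(); up[j] += dp
        if central:
            dn = p.copy(); dn[j] -= dp
            A[:, j] = (np.asarray(model(up)) - np.asarray(model(dn))) / (2 * dp)
        else:
            A[:, j] = (np.asarray(model(up)) - h0) / dp   # forward difference
    return A
```

Each column costs one extra model run (forward difference) or two (central difference), which is one reason automated calibration gets expensive as parameters are added.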

So what is the sensitivity matrix? One row for each observation, one column for each parameter:
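Written out, this is the Jacobian layout the slide describes, with the $m$ observations down the rows and the $n$ parameters across the columns (for the island problem, $n = 2$, with one column each for R and K):

$$
A = \begin{bmatrix}
\partial h_1 / \partial p_1 & \cdots & \partial h_1 / \partial p_n \\
\vdots & \ddots & \vdots \\
\partial h_m / \partial p_1 & \cdots & \partial h_m / \partial p_n
\end{bmatrix}
$$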

So how do we invert A?

UCODE:
– Modified Gauss-Newton (small problems)
– Parameter parsimony (arbitrary zonation???)
PEST:
– Modified Gauss-Newton
– SVD (medium)
– LSQR! (large)
Both programs can make use of damping (the Marquardt parameter).

So how do we invert A? Modified Gauss-Newton:
– Can only invert square matrices, so both sides are multiplied by $A^T$: $A^T A\,x = A^T b$
– Called the normal equation; $A^T A$ is a square $n \times n$ matrix
– As the number of parameters increases, the likelihood of $A^T A$ being singular (not invertible) increases
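A minimal NumPy sketch of one such step (the function name is hypothetical; `A` could come from the finite-difference sketch above):

```python
import numpy as np

def gauss_newton_step(A, b):
    AtA = A.T @ A                        # the square n x n matrix A^T A
    # np.linalg.solve raises LinAlgError if AtA is singular, which grows
    # more likely as the number of parameters increases
    return np.linalg.solve(AtA, A.T @ b)
```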

So how do we invert A? Singular-value decomposition (SVD):
– Decomposes a matrix according to: $A = U S V^T$
– Finds the pseudo-inverse of any matrix (square or not) according to: $A^+ = V S^{-1} U^T$, so $x = A^+ b$
– Can solve for several thousand parameters given thousands of observations before it simply takes too long to be practical
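A sketch of the SVD pseudo-inverse in NumPy (`rtol` and the function name are illustrative choices; `np.linalg.pinv` packages the same idea):

```python
import numpy as np

def pseudo_inverse(A, rtol=1e-10):
    U, s, Vt = np.linalg.svd(A, full_matrices=False)    # A = U S V^T
    s_inv = np.where(s > rtol * s.max(), 1.0 / s, 0.0)  # drop near-zero singular values
    return (Vt.T * s_inv) @ U.T                         # A+ = V S^{-1} U^T

# x = pseudo_inverse(A) @ b works whether or not A is square
```

Truncating the near-zero singular values is what keeps the solve stable when $A^T A$ would have been singular.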

Observation and sensitivity weights
– Can be used to scale observations or sensitivities so they are all within the same order of magnitude
– The normal equation becomes: $A^T Q A\,x = A^T Q b$, where $Q$ is the (typically diagonal) weight matrix
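Assuming a diagonal $Q$ built from one weight per observation, the weighted step is a one-line change to the sketch above (names again hypothetical):

```python
import numpy as np

def weighted_step(A, b, w):
    Q = np.diag(w)                       # one weight per observation
    return np.linalg.solve(A.T @ Q @ A, A.T @ Q @ b)
```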

Marquardt-Lambda
– I am not going to get into this as I don't fully understand it yet myself; I just know it's needed for some problems to work (even 2x2 problems)
– The normal equation becomes: $(A^T Q A + \lambda I)\,x = A^T Q b$
– This is the form of the equation the EXCEL file solves for a simple 1x2 or 2x2 problem
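Adding the damping term gives the form above; this sketch mirrors what the 1x2 / 2x2 EXCEL solve does, though the names are hypothetical:

```python
import numpy as np

def damped_step(A, b, w, lam):
    Q = np.diag(w)
    n = A.shape[1]
    # (A^T Q A + lambda * I) x = A^T Q b; lam = 0 recovers the undamped step
    return np.linalg.solve(A.T @ Q @ A + lam * np.eye(n), A.T @ Q @ b)
```

For a 2x2 problem, $A^T Q A + \lambda I$ is just a 2x2 matrix, which is why the whole solve fits comfortably in a spreadsheet.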

Weighted least-squares: $x = (A^T Q A)^{-1} A^T Q b$, or, with the Marquardt parameter, $x = (A^T Q A + \lambda I)^{-1} A^T Q b$. And finally the objective function: $\Phi = (b - Ax)^T Q\,(b - Ax)$.
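A sketch of evaluating that objective, assuming per-observation weights `w` (the standard weighted sum of squared residuals):

```python
import numpy as np

def objective(obs, sim, w):
    e = np.asarray(obs, float) - np.asarray(sim, float)   # residuals
    return float(e @ (np.asarray(w, float) * e))          # sum_i w_i * e_i^2
```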

EXCEL Demo
1. Estimate R and K given 1 head observation
2. Estimate R and K given 1 head and 1 flux observation
   i. Without observation weights (Q = 1)
   ii. With observation weights
3. Estimate R and K given 1 head observation, with scaling of the sensitivity matrix