Retrieval Theory
Vijay Natraj, Mar 23, 2008


The Inverse Modeling Problem

Optimize the values of an ensemble of variables (state vector x) using observations:
- a priori estimate x_a with error ε_a
- measurement vector y
- forward model y = F(x) + ε

The result goes by several names: “MAP solution”, “optimal estimate”, “retrieval”. It follows from Bayes’ theorem.

Applications for Atmospheric Concentration

- Retrieve atmospheric concentrations (x) from observed atmospheric radiances (y), using a radiative transfer (RT) model as the forward model.
- Invert sources (x) from observed atmospheric concentrations (y), using a chemical transport model (CTM) as the forward model.
- Construct a continuous field of concentrations (x) by assimilation of sparse observations (y), using a forecast model (initial-value CTM) as the forward model.

Optimal Estimation

The forward problem is typically not linear, so there is no analytical solution expressing the state vector in terms of the measurement vector. Approximate the solution by linearizing the forward model about a reference state x_0:

y = F(x_0) + K (x - x_0) + ε,  where K_ij = ∂F_i/∂x_j

K is the weighting function (Jacobian) matrix; it describes the sensitivity of the measurement to the state.
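As a concrete sketch of this linearization (using a made-up two-parameter forward model, not one from the slides), the Jacobian K can be estimated by finite differences about the reference state x_0:

```python
import numpy as np

# Hypothetical nonlinear forward model F: state (n = 2) -> measurements (m = 3).
def forward_model(x):
    return np.array([x[0] + x[1] ** 2,
                     np.exp(0.1 * x[0]),
                     x[0] * x[1]])

def jacobian(F, x0, eps=1e-6):
    """Finite-difference Jacobian K_ij = dF_i/dx_j about reference state x0."""
    y0 = F(x0)
    K = np.zeros((y0.size, x0.size))
    for j in range(x0.size):
        dx = np.zeros_like(x0)
        dx[j] = eps
        K[:, j] = (F(x0 + dx) - y0) / eps
    return K

x0 = np.array([1.0, 2.0])
K = jacobian(forward_model, x0)
# Linearized model: y ≈ F(x0) + K (x - x0)
```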

Optimal Estimation

Causes of non-unique solutions:
- m > n (more measurements than unknowns)
- amplification of measurement and/or model noise
- poor sensitivity of measured radiances to one or more state vector elements (ill-posed problem)

Additional constraints (e.g., the a priori) are needed to select an acceptable solution.
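A tiny numerical illustration of the noise-amplification point (all numbers assumed): when two state vector elements have nearly identical signatures in the measurement, unconstrained least squares amplifies even very small noise:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two state elements with nearly identical measurement signatures (toy example).
K = np.array([[1.0, 1.0],
              [1.0, 1.0001],
              [1.0, 0.9999]])
x_true = np.array([1.0, 1.0])
y = K @ x_true

x_clean, *_ = np.linalg.lstsq(K, y, rcond=None)
x_noisy, *_ = np.linalg.lstsq(K, y + 1e-4 * rng.standard_normal(3), rcond=None)

print(np.linalg.cond(K))         # very large condition number
print(np.abs(x_noisy - x_true))  # state errors typically far larger than the 1e-4 noise
```

An a priori constraint (as in the MAP solution on the next slides) regularizes exactly this kind of problem.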

Bayes’ Theorem

P(x, y) = P(x|y) P(y) = P(y|x) P(x)

Solving for P(x|y) gives Bayes’ theorem:

P(x|y) = P(y|x) P(x) / P(y)

where P(x) is the a priori pdf, P(y|x) the observation pdf, P(y) a normalizing factor (unimportant), and P(x|y) the a posteriori pdf. The maximum a posteriori (MAP) solution for x given y is defined by ∇_x P(x|y) = 0; the most likely value of x under this posterior pdf is our inverse solution.
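A minimal numerical sketch of this for a scalar state (all numbers assumed): multiply the prior and observation pdfs on a grid, normalize, and take the argmax as the MAP solution:

```python
import numpy as np

# Assumed scalar example: prior x_a = 2, sigma_a = 1; measurement y = 3
# with noise sigma_e = 0.5 and trivial forward model F(x) = x.
x = np.linspace(-2.0, 6.0, 8001)
prior = np.exp(-0.5 * ((x - 2.0) / 1.0) ** 2)        # P(x)
likelihood = np.exp(-0.5 * ((3.0 - x) / 0.5) ** 2)   # P(y|x)
posterior = prior * likelihood                       # P(x|y), up to the factor P(y)
posterior /= posterior.sum() * (x[1] - x[0])         # normalize numerically

x_map = x[np.argmax(posterior)]   # MAP solution
# For Gaussian prior and likelihood this is the precision-weighted mean:
# (x_a/sig_a**2 + y/sig_e**2) / (1/sig_a**2 + 1/sig_e**2) = 2.8
```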

Gaussian PDFs

Scalar x:

P(x) = (2π σ_a²)^(-1/2) exp[-(x - x_a)² / (2σ_a²)]

Vector x (dimension n):

P(x) = (2π)^(-n/2) |S_a|^(-1/2) exp[-(1/2) (x - x_a)^T S_a^{-1} (x - x_a)]

where S_a is the a priori error covariance matrix describing the error statistics of (x - x_a). In log space:

-2 ln P(x) = (x - x_a)^T S_a^{-1} (x - x_a) + constant

Similarly, for the observation pdf with error covariance S_ε:

-2 ln P(y|x) = (y - F(x))^T S_ε^{-1} (y - F(x)) + constant
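In code, the vector-Gaussian exponent (the only part that matters for the cost function) is a quadratic form; the covariance S_a below is an assumed toy matrix:

```python
import numpy as np

# Quadratic form -2 ln P(x) = (x - x_a)^T S_a^{-1} (x - x_a) + const (toy numbers).
x_a = np.array([1.0, 2.0])
S_a = np.array([[0.5, 0.1],
                [0.1, 0.4]])   # a priori error covariance

def neg2_log_prior(x):
    d = x - x_a
    return d @ np.linalg.solve(S_a, d)   # avoids forming S_a^{-1} explicitly

print(neg2_log_prior(x_a))   # 0.0 at the prior mean
print(neg2_log_prior(np.array([2.0, 2.0])))
```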

Maximum A Posteriori (MAP) Solution

From Bayes’ theorem, maximizing P(x|y) is equivalent to minimizing the cost function J:

J(x) = (x - x_a)^T S_a^{-1} (x - x_a) + (y - F(x))^T S_ε^{-1} (y - F(x))

The first term is the bottom-up (a priori) constraint; the second is the top-down (measurement) constraint. Solve ∇_x J(x) = 0. For the linearized forward model (y = K x + ε), the analytical solution is

x̂ = x_a + G (y - K x_a),  with gain matrix G = S_a K^T (K S_a K^T + S_ε)^{-1}
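A self-contained numerical sketch of the analytical MAP solution (toy K, covariances, and truth; all numbers assumed):

```python
import numpy as np

# Toy linear retrieval: n = 2 unknowns, m = 3 measurements.
K = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
S_a = np.eye(2)                     # a priori error covariance
S_e = 0.01 * np.eye(3)              # measurement error covariance
x_a = np.array([0.0, 0.0])
x_true = np.array([1.0, -0.5])
y = K @ x_true                      # noise-free synthetic measurement

# Gain matrix G = S_a K^T (K S_a K^T + S_e)^{-1}
G = S_a @ K.T @ np.linalg.inv(K @ S_a @ K.T + S_e)
x_hat = x_a + G @ (y - K @ x_a)     # MAP / optimal estimate
```

With small measurement error, x_hat lands close to x_true; inflating S_e pulls it back toward x_a, which is the bottom-up/top-down trade-off in J.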

Averaging Kernel

The averaging kernel A = G K = ∂x̂/∂x describes the sensitivity of the retrieval to the true state, and hence the smoothing of the solution:

x̂ = A x + (I_n - A) x_a + G ε

where (I_n - A)(x - x_a) is the smoothing error and G ε is the retrieval error. G = ∂x̂/∂y is the sensitivity of the retrieval to the measurement. A MAP retrieval gives A as part of the retrieval.
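A numerical sketch (toy K and covariances, all numbers assumed): with precise measurements the averaging kernel A = G K approaches the identity, meaning the measurement rather than the prior determines the retrieval:

```python
import numpy as np

K = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
S_a = np.eye(2)          # a priori error covariance (toy)
S_e = 0.01 * np.eye(3)   # small measurement error

G = S_a @ K.T @ np.linalg.inv(K @ S_a @ K.T + S_e)
A = G @ K                # averaging kernel: sensitivity of retrieval to true state

# Retrieval decomposition: x_hat = A x + (I - A) x_a + G eps
print(A)                 # close to the 2x2 identity for this accurate measurement
```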

Degrees of Freedom

The degrees of freedom for signal, DFS = tr(A), give the number of unknowns that can be independently retrieved from the measurement:
- DFS = n: the measurement completely defines the state.
- DFS = 0: no information in the measurement.
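A quick numerical check of the two limits (toy problem, all numbers assumed): sweeping the measurement noise variance moves DFS = tr(A) between n and 0:

```python
import numpy as np

K = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
S_a = np.eye(2)   # a priori error covariance (toy)

def dfs(noise_var):
    """DFS = trace(A) = trace(G K) for diagonal measurement noise."""
    S_e = noise_var * np.eye(3)
    G = S_a @ K.T @ np.linalg.inv(K @ S_a @ K.T + S_e)
    return np.trace(G @ K)

print(dfs(1e-4))  # close to n = 2: measurement nearly defines the state
print(dfs(1e4))   # close to 0: almost no information in the measurement
```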