The good sides of Bayes Jeannot Trampert Utrecht University.


1 The good sides of Bayes Jeannot Trampert Utrecht University

2 Bayes gives us an answer! Example of inner core anisotropy

3 Normal mode splitting functions are linearly related to seismic anisotropy in the inner core. The kernels K_α, K_β and K_γ are of different sizes, hence regularization affects the corresponding models differently.

4 Regularized inversion

5 Full model space search (NA, Sambridge 1999)

6 Resolves 20 year disagreement between body wave and normal mode data (Beghein and Trampert, 2003)

7 To Bayes or not to Bayes? We need proper uncertainty analysis to interpret seismic tomography: probability density functions for all model parameters.

8 Do models agree? No knowledge of uncertainty implies subjective comparisons.

9

10 Partial knowledge of uncertainty allows hypothesis testing

11 Deschamps and Tackley, 2009

12 Mean density model separated into its chemical and temperature contributions (full pdf obtained with NA; Trampert et al., 2004)

13 Deschamps and Tackley, 2009

14 Full knowledge of uncertainty allows us to evaluate the probability of overlap or consistency between models.

15 What is uncertainty? Consider a linear problem d = Gm + e, where d are the data, m the model, G the matrix of partial derivatives and e the data uncertainty. The estimated solution is m̂ = m₀ + L(d − Gm₀), where m₀ is a starting model and L the linear inverse operator.

16 What is uncertainty? The estimate can be rewritten as m̂ = Rm + (I − R)m₀ + Le, where R = LG is the resolution operator and (I − R) the null-space operator. This results in a formal statistical uncertainty expressed with covariance operators.
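The linear estimate and its resolution operator can be sketched numerically. This is an illustrative toy problem (the matrix sizes, the damping value and the damped least-squares choice of L are assumptions, not from the slides):

```python
import numpy as np

# Hypothetical toy setup: a small underdetermined linear problem d = G m + e.
rng = np.random.default_rng(0)
G = rng.normal(size=(5, 10))        # 5 data, 10 model parameters
m_true = rng.normal(size=10)
d = G @ m_true                      # noise-free data for simplicity

# Damped least-squares inverse operator L = (G^T G + eps I)^-1 G^T
eps = 0.1
L = np.linalg.solve(G.T @ G + eps * np.eye(10), G.T)

# Resolution operator R = L G; (I - R) projects onto the null space
R = L @ G
m0 = np.zeros(10)                   # starting model
m_hat = m0 + L @ (d - G @ m0)

# Verify the identity m_hat = R m_true + (I - R) m0 (here e = 0)
recon = R @ m_true + (np.eye(10) - R) @ m0
print(np.allclose(m_hat, recon))    # True
```

Because the problem is underdetermined, R differs from the identity and part of m_true is invisible to the estimate, which is exactly the null-space contribution the slide refers to.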

17 What is uncertainty?

18 How can we estimate uncertainty? ① Ignore it: should not be an option, but is the common approach. ② Try and estimate the model uncertainty: regularized extremal bound analysis (Meju, 2009); null-space shuttle (Deal and Nolet, 1996). ③ Probabilistic tomography: neighbourhood algorithm (Sambridge, 1999); Metropolis (Mosegaard and Tarantola, 1995); neural networks (Meier et al., 2007).

19 The most general solution of an inverse problem (Bayes) Tarantola, 2005

20 A full model space search should estimate the posterior σ(m): Exhaustive search. Brute-force Monte Carlo (Shapiro and Ritzwoller, 2002). Simulated annealing (global optimisation with a convergence proof). Genetic algorithms (global optimisation with no convergence proof). Neighbourhood algorithm (Sambridge, 1999). Sample the prior ρ(m) and apply the Metropolis rule on the likelihood L(m); this results in importance sampling of the posterior σ(m) (Mosegaard and Tarantola, 1995). Neural networks (Meier et al., 2007).
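The Mosegaard–Tarantola rule above (propose moves that sample the prior, accept with the likelihood ratio) can be sketched in a minimal 1-D example; the flat prior, the Gaussian likelihood around d_obs = 1.0 and the step size are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def likelihood(m):
    # Hypothetical Gaussian misfit around an observed datum d_obs = 1.0
    d_obs, sigma = 1.0, 0.3
    return np.exp(-0.5 * ((m - d_obs) / sigma) ** 2)

# A symmetric random walk samples a flat prior; applying the Metropolis
# rule on L(m) turns this into importance sampling of the posterior.
m, samples = 0.0, []
for _ in range(20000):
    m_new = m + rng.normal(scale=0.5)              # prior-sampling proposal
    if rng.random() < likelihood(m_new) / likelihood(m):
        m = m_new                                  # accept with prob min(1, ratio)
    samples.append(m)

posterior_mean = float(np.mean(samples[2000:]))    # burn-in discarded
```

With a flat prior the posterior is just the Gaussian likelihood, so the sample mean converges towards d_obs, and the retained samples give the 1-D marginal directly.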

21 The neighbourhood algorithm (NA): Sambridge 1999 Stage 1: Guided sampling of the model space. Samples concentrate in areas (neighbourhoods) of better fit.

22 The neighbourhood algorithm (NA), Stage 2: importance sampling. Resampling so that the sampling density reflects the posterior (2D and 1D marginals shown).

23 Advantages of NA: Interpolation in model space with Voronoi cells. Relative ranking in both stages (less dependent on data uncertainty). Marginals calculated by Monte Carlo integration → convergence check. Marginals are a compact representation of the seismic data and prior, rather than a single model.
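The Monte Carlo integration of marginals mentioned above amounts to histogramming the (re)sampled models; a minimal sketch, assuming synthetic 2-D posterior samples in place of an NA ensemble:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical posterior samples for two model parameters (correlated Gaussian)
cov = np.array([[1.0, 0.6], [0.6, 1.0]])
samples = rng.multivariate_normal([0.0, 0.0], cov, size=50000)

# 1-D marginal of parameter 0 by Monte Carlo integration (normalized histogram)
density, edges = np.histogram(samples[:, 0], bins=50, density=True)

# Simple convergence check: the marginal must integrate to 1
integral = float(np.sum(density * np.diff(edges)))
print(integral)
```

Repeating this with an increasing number of samples and watching the marginal stabilize is one practical form of the convergence check.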

24 Example: A global mantle model Using body wave arrival times, surface wave dispersion measurements and normal mode splitting functions Same mathematical formulation

25 Mosca et al., 2011

26 Mosca et al., 2011

27 What does it all mean? Mineral physics will tell us! Thermo-chemical parameterization: Temperature Fraction of Pv (pPv) Fraction of total Fe

28 Example: Importance sampling using the Metropolis rule (Mosegaard and Tarantola, 1995)

29

30 Disadvantages of NA and Metropolis: they work only on small linear and non-linear problems (fewer than ~50 parameters).

31 The neural network (NN) approach (Bishop, 1995; MacKay, 2003): A neural network can be seen as a non-linear filter between any input and output. The NN is an approximation to a non-linear function g, where d = g(m). It works on the forward or the inverse function. A training set (which contains the physics) is used to calculate the coefficients of the NN by non-linear optimisation.
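The idea of training a network on a synthetic set that encodes the forward physics can be sketched with a single hidden layer; the toy forward function g(m) = sin(m), the network size and the learning rate are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical training set encoding the "physics": d = g(m) = sin(m)
m_train = rng.uniform(-3, 3, size=(500, 1))
d_train = np.sin(m_train)

# One hidden layer: d ≈ W2 tanh(W1 m + b1) + b2
W1 = rng.normal(scale=1.0, size=(1, 20)); b1 = np.zeros(20)
W2 = rng.normal(scale=0.1, size=(20, 1)); b2 = np.zeros(1)

def mse():
    pred = np.tanh(m_train @ W1 + b1) @ W2 + b2
    return float(np.mean((pred - d_train) ** 2))

mse_init, lr = mse(), 0.05
for _ in range(5000):                      # plain full-batch gradient descent
    h = np.tanh(m_train @ W1 + b1)
    err = (h @ W2 + b2) - d_train
    dh = (err @ W2.T) * (1 - h ** 2)       # back-propagate through tanh
    W2 -= lr * h.T @ err / len(m_train); b2 -= lr * err.mean(0)
    W1 -= lr * m_train.T @ dh / len(m_train); b1 -= lr * dh.mean(0)
mse_final = mse()                          # should fall well below mse_init
```

Once trained, evaluating the network is cheap for any m in the training range, which is why dimensionality of the data is not the bottleneck: the cost went into the non-linear optimisation of the coefficients.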

32 Properties of NN: 1. Dimensionality is not a problem, because the NN approximates a function and not a data prediction! 2. Flexible: invert for any combination of parameters. 3. 1D or 2D marginals only.

33 Mantle transition zone discontinuities

34 Probabilistic tomography using Bayes' theorem is possible, but challenges remain: control the prior and data uncertainty; obtain full pdfs in high dimensions; interpret and visualize the information contained in the marginals.

