Matt Gerstenberger, GNS Science

1 Matt Gerstenberger, GNS Science
Earthquake forecasting & hazard: mathematical models or making stuff up Matt Gerstenberger, GNS Science

2 “All models are wrong but some are useful”
- George Box, English statistician

4 I will discuss how we make them useful and understand if they are.
Short summary: understanding, quantifying and modelling uncertainty matters.

5 NZ Earthquakes: A very busy last decade!
2003 Mw 7.2 Fiordland; 2004 Mw 7.0 Fiordland; 2007 Mw 6.7 Gisborne; 2009 Mw 7.8 Dusky Sound; 2010 Mw 7.2 Darfield (Canterbury); 2011 Mw 6.2 Christchurch; 2011 Mw 6.0, 5.9, 5.7 Christchurch; 2013 Mw 6.5, 6.6, 6.2 Wellington Region; 2014 Mw 6.1 Wellington Region; 2015 Mw 6.0 Arthur's Pass; 2016 Mw 5.7 Christchurch (Feb); 2016 Mw 7.1 East Cape (Sept); 2016 Mw 7.8 Kaikoura (Nov)

7 Overview: Some Definitions; Subjectivity vs Objectivity; Forecasting Models; Hazard Models (if there is time); Expert Elicitation; Model Testing

8 Definitions
Forecasting: probabilistic information about what earthquakes may occur in a space-time-magnitude window.
Prediction: precise and exact information, e.g. "A magnitude 6 earthquake will occur at this location in the next week."
We can forecast with reliability. All predictions have been shown to be no better than, or worse than, random.

9 Definitions
Forecasting: provides information about the numbers and magnitudes of earthquakes.
Hazard: forecasts of how much the ground might shake.
Risk: forecasts of the impact of an earthquake, for example building damage or dollars of loss.
[Figure: example forecast, "8% probability of M>7"]

10 Definitions
Magnitude: a model of the energy released at a fault rupture. There are multiple models available; each measures slightly different energy.
Intensity: observations of how much the ground shakes at different locations.
An earthquake has only a single magnitude, but the intensity changes depending on the location of the observation. If the wattage of a light bulb is like magnitude, then intensity is how bright the light appears at different locations.

11 2016 Kaikoura Earthquake: Magnitude 7.8
Intensity: Spatially variable

12 Subjective vs Objective Models
Ideally, all forecasts would be based on objective maths- and physics-based models. However, we currently lack sufficient models to be able to do this, and sometimes we also lack the information to fully parameterise the models objectively. I will discuss aspects of both objective and subjective forecast model building.

13 Forecasting Aftershocks
Post-Kaikoura Earthquake Forecast: statistical models based on some of the oldest and best-understood relations in earthquake science. Derived from earthquake "catalogues" (e.g., from GeoNet) of the magnitudes, locations, and times of earthquakes. Parameterised and tested in many regions around the world. Skillful and reliable models.

14 Magnitude-Frequency Distribution of Earthquakes
Gutenberg-Richter relationship (1944): log N = a - bM
N – number of events greater than magnitude M; a – relative seismicity rate; b – ratio of large events to small events.
Low b = relatively more big events; high b = relatively more small events.
Any random selection of earthquakes shows this property; it's useful for understanding how many earthquakes to expect.
[Figure: cumulative number of events vs magnitude, for two random groups of earthquakes]
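The Gutenberg-Richter relation above is easy to put to work. A minimal sketch in Python (the values a = 5.0 and b = 1.0 are illustrative only, not fitted to any catalogue; the b-value estimator is the standard Aki maximum-likelihood formula):

```python
import math

def gr_expected_count(a, b, m):
    """Expected number of events with magnitude >= m from the
    Gutenberg-Richter relation log10(N) = a - b*M."""
    return 10.0 ** (a - b * m)

def aki_b_value(mags, m_min):
    """Maximum-likelihood b-value estimate (Aki, 1965):
    b = log10(e) / (mean(M) - m_min), for a catalogue that is
    complete above m_min."""
    mean_m = sum(mags) / len(mags)
    return math.log10(math.e) / (mean_m - m_min)

# With b = 1 (a typical value), each unit decrease in magnitude
# corresponds to roughly ten times as many events.
print(gr_expected_count(5.0, 1.0, 4.0) / gr_expected_count(5.0, 1.0, 5.0))  # 10.0
```

This is why a low b means relatively more big events: the count ratio between magnitude levels is 10**b.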

15 Aftershock occurrence rate
Fusakichi Omori (Imperial University of Tokyo) documented 5,464 felt aftershocks of the Nobi, Japan earthquake: years of aftershocks!
[Figure: number of aftershocks vs time since main shock]

16 Aftershock Productivity and Decay
Modified Omori Law: R(t) = k / (t+c)^p, the decrease in the rate of aftershock activity with time.
k – productivity; t – time; c – time delay constant; p – decay exponent.
High p – relatively fast decrease in earthquake activity; low p – relatively slow decrease in earthquake activity.
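As code, the modified Omori law is a one-liner. A minimal sketch (the parameter values are illustrative only, not fitted to any real sequence):

```python
def omori_rate(t, k, c, p):
    """Aftershock rate at time t after the main shock,
    from the modified Omori law R(t) = k / (t + c)**p."""
    return k / (t + c) ** p

# Illustrative parameters (not fitted): higher p makes this
# ratio larger, i.e. a faster decay of activity.
k, c, p = 100.0, 0.05, 1.1
print(omori_rate(1.0, k, c, p) / omori_rate(100.0, k, c, p))
```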

17 [Figure: probability vs magnitude and time, contrasting a foreshock with a subsequent mainshock and aftershock sequence against an earthquake with no subsequent mainshock, relative to the background probability]

18 Epidemic Type Aftershock Model - ETAS
A stochastic point-process model that treats aftershocks as epidemics (each event can trigger its own "family" of offspring): superimposed "Omori" sequences.
Model components: background (non-clustering) rate; productivity; magnitude-dependent scaling; time, inter-event time and decay rate; magnitude range.
Parameters are not independent! Forecasts are created by random simulations, and uncertainties on parameters are included in the simulations.
Numerous freely available computer codes (R, Matlab, etc.).
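The ETAS conditional intensity can be sketched as a background rate plus one Omori-type term per past event, scaled by that event's magnitude. This is a simplified illustration with assumed parameter values and a toy two-event catalogue, not a fitted New Zealand model:

```python
def etas_rate(t, events, mu, K, alpha, c, p, m0):
    """ETAS conditional intensity at time t: constant background
    rate mu plus a superposition of Omori-type aftershock terms,
    one per past event, scaled by 10**(alpha * (m_i - m0)).
    `events` is a list of (time, magnitude) pairs."""
    rate = mu
    for t_i, m_i in events:
        if t_i < t:
            rate += K * 10 ** (alpha * (m_i - m0)) / (t - t_i + c) ** p
    return rate

# Illustrative (not fitted) parameters and a toy catalogue:
# a main shock at t=0 and one aftershock at t=2 (days).
params = dict(mu=0.1, K=0.5, alpha=1.0, c=0.05, p=1.1, m0=3.0)
catalogue = [(0.0, 6.0), (2.0, 4.5)]
print(etas_rate(1.0, catalogue, **params))   # dominated by the main shock
print(etas_rate(50.0, catalogue, **params))  # decayed towards background
```

A forecast then comes from simulating many random catalogues from this intensity, which is why ETAS forecasts carry much larger uncertainty than a Poisson model.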

19 Earthquake rates: power law decay during aftershock clustering
Rates of all magnitudes decay at the same rate. Omori-Utsu decay using average New Zealand aftershock parameters. Note that on a logarithmic scale, the rate is almost constant through time.
Days post mainshock: 0.01-0.1 | 0.1-1 | 1-10 | 10-100 | 100-1000 | 1000-10000 (2.7-27 years post mainshock)
M≥5.0: 5, 9, 8, 6
M≥4.0: 55, 97, 94, 81, 70, 59, 50
M≥3.0: 592, 1038, 1011, 874, 745, 634, 539
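The near-constant counts per logarithmic time bin can be reproduced by integrating the modified Omori law over each factor-of-ten window. A sketch with illustrative parameters (not the New Zealand averages used for the table):

```python
import math

def omori_count(t1, t2, k, c, p):
    """Expected aftershock count between times t1 and t2 (days),
    integrating the modified Omori law R(t) = k / (t + c)**p."""
    if abs(p - 1.0) < 1e-9:
        return k * math.log((t2 + c) / (t1 + c))
    return k * ((t1 + c) ** (1 - p) - (t2 + c) ** (1 - p)) / (p - 1)

# With p close to 1, the expected count in each factor-of-ten
# time window is nearly constant, which is why the table reads
# almost flat on a logarithmic time scale.
k, c, p = 50.0, 0.05, 1.0
for t1, t2 in [(0.1, 1), (1, 10), (10, 100), (100, 1000)]:
    print(t1, t2, round(omori_count(t1, t2, k, c, p), 1))
```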

20 ETAS forecasts for Kaikoura
ETAS model forecasts, based on simulations of generations of aftershocks, are not Poissonian – the uncertainty is much larger.

21 No single model captures the uncertainty in our knowledge of earthquake occurrence.

22 Optimal forecasts are provided by combining multiple models that capture different physical hypotheses and different data sets.

23 Hybrid models: methods of combining multiple models
Objective model hybrids: model contribution to the hybrid is based on statistical testing. Akaike Information Criterion: optimal fit to the data used to parameterise the model, with penalties for increasing the number of parameters. Likelihood-based parameterisation, tested against an independent dataset. Additive and multiplicative hybrids. Key problem: we have limited and biased datasets to test upon. Earthquake datasets are not random samples, because earthquake processes take thousands of years and we have a limited sample (we also have limited models).
Subjective model hybrids: model contribution is based on expert judgement. Additive and multiplicative hybrids (normally additive). Key problem: how much can we trust the experts?

24 We also include what we know about mapped faults

25 Multiplicative hybrids: Monotone increasing multiplier function
[Diagram: original models (an earthquake-based model and a GPS-based model); the total cell rate (summed over magnitudes) is transformed into a multiplier; the hybrid model is based on the optimally performing model]

26 Measuring deformation in the crust with GPS: Long-term strain rate data
Shear strain rate (SSR) Rotational strain rate (RSR) Dilatational strain rate (DSR)

27 Hybrid models of earthquake rates from long-term strain rates and earthquake catalogue model
Strain rate data to end of 2011; Earthquake data (M > 4.95)

28 Optimising Model Combination; Independent Testing of Model Combinations
The further to the right, the better a model is.
[Figure: two panels comparing the best model against the best previous model]

29 Expert Judgement Models: how do we know if we can trust the experts?
(recent science discussion)

30 Expert Judgement Models: how do we know if we can trust the experts?
We test them!

31 In eliciting expert judgement, experts are asked to answer extremely difficult questions for which the true answer may never be known. Groups of experts almost always outperform the best individual experts. Calibrated expert judgement almost always outperforms simple averaging. When using judgement from experts, it is important to understand how well they are able to estimate answers on the general subject matter. Uncertainty is important, as it will always be considerable. We therefore need to quantify and minimise expert over-confidence.

32 Our method for combining individual expert judgement:
A quantitative mathematical model, based on "strictly proper scoring rules". It has its roots in scientific hypothesis testing, and each expert is treated as a hypothesis (H): a random draw from the expert's quantiles will give the correct answer, i.e., the expert is well calibrated. The method is designed to test this hypothesis. Each expert receives a relative weight based on how well calibrated they are.

33 The experts are asked a question phrased like "Provide your estimated range (10th, 50th and 90th percentile) for Question X." The 50th percentile is the "best estimate" and the 10th-90th range is an 80% credible range. For a well calibrated expert, over a number of responses, 80% of the true answers should fall in the credible interval. More specifically, 10% should fall in the 0-10% range, 40% in the 10-50% range, 40% in the 50-90% range, and 10% in the 90-100% range.

34 For example: over 20 questions, a perfectly calibrated expert will have 2 true answers in their 0-10% range and 2 in their 90-100% range; they will then have 8 each in their 10-50% and 50-90% ranges. An over-confident expert has too many answers in the tails.
[Figure: response histograms for a well calibrated expert (10% / 40% / 40% / 10%) versus an over-confident one]
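This calibration check can be sketched as code: bin each true answer against the expert's stated percentiles, then compare the observed fractions with the ideal (10%, 40%, 40%, 10%) using relative entropy. This is a simplified score in the spirit of classical-model weighting, not the exact elicitation method described above:

```python
import math

def calibration_bins(quantiles, truths):
    """Count how many true answers fall in each inter-quantile range,
    given per-question (10th, 50th, 90th) percentiles from one expert.
    Returns counts for: below 10th, 10th-50th, 50th-90th, above 90th."""
    counts = [0, 0, 0, 0]
    for (q10, q50, q90), truth in zip(quantiles, truths):
        if truth < q10:
            counts[0] += 1
        elif truth < q50:
            counts[1] += 1
        elif truth < q90:
            counts[2] += 1
        else:
            counts[3] += 1
    return counts

def calibration_score(counts):
    """Relative entropy of observed bin fractions against the ideal
    (0.1, 0.4, 0.4, 0.1); 0 means perfectly calibrated, and larger
    values (e.g. too many answers in the tails) mean worse calibration."""
    n = sum(counts)
    ideal = [0.1, 0.4, 0.4, 0.1]
    score = 0.0
    for obs, exp in zip(counts, ideal):
        s = obs / n
        if s > 0:
            score += s * math.log(s / exp)
    return score
```

For 20 questions, the perfectly calibrated pattern [2, 8, 8, 2] scores 0, while an over-confident pattern like [10, 0, 0, 10] scores much higher.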

36 And now for some example questions.
Please get out your pencil and paper, it’s time for an exam!

37 First an example: What is the shortest distance between Wellington and Tokyo? Please provide your best estimate and 80% credible range.

38 First an example: What is the shortest distance between Wellington and Tokyo? Please provide your best estimate and 80% credible range. I know it's usually around a 10 hour flight, and I think trans-Pacific flights might fly close to 1000 km/hr, but I have no idea how direct that route is; I guess they make some deviations to certain way points. Soooooo... best guess of 10,000 km, and it is probably more likely to be shorter rather than longer, and I could be seriously wrong on the speed. 10% bound: 7,000 km; best guess: 10,000 km; 90% bound: 11,000 km.

39 Answer: 9,267 km (according to Dr. Google). My accuracy was pretty reasonable and useful, but my large uncertainty bounds reduced the value of my input and would reduce my relative weight.

40 Your turn!
Q1: On July 1, 2018, what was the number of secondary school students enrolled in New Zealand? (excl. correspondence schools; from educationcounts.govt.nz)
Q2: How many prime numbers are there between 500 and 700?
Please provide your best estimate and 80% credible range. No cheating!

41 Answers: 295,458 students; 30 primes (503, 509, 521, 523, 541, 547, 557, 563, 569, 571, 577, 587, 593, 599, 601, 607, 613, 617, 619, 631, 641, 643, 647, 653, 659, 661, 673, 677, 683, 691)

42 What percentage of total aftershocks within 50 years occur within the first year following a main shock? A: 68%
12 experts: best guess with 80% confidence bounds; weighted solution, optimised across many questions.

43 - Carl Sagan (probably never said that)

45 Testing probabilistic models is challenging:
One, or even a handful of, earthquakes is not enough to make a meaningful statement about the reliability of a model, i.e., it is not enough to test a model. Numerous earthquakes in numerous situations are required. Prospective testing on independent data provides the most informative results. Statistical earthquake forecast testing results are almost always complicated, so we use many metrics. One common, globally accepted standard is the Information Gain Per Earthquake. If too many observations have low likelihood (fall in the tails), the model is probably not very good.
[Figure: forecast number of earthquakes with its uncertainty]
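The Information Gain Per Earthquake can be illustrated with a toy gridded forecast under a cell-wise Poisson likelihood (a common CSEP-style assumption; the rates and counts below are invented for illustration, not real test data):

```python
import math

def poisson_log_likelihood(forecast_rates, observed_counts):
    """Joint Poisson log-likelihood of observed counts per cell,
    given forecast rates per cell."""
    ll = 0.0
    for lam, n in zip(forecast_rates, observed_counts):
        ll += -lam + n * math.log(lam) - math.lgamma(n + 1)
    return ll

def information_gain_per_eq(model_rates, ref_rates, observed):
    """Information gain per earthquake of a model over a reference:
    the log-likelihood difference divided by the number of events.
    Positive values favour the model."""
    n_events = sum(observed)
    return (poisson_log_likelihood(model_rates, observed)
            - poisson_log_likelihood(ref_rates, observed)) / n_events

# Toy example: the "model" concentrates rate where events occurred,
# the reference spreads rate uniformly, so the model gains information.
observed = [3, 0, 1, 0]
model = [2.5, 0.5, 0.8, 0.2]
uniform = [1.0, 1.0, 1.0, 1.0]
print(information_gain_per_eq(model, uniform, observed))  # positive
```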

47 "Evaluation of earthquake stochastic models based on their real-time forecasts: a case study of Kaikoura 2016", D. S. Harte, Geophys. J. Int. 217(3), June 2019.

49 Some final thoughts
Earthquake forecasting is challenging due to the slow rate of earthquake occurrence and poor-quality, limited data. Nevertheless, we have forecast models that have demonstrated reliable and useful skill for government and private decisions and for the public. Rigorous testing is statistically and conceptually challenging, and no prediction model has ever demonstrated skill. Current models are almost entirely maths and statistics based, but heavily informed by physical understanding. Current work is developing earthquake simulators: models of the earth's crust (with faults) that we can run for "millions of years", generating millions of years of earthquakes to help us understand how earthquake occurrence works.

51 Beyond aftershock clustering: long-time and long-range clustering of earthquakes
Scaling relations developed from global earthquake datasets describing long-range clustering.
[Figure: forecast magnitude, forecast time window, and forecast area, each plotted against average regional magnitude]

52 Model rate density: λ(t, m, x, y) = λ0(t, m, x, y) + Σi wi η(mi) f(t | ti, mi) g(m | mi) h(x, y | xi, yi, mi), where λ0 is a baseline rate density, η is a normalising function, wi is a weighting factor, and f, g and h are probability densities: f in time (lognormal), g in magnitude (normal) and h in space (bivariate normal).

53 Multiplicative hybrid models: how we combine models
Fit the multiplier as a monotone increasing function of gridded covariates P = {ρi(j), i = 1, …, m}, applied to a baseline model with rates λ0(j, k), where j ranges over spatial cells and k over magnitude bins (Rhoades et al., 2015). So 2m + 1 parameters are fitted (a, bi ≥ 0, ci ≥ 0, i = 1, …, m).
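A sketch of such a multiplier, assuming an exponential-log functional form consistent with the 2m + 1 parameter count (a, bi, ci); the exact published parameterisation may differ:

```python
import math

def multiplier(covariates, a, b, c):
    """Monotone increasing multiplier for one spatial cell, in an
    assumed exponential-log form: mu = exp(a + sum_i b_i * ln(c_i + rho_i)),
    with b_i >= 0 and c_i >= 0 as in the fitted parameter constraints."""
    s = a
    for rho, b_i, c_i in zip(covariates, b, c):
        s += b_i * math.log(c_i + rho)
    return math.exp(s)

def hybrid_rate(baseline_rate, covariates, a, b, c):
    """Multiplicative hybrid rate for a cell/magnitude bin:
    the baseline model's rate times the covariate multiplier.
    With m covariates there are 2m + 1 fitted parameters."""
    return baseline_rate * multiplier(covariates, a, b, c)
```

With b_i >= 0, a larger covariate value (e.g. a higher strain rate in that cell) can only raise the hybrid rate, which is what "monotone increasing" guarantees.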

Wairarapa Fault, New Zealand, 1855 earthquake: max horizontal offset ~18 m; vertical ~7 m. Prior ruptures: ~900 years before the 1855 earthquake, and approx. 1300 years prior to that. Paeroa fault.

