1
Metrics, Bayes, and BOGSAT: How to Assess and Revise Earthquake Hazard Maps
Seth Stein 1, Edward Brooks 1, Bruce D. Spencer 2
1 Department of Earth & Planetary Sciences and Institute for Policy Research, Northwestern University, Evanston, Illinois, USA
2 Department of Statistics and Institute for Policy Research, Northwestern University, Evanston, Illinois, USA
2
Hazard map questions: Why do maps sometimes do poorly? How can we measure their performance? When and how should we update them? How can we quantify their uncertainties? How good do they have to be to be useful? How can we improve their forecasts? How do we make sensible policy given their limitations?
3
Geller (2011) argued that "all of Japan is at risk from earthquakes, and the present state of seismological science does not allow us to reliably differentiate the risk level in particular geographic areas," so a map showing uniform hazard would be preferable to the existing maps. To test this idea, we need to measure map performance.
4
How good a baseball player was Babe Ruth? The answer depends on the metric used. In many seasons Ruth led the league in both home runs and in the number of times he struck out. By one metric he did very well, and by another, very poorly.
5
Lessons from meteorology: Weather forecasts are routinely evaluated to assess how well their predictions match what actually occurred: "it is difficult to establish well-defined goals for any project designed to enhance forecasting performance without an unambiguous definition of what constitutes a good forecast." (Murphy, 1993) Information about how a forecast performs is crucial in determining how best to use it. The better a weather forecast has worked to date, the more we factor it into our daily plans.
6
Choosing appropriate metrics is crucial in assessing the performance of forecasts. Silver (2012) shows that TV weather forecasts have a "wet bias," predicting more rain than actually occurs, probably because forecasters believe that viewers accept unexpectedly sunny weather but are annoyed by unexpected rain.
7
From users’ perspective, what specifically should hazard maps seek to accomplish? How can we measure how well they do it?
8
How to measure map performance? The implicit criterion for a probabilistic map: after an appropriate time, the predicted shaking should be exceeded at only a fraction p of sites. Define the fractional site exceedance metric M0(f, p) = |f - p|, where f is the fraction of sites at which shaking exceeded the mapped prediction. An ideal map has M0 = 0: a fraction p of sites exceeds the prediction and the remaining fraction 1 - p does not.
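As a concrete illustration, here is a minimal sketch (not from the talk) of computing M0 from arrays of predicted and observed shaking; the function name and array inputs are assumptions for illustration.

```python
import numpy as np

def m0(observed, predicted, p):
    """Fractional site exceedance M0 = |f - p|, where f is the fraction
    of sites whose observed maximum shaking exceeded the map's prediction."""
    x = np.asarray(observed, dtype=float)
    s = np.asarray(predicted, dtype=float)
    f = np.mean(x > s)          # fraction of sites exceeding prediction
    return abs(f - p)

# Example: for a map built so that 10% of sites should see exceedance,
# m0(observed_shaking, map_prediction, p=0.1) is 0 for an ideal map.
```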
9
Fractional site exceedance is a useful metric but tells only part of the story. Both maps shown are nominally successful (M0 = 0), but this map exposed some sites to much greater shaking than predicted. This situation could reflect faults that produced larger earthquakes than assumed.
10
Similarly, a map can have M0 = 0 yet significantly overpredict shaking, which could arise from overestimating the magnitude of the largest earthquakes. All these maps are nominally successful, but fractional site exceedance tells only part of the story.
11
A map can be nominally successful by the fractional site exceedance metric yet significantly underpredict shaking at many sites and overpredict it at others (p = 0.1, f = 0.1, M0 = 0). Or it can be nominally unsuccessful by the metric yet better predict shaking at most sites (p = 0.1, f = 0.2, M0 = 0.1).
12
Other metrics can provide additional information beyond the fractional site exceedance M0. The squared misfit to the data, M1(s, x) = Σ_i (x_i - s_i)^2 / N, measures how well the predicted shaking s compares to the highest observed shaking x. From a purely seismological view, M1 tells us more than M0 about how well a map performed.
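A sketch of M1 under the same assumed array inputs (illustrative, not the authors' code):

```python
import numpy as np

def m1(predicted, observed):
    """Squared misfit M1(s, x) = sum_i (x_i - s_i)^2 / N between the
    highest observed shaking x and the mapped prediction s."""
    s = np.asarray(predicted, dtype=float)
    x = np.asarray(observed, dtype=float)
    return np.mean((x - s) ** 2)
```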
13
Other metrics can provide additional information beyond the fractional site exceedance M0. Because underprediction does potentially more harm than overprediction, we could weight underprediction more heavily: the asymmetric squared misfit M2(s, x) = Σ_i w_i (x_i - s_i)^2 / N, with w_i = a for (x_i - s_i) > 0 and w_i = b for (x_i - s_i) ≤ 0, where a > b. This makes M2 more useful for hazard mitigation than M1.
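A sketch of M2; the weights a = 2 and b = 1 are illustrative placeholders, since the slide leaves a and b unspecified beyond penalizing underprediction more:

```python
import numpy as np

def m2(predicted, observed, a=2.0, b=1.0):
    """Asymmetric squared misfit M2(s, x) = sum_i w_i (x_i - s_i)^2 / N,
    with w_i = a where the map underpredicted (x_i - s_i > 0) and
    w_i = b otherwise. Setting a = b = 1 recovers M1."""
    s = np.asarray(predicted, dtype=float)
    x = np.asarray(observed, dtype=float)
    w = np.where(x - s > 0, a, b)
    return np.mean(w * (x - s) ** 2)
```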
14
Other metrics can provide additional information beyond the fractional site exceedance M0. Shaking-weighted asymmetric squared misfit: we could use larger weights for areas predicted to be the most hazardous, so the map is judged most on how it does there.
15
Other metrics can provide additional information beyond the fractional site exceedance M0. Exposure-weighted asymmetric squared misfit: we could use larger weights for areas with the largest exposure of people or property, so the map is judged most on how it does there.
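Both weighted variants fit one sketch: multiply each site's term by a weight v_i, set either to the predicted hazard (shaking-weighted) or to the exposed people/property (exposure-weighted). The functional form and the normalization by the summed weights are assumptions, since the slides name these metrics without giving formulas:

```python
import numpy as np

def weighted_m2(predicted, observed, site_weights, a=2.0, b=1.0):
    """Weighted asymmetric squared misfit: each site's contribution is
    scaled by site_weights (predicted shaking or exposure), so the map
    is judged most on how it does where the weights are largest."""
    s = np.asarray(predicted, dtype=float)
    x = np.asarray(observed, dtype=float)
    v = np.asarray(site_weights, dtype=float)
    w = np.where(x - s > 0, a, b)       # asymmetric misfit weights
    return np.sum(v * w * (x - s) ** 2) / np.sum(v)
```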
16
Although no single metric fully characterizes map performance, using several metrics can provide valuable insight for assessing and improving hazard maps
17
[Figure: Nekrasova et al. (2014) comparison of hazard maps with shaking reported from 217 BC to 2002 AD. For a map with return period T = 2475 yr, the expected exceedance fraction is p = 59%, but the observed fractions are f = 0.25% and f = 1.6%.]
18
Misfit reflects problems in the data (most likely), the maps, or both. The probabilistic map with 2% probability of exceedance in 50 years (i.e., ground shaking expected on average once in 2475 years) significantly overestimates the shaking reported over a comparable time span (~2200 years). The deterministic map, which is not associated with a specific time span, also overestimates reported ground shaking. The historical catalog is thought to be incomplete (Stucchi et al., 2004) and may underreport the largest shaking due to space-time sampling bias.
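For reference, the numbers on the previous slide follow from the Poisson model: a 2% chance of exceedance in 50 years corresponds to a ~2475-year return period, which over the ~2200-year catalog implies an expected exceedance fraction of about 59%.

```python
import math

T = -50 / math.log(1 - 0.02)   # return period: ~2475 yr
p = 1 - math.exp(-2200 / T)    # expected exceedance fraction over ~2200 yr
print(round(T), round(p, 2))   # 2475 0.59
```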
19
Options after an earthquake yields shaking larger than anticipated: either regard the high shaking as a low-probability event allowed by the map, or, as usually done, accept that the high shaking was not simply a low-probability event and revise the map.
20
No formal or objective criteria are used to decide whether and how to change a map. This is done via BOGSAT ("Bunch Of Guys Sitting Around a Table"). The challenge: a new map that better describes the past may or may not better predict the future.
21
Deciding whether to remake a map is like deciding, after a coin has come up heads a number of times, whether to continue assuming that the coin is fair and the run is a low-probability event, or to switch to a model in which the coin is biased. Changing the model to match the past may describe the future worse.
22
Bayes' Rule: how much to change depends on one's confidence in the prior model. Revised probability model ∝ likelihood of the observations given the prior model × prior probability model. If you were confident that the coin was fair, you would probably not change your model. If you were given the coin at a magic show, your confidence would be lower and you would be more likely to change your model.
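A minimal sketch of the coin analogy, assuming a conjugate Beta prior on the probability of heads (the slide does not specify a prior family, and the numbers are illustrative):

```python
def posterior_mean_heads(alpha, beta, heads, tails):
    """Beta(alpha, beta) prior on P(heads); after observing the counts,
    the posterior is Beta(alpha + heads, beta + tails)."""
    return (alpha + heads) / (alpha + beta + heads + tails)

# Eight heads in a row:
print(posterior_mean_heads(100, 100, 8, 0))  # confident the coin is fair: ~0.52
print(posterior_mean_heads(1, 1, 8, 0))      # coin from a magic show:      0.90
```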
23
Assume Poisson earthquake recurrence with rate λ = 1/T = 1/50 = 0.02 per year. This estimate is described by a prior with mean μ and standard deviation σ. If an earthquake then occurs after only 1 year, the updated forecast, described by the posterior mean, differs increasingly from the initial forecast (prior mean) as the uncertainty in the prior distribution grows. The less confidence we have in the prior model, the more a new datum can change it: high confidence, small change; low confidence, large change.
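A sketch of this updating, assuming a conjugate Gamma prior on λ (the slide gives only the prior mean μ and standard deviation σ, not the distribution family; the σ values below are illustrative):

```python
def updated_rate(mu, sigma, events=1, years=1.0):
    """Gamma prior with mean mu and sd sigma has shape a = (mu/sigma)**2
    and rate b = mu/sigma**2; observing `events` in `years` gives
    posterior mean (a + events) / (b + years)."""
    a = (mu / sigma) ** 2
    b = mu / sigma ** 2
    return (a + events) / (b + years)

mu = 1 / 50                            # prior mean rate: 0.02 per year
print(updated_rate(mu, sigma=0.005))   # high confidence: small change, ~0.021
print(updated_rate(mu, sigma=0.02))    # low confidence:  large change, ~0.039
```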
24
Current status: We need agreed ways of assessing how well hazard maps performed, and thus whether one map performed better than another. Although no single metric alone fully characterizes map behavior, using several metrics can provide useful insight for assessing map performance and thus uncertainty. Deciding when and how to revise hazard maps should combine BOGSAT (subjective judgment given limited information) and Bayes (ideas about parameter uncertainty). Uncertainty estimates are crucial for deciding how much confidence to place in maps used for very expensive policy decisions.
25
Forecast uncertainty for policy decisions. [Figure: examples of forecasts presented with uncertainties for policy use, including Australia, U.S. Social Security, and IPCC global warming projections.]
26
Challenge: U.S. meteorologists (Hirschberg et al., 2011) have adopted a goal of "routinely providing the nation with comprehensive, skillful, reliable, sharp, and useful information about the uncertainty of hydrometeorological forecasts." Although seismologists have a tougher challenge and a longer way to go, we should try to do the same for earthquake hazards.