Comments on UCERF3. Art Frankel, USGS. For the Workshop on Use of UCERF3 in the National Seismic Hazard Maps, Oct. 17-18, 2012.

Problem size:
- 2,600 sub-sections (s)
- 34 paleoseismic rates (recently added a slip-per-event constraint for some sites)
- 220,000 rupture scenarios (r), solving for the rate of each one

Number of equations:
- About 25 magnitude bins (assuming dmag = 0.1)
- Recently added about 65,000 spatial smoothing equations (2,600 sub-segments x 25 mag bins)
- The new fault MFD constraint adds about 300 x 25 = 7,500 equations
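A back-of-the-envelope tally makes the imbalance explicit. This is a minimal sketch using only the rounded counts quoted above:

```python
# Tally of unknowns vs. equations in the Grand Inversion, using the
# approximate counts from this slide.
n_ruptures = 220_000                        # unknown rupture rates
n_subsections = 2_600                       # sub-section slip-rate equations
n_paleo = 34                                # paleoseismic recurrence equations
n_mag_bins = 25                             # magnitude bins (dmag = 0.1)

data_eqs = n_subsections + n_paleo          # equations tied to observations
gr_eqs = n_mag_bins                         # regional GR b=1 constraint
smoothing_eqs = n_subsections * n_mag_bins  # spatial smoothing (~65,000)
mfd_eqs = 300 * n_mag_bins                  # fault MFD constraint (~7,500)

print(f"unknowns: {n_ruptures:,}")
print(f"data equations: {data_eqs:,}")
print(f"constraint equations: {gr_eqs + smoothing_eqs + mfd_eqs:,}")
# Observation-based equations number ~1% of the unknowns; the rest of the
# system is regularization.
```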

The Grand Inversion solves for 220,000 unknowns (the rates of every possible combination of sub-segments) using fewer than 2,200 independent data points: 230 geologic slip rates and 34 paleoseismic recurrence rates, plus geodetic data that are used to estimate slip rates on 1,877 fault segments [see spreadsheets], most of which are not independent. You are trying to solve for 220,000 unknowns using about 2,630 equations with data (2,600 sub-segment slip rates + 34 paleoseismic rates). You also have m equations for the GR b=1 constraint, where m is the number of magnitude bins (about 25?). The recently added spatial smoothing constraint gives about 65,000 additional equations (2,600 sub-segments x 25 magnitude bins), and the recently added MFD constraint on B-type faults still more (300 faults x 25 magnitude bins).

This is a mixed-determined problem (more unknowns than equations), so some rupture rates will have non-unique solutions and cannot be resolved. Which ones? It's not clear from the simulated annealing solutions we've seen. There will be correlations between rupture rates on different faults, so treating each rupture rate as independent could lead to misleading hazard values. Just calculating means from all the runs is not sufficient, because of these correlations. How much of the solutions we've been shown can we believe? How well are the rates resolved? In a typical inverse problem involving inverting a matrix, one can determine a formal resolution matrix; with simulated annealing, you can't. One should invert simulated test cases: set up a rupture model with multi-fault ruptures, specify slip rates and some recurrence rates, add noise, and run the inversion to see how many rupture rates can be resolved.
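A minimal sketch of such a synthetic test, using a toy model whose geometry, slip values, and rates are all made up for illustration (this is not the UCERF3 setup, and it uses ordinary non-negative least squares in place of simulated annealing):

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)

# Toy model: 10 sub-sections; candidate ruptures are contiguous runs of
# 2-6 sub-sections (35 ruptures). All numbers are illustrative.
n_sub = 10
ruptures = [(i, j) for i in range(n_sub) for j in range(i + 1, n_sub + 1)
            if 2 <= j - i <= 6]
G = np.zeros((n_sub, len(ruptures)))
for k, (i, j) in enumerate(ruptures):
    G[i:j, k] = 1.0                      # assume 1 m of slip per event

true_rates = rng.exponential(1e-3, len(ruptures))            # "true" rates (/yr)
slip_rates = G @ true_rates                                  # sub-section slip rates
noisy = slip_rates * (1 + 0.1 * rng.standard_normal(n_sub))  # 10% noise

est_rates, _ = nnls(G, noisy)            # stand-in for the annealing solver

print("relative data misfit:",
      np.linalg.norm(G @ est_rates - noisy) / np.linalg.norm(noisy))
print("relative rate-recovery error:",
      np.linalg.norm(est_rates - true_rates) / np.linalg.norm(true_rates))

# For the linear problem a formal model resolution matrix R = G^+ G exists;
# diagonal entries near 1 are resolved rates, near 0 unresolved ones.
R = np.linalg.pinv(G) @ G
print("resolution diagonal: min %.2f, max %.2f"
      % (R.diagonal().min(), R.diagonal().max()))
```

With 10 data equations against 35 unknowns, the data misfit can be small while the rupture-rate recovery is poor, and the resolution-matrix diagonal shows which rates are formally unresolved.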

Results depend on choices of spatial smoothing, relative weights of different data sets (slip rates vs. paleoseismic recurrence rates), relative weights of constraints, etc. How do these subjective choices affect the seismic hazard values?

A question about multi-fault rupture [figure labels: M7.0, M7.6]: if these faults have the same slip rate, does the GI give the same rate for a two-fault rupture with M7.6 as for a single rupture on one large fault? Should it?
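One way to sharpen the question is with a slip-rate moment budget. A minimal sketch, with illustrative values for rigidity, fault dimensions, and slip rate (none taken from UCERF3):

```python
import math

# Moment-budget bookkeeping: event rate = mu * A * slip_rate / M0(event).
# All parameter values are illustrative assumptions.
mu = 3.0e10                    # rigidity (Pa)
slip_rate = 0.005              # slip rate, 5 mm/yr in m/yr
L = 50e3                       # length of each fault (m)
W = 15e3                       # seismogenic width (m)

def m0(mag):                   # seismic moment in N*m (Hanks & Kanamori, 1979)
    return 10 ** (1.5 * mag + 9.05)

def mag(moment):
    return (math.log10(moment) - 9.05) / 1.5

# If each fault releases its budget in M7.0 events:
rate_m70 = mu * L * W * slip_rate / m0(7.0)

# Doubling the moment of an M7.0 gives only ~M7.2; an M7.6 requires the
# average slip to grow with rupture length, as scaling laws imply.
print("magnitude of 2 x M7.0 moment:", round(mag(2 * m0(7.0)), 2))

# Rate of two-fault M7.6 ruptures that balances both faults' budgets:
rate_m76 = 2 * mu * L * W * slip_rate / m0(7.6)
print(f"M7.0 rate per fault: {rate_m70:.2e} /yr")
print(f"two-fault M7.6 rate: {rate_m76:.2e} /yr")
# An M7.6 carries ~8x the moment of an M7.0, so the same slip rate permits
# far fewer events; whether the GI reproduces this depends on its scaling
# relations, which is the point of the question above.
```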

Beware of highly complicated results from a simplified model. Slip distributions for large earthquakes with long rupture lengths do not look like rainbows or boxcars: some portions of the rupture near the ends have higher slip than the rest of the fault (e.g., the M7.9 Denali earthquake, the M7.6 Chi-Chi earthquake). This is not included in the GI model, so complex results from a simplistic model can be misleading; this is especially true of multi-fault ruptures. There is also ambiguity in the rates of M… events: absence of evidence in a trench may mean they didn't occur, or it could mean they occurred but were not observed. Simply multiplying the rate by 2 to account for the probability of observation is not a unique interpretation.

Need to close the loop with geologists: is this MFD plausible given the geologic observations? Figure from UCERF3 TR8: all rupture scenarios involving the northern segment of the Hayward fault (similar to what was shown yesterday). The modal earthquake is M7.5.

From WGCEP (1999), North Hayward segment: the modal rupture scenario is a single-segment M6.6 with a rate of …/yr. Also: a …/yr rate for the NH + SH rupture with M7.1.

This implies that 52% of future M6.5+ events will occur off the faults in the model (from the cyan line to the black line). This becomes 38% if one includes M6.5's in the fault polygon boxes. In UCERF2 we determined that 25% of M6.5+ earthquakes in the past 150 yr occurred off the faults in the model. Note that the UCERF3 geodetically derived deformation models find that about 30% of the total moment is off the faults in the model (not using fault polygons). Why should the collective MFD of M… earthquakes in polygons around faults be continuous with the collective rate of M6.5+ earthquakes on fault traces? See the orange line. From UCERF3 TR8.

Issues with determining the target MFD for the Grand Inversion. Start with GR b=1 for all of CA; you then have to decide how much of this MFD is on the faults in the model to get the target MFD for the GI. So you have to remove seismicity that is not on the faults in the model. They choose polygons around faults and use M5+. This assumes that, collectively, the rate of earthquakes (M…) in boxes around faults is continuous with the rates of M6.5+ earthquakes on fault traces, and that together they form a continuous GR distribution. Is this true? Is the percentage of M6.5+ earthquakes on faults, relative to all of CA, the same as the percentage of M5's in fault polygons, relative to all of CA? In UCERF2 we estimated the percentage of M6.5+ earthquakes over the past 150 years that were associated with faults in the model, using the catalog. It is partially an epistemic judgment to estimate how complete your fault inventory is, that is, what percentage of future M6.5+ earthquakes will occur on faults in your model. You can use off-fault versus on-fault deformation, or the earthquake catalog, to constrain this, with assumptions. It has always been a problem how to handle the seam around M6.5 between smoothed seismicity and the fault model (the stochastic versus deterministic parts of the model).
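A minimal sketch of the bookkeeping being questioned, assuming a GR b=1 regional rate and illustrative on-fault fractions (none of these numbers are UCERF3 values):

```python
# GR bookkeeping for the on-fault / off-fault split. All numbers here are
# illustrative assumptions, not UCERF3 values.
b = 1.0
rate_m5_total = 8.0                  # assumed statewide M5+ rate (events/yr)

def cum_rate(m):                     # cumulative GR rate N(>= m)
    return rate_m5_total * 10 ** (-b * (m - 5.0))

rate_m65_total = cum_rate(6.5)
frac_on_fault = 0.48                 # assumed fraction of M6.5+ on model faults
target_m65 = frac_on_fault * rate_m65_total

print(f"statewide M6.5+ rate:       {rate_m65_total:.3f} /yr")
print(f"on-fault target M6.5+ rate: {target_m65:.3f} /yr")

# The slide's question: is the fraction of M5-6.5 events inside fault
# polygons the same as the fraction of M6.5+ events on fault traces? If
# not, the two pieces do not join into one continuous GR distribution,
# leaving a seam in the target MFD at M6.5.
frac_in_polygons = 0.62              # assumed different value, for contrast
print(f"implied jump at the M6.5 seam: factor of "
      f"{frac_in_polygons / frac_on_fault:.2f}")
```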

Independent earthquake rate issue. The dependent/independent-event problem: we need rates of independent events to do PSHA (aftershock hazard could be included in a separate calculation), but the GI provides total rates, with aftershocks. An M6.5 earthquake on the Hayward fault is less likely to be an aftershock of the modal earthquake on that fault than an M6.5 earthquake on the San Andreas fault; how do the rates of M6.5 earthquakes compare to the expected rates of aftershocks of M>=7.5 events? For example, say the GI finds approximately equal rates of M6.5 and M7.5 earthquakes on the SAF: if each M7.5 had one M6.5 aftershock, the entire rate of M6.5 earthquakes could represent aftershocks. One should not assume that the ground-shaking probabilities for the M6.5's and the M7.5's are independent, which is the assumption of standard PSHA.
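A rough way to do the comparison is to anchor aftershock productivity with Bath's law and extrapolate down-magnitude with GR; a minimal sketch, where both the b-value and the Bath gap are generic assumptions:

```python
# Expected M6.5+ aftershocks per M7.5 mainshock, anchoring productivity with
# Bath's law (largest aftershock ~ mainshock magnitude - 1.2) and scaling
# down-magnitude with GR. Both parameters are generic assumptions.
b = 1.0
bath_delta = 1.2

def expected_aftershocks(m_main, m_min):
    # Bath's law sets N(>= m_main - bath_delta) ~ 1; extrapolate with GR.
    return 10 ** (b * (m_main - bath_delta - m_min))

n = expected_aftershocks(7.5, 6.5)
print(f"expected M6.5+ aftershocks per M7.5 mainshock: {n:.2f}")
# If the GI gives roughly equal rates of M6.5 and M7.5 events on a fault,
# a comparable fraction of the M6.5 rate could be aftershocks, so the two
# rates are not independent, contrary to the standard PSHA assumption.
```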

From UCERF3 TR8

The proposed UCERF3 approach to removing aftershocks is not similar to how it was done in UCERF2, which reduced the moment rates by 10%, assuming these were aftershocks or small events on adjacent faults, in effect lowering the rates at all magnitudes by the same factor.

How do the predicted rates of M6.5 earthquakes on individual faults compare to observed rates (e.g., on the SAF)? If the inversion predicts many more M6.5's than observed, then there is a problem. We need to see MFDs for each fault. The forward problem (essentially used in UCERF1 and 2) directly used the measured mean values of slip rates and paleoseismic recurrence rates, not values from an inversion that approximately fits these observations; it used expert opinion on fault segmentation and multi-segment ruptures (A faults) and used plausible characteristic and GR models for B faults. Now the inversion for 220,000 rupture rates has supplanted the judgment of the geologists who have studied these faults. The UCERF3 process inherently prefers the inversion to expert opinion; now it is a smaller group of experts, those familiar with the GI. Also, fewer geologic experts are involved compared to the S.F. Bay area study (WGCEP, 1999).

From UCERF3 TR8. It looks like Central Valley hazard could be much higher from the deformation model.

Other issues:
- Adding faults with slip rates inferred from recency of activity or slip-rate class. This violates the past principle of the NSHMs that we only use faults that have some direct measurement of slip rate or paleoearthquake rate (full disclosure: in NV we used slip rates based on geomorphic expression).
- Assigning fixed weights to geodetic models that have regionally varying uncertainties. The weighting of models should vary spatially with the relative quality of the geologic and geodetic information (which is why we used geodetic info in Puget Sound and eastern CA/western NV); a sketch of one such weighting follows this list.
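A minimal sketch of spatially varying weights via inverse-variance combination of the two slip-rate sources (all rates and uncertainties below are hypothetical):

```python
import numpy as np

# Precision-weighted (inverse-variance) combination of geologic and geodetic
# slip-rate estimates, so the better-constrained source dominates locally.
# All values are hypothetical.
geologic    = np.array([25.0, 1.0, 0.2])    # mm/yr, three example sections
geologic_sd = np.array([2.0, 0.8, 0.15])    # well to poorly constrained
geodetic    = np.array([22.0, 2.5, 0.6])    # mm/yr
geodetic_sd = np.array([4.0, 1.0, 0.5])     # uncertainty varies regionally

w_geo = 1.0 / geologic_sd**2
w_gps = 1.0 / geodetic_sd**2
combined = (w_geo * geologic + w_gps * geodetic) / (w_geo + w_gps)
combined_sd = np.sqrt(1.0 / (w_geo + w_gps))

for i in range(len(combined)):
    share = w_geo[i] / (w_geo[i] + w_gps[i])
    print(f"section {i}: {combined[i]:.2f} +/- {combined_sd[i]:.2f} mm/yr "
          f"(geologic weight {share:.0%})")
# Where the geologic rate is well determined it dominates; where it is poor,
# the geodetic model carries more weight. A single fixed weight cannot
# reproduce this spatial behavior.
```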

Using geodetic models will make substantial changes to hazard maps, even for areas with well determined geologic slip rates (e.g., north coast portion of SAF)

GR b=1.0 ?

From the UCERF2 report (Field et al., 2009); independent events. Does GR b = 0.8 fit the observed MFDs for northern and southern CA (for independent events)? b = 0.6 for southern CA M…; b = 0.62 from M….
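For reference, the standard maximum-likelihood b-value estimator (Aki, 1965) applied to a simulated catalog; this is a sketch with synthetic magnitudes, not the California catalog:

```python
import numpy as np

rng = np.random.default_rng(1)

def b_value(mags, mc):
    # Aki (1965) maximum-likelihood estimator for continuous magnitudes;
    # for binned catalogs, replace mc with mc - dm/2 (Utsu correction).
    m = mags[mags >= mc]
    return np.log10(np.e) / (m.mean() - mc)

# Above the completeness magnitude Mc, GR magnitudes are exponentially
# distributed with scale log10(e)/b; simulate a catalog with b = 0.8.
true_b = 0.8
mc = 4.0
mags = mc + rng.exponential(np.log10(np.e) / true_b, size=5000)

print(f"recovered b: {b_value(mags, mc):.2f} (true b = {true_b})")
# Whether the declustered catalog supports b = 1.0 or b ~ 0.6-0.8 changes
# the regional target MFD handed to the Grand Inversion.
```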

Hutton et al. (2010) for southern CA

The GR relation with b=1.0: what is the physical cause? Many reasons have been presented. It is especially interesting that the slope doesn't change as rupture widths go from a fraction of the seismogenic thickness to equal to or greater than that thickness. In the inversion, the rates on widely separated faults are dependent on each other. Why should the earthquake rate of the Death Valley fault be affected by the earthquake rate of the Little Salmon fault in NW CA? There are now correlations between distant faults.

Do we think the map on the right represents a good picture of where the M6 off-fault earthquakes are most likely to occur in the next … years?

My personal view: The Grand Inversion is a promising research tool. It will take time to evaluate its resolution and non-uniqueness, and it will have to be reproducible by others. There are important issues with the target MFD for the GI and with the aftershock problem.
- Is the GI suitable for policy uses such as building codes and insurance? There are two tracks, a policy track and a research track, that intersect at times. When will the GI be ready for prime time? This update cycle?
- UCERF3 will change the hazard substantially at some locations. Why? Justifying changes in the hazard maps: "the inversion made me do it" rather than "the geologic experts who have studied these faults made me do it."
- Using the slip rates from the geodetic models will change the hazard. That doesn't mean we shouldn't use them.
- It is imperative that UCERF3 deliver a set of rupture scenarios and rates that can be input into the FORTRAN codes of the NSHMP. This has to be integrated into the national seismic hazard maps and associated products such as deaggregations.
- Is there a way to refine UCERF2 to include new geologic and geodetic data and to include multi-fault ruptures? Can the UCERF3 inversion be constrained to be as close as possible to UCERF2?