1
Seismology Part X: Interpretation of Seismograms
2
Seismogram Interpretation
How do we interpret the information we get from a recording of ground motion? What kinds of information can we extract? How do we figure out what phases we are looking at? These guys look pretty random!
3
Until we have a model, we have to look at many seismograms and look for coherent energy.
4
Here’s another example from an IRIS poster. This is essentially how the first models of the Earth’s interior were generated by Harold Jeffreys (1891-1989) and Keith Bullen.
5
Once we identify phases, we can plot them up on a T-Δ (travel time versus distance) graph and see if we can match their arrival times with a simple model. It turns out we can explain most of what we see with a very simple crust-mantle-core 1D structure!
6
The simple 1D model is shown here, along with the basics of seismic phase nomenclature.
7
Local Body Wave Phases
Direct waves: P & S
Short distance: Pg (granite)
Critically refracted (head) waves: Pn (Moho), P* (Conrad; sometimes Pb, for basaltic)
Reflected from the Moho: PmP (and PmS, etc.)
8
Basics of Nomenclature
Reflected close to the source (depth phases): pP, pS, sP, sS
Reflected at a distance: PP & SS (and additional multiples)
Reflected from the CMB: c, as in PcP or ScS
Outer core P wave: K (Kernwellen, German for "core waves")
Inner core P wave: I
Inner core S wave: J
Reflection at the outer-inner core boundary: i (as in PKiKP)
Ocean wave in the SOFAR channel: T
9
Example of T phase generation in the SOFAR channel
10
Paths of P waves in the Earth. Notice the refractions at the CMB that cause a P shadow zone.
11
Penetration of the Shadow zone by Refractions from the Inner Core
12
Details of the current 1D model of the Earth. Note the phase transitions in the upper mantle.
13
Interpretation of travel time curves (T vs Δ)
Locating earthquakes:
1. Triangulation
2. General inverse problem (below)
3. Grid search (see the sketch following this list)
Tomography: explaining the remainder with wavespeed variations
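A minimal sketch of the grid-search approach mentioned above. The station coordinates, the constant wavespeed of 6 km/s, and the synthetic arrival times are all hypothetical illustrations, not from the lecture:

```python
import numpy as np

# Hypothetical stations (x, y in km) and observed P arrival times (s).
stations = np.array([[0.0, 0.0], [50.0, 0.0], [0.0, 60.0], [40.0, 45.0]])
v_p = 6.0                                           # assumed constant P wavespeed, km/s
true_source, true_t0 = np.array([22.0, 31.0]), 4.0
t_obs = true_t0 + np.linalg.norm(stations - true_source, axis=1) / v_p

# Grid search: try every candidate location, solve for the origin time,
# and keep the point with the smallest misfit.
xs = np.linspace(0, 60, 121)
ys = np.linspace(0, 60, 121)
best = (np.inf, None)
for x in xs:
    for y in ys:
        t_pred = np.linalg.norm(stations - np.array([x, y]), axis=1) / v_p
        t0 = np.mean(t_obs - t_pred)                # best origin time for this point
        misfit = np.sum((t_obs - (t0 + t_pred)) ** 2)
        if misfit < best[0]:
            best = (misfit, (x, y, t0))
print("best-fit location and origin time:", best[1])
```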
14
Waveforms: Source processes and details of structure.
15
How to model just about anything
In general, we can formulate a relationship between the model m and the observables d as:

g(m) = d
16
We can solve the above by taking a guess m_0 of m and expanding g(m) about this guess:

g(m) ≈ g(m_0) + (∂g/∂m)|_{m_0} Δm = d

Then

G Δm = d − g(m_0), or G Δm = R

where G is the matrix containing the partial derivatives ∂g_i/∂m_j evaluated at m_0. G is an M x N matrix, where M is the number of observations (rows) and N is the number of model variables (columns), and R is an M x 1 vector of residuals. The idea is to solve the above for Δm, add it to m_0, and then repeat the operation as often as is deemed (statistically) useful.
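As a concrete illustration of this iterative linearized inversion, here is a minimal sketch; the forward function g, the two-parameter exponential model, the starting guess, and the synthetic data are hypothetical, not from the lecture:

```python
import numpy as np

def g(m):
    """Hypothetical nonlinear forward problem: predicted data from model m."""
    x = np.linspace(0.0, 1.0, 20)
    return m[0] * np.exp(-m[1] * x)

def jacobian(m, eps=1e-6):
    """Finite-difference partial derivatives dg_i/dm_j (the matrix G)."""
    d0 = g(m)
    G = np.zeros((d0.size, m.size))
    for j in range(m.size):
        dm = m.copy()
        dm[j] += eps
        G[:, j] = (g(dm) - d0) / eps
    return G

m_true = np.array([2.0, 3.0])
d = g(m_true)                        # synthetic observations
m0 = np.array([1.5, 2.5])            # starting guess

for it in range(10):
    R = d - g(m0)                    # residual vector
    G = jacobian(m0)
    dm, *_ = np.linalg.lstsq(G, R, rcond=None)   # solve G dm = R
    m0 = m0 + dm                     # update the model and iterate
print("recovered model:", m0)
```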
17
How to do this? Let’s form the matrix S from G and its (complex conjugate) transpose as:

S = | 0    G |
    | G^T  0 |

Note that S is a square (N + M) x (N + M) matrix and that S^T = S, which means that there exists an orthogonal set of eigenvectors w and real eigenvalues λ such that

S w = λ w

We solve this eigenvalue problem by noting that nontrivial solutions to

(S − λ I) w = 0
18
will occur only if the determinant of the matrix in parentheses vanishes:

det(S − λ I) = 0

In general there will be (N+M) eigenvalues. Now, each eigenvector w will have N+M components, and it will be useful to consider w as composed of an N dimensional vector v and an M dimensional vector u:

w = (u, v)^T

Thus, we arrive at the coupled equations:

G v_i = λ_i u_i
G^T u_i = λ_i v_i
19
Note that we can change the sign of λ_i and have (−u_i, v_i) be a solution as well. Thus, nonzero eigenvalues come in pairs ±λ_i. Let’s suppose there are p pairs of nonzero eigenvalues for the above equations. For the remaining zero eigenvalues, we have

G v_i = 0,   G^T u_i = 0

or, in other words, the u’s and v’s are independent. Now, note that in the nonzero case:

G^T G v_i = λ_i G^T u_i = λ_i^2 v_i
G G^T u_i = λ_i G v_i = λ_i^2 u_i

G G^T and G^T G are both Hermitian, so each of V and U forms an orthogonal set of eigenvectors with real eigenvalues. After normalization, we can then write:
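A quick numerical check of this construction (the 3 x 2 matrix is arbitrary and purely illustrative):

```python
import numpy as np

# Arbitrary G with M = 3 observations and N = 2 model parameters.
G = np.array([[1.0, 2.0],
              [0.0, 1.0],
              [3.0, 1.0]])
M, N = G.shape

# Augmented symmetric matrix S = [[0, G], [G^T, 0]], size (M+N) x (M+N).
S = np.block([[np.zeros((M, M)), G],
              [G.T, np.zeros((N, N))]])

lam, W = np.linalg.eigh(S)
print("eigenvalues of S:", np.round(lam, 4))    # nonzero values appear as +/- pairs

# Each eigenvector w = (u, v) satisfies G v = lam*u and G^T u = lam*v.
u, v = W[:M, -1], W[M:, -1]
print(np.allclose(G @ v, lam[-1] * u), np.allclose(G.T @ u, lam[-1] * v))
```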
20
V^T V = V V^T = I    U^T U = U U^T = I

where V is an N x N matrix of the v eigenvectors and U is an M x M matrix of the u eigenvectors. We say that U spans the data space and V spans the model space. In the case of zero eigenvalues, we can divide the U and V matrices up into "p" space and "o" space:

U = (U_p, U_o)    V = (V_p, V_o)

Because of orthogonality,

U_p^T U_p = I    V_p^T V_p = I
21
BUT

U_p U_p^T ≠ I    V_p V_p^T ≠ I

In this case, we write the coupled equations in matrix form:

G V = U Λ

or, keeping only the nonzero eigenvalues,

G V_p = U_p Λ_p

where Λ_p is the p x p diagonal matrix of the λ_i.
22
Since V V^T = I, we have

G = G V V^T = U_p Λ_p V_p^T

This is called the Singular Value Decomposition of G, and it shows that we can reconstruct G using just the "p" part. U_o and V_o are "blind spots" not illuminated by G. NB: for overdetermined problems, we can further subdivide U into a U_1 and U_2 space, where U_2 has extra information not required by the problem at hand. This can be very useful as a null operator for some problems.
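A sketch of this decomposition with NumPy, assuming a small, deliberately rank-deficient G so that the "o" spaces exist:

```python
import numpy as np

# A rank-deficient example: the second column is twice the first,
# so only p = 1 singular value is nonzero.
G = np.array([[1.0, 2.0],
              [0.5, 1.0],
              [2.0, 4.0]])

U, s, Vt = np.linalg.svd(G)        # full SVD: U is M x M, Vt is N x N
p = np.sum(s > 1e-10)              # count the nonzero singular values
Up, Lp, Vpt = U[:, :p], np.diag(s[:p]), Vt[:p, :]

# G is reconstructed exactly from the "p" part alone.
print(np.allclose(G, Up @ Lp @ Vpt))       # True
# The remaining columns (U_o, V_o) are the blind spots not illuminated by G.
```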
23
Since

U_o^T G m = U_o^T U_p Λ_p V_p^T m = 0

the prediction (G m = d) will have no component in U_o space, only in U_p space. If the data have some component in U_o space, then there is no way that it can be explained by G! It is thus a source of discrepancy between d and G m.
24
The Generalized Inverse
Note that if there are no zero singular values, then we can write down the inverse of G immediately as:

G^{-1} = V Λ^{-1} U^T

since

G^{-1} G = V Λ^{-1} U^T U Λ V^T = V Λ^{-1} Λ V^T = V V^T = I

In the case of zero singular values, we can try using the p space only, in which case we have the generalized inverse operator:

G_g^{-1} = V_p Λ_p^{-1} U_p^T

Let's see what happens when we apply this.
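A minimal sketch of building and applying this operator (the matrix, data, and tolerance are illustrative; NumPy's pinv performs the same truncated-SVD construction, so it serves as a check):

```python
import numpy as np

G = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 3.0, 1.0]])          # row 3 = row 1 + row 2, so G is rank deficient
d = np.array([1.0, 2.0, 3.0])

U, s, Vt = np.linalg.svd(G)
p = np.sum(s > 1e-10 * s[0])             # keep only the nonzero singular values

# Generalized inverse: G_g^{-1} = V_p Lambda_p^{-1} U_p^T
Gg_inv = Vt[:p, :].T @ np.diag(1.0 / s[:p]) @ U[:, :p].T
m_g = Gg_inv @ d

print(np.allclose(Gg_inv, np.linalg.pinv(G)))   # same as the Moore-Penrose pseudoinverse
print("m_g =", m_g)
```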
25
Suppose that there is a U_o space but no V_o space. In this case

G^T G = V_p Λ_p U_p^T U_p Λ_p V_p^T = V_p Λ_p^2 V_p^T

has an exact inverse:

(G^T G)^{-1} = V_p Λ_p^{-2} V_p^T

Then we can write G^T G m = G^T d, and

m = (G^T G)^{-1} G^T d = V_p Λ_p^{-2} V_p^T V_p Λ_p U_p^T d = V_p Λ_p^{-1} U_p^T d = G_g^{-1} d
26
Note that the generalized inverse gives the least squares solution, defined as the minimizer of the squares of the residuals:

min |d − G m|^2

Let m_g = G_g^{-1} d. Then

d − G m_g = d − U_p Λ_p V_p^T V_p Λ_p^{-1} U_p^T d = d − U_p U_p^T d

Thus

U_p^T (d − G m_g) = U_p^T (d − U_p U_p^T d) = U_p^T d − U_p^T d = 0

which means that there is no component of the residual vector in U_p space.
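A quick numerical confirmation of this least-squares property (the overdetermined G and the deliberately inconsistent data vector below are hypothetical):

```python
import numpy as np

# Overdetermined, full-column-rank G with inconsistent data (a U_o space exists).
G = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
d = np.array([0.1, 1.2, 1.9, 3.3])

U, s, Vt = np.linalg.svd(G)
p = len(s)                                    # no zero singular values here
m_g = Vt[:p, :].T @ np.diag(1.0 / s[:p]) @ U[:, :p].T @ d

# The residual has no component in U_p space, and m_g matches ordinary least squares.
r = d - G @ m_g
print(np.allclose(U[:, :p].T @ r, 0.0))
print(np.allclose(m_g, np.linalg.lstsq(G, d, rcond=None)[0]))
```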
27
Also,

U_o^T G m_g = 0

and so there is no component of the predicted data in U_o space. Imagine the total data space spanned by U_o and U_p. The generalized inverse explains all the data that project onto U_p, and the residual (d − G m_g) projects onto U_o. What if there is no U_o space but a V_o space exists? Let's try our m_g solution again and see what happens:

m_g = G_g^{-1} d, so

G m_g = G G_g^{-1} d = U_p Λ_p V_p^T V_p Λ_p^{-1} U_p^T d = U_p U_p^T d = d

(since, with no U_o space, U_p U_p^T = I).
28
So the solution m_g will satisfy d with a model restricted to V_p space. We are free to add any components of V_o space we like to m_g to generate a model m; these will have no effect on the fit to the data. Note, however, that any such addition will mean a larger net change in m. Thus m_g is our minimum-step solution out of all possible solutions (cf. Occam's razor!). Finally, when there are both V_o and U_o spaces, the generalized inverse operator will both minimize the model step and minimize the residual. Pretty good, eh?
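To see the minimum-step property numerically, here is a sketch with an underdetermined G (the specific numbers are arbitrary):

```python
import numpy as np

# Underdetermined: 1 equation, 2 unknowns, so a V_o (null) space exists.
G = np.array([[1.0, 1.0]])
d = np.array([2.0])

U, s, Vt = np.linalg.svd(G)
p = np.sum(s > 1e-10)
m_g = Vt[:p, :].T @ np.diag(1.0 / s[:p]) @ U[:, :p].T @ d    # = [1, 1]

# Adding any V_o component leaves the fit unchanged but lengthens the model.
v_o = Vt[p:, :].T[:, 0]                   # a null-space direction
m_alt = m_g + 3.0 * v_o
print(G @ m_g, G @ m_alt)                 # both predict d exactly
print(np.linalg.norm(m_g), np.linalg.norm(m_alt))   # m_g has the smaller norm
```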
29
Resolution and Error
Let's compare the "real" solution m to our generalized inverse estimate of it, m_g. Recall that

m_g = G_g^{-1} d    and    G m = d

so

m_g = G_g^{-1} G m = V_p Λ_p^{-1} U_p^T U_p Λ_p V_p^T m = V_p V_p^T m

Thus, if there is no V_o space, V_p V_p^T = I and m_g and m are the same. In the event of a V_o space, m_g will be a smoothed version of m. We call the product V_p V_p^T the Model Resolution Matrix.
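A sketch of forming and inspecting the model resolution matrix, reusing the kind of rank-deficient G used in the earlier sketches:

```python
import numpy as np

G = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 3.0, 1.0]])            # rank 2, so a V_o space exists

U, s, Vt = np.linalg.svd(G)
p = np.sum(s > 1e-10 * s[0])
Vp = Vt[:p, :].T

# Model resolution matrix: m_g = R m. Rows that differ from the identity
# mark parameters recovered only as averages of their neighbors.
R = Vp @ Vp.T
print(np.round(R, 3))
print("diagonal (resolution of each parameter):", np.round(np.diag(R), 3))
```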
30
How about in data space? In this case, we compare the actual data d with our predicted data d_g:

m_g = G_g^{-1} d

d_g = G m_g = G G_g^{-1} d = U_p Λ_p V_p^T V_p Λ_p^{-1} U_p^T d = U_p U_p^T d

Thus, in the absence of a U_o space, the prediction matches the observed data. If U_o exists, then d_g is a weighted average of d and a discrepancy exists. In this case, we say that parts of d cannot be fit with the model; only certain combinations of d can be fit.
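And the corresponding data-space check (same illustrative G as above; the product U_p U_p^T is often called the data resolution matrix):

```python
import numpy as np

G = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 3.0, 1.0]])
d = np.array([1.0, 2.0, 4.0])     # inconsistent: row 3 of G = row 1 + row 2, but 4 != 1 + 2

U, s, Vt = np.linalg.svd(G)
p = np.sum(s > 1e-10 * s[0])
Up = U[:, :p]

Nd = Up @ Up.T                    # data resolution matrix
d_g = Nd @ d                      # the only part of d the model can ever fit
print(np.round(d_g, 3), "residual lies in U_o space:", np.round(d - d_g, 3))
```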
31
Finally, we can estimate how data uncertainties translate into model uncertainties. Since

m_g = G_g^{-1} d

we write the covariance of the model estimate as:

<Δm_g Δm_g^T> = G_g^{-1} <Δd Δd^T> G_g^{-T}

If all the uncertainties in d are statistically independent and equal to a constant variance σ_d^2, then

<Δm_g Δm_g^T> = σ_d^2 G_g^{-1} G_g^{-T} = σ_d^2 V_p Λ_p^{-1} U_p^T U_p Λ_p^{-1} V_p^T = σ_d^2 V_p Λ_p^{-2} V_p^T
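A sketch of propagating an assumed data variance through the generalized inverse (same illustrative G; the value of sigma_d is made up):

```python
import numpy as np

G = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 3.0, 1.0]])
sigma_d = 0.1                              # assumed uniform data standard deviation

U, s, Vt = np.linalg.svd(G)
p = np.sum(s > 1e-10 * s[0])
Vp, Lp_inv = Vt[:p, :].T, np.diag(1.0 / s[:p])

# Model covariance: sigma_d^2 V_p Lambda_p^-2 V_p^T.
Cm = sigma_d**2 * Vp @ Lp_inv @ Lp_inv @ Vp.T
print("model standard deviations:", np.round(np.sqrt(np.diag(Cm)), 4))
# Small singular values inflate these uncertainties, which motivates
# truncating them (a cutoff) or damping them.
```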
32
Note from above that

(G^T G)^{-1} = V_p Λ_p^{-2} V_p^T

and so

<Δm_g Δm_g^T> = σ_d^2 (G^T G)^{-1}

Note that the covariance goes up as the singular values get small, so including the associated eigenvectors could be very unstable. There are a couple of ways around this. One is simply not to use them (i.e., enforce a cutoff below which a singular value is defined to be zero). The other is damping. To talk about damping quantitatively we need to do some more statistics.
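As a small illustration of the cutoff option (the nearly singular G, the noise level, and the cutoff value are all arbitrary choices):

```python
import numpy as np

# A nearly singular G: the third column is almost a copy of the second.
G = np.array([[1.0, 2.0, 2.000001],
              [0.0, 1.0, 1.000001],
              [1.0, 0.0, 0.000001]])
d = G @ np.array([1.0, 1.0, 1.0]) + np.array([1e-4, -1e-4, 1e-4])   # slightly noisy data

U, s, Vt = np.linalg.svd(G)

def truncated_solution(cutoff):
    """Generalized-inverse solution keeping only singular values above cutoff."""
    p = np.sum(s > cutoff)
    return Vt[:p, :].T @ np.diag(1.0 / s[:p]) @ U[:, :p].T @ d

print("no cutoff:  ", np.round(truncated_solution(0.0), 2))    # noise amplified by the tiny singular value
print("with cutoff:", np.round(truncated_solution(1e-3), 2))   # stable, close to the true model
```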