Coupling the Imaging & Inversion Tasks:

1 Coupling the Imaging & Inversion Tasks:
…some simple insights into the theory and numerics of the inverse scattering series
Kristopher Innanen†, Bogdan Nita††, Tad Ulrych† and Arthur Weglein††
††University of Houston, †University of British Columbia
M-OSRP Annual Meeting, University of Houston, March 31 – April 1, 2004

2 Acknowledgments
M-OSRP sponsors and members
CDSST (UBC) sponsors and members
Simon Shaw
Ken Matson

3 Motivations
Thus far we have worked on algorithms and interpretations that involve the separation of the tasks of imaging and inversion. However:
1. We remain interested in the workings of the inverse scattering series terms (and others), whether separated or not.
2. Questions of model [in]dependence may involve coupling tasks: e.g. locating R in depth.
3. Patterns in the derivation of the terms of the inverse scattering series imply a connectivity between the mechanisms that accomplish the two tasks.
4. Some scattered questions: e.g. what is the meaning of a truncation of terms prior to numerical convergence?

8 Key references
Imaging & inversion subseries:
Weglein et al. (2002)
Shaw et al. (2003)
Zhang and Weglein (2003)
…and much of this year’s report.
Useful tools from linear inverse theory:
Walker and Ulrych (1983)
Oldenburg et al. (1983)
Hansen (1999) (singular value decomposition)

9 Background and review
A task-separated form of the inverse scattering series was developed using an “integrate by parts” mentality that diagrammatically appears to distinguish between the geometries of scattering interactions, esp.:
• separated vs. self-interaction
• relative geometry.
We will consider a “coupled” version of these terms, but one that maintains the mechanisms of uncoupling very close to its heart.

10 Background and review The original casting, to third order, looks like this: (MOSRP02 and earlier notes)

11 Background and review The original casting, to third order, looks like this: 1st order 2nd order 3rd order

12 Background and review …which has an associated inversion subseries:

13 Background and review …and a leading order imaging subseries:

14 Background and review Leading order imaging subseries:
Analysis indicates that this corresponds to a spatial stretch of α1. It corrects reflector locations, and has no interest in changing the linear amplitudes…
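The “location-correcting, amplitude-preserving” behaviour can be illustrated generically. A hedged sketch (Python with numpy assumed; the cubic f, shift distance d, and weights are illustrative stand-ins, not the deck’s actual LOIS terms): a series of weighted derivative terms implements a pure spatial shift, which is the sense in which derivative-type series terms move structure without rescaling it.

```python
import math

import numpy as np

# Illustrative only: for a cubic f the Taylor series terminates, so
# sum_{n=0}^{3} ((-d)^n / n!) f^(n)(z) equals f(z - d) exactly.
# A series of weighted derivatives thus acts as a pure spatial shift:
# structure moves, amplitudes are untouched.
f = np.poly1d([1.0, -2.0, 0.5, 3.0])   # arbitrary cubic stand-in
d = 0.3                                # illustrative shift distance

shifted = f                            # n = 0 term
for n in range(1, 4):
    shifted = shifted + ((-d) ** n / math.factorial(n)) * f.deriv(n)

z = np.linspace(-2.0, 2.0, 9)
print(np.allclose(shifted(z), f(z - d)))   # True
```

The analogy is loose (the actual subseries terms involve the data, not an abstract f), but it captures why a sum of weighted derivatives reads as “imaging”.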

15 Background and review Inversion subseries:
Has no ability to alter anything except parameter amplitudes: INVERSION (…framework for the study of parameter correction via INTER-EVENT COMMUNICATION)

16 Re-coupling the tasks Many of the terms in the ISS seem to follow patterns – similar operations are carried out repeatedly at varying orders:

17 Re-coupling the tasks So consider the following construction: where

18 Re-coupling the tasks So consider the following construction:
Try expanding it over several orders…

19 Re-coupling the tasks Expand SII over 3 orders:
As a first test, let’s analytically expand the generating function over a few orders, here 1 through 3. First, note that the leading order imaging subseries is reproduced in this expansion. Second, note that something very similar to the inversion subseries is reproduced (which in fact diverges at high order away from the full inversion subseries because of term dropping). In other words, this formula appears to be intimately related to terms that have elsewhere been identified as both imagers and inverters. One might reasonably postulate that if we apply this formula as a processing/inversion algorithm, we will see these tasks being undertaken. So the terms associated with the formula:
(1) Reproduce the leading order imaging subseries (LOIS)
(2) Approximate the inversion subseries

22 Re-coupling the tasks So consider the following construction:
This expression appears to be intimately associated with imaging and inversion. See this by expanding it over several orders…

23 Re-coupling the tasks Some Basic Questions
1. Does it have a closed form?
2. How does it intend to operate upon the data?
3. Can it be stably implemented numerically?
4. What is the impact of its approximate form?

25 Re-coupling the tasks Closed form:
It seems that we may compute this quantity by taking the inverse Fourier transform of a forward, Fourier-like transform of the linear input; the kernel of the forward transform depends on the second integral of the data.

26 Re-coupling the tasks Some Basic Questions
1. Does it have a closed form?
2. How does it intend to operate upon the data?
3. Can it be stably implemented numerically?
4. What is the impact of its approximate form?

27 Re-coupling the tasks
Look more closely at the detail of SII: how does it image and invert simultaneously?
The final thing I want to show is how, in a little more detail, precisely this one formula simultaneously carries out the quite different tasks of imaging and inversion. I’d like to do this from what you might call a signal-processing perspective, that is, to understand it by looking at how it reacts to portions of the input signal. As I’ve said, this is essentially based on the n’th derivative of the n’th power of this object. Let’s look at the object for starters: it is the integral of the Born approximation. What can we say about it?

28 Re-coupling the tasks
Let H(z) be the integral of α1(z) – for a piecewise constant Born approximation, H is piecewise linear…
Again, we can at least make some general statements about it in this 1D normal incidence framework: if we have steplike perturbations making up the model, then the integral of the Born approximation will be piecewise linear. Let’s call this piecewise linear function H. The idea is to figure out what the engine – the n’th derivative of the n’th power – does to this H.

29 Re-coupling the tasks
Start by considering H(z) away from its characteristic discontinuities, i.e. where it is linear… then, what does K_n d^n/dz^n H^n(z) accomplish?
n=1: a
n=2: -½ a²
n=3: ¼ a³
No z-dependence! Away from discontinuities the “engine” K_n d^n/dz^n H^n(z) attempts no structural change to a piecewise constant quantity. REPRODUCES THE INVERSION SUBSERIES.
Let’s start by considering the behaviour of this function away from its discontinuities. Away from these edges it goes as some az+b; what happens if we operate on az+b with an n’th power followed by an n’th derivative? For this low-order polynomial, the two processes almost cancel each other out: the output is a constant, regardless of n. Here, from n=1 through 3, we see that the output is constant and depends on the slope to the n’th power. So the output away from the discontinuities is rather gentle: no z-dependence is produced, and only the amplitude is changed via this constant. In fact you can show that our approximation of the inversion subseries (the weighted exponentiation of the Born amplitude) is re-created.
But it is not too hard to convince yourself that this “gentle” behaviour – weighted transformation of a linear function to a constant function – is particular to input that is well approximated by low-order polynomials, here a linear function. The operator is actually an edge detector, and it reacts strongly when it runs into something more closely resembling a high-order polynomial, and especially a discontinuity. Let’s see what happens then. Design a generic discontinuity that conjoins two linear functions, and put it through the SII engine: take the rooftop function to the n’th power and then take the n’th derivative, for n=1, 2, 3, then 4. What comes out closely resembles numeric derivative operators, whose amplitudes are dictated by the increasing disparity in slope under exponentiation of the rooftop. Near the discontinuities, then, we wind up with a series of weighted derivative operators: rapid change in the input signal activates the SII engine and makes it behave like the imaging subseries. It is a beautiful flexibility of this operator that allows it to invert and image at the same time – it detects how rapidly the input is changing as it encounters it, and behaves accordingly.
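The slide’s little table can be checked directly. A sketch (numpy assumed; the weights K_n = (−1)^(n+1)/(n!·2^(n−1)) are inferred from the listed outputs a, −½a², ¼a³ and should be treated as an assumption):

```python
import math

import numpy as np

# For linear H(z) = a z + b the engine K_n d^n/dz^n H^n(z) collapses to
# a z-independent constant: an amplitude change (inversion), no new
# structure (no imaging). Weights K_n = (-1)**(n+1)/(n! * 2**(n-1)) are
# inferred from the slide's listed outputs and are an assumption here.
a, b = 0.3, 1.0
H = np.poly1d([a, b])                          # H(z) = a z + b

results = {}
for n in (1, 2, 3):
    engine = (H ** n).deriv(n)                 # degree-0: a constant
    K_n = (-1) ** (n + 1) / (math.factorial(n) * 2 ** (n - 1))
    results[n] = K_n * engine.coeffs[0]

print(results)   # reproduces a, -a**2/2, a**3/4
```

Since d^n/dz^n (az+b)^n = n!·a^n, the n’th power and n’th derivative indeed cancel down to a constant for any n.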

31 Re-coupling the tasks
Next consider H(z) near its discontinuities. Focus on a general piecewise linear signal portion.
Numerically the effect is of a weighted set of derivative operators – construction of the discontinuous α1 correction. IMAGING…
Design a generic discontinuity that conjoins two linear functions – a “rooftop” – and put it through the SII engine, taking the n’th power and then the n’th derivative for n=1, 2, 3, 4. What comes out closely resembles numeric derivative operators, whose amplitudes are dictated by the growing disparity in slope under exponentiation of the rooftop. Near the discontinuities we wind up with a series of weighted derivative operators: rapid change in the input activates the SII engine and makes it behave like the imaging subseries. This is the flexibility that lets one operator invert and image at the same time – it detects how rapidly the input is changing as it encounters it, and behaves accordingly.
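The two regimes can be seen numerically. A hedged sketch (numpy assumed; the rooftop slopes, grid, and the choice n=2 are illustrative): away from the kink the engine output is flat, while at the kink it produces a large, localized, derivative-operator-like response.

```python
import numpy as np

# Rooftop H: slope a1 for z < z0, slope a2 after, continuous at z0.
z = np.linspace(0.0, 2.0, 2001)
dz = z[1] - z[0]
z0, a1, a2 = 1.0, 0.2, 0.5
H = np.where(z < z0, a1 * z, a1 * z0 + a2 * (z - z0))

n = 2
out = H ** n
for _ in range(n):
    out = np.gradient(out, dz)       # repeated numeric d/dz

left = out[200]                      # flat plateau: 2 * a1**2
right = out[1800]                    # flat plateau: 2 * a2**2
peak = np.abs(out[950:1050]).max()   # large localized response at kink
print(left, right, peak)
```

The plateaus are the inversion-like amplitude constants; the spike at the kink is the imaging-like “weighted derivative operator” the notes describe.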

33 Re-coupling the tasks So from a signal-processing point of view:
1. The simultaneous imaging and inversion subseries acts as a flexible operator that behaves very differently depending on the input.
2. In slowly-varying regions it acts to change the amplitudes: inversion.
3. At discontinuities it outputs weighted derivatives of the integral of the data: imaging.

34 Re-coupling the tasks Some Basic Questions
1. Does it have a closed form?
2. How does it intend to operate upon the data?
3. Can it be stably implemented numerically?
4. What is the impact of its approximate form?

35 Re-coupling the tasks
To answer this, construct 1D normal incidence wave field data based on some layered Earth models:
Model 1. (basic) Model 2. (structure) Model 3. (contrast) Model 4. (contrast)
We need to choose some 1D models, use them to produce Born approximations as input, compute sufficient terms in this formula (what is sufficient? ANOTHER QUESTION!) and see what happens. It is reasonable to postulate that contrast might be an interesting factor, so let’s choose four: one basic, simple, low-contrast model; one low-contrast model with a bit more structure; and two more with slightly higher contrasts. Some numbers: in the first example the jumps are from 1500–1600 m/s, and in the last the jumps are from 1500–2200 m/s. We can create synthetic data from these, and essentially integrate them to produce a set of Born approximations. Of course, in any real example of this 1D normal incidence kind, one is faced with the estimation of a high-fidelity Born approximation; the problem in this configuration is precisely that of bandlimited impedance inversion, e.g. Walker and Ulrych (1983). In blue are the Born approximations and in red the true models. Clearly the Born approximation is, as we might expect, inaccurate in its location of the reflectors and in the amplitudes of the estimated acoustic parameter.
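For orientation, the quoted end-member contrasts translate into the following reflection strengths. A back-of-envelope sketch (the perturbation definition α = 1 − c0²/c² is the usual 1D normal-incidence acoustic parameterization and is stated here as an assumption, not taken from the slides):

```python
# Normal-incidence reflection coefficient and exact perturbation for a
# single interface c0 -> c1, using the velocity jumps quoted in the
# notes (illustrative; alpha = 1 - c0^2/c^2 is an assumed choice).
def contrast(c0, c1):
    R = (c1 - c0) / (c1 + c0)          # reflection coefficient
    alpha = 1.0 - (c0 / c1) ** 2       # assumed perturbation definition
    return R, alpha

R_low, a_low = contrast(1500.0, 1600.0)    # "basic" end member
R_high, a_high = contrast(1500.0, 2200.0)  # "contrast" end member
print(R_low, a_low)     # R ~ 0.032
print(R_high, a_high)   # R ~ 0.189
```

This makes the point of the model suite concrete: the last model’s contrast is roughly six times the first’s, which is where truncation and convergence effects should show.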

36 Re-coupling the tasks …which produce associated full-bandwidth data:
Data 1. (basic) Data 2. (structure) Data 3. (contrast) Data 4. (contrast)

37 Re-coupling the tasks …finally from which we construct the linear inverse α1:
Linear 1. (basic) Linear 2. (structure) Linear 3. (contrast) Linear 4. (contrast)

38 Re-coupling the tasks
Linear 1. (basic) Linear 2. (structure) Linear 3. (contrast) Linear 4. (contrast)
We also know from the series setup that the higher terms will construct something that is added to the Born approximation, and in so adding will create the true perturbation seen in red. In other words, we know what something that simultaneously images and inverts must do: it must create a discontinuous signal – in this case a set of boxes, or sum of Heavisides, of very specific amplitudes and with very specific step locations. Clearly, if SII is going to image and invert, in this situation it must construct a sequence of piecewise constant functions such that, when added to the linear inverse, the true perturbation is recovered.
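What the higher terms must build can be made concrete. A sketch (numpy assumed; depths and amplitudes are invented for illustration): the required correction, true perturbation minus linear inverse, is itself a sum of weighted Heaviside steps that relocate a reflector and repair an amplitude.

```python
import numpy as np

z = np.arange(0.0, 1000.0)

def step(z0):
    return np.heaviside(z - z0, 1.0)

# "true" perturbation: a box with the correct depth and amplitude
alpha_true = 0.12 * (step(300) - step(520))
# linear (Born) estimate: mislocated bottom, under-estimated amplitude
alpha_lin = 0.10 * (step(300) - step(500))

correction = alpha_true - alpha_lin
# piecewise constant: jumps only at 300 (+0.02), 500 (+0.10), 520 (-0.12)
jumps = sorted({float(j) for j in np.round(np.diff(correction), 4)})
print(jumps)   # [-0.12, 0.0, 0.02, 0.1]
```

The correction is again a sum of boxes/Heavisides, exactly the “very specific amplitudes and very specific step locations” the notes demand of SII.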

39 Re-coupling the tasks Brute implementation…
Truncation of SII after third order…

40 Re-coupling the tasks Brute implementation…
Truncation of SII after third order… unstable!

41 Re-coupling the tasks It seems natural to attribute this to the high-wavenumber portions of the derivative operators. We appeal to TSVD-type operator regularization: d/dz d2/dz2 d3/dz3 d4/dz4
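In 1D, one way to realize a TSVD-type regularization of these operators is to apply d^n/dz^n spectrally, as multiplication by (ik)^n, but zero the operator beyond a cutoff wavenumber. A minimal sketch, with an illustrative cutoff fraction that is our choice rather than a value from the talk:

```python
import numpy as np

# Sketch: regularized spectral derivative. The n-th derivative is
# multiplication by (ik)^n in the wavenumber domain, which amplifies
# high k violently; the TSVD-like fix is to truncate the operator
# above a cutoff (k_frac * k_max is an illustrative rule).
def regularized_derivative(f, dz, n, k_frac=0.3):
    """n-th d/dz of f via FFT, with wavenumbers above the cutoff zeroed."""
    N = len(f)
    k = 2.0 * np.pi * np.fft.fftfreq(N, d=dz)
    op = (1j * k) ** n
    op[np.abs(k) > k_frac * np.abs(k).max()] = 0.0   # drop the unstable band
    return np.real(np.fft.ifft(op * np.fft.fft(f)))

# A smooth function inside the kept band is differentiated accurately.
z = np.linspace(0, 2 * np.pi, 256, endpoint=False)
df = regularized_derivative(np.sin(z), z[1] - z[0], n=1)
```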

42 Re-coupling the tasks Then for one of the examples, compute ~70 terms:

43 Re-coupling the tasks Some Basic Questions
1. Does it have a closed form? 2. How does it intend to operate upon the data? 3. Can it be stably implemented numerically? 4. What is the impact of its approximate form?

44 Re-coupling the tasks The accuracy depends on the contrast (leading order imaging & partial inversion): So far we’ve seen a formula that analytically produces many of the terms associated with the inverse scattering series tasks of imaging and inversion; by comparing it to simple Taylor’s series approximations, we have been able to predict or explain the numerical behaviour of its implementation, and we have seen it work in these synthetic examples. Let’s look at reconstructions for the other models. In the case of model 2, which had a bit of extra structure, the reconstruction has worked very nicely. In the higher contrast cases we start seeing some interesting behaviour. First off, in both high contrast cases, computation of enough terms to fill in the model out to the higher frequencies was not possible – the solution diverged. So I settled for lower order constructions, hence these are a little more bandlimited. But they still delineate the desired structures very well. Even more interesting, at the higher contrast we start to see location inaccuracies, particularly in model 4. Analysis of the approximations made in creating the formula for SII suggests that the approximations should start disagreeing with reality at large reflection coefficients. This seems to be substantiated by these results. Still, in comparison with the Born approximation these models are very good.

45 Re-coupling the tasks The accuracy depends on the contrast (leading order imaging & partial inversion):

46 Re-coupling the tasks Basic Questions 1. Does it have a closed form?
2. How does it intend to operate upon the data? 3. Can it be stably implemented numerically? 4. What is the impact of its approximate form? This subseries may be manipulated theoretically and numerically to produce stable output, with accuracy depending on contrast. The imaging and inversion are both carried out with a single flexible operator that knows when to image, when to invert.

47 The relationship between resolution and truncation order
We have seen the “mess” prior to convergence of this subseries. Is there interpretable behavior here? Especially: is z the best domain to observe this?

48 Resolution and truncation order
The numerical behaviour of the series, and how it converges, can be qualitatively predicted/described. Some things we know: (1) we will be constructing a discontinuous function, i.e., in the k-domain it will have elements resembling ~F(k)e^{ikC}; (2) we will be doing it via a series of derivatives, i.e., ~ … + (ik)^n G(k) + … Before we even start computing we can make some basic predictions about how the series is going to behave as it constructs these things. Let’s cover two things we know for sure: first, as I mentioned, we know we need to construct a discontinuous function – in this case it will be a sum of Heaviside functions, but let’s write it in the Fourier domain as something a little more general, as something with elements in it of the form F(k) exp(ikC), in other words an exponential with a fixed C over a frequency interval, or a set of k values. So we know we are building something like this. The second thing we know is that we are going to be doing it with a series in increasing orders of derivatives of something. Which means that in the frequency domain it will have a form like this, where each term is some, possibly complicated, function of k multiplied by the n’th derivative operator, (ik)^n.
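These two facts can be combined in a toy calculation: building the k-domain factor e^{ikC} from a truncated series in powers of (ikC), which is what a series of weighted derivative terms amounts to for a single step at depth C. The values C = 1 and order 8 are illustrative choices, and the point is the pattern: convergence arrives at low k first.

```python
import math
import numpy as np

# Sketch: the k-domain factor e^{ikC} of a step at depth C, built from a
# truncated series in powers of (ikC), mirroring the series-of-derivatives
# form above. Low k converges at low order; high k does not.
def partial_exp(k, C, order):
    """Sum_{m=0}^{order} (i k C)^m / m!"""
    return sum((1j * k * C) ** m / math.factorial(m) for m in range(order + 1))

k = np.linspace(0.0, 10.0, 11)
approx = partial_exp(k, C=1.0, order=8)
exact = np.exp(1j * k * 1.0)
```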

49 Resolution and truncation order
In other words, this is not far from the construction of an exponential function from an infinite series of polynomials: n=1 Using a series to approximate an exponential, in which each term is an increasing power of the desired argument, probably sounds familiar – there may be a lot of similarity between the behaviour of this SII formula and a Taylor’s series approximation of an exponential function. This isn’t too surprising, since the scattering series can be viewed as a generalized Taylor’s series. Well, what do you see when you do a Taylor’s series expansion of exp(-x) over a range of x? Let’s have a look… Over n=1 through n=18 for this example, we can see the desired function being sketched out. At any particular truncation of the series, there are roughly two parts of the approximation: a region in which the approximation has largely converged, and which stops changing with increased terms, and a region where the approximation diverges very rapidly, at something like the rate of x^n where n is your cutoff order. This is a basic aspect of the series approximation of exponentials – the series is convergent for any value of x, but the convergence rate has a non-negligible effect on the approximation over a prescribed interval. We might well expect to see behaviour of this sort in the SII formula, given their similarities…

50 Resolution and truncation order
In other words, this is not far from the construction of an exponential function from an infinite series of polynomials: n=1 n=6

51 Resolution and truncation order
In other words, this is not far from the construction of an exponential function from an infinite series of polynomials: n=1 n=6 n=13

52 Resolution and truncation order
In other words, this is not far from the construction of an exponential function from an infinite series of polynomials: n=1 n=6 n=13 n=18 …and we can reasonably expect it to act in a similar way, but with weighted powers of (-ik) rather than x.
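The n = 1, 6, 13, 18 frames above can be reproduced in a few lines; both the converged region near the origin and the x^n-like blow-up beyond it are visible directly in the numbers.

```python
import math

# Sketch: partial sums of the Taylor series for exp(-x), truncated at
# order n, as in the n = 1, 6, 13, 18 frames above.
def taylor_exp_neg(x, n):
    """Sum_{m=0}^{n} (-x)^m / m!"""
    return sum((-x) ** m / math.factorial(m) for m in range(n + 1))
```

At a fixed truncation the approximation is excellent near the origin and wildly wrong far from it, even though the full series converges everywhere.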

53 Resolution and truncation order
Let us see by applying SII to Model 1. In the Fourier domain the true perturbation (z) looks like: real Let’s see if that’s true. Here in black, in the k-domain, are the real and imaginary parts of the true model, the thing we are trying to construct with SII. Obviously it is a little more complicated than e^-x. Then let us overlay upon the desired model, in blue, the output of the SII formula for a chosen maximum number of terms – approximation order, in other words. Then let’s gradually increase the number of terms used and watch what happens… imag.

54 Resolution and truncation order
Superimposing SII(z) on top and evaluating the series at increasing maximum order… real order 2 imag.

55 Resolution and truncation order
Superimposing SII(z) on top and evaluating the series at increasing maximum order… real order 6 imag.

56 Resolution and truncation order
Superimposing SII(z) on top and evaluating the series at increasing maximum order… real order 10 imag.

57 Resolution and truncation order
Superimposing SII(z) on top and evaluating the series at increasing maximum order… real order 14 imag.

58 Resolution and truncation order
Superimposing SII(z) on top and evaluating the series at increasing maximum order… real order 18 imag.

59 Resolution and truncation order
Superimposing SII(z) on top and evaluating the series at increasing maximum order… real order 22 imag.

60 Resolution and truncation order
Superimposing SII(z) on top and evaluating the series at increasing maximum order… real order 26 imag.

61 Resolution and truncation order
Superimposing SII(z) on top and evaluating the series at increasing maximum order… real order 30 imag.

62 Resolution and truncation order
Superimposing SII(z) on top and evaluating the series at increasing maximum order… real order 34 imag.

63 Intermediate comments
The inverse FT of these truncated approximations will be dominated by the non-convergent (high wavenumber) portions of the spectrum; that mess in the z domain is not particularly informative. Instead, after each term has been added, suppress those parts of the spectrum that “take off”. Then IFT. Then we are looking at an informative evolution of the constructed model with truncation order… So, have we seen what we are expecting to see? Yes, pretty much – the SII formula is behaving like the exponential approximation: at each order, there are two regions in the approximation, one from k=0 up to some kn at which the approximation is true and largely unchanging, and the other beyond kn that diverges very rapidly. Remembering that we are looking at this construction in the frequency domain, this allows us to make some simple but quite useful observations: first and foremost, since the approximation is fixed and unchanging in the k=0 to k_n interval, and this interval increases with the approximation order, we may observe that a LOW ORDER MODEL CONSTRUCTION IS A LOW FREQUENCY APPROXIMATION. Adding orders adds high frequency information. At some order we reach the point where the approximation has converged for all frequencies up to the Nyquist, and nothing further occurs. Second, at any order before this, a region of very rapid divergence of the construction in the frequency domain occurs. Naïve inverse Fourier transforming of this approximation will give you a complete mess. Specifically it is going to look as if you applied a very high order derivative operator to your model – not pretty. So at many lower approximation orders one needs to be aware of this additional frequency interval that can lead to instability.
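The suppression step described above can be sketched on a toy spectrum. The amplitude-threshold rule for detecting where the series has “taken off” is a hypothetical stand-in for whatever criterion one actually uses; the box-function model and the contamination level are likewise illustrative.

```python
import numpy as np

# Sketch: zero the part of a spectrum that has "taken off", then
# inverse-FT, giving a bounded, band-limited reconstruction instead of
# a derivative-operator-like mess.
def stabilize_and_ift(spectrum, thresh):
    spec = spectrum.copy()
    spec[np.abs(spec) > thresh] = 0.0       # drop the non-convergent band
    return np.real(np.fft.ifft(spec))

# Toy spectrum: the transform of a box, with the high-wavenumber half
# (the middle of the FFT array) contaminated by a diverged contribution.
N = 64
true_spec = np.fft.fft(np.r_[np.ones(16), np.zeros(48)])
diverged = true_spec.copy()
diverged[N // 4: 3 * N // 4] += 1e6
clean = stabilize_and_ift(diverged, thresh=100.0)
```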

64 Resolution and truncation order
The unstable portion of the spectrum k > kn at any n is not useful information. Consider the construction of SII(z) in the z-domain, truncating the unstable wavenumbers… order 5 Obviously this divergent unstable portion of the model construction is not useful information. Let’s truncate that, and have a look in the space domain, at the true order by order model construction. The diagrams have three parts, first…

65 Resolution and truncation order
The unstable portion of the spectrum k > kn at any n is not useful information. Consider the construction of SII(z) in the z-domain, truncating the unstable wavenumbers… order 7

66 Resolution and truncation order
The unstable portion of the spectrum k > kn at any n is not useful information. Consider the construction of SII(z) in the z-domain, truncating the unstable wavenumbers… order 9

67 Resolution and truncation order
The unstable portion of the spectrum k > kn at any n is not useful information. Consider the construction of SII(z) in the z-domain, truncating the unstable wavenumbers… order 11

68 Resolution and truncation order
The unstable portion of the spectrum k > kn at any n is not useful information. Consider the construction of SII(z) in the z-domain, truncating the unstable wavenumbers… order 13

69 Resolution and truncation order
The unstable portion of the spectrum k > kn at any n is not useful information. Consider the construction of SII(z) in the z-domain, truncating the unstable wavenumbers… order 15

70 Resolution and truncation order
The unstable portion of the spectrum k > kn at any n is not useful information. Consider the construction of SII(z) in the z-domain, truncating the unstable wavenumbers… order 17

71 Resolution and truncation order
The unstable portion of the spectrum k > kn at any n is not useful information. Consider the construction of SII(z) in the z-domain, truncating the unstable wavenumbers… order 19

72 Resolution and truncation order
The unstable portion of the spectrum k > kn at any n is not useful information. Consider the construction of SII(z) in the z-domain, truncating the unstable wavenumbers… order 21

73 Resolution and truncation order
The unstable portion of the spectrum k > kn at any n is not useful information. Consider the construction of SII(z) in the z-domain, truncating the unstable wavenumbers… order 23

74 Resolution and truncation order
The unstable portion of the spectrum k > kn at any n is not useful information. Consider the construction of SII(z) in the z-domain, truncating the unstable wavenumbers… order 25

75 Resolution and truncation order
The unstable portion of the spectrum k > kn at any n is not useful information. Consider the construction of SII(z) in the z-domain, truncating the unstable wavenumbers… order 27

76 Resolution and truncation order
The unstable portion of the spectrum k > kn at any n is not useful information. Consider the construction of SII(z) in the z-domain, truncating the unstable wavenumbers… order 31

77 Resolution and truncation order
The unstable portion of the spectrum k > kn at any n is not useful information. Consider the construction of SII(z) in the z-domain, truncating the unstable wavenumbers… order 33

78 Resolution and truncation order
The unstable portion of the spectrum k > kn at any n is not useful information. Consider the construction of SII(z) in the z-domain, truncating the unstable wavenumbers… order 41

79 Resolution and truncation order
The unstable portion of the spectrum k > kn at any n is not useful information. Consider the construction of SII(z) in the z-domain, truncating the unstable wavenumbers… order 47

80 Resolution and truncation order
The unstable portion of the spectrum k > kn at any n is not useful information. Consider the construction of SII(z) in the z-domain, truncating the unstable wavenumbers… order 51

81 Resolution and truncation order
The unstable portion of the spectrum k > kn at any n is not useful information. Consider the construction of SII(z) in the z-domain, truncating the unstable wavenumbers… order 57

82 Resolution and truncation order
The unstable portion of the spectrum k > kn at any n is not useful information. Consider the construction of SII(z) in the z-domain, truncating the unstable wavenumbers… order 61

83 Resolution and truncation order
The unstable portion of the spectrum k > kn at any n is not useful information. Consider the construction of SII(z) in the z-domain, truncating the unstable wavenumbers… order 67 OK, so this illustrates the low-to-high frequency reconstruction as progressively more series terms are used.

84 Another interim summary
(1) There is a direct relationship between the resolution of the reconstructed perturbation SII(z) and the order at which the subseries is truncated in a numerical implementation. (2) To wit: a low-order truncation is a low-resolution estimate. (3) To the extent that this is characteristic of the inverse scattering series for imaging/inversion at higher dimensions and for more complex geometry, this could suggest a crucial trade-off between desired resolution and computational effort. (4) It suggests a means by which to interpret the output of current low-order testing.

85 Incoherent noise The presence of derivative operators suggests sensitivity of SII – and by extension imaging – to random noise. However, we have also seen: 1. A stabilization of the derivatives via TSVD-like regularization is already in place. 2. Low-order truncation will be less susceptible: converges in fewer terms, involves lower kz. Consider the exact reconstruction of a previous model in the presence of Gaussian noise at ~1% maximum data amplitude.
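The noise-sensitivity point can be demonstrated with a minimal experiment: a raw spectral derivative amplifies white noise by |k|, so even ~1% noise swamps d/dz, while a truncated (TSVD-like) operator keeps the amplification bounded. The cutoff |k| <= 10 and the test signal are illustrative choices, not values from the talk.

```python
import numpy as np

# Sketch: derivative of a ~1%-noisy signal, with and without
# truncation of the high-wavenumber part of the operator.
rng = np.random.default_rng(0)
N = 512
z = np.linspace(0, 2 * np.pi, N, endpoint=False)
sig = np.sin(z) + 0.01 * rng.standard_normal(N)    # ~1% incoherent noise

k = 2 * np.pi * np.fft.fftfreq(N, d=z[1] - z[0])
raw = np.real(np.fft.ifft(1j * k * np.fft.fft(sig)))     # plain d/dz
op = np.where(np.abs(k) <= 10.0, 1j * k, 0.0)            # truncated operator
reg = np.real(np.fft.ifft(op * np.fft.fft(sig)))         # regularized d/dz

err_raw = np.max(np.abs(raw - np.cos(z)))   # error vs the true derivative
err_reg = np.max(np.abs(reg - np.cos(z)))
```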

86 Some practical questions
Operators like the one we are looking at are notoriously sensitive to incoherent noise. True?

89 Summary (1) Many of the terms in the current imaging/inversion task-separated milieu may be captured with a simple, computable subseries (in 1D, normal incidence). (2) A closed form may be derived; analysis of its term-by-term behaviour suggests that it accomplishes imaging and inversion simultaneously, through a flexible operator that is sensitive to the “regularity” of the input. (3) Numerically, a TSVD-like stabilization of the derivative operators is necessary to produce interpretable results. When used, the amplitudes and locations of acoustic contrasts are corrected, with some residual location error at high contrast. (4) Convergence is wavenumber-dependent: low-order truncation of the computation yields a low-resolution reconstruction. A resolution/computational-efficiency trade-off is indicated.
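The TSVD-like stabilization in point (3) can be illustrated in isolation. The slides don’t give the actual operator or cutoff; the grid, test profile, noise level, and kept fraction below are invented for illustration. For a forward-difference first-derivative matrix, the largest singular values pair with the most oscillatory singular vectors, so truncating them damps exactly the components where incoherent noise is amplified:

```python
import numpy as np

def fd_derivative(n, dz):
    """Forward-difference first-derivative operator, mapping n samples to n-1."""
    D = np.zeros((n - 1, n))
    idx = np.arange(n - 1)
    D[idx, idx] = -1.0 / dz
    D[idx, idx + 1] = 1.0 / dz
    return D

def tsvd_derivative(D, f, frac=0.25):
    """Differentiate f with a truncated-SVD version of D.

    For this bidiagonal D, the largest singular values belong to the most
    oscillatory singular vectors, which is where additive noise is amplified
    most; keeping only the smallest `frac` of the singular spectrum acts as
    a crude low-pass regularizer.
    """
    U, s, Vt = np.linalg.svd(D, full_matrices=False)
    k = int(np.ceil(frac * len(s)))
    keep = slice(len(s) - k, len(s))   # s is sorted in descending order
    return U[:, keep] @ (s[keep] * (Vt[keep] @ f))

# smooth test profile plus weak incoherent noise
n = 200
z = np.linspace(0.0, 2 * np.pi, n)
dz = z[1] - z[0]
rng = np.random.default_rng(0)
f = np.sin(z) + 0.01 * rng.standard_normal(n)

D = fd_derivative(n, dz)
raw = D @ f                  # bare derivative: noise blown up by ~1/dz
reg = tsvd_derivative(D, f)  # TSVD-stabilized derivative
```

Comparing both against the noise-free derivative at the midpoints shows the truncated operator suppressing the amplified noise at the cost of resolution — the same stability/resolution trade-off points (3) and (4) describe.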

90 Summary (5) The subseries is numerically sensitive to noise, but the reconstruction error remains within the same order of magnitude as the data error (depending on the TSVD cutoff). (6) A method of spectral extrapolation to handle missing low-frequency information is suggested, and is seen to work numerically on (not too challenging) input examples. Future: extension of these quantities and methods to offset/multi-D situations. Look to these lessons when utilizing the imaging and inversion subseries individually…
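The slides don’t specify the spectral-extrapolation method in point (6); one standard way to fill a missing low-frequency band is backward linear prediction (autoregressive extrapolation). The function name, model order, and two-exponential test spectrum below are invented for illustration — a sketch of the idea, not the authors’ algorithm:

```python
import numpy as np

def lp_fill_low(spec_known, n_missing, order):
    """Fill the lowest n_missing spectrum bins by backward linear prediction.

    Fits a backward predictor x[m] ~ sum_j a_j * x[m + j] on the known band
    by least squares, then recurses from the lowest known bin toward bin 0.
    """
    x = np.asarray(spec_known)
    p = order
    rows = len(x) - p
    # training system: one row per window of the known band
    A = np.column_stack([x[j:rows + j] for j in range(1, p + 1)])
    b = x[:rows]
    a, *_ = np.linalg.lstsq(A, b, rcond=None)
    full = np.concatenate([np.zeros(n_missing, complex), x])
    for m in range(n_missing - 1, -1, -1):
        full[m] = full[m + 1:m + 1 + p] @ a
    return full

# synthetic "spectrum": two complex exponentials in the frequency index --
# exactly the kind of signal linear prediction can extend without error
k = np.arange(128)
S = np.exp(2j * np.pi * 0.03 * k) + 0.5 * np.exp(2j * np.pi * 0.11 * k)
n_missing = 10                                  # pretend bins 0..9 are unrecorded
S_rec = lp_fill_low(S[n_missing:], n_missing, order=4)
```

On richer (real-data) spectra the prediction is only approximate, consistent with the slide’s caveat that the examples were “not too challenging”.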

