
Presentation transcript:

I was there when it all happened, a long time ago. Of course I was on the opposite side commercially.

Time series limitations - Frequency domain methods are currently accepted as seismic gospel. I point out some conceptual problems when it comes to inversion.

The ADAPS approach - There are vast differences in non-linear inversion approaches. ADAPS uses general optimization logic, analyzing large chunks of raw data in an iterative manner.

Linear vs. non-linear inversion - Why linear systems stop short of true spiking, how vital information is distributed in the frequency domain, and how non-linear processes can redistribute error.

The glory of sonic log simulation - The need for intelligent detuning, the purpose of integration, and the fallacy of power spectra after inversion.

Matching to well logs verifies the inversion process - Sonic log images consist of integrated reflection coefficients, and that is all we are interested in. If you can get me the pre-stack gathers I will give you a PowerPoint report.

The birth of the frequency domain - Intelligent inversion opens exciting new interpretation possibilities.

The ADAPS offer - Untangling seismic tuning is the challenge we should focus on - just collapsing the wavelet is not enough!

Click on an icon to enter each topic.

Back before the digital era, seismic filtering was analog and clumsy. Then an industry consortium came up with the time series mathematical concept that modeled time data by describing it in terms of frequencies, calling the required code a "transform". Once there, they found it possible to return to the original form with manageable error. This allowed filter design to be done in the frequency domain, where it was much easier. The industry fell in love with this beautiful concept, and most researchers today seem to assume a seamless mathematical connection between the two extremes.

When all this was happening, I had hired on with Western Geo. as manager of digital operations, with the responsibility of helping evaluate a time series package bought from the MIT team that was the heart of the mentioned consortium. For the life of me I could not see it doing any good. When the Western R&D boss proudly showed me a section where the deep half had been drastically changed, I was able to point out that the lower part was an exact duplicate of the top. Later, when I examined filters the package had designed, I found all the action always concentrated at the very front, which did not fit my idea of a system that could effectively remove side lobes.

So I wrote the first predictive deconvolution in the time domain. It worked well enough to put Western first in digital processing, and they used it for years. I spent the rest of my Western time defending it against the time series researchers. It was a non-linear pioneer, iterating to improve the down wave guess, and the ancestor of my current ADAPS system. After two successful years I left Western as an employee, and they hired one of the MIT experts to pick my brain about my time-domain iterative program. I had been giving talks to geophysicists around the country with no communication problems, yet the two of us did not seem to connect.
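For readers who have never seen one, the predictive deconvolution idea can be sketched in a few lines. This is a generic, textbook-style gapped prediction filter, not Western's or ADAPS's actual code; the function name and parameter choices are my own illustration.

```python
import numpy as np

def predictive_decon(trace, filt_len=20, gap=1, prewhiten=1e-3):
    """Generic time-domain predictive deconvolution sketch.

    A prediction filter is designed from the trace's own autocorrelation
    (Wiener normal equations). The predictable part of the trace, i.e. the
    reverberatory tail, is then subtracted, leaving the prediction error,
    which ideally approximates the reflectivity.
    """
    trace = np.asarray(trace, dtype=float)
    n = len(trace)
    # one-sided autocorrelation out to the largest lag the design needs
    r = np.array([np.dot(trace[:n - k], trace[k:])
                  for k in range(filt_len + gap + 1)])
    r[0] *= 1.0 + prewhiten                    # pre-whitening stabilizes the solve
    R = np.array([[r[abs(i - j)] for j in range(filt_len)]
                  for i in range(filt_len)])   # Toeplitz autocorrelation matrix
    g = r[gap:gap + filt_len]                  # desired output: trace 'gap' samples ahead
    f = np.linalg.solve(R, g)                  # the prediction filter
    out = trace.copy()
    for t in range(n):                         # subtract the delayed prediction
        for j in range(filt_len):
            if t - gap - j >= 0:
                out[t] -= f[j] * trace[t - gap - j]
    return out
```

On a synthetic trace built from a decaying train of multiples, the output collapses back toward a single spike. Note this is only a linear, single-pass version; the iterative improvement of the down wave guess described above is the non-linear part.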
It was not until the mathematical guru was able to put a few formulae on the board that progress was made. It was then I realized how different our thinking processes were. It was not just that I did not fully understand him; he could not think in my logical terms either. This is our real communication problem.

The challenge for most readers will be to back off the "top down" mathematical certainty inherent in other approaches. A few years back I made a presentation to a group of BP researchers. They seemed impressed with the results but wanted mathematical proof of the procedures I had used. That stopped the show, and it illustrates the chasm between the mathematical and the logical lines of thought. Obviously, at least to me, iterative optimization cannot be proven by closed formulae. Taking this thought farther, if current time series systems really produced the promised results, there would be no need for creative, non-linear thinking. In other words, I start with the visible truth in the form of remarkable well matches. So the question becomes one of finding where the time series thinking falls short.

Transforms are mathematical modeling devices. Weather forecasters are proud of their models, but we all know how flaky they can be. At the same time we understand there are variables beyond the predictive capabilities of their modeling that can dominate the statistics. The same thing is true moving back and forth between time and frequency. Transforms assume only one wave shape.

For years the industry has swept the noise problem under the rug. About the only type it paid attention to was multiples. When I started serious pre-stack development at Ikon, I had to get even more obnoxious in my demands for gathers. Of the dozens I worked with, almost all had serious noise that needed attention. When you consider how most systems blindly work with stacked data, you wonder how many results you can trust. I have often said I would die of a broken neck because I shake my head so much.

The picture to the left is from my PowerPoint. It shows a 200+ gather set. The conflagration to our immediate left is caused by the emergence of what appears to be a standing wave coming from the chalk formation.
It is probably happening all over the North Sea, yet the entire industry seems to be ignoring it. The energy permeates the entire vertical section. I named the show AVOanyone for a good reason: when we came on the Nexen scene, all we heard about was "angle stacks". The troubling thing is that that is where the conversation still is. But I deviate from the point. There may be a higher set of mathematical logic that can tie all of this together, but we are not there yet. For now, being able to drive individual components using non-linear means makes sense to me. To , click on the bomb.

Before I describe how ADAPS works, I show some proof: comparing an input stack with the ADAPS simulated sonic log output. For starters, open your mind to this display protocol. It keeps the well match in perspective and shows the true amplitude relationships where they are meaningful. This is a before and after that illustrates the absolute need for intelligent detuning. Once we know the probable stratigraphy, it is easy to go back and see hints in the stacked input.

ADAPS has completely re-arranged the input energy, making sense out of the tangle. Now we can see an obvious on-lap, and we probably also know where the well should have been drilled. There is probably faulting involved, and we have not quite clarified that on this one simulated sonic log section (a 3-D study gave us a pretty good idea, however). Of course the well match tells us that ADAPS knew what it was doing when it removed all those extra lobes. This is seismic detuning at its max. The emphasis of ADAPS is on exploration, including new reservoir detection and reservoir extension.

Please spend some time on the well matches, before and after. Note that leftward nodes should equate to red, and rightward ones to blue. On the "before" match you will see this is almost never the case, whereas below we have a fine level of agreement. If you do not understand the importance of this, we have failed to communicate. The well images used are integrated reflection coefficients, since relative bed velocity is the only attribute of interest to us in this all-important match.

So - ADAPS is an optimizing tool. It works in the time domain, and its "advanced pattern recognition" is the primary driving mechanism. It uses no well input. It makes process error manageable by shifting it into the statistical world, enabling true "spiking" and subsequent sonic log simulation. Resolution takes on a new face. Optimization is a recognized means of solving complex problems.
After pre-stack cleanup, ADAPS collects a defined matrix of input traces and runs a preliminary lobe analysis to get a first wavelet guess. It then uses this guess to establish a set of reflection coefficient guesses, or spikes. Each time a spike is established, the system lifts off the "energy" used, the goal being to explain the entire trace. At the end of the trace loop, these spikes are used to improve the wavelet guess. A complex set of overlapping iterations continues until the job is done on this trace. The system then moves on to the next trace until the defined set is finished.

ADAPS does not trust industry techniques of establishing synthetic traces. Further, because it is tied to what it sees statistically, it has no way of using well log input. Of course the results prove the wisdom here. Frequency domain approaches have to limit their spiking goals; as I try to show next, they become unstable if pushed too far. The same error faces ADAPS, but it can make the best guess statistically possible without going wild. This enables it to integrate those guesses, producing a simulated log. Error is now spread in a manageable way. ADAPS seismic resolution depends on pattern recognition accuracy, not on artificial frequency assumptions. We are after a spread of thicknesses, of course, so evaluating on the basis of power spectra is somewhat specious. Again, the beauty of our well matches is our justification.
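As a rough illustration of the loop just described, here is a matching-pursuit-style sketch: correlate a wavelet guess against the residual, place a spike at the best match, lift that energy off, repeat, and finally re-estimate the wavelet from the spikes by least squares. The function names, the greedy correlation picker, and the least-squares wavelet update are my own assumptions for illustration; the real ADAPS logic is proprietary and far more elaborate.

```python
import numpy as np

def invert_trace(trace, wavelet, n_spikes=50, min_amp=1e-3):
    """Greedy spike extraction with an odd-length wavelet guess:
    find the best wavelet match in the residual, record a spike,
    subtract (lift off) that energy, and repeat."""
    residual = np.asarray(trace, dtype=float).copy()
    spikes = np.zeros_like(residual)
    w = np.asarray(wavelet, dtype=float)
    w = w / np.linalg.norm(w)                 # unit-energy wavelet guess
    half = len(w) // 2
    for _ in range(n_spikes):
        corr = np.correlate(residual, w, mode="same")
        t = int(np.argmax(np.abs(corr)))      # best-matching position
        amp = corr[t]
        if abs(amp) < min_amp:                # nothing left worth explaining
            break
        spikes[t] += amp
        lo, hi = max(0, t - half), min(len(residual), t - half + len(w))
        residual[lo:hi] -= amp * w[lo - (t - half):hi - (t - half)]
    return spikes, residual

def refine_wavelet(trace, spikes, wav_len):
    """Improve the wavelet guess by least squares given the current spikes:
    solve trace ~= spikes convolved (centred) with wavelet, for the wavelet."""
    n = len(trace)
    half = wav_len // 2
    C = np.zeros((n, wav_len))
    for j in range(wav_len):                  # column j = spikes shifted by j - half
        shift = j - half
        col = np.roll(np.asarray(spikes, dtype=float), shift)
        if shift > 0:
            col[:shift] = 0.0
        elif shift < 0:
            col[shift:] = 0.0
        C[:, j] = col
    w, *_ = np.linalg.lstsq(C, np.asarray(trace, dtype=float), rcond=None)
    return w
```

In the system described above the two steps would alternate in overlapping iterations until the trace set is explained; here a single pass of each is enough to show the mechanics of "lifting off the energy".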

"Shape" is the thing - The goal of any inversion is to replace the individual reflected events with approximations of the contributing reflection coefficients. To do this we must start with the shape of the down wave as it passed through this complex subsurface combination. ADAPS determines this shape by statistical averaging, all in the time domain. "What we can see is what we get" could be our motto. See the previous slide.

Most other current approaches model the shape by measuring frequency attributes. Their assumption is that wave shape can be accurately determined using the averaged content of the target trace group. The idea is that the final computed frequency spectrum can then be compared to a theoretical ideal, and filters designed to transform the time data to where it would have that ideal spectrum. The problem lies in how the vital information is distributed. If our wavelet consisted of a single-frequency continuous train, describing it as a single spike in the frequency domain would be simple. As soon as we cut it off at some finite time, we have to introduce a ton of frequencies to achieve the damping. Taking that to the real world, when significant side lobes are involved the vital information gets spread out. The same is true as the wavelet gets more complex. This spreading of the vital information makes the process more susceptible to the influence of noise. Asking linear algorithms to come up with single-spike answers drives them to instability, so they back off; that is the best they can do. ADAPS predicts single spikes based on the best pattern recognition matches it can make under the noise that exists. Error becomes eminently manageable.

Collapsing wavelets does not finish detuning, as the following before and after shows. The point is that the extensive whitening applied to the data before we got it did collapse the wavelet. However, it was left to ADAPS to complete the detuning process by integrating the individual spikes.
Of course, to do this the spikes had to reasonably equate to reflecting interfaces. When inversion is not perfect, low frequency corrections must be made. ADAPS keeps track of this error and builds it into the driving logic. Again I rest on the proof shown in examples like this, and ask the reader to let me know if someone else can do better. Technical closure becomes more and more important with time. I ask you to once again study the well matches; I do it all the time. They are not perfect here, but the simulated sonic log at the bottom obviously comes much closer to what we want. Getting there is not simple, and pre-stack optimization is always part of the struggle. One never knows when an improvement might make all the difference in an interpretation, but the justification is obvious.

The mythology of frequency limits and analysis - Time after time, people would suggest that ADAPS results did not show the high frequency improvement in their power spectra that they had expected. My somewhat impatient reply has been that they obviously did not understand that outlining thick beds was a major goal, and that sonic log data is not sinusoidal. If we see sharp, thin beds on the logs, we expect our system to find them (and we have seen many great cases, of course). High frequency limits (thin bed limits, that is) are imposed by how well we correlate, using our own pattern recognition tools. Obviously we are limited by sampling rate (we interpolate to one ms) and general data quality, but we have seen many examples of thin bed accuracy. Our ideal is to see a broad match, of course, but we have nature's filtering to contend with.
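The spreading effect described in the last two slides, that cutting a wave train off at finite time smears its information across many frequencies, can be checked numerically in a few lines. This sketch is my own illustration using NumPy's FFT; it compares a sine that fills the whole analysis window with the same sine truncated after about two cycles.

```python
import numpy as np

fs = 1024.0                # sample rate, Hz
n = 2048                   # window length: exactly 100 cycles of 50 Hz
t = np.arange(n) / fs
f0 = 50.0

def peak_energy_fraction(sig, nbins=3):
    """Fraction of total spectral energy held by the nbins strongest FFT bins."""
    spec = np.abs(np.fft.rfft(sig)) ** 2
    return np.sort(spec)[-nbins:].sum() / spec.sum()

long_train = np.sin(2 * np.pi * f0 * t)       # sine filling the whole window
burst = np.where(t < 0.04, long_train, 0.0)   # same sine, cut off after ~2 cycles

print(peak_energy_fraction(long_train))       # essentially 1.0: one spectral spike
print(peak_energy_fraction(burst))            # a small fraction: energy is smeared
```

The continuous train really is a single spike in the frequency domain; the truncated burst needs that "ton of frequencies" to achieve the damping, and it is this spread-out information that noise can corrupt.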

The glory of sonic log simulation - The first example, from , illustrates how bed thickness helped clarify a complex strike-slip fault pattern. The stars mark a known shale that lies just above the target sands. In the show you can see the fault breaks develop. The fact that the obvious cross-fault correlation did not seem to continue to the right of the green fault helped identify it as the main lateral movement feature in the prospect. While the existing interpretation saw it, the ancillary faulting (which I consider trapping) was not identified before ADAPS. The second example, from , shows an extremely probable reservoir at the upper left. The presentation illustrates how this lead was not visible on the input stack because of wave trap noise emanating from the bed. In any case, the ability to make sense of the actual bed velocity enables this capability. Of course, the need for intelligent detuning is the subject of the discussion.

I'm not crazy about industry synthetics. We can appreciate that rock physicists want to extract all sorts of attributes from seismic, but we are only interested in what actually creates reflections. While ADAPS does not want any help in the actual inversion process, we base its reputation on matching its output to well log information. I have been singularly unimpressed with the initial comparisons I have been given, all of which involved the generation of synthetic seismograms. In the first place, I have little faith in the ability of current software to model the down wave used in the process. In today's super-whitened data these wavelets seem way too leggy. So I ask either for finished sonic log images, or for velocity files. ADAPS has a module that computes reflection coefficients and then interpolates them, applying a low frequency averaging based on the observed wavelengths.
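For a reflection-coefficient module like the one mentioned here, the standard normal-incidence, constant-density relations are enough to sketch the idea; the function names are mine, not the ADAPS module's.

```python
import numpy as np

def rc_from_velocity(v):
    """Normal-incidence reflection coefficients from an interval-velocity
    log, assuming constant density: r_i = (v[i+1] - v[i]) / (v[i+1] + v[i])."""
    v = np.asarray(v, dtype=float)
    return (v[1:] - v[:-1]) / (v[1:] + v[:-1])

def velocity_from_rc(rc, v0):
    """Integrate reflection coefficients back into a velocity log by
    inverting the same recursion: v[i+1] = v[i] * (1 + r_i) / (1 - r_i)."""
    rc = np.asarray(rc, dtype=float)
    ratios = (1.0 + rc) / (1.0 - rc)
    return v0 * np.concatenate(([1.0], np.cumprod(ratios)))
```

The round trip is exact in theory. In practice the estimated spikes carry error, and because the integration is a running product, any bias accumulates as low-frequency drift, which is why the low frequency corrections mentioned in the earlier slides are needed.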

Does your data pass the smell test? Before you put too much faith in AVO or inversion results, you should know. This type of wave trap generated noise is actually quite common, but one has to look. Before you start comparing inside-middle-outside trace stacks, a pre-stack analysis is helpful.

Ikon Science has bought the proprietary rights to my software, so I have none to sell. However, I have the right to use my improved version on a service basis, a computer bank, and an addiction to seismic exploration. The need to prove the merits of my creation is paramount, and I am willing to take all the risk looking at your data. If you can supply me with an adequate volume of gathers, I will process enough to prepare a report. If you are happy with the results I will bill you $5000, which you can opt to pay or not.

Click here for the noise discussion. Or here to see examples. Or here for a report example. Or for the site base. Or here to start over. Click on image to me