Workflow: Neural network object detection & interpretation
Overall workflow
1. Project set-up
2. "Scanning" the volume: select key lines and slices
3. Object detection
4. Integration and interpretation
1 project set-up: steps
Survey set-up:
– fill out known survey ranges & corners
– or get these by scanning a SEGY file (see the sketch below)
– or get these from SeisWorks/GeoFrame using the Arkcls data link
Import data:
– volumes (SEGY/Arkcls)
– surfaces (ASCII/Arkcls)
– wells (track + TD + LAS/Arkcls)
Calculate the Steering Cube
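As an illustration of getting survey ranges by scanning a SEGY file, the sketch below reads the inline/crossline and time ranges with the open-source segyio package; the file name is hypothetical and this only stands in for OpendTect's own SEGY scan.

```python
# Illustrative sketch only: read survey ranges from a SEGY file with segyio.
# "survey.sgy" is a hypothetical file name; a file without regular geometry
# would need extra handling.
import segyio

with segyio.open("survey.sgy", "r") as f:
    print("inline range:   ", f.ilines[0], "-", f.ilines[-1])
    print("crossline range:", f.xlines[0], "-", f.xlines[-1])
    print("time range (ms):", f.samples[0], "-", f.samples[-1])
```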
1 project set-up: steps (screenshots: Survey set-up, Import data)
1 project set-up: calculate Steering Cube
What: the Steering Cube contains the dip of the seismic events in the inline and crossline directions at every sample point.
Why:
– the dip itself is an attribute
– the dip is used to compute other attributes corrected for structure
– the dip is used for Structurally Oriented Filtering (SOF)
– the dip is used to calculate chrono-stratigraphy (SSIS plugin)
Examples: Polar Dip (dip as an attribute); Steered Similarity (dip guiding another attribute).
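As a rough illustration of what the Steering Cube stores, the sketch below estimates apparent inline/crossline dip per sample from amplitude gradients (a simple least-squares plane-wave estimate). This is not the FFT or BG steering algorithm used by OpendTect, and the sampling intervals are placeholder values.

```python
# Minimal sketch, assuming a volume indexed [inline, crossline, time sample];
# dt (sample interval, s) and dbin (bin size, m) are placeholder values.
import numpy as np
from scipy.ndimage import uniform_filter

def local_dip(volume, dt=0.004, dbin=25.0, smooth=3):
    d_il, d_xl, d_t = np.gradient(volume, dbin, dbin, dt)
    den = uniform_filter(d_t * d_t, size=smooth) + 1e-12
    dip_il = -uniform_filter(d_il * d_t, size=smooth) / den   # apparent inline dip (s/m)
    dip_xl = -uniform_filter(d_xl * d_t, size=smooth) / den   # apparent crossline dip (s/m)
    return dip_il, dip_xl
```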
1 project set-up: calculate Steering Cube
Concept of dip-steering: attributes are guided along a three-dimensional surface on which the seismic phase is approximately constant.
Effect of dip-steering: steered similarity (background slice) vs non-steered similarity (foreground slice). Note that the steered similarity has higher contrast, higher resolution and fewer spurious events.
1 project set-up: calculate Steering Cube
Steering calculation, quality vs speed:
– FFT precise: best result, very slow
– FFT standard: good result, acceptable speed
– BG fast steering: sensitive to noise, very fast
Window size: try to capture at least ½ T (dominant wave period).
dGB default: BG fast steering 3x3 followed by a median filter of 1x1x3.
1 project set-up: calculate Steering Cube
Median filter the Steering Cube:
– do this as a separate step after calculating the Steering Cube, so you keep both products
– the Steering Cube: very detailed, follows small-scale structural change, noise sensitive (spikes and zero-crossings)
– the median-filtered Steering Cube: background structural trends, noise removed
Set-up for the median filter of the Steering Cube (see the sketch below):
– time window: at least ½ T (dominant wave period)
– lateral step: at least 2 times the lateral step of faults and other small-scale structure to be filtered
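A minimal sketch of this filtering step, assuming a dip cube stored as an [inline, crossline, sample] array; the window sizes are examples only.

```python
import numpy as np
from scipy.ndimage import median_filter

dip = np.random.default_rng(0).normal(scale=0.05, size=(50, 60, 200))  # stand-in detailed dip cube
dip_filtered = median_filter(dip, size=(5, 5, 11))  # e.g. 5x5 traces laterally, 11 samples (~1/2 T) in time
# Keep both: 'dip' for small-scale detail, 'dip_filtered' for the background structural trend.
```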
1 project set-up: calculate Steering Cube
When to use the Steering Cube and when to use the Median Filtered Steering Cube depends on:
– whether you want to emphasize small-scale or large-scale structures
– whether dip is the attribute itself or is used to steer another attribute

Objective                    | Dip is                | Examples                            | Use
See small-scale structures   | Attribute             | Dip, statistics on dip, curvature … | Steering Cube
See small-scale structures   | Steering              | Similarity, filtering               | Median Filtered Steering Cube
See large-scale change       | Attribute             | Dip, statistics on dip, curvature … | Median Filtered Steering Cube
See large-scale change       | Steering              | Similarity, filtering               | Steering Cube
Reduce sensitivity to noise  | Attribute or Steering | Similarity                          | Median Filtered Steering Cube
2 scanning the volume
Identify a number of lines and time-slices:
– with typical examples of what you are looking for
– within areas of knowledge (wells)
– within areas of interest (objectives)
– with possible pitfalls (e.g. where the object and non-object look very much alike)
Use the key lines for picking examples and QC (for blind-testing, do not pick on every key line).
Use save session and restore session to store useful configurations of lines, sections, horizons, etc.
2 scanning the volume
A typical OpendTect session for interpretation and integration: a time-slice with the similarity attribute, an inline with seismic and the chimney cube as overlay, and a well with well markers. A random line connects the different elements, allowing an integrated interpretation.
Sessions can be stored (save) and re-opened (restore). This allows the user to save useful configurations of the data between working sessions.
3 object detection: steps
Attribute analysis
Example picking
Train the neural network (NN)
Apply to key lines and QC:
– iterate the previous steps, or
– if the tests are satisfactory: apply to the volume (batch processing)
3 object detection: steps
The basic workflow to create an object-detection type neural network (diagram): the yellow circles are user actions, the blue boxes are data. The user interactions in this basic flow scheme correspond to the steps on the previous slide, except "QC on key lines" and "iteration of previous steps"; these steps can be added to the workflow to refine the final result.
3 object detection: steps
Workflow diagram:
1) attribute analysis
2) select representative training locations
3) calculate seismic attributes
4) feed the calculated data to the neural network and train
5) apply the neural network to the data to produce the output
(The diagram also shows an example table of attribute values per pick location: Energy, Similarity, Local Dip, Curvature, etc.; a sketch of this table follows below.)
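The table in the diagram can be thought of as a plain feature matrix: one row of attribute values per picked location, plus the pick class as target. The sketch below shows that layout with invented numbers and labels.

```python
# Invented example of the training table: attribute values at pick locations (X)
# and the corresponding object codes (y). All values are illustrative only.
import numpy as np

attribute_names = ["Energy", "Similarity", "Local Dip", "Curvature"]
X = np.array([
    [1039.4, 0.112, 123.5, 0.0032],   # Pick 1
    [ 822.3, 0.230, 430.9, 0.0108],   # Pick 2
    [1344.2, 0.195, 211.7, 0.0051],   # Pick 3
])
y = np.array([1, 0, 1])               # 1 = object, 0 = non-object (invented labels)
```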
3 object detection: attribute analysis
In OpendTect there are default attribute sets for the detection of many object types (chimneys, salt, faults, …).
How to use these default sets depends on user experience, time constraints and the difficulty of the project:
– new (or time-constrained) user: use the default sets as they are
– advanced user: modify and add attributes
– expert user / special projects: add and redesign many custom-made attributes
(Screenshot: Open default attribute set.)
3 object detection: attribute analysis
Before starting, think about which attributes you would assign to the object of interest and which parameter settings will maximize the attribute response (sometimes you need the same attribute multiple times, with different parameterization).
An aid in selecting attributes and options is the attribute redisplay tool.
Neural networks are multi-attribute and non-linear: combined attributes may convey much more information than appears from the single-attribute displays.
3 object detection: attribute analysis
Individual attributes often have non-uniqueness or completeness issues; the neural network will resolve this, hence individual attributes do not have to be perfect.
To make an attribute as good as possible, use the movie-style attribute evaluation to optimize the parameter settings (see the sketch below).
Output of this step: an optimized set of attributes to be used in the subsequent neural network training.
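As an example of evaluating one parameter, the sketch below computes a simple trace-to-trace similarity for several time-gate lengths; the formula, traces and defaults are illustrative assumptions, and in OpendTect this evaluation is done interactively (movie-style).

```python
# Illustrative similarity attribute: 1 - |a - b| / (|a| + |b|) over a time gate.
import numpy as np

def similarity(trace_a, trace_b, center, half_gate):
    a = trace_a[center - half_gate:center + half_gate + 1]
    b = trace_b[center - half_gate:center + half_gate + 1]
    return 1.0 - np.linalg.norm(a - b) / (np.linalg.norm(a) + np.linalg.norm(b) + 1e-12)

rng = np.random.default_rng(1)
t1 = rng.normal(size=250)
t2 = t1 + 0.3 * rng.normal(size=250)           # a correlated neighbouring trace
for half_gate in (4, 8, 16):                   # scan the time-gate parameter
    print(half_gate, round(similarity(t1, t2, 125, half_gate), 3))
```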
3 object detection: attribute analysis
A (default) attribute set is a group of (optimized) attributes fit to highlight a particular seismic object. In subsequent training of neural networks, all or a sub-selection of the attributes may be used.
(Diagram: the seismic data passes through attribute analysis and the neural network to produce the chimney cube.) Attributes such as Frequency and Similarity are similar but not equal; each individual attribute does not highlight the chimney completely or uniquely. The neural network integrates multiple attributes into one meta-attribute and is the transform between the seismic and the chimney cube.
3 object detection: example picking
Pick examples of the object and the non-object. Guidelines, try to:
– pick typical examples
– avoid picking ambiguous examples (uncertain interpretation)
– do pick difficult examples (certain interpretation, but object and non-object look alike)
– avoid biased picking (one time level, only in high-energy zones where the S/N ratio is high, etc.)
– have good spatial sampling (take note of this when selecting key lines!)
– constrain the solution by picking enough example points (for neural networks using 10 to 20 input attributes, use at least 500 example points in training)
Output of this step: example picksets associated with the different object classes, to be used in the subsequent neural network training.
3 object detection: example picking
Picking examples for a (chimney) object-detection neural network (screenshots: Open/create new pickset, Interact mode).
There are 2 or more picksets, e.g. chimney and background (or object1, object2, …, background). Highlight the pickset to be manipulated in the data tree; you can then add or remove pick locations for that set only.
3 object detection: train neural network
Train the neural network using the attribute set and the example picksets produced in the previous steps.
Evaluate the neural network: which attributes are major inputs, which are insignificant.
Optimize the neural network by trimming away uneconomical attributes (optional).
Output of this step: a probability neural network (predicts object probability, one class) or a classification neural network (predicts object classes, multi-class).
3 object detection: train neural network
Schematic set-up of a supervised neural network: an input layer, a layer of hidden nodes that computes features from the input via non-linear transformations (weights Wx), and an output layer; prediction errors are back-propagated to adjust the weights.
Examples (attribute vectors) with user-assigned target values (object codes) are fed to the untrained neural network; the example set is split into a training set and a test set. These known examples are used by the neural network to learn, by back-propagation of prediction errors, the characteristic features of each object. After the training is finalized, the parameterization of the neural network is frozen and the static neural network is applied to the application set (input only).
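A compact sketch of this supervised set-up, using scikit-learn's MLPClassifier as a stand-in for OpendTect's neural network and fully synthetic data: examples are split into a training and a test set, training uses back-propagation, and the frozen network is then applied to an unlabelled application set.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 10))                          # 600 picked examples, 10 attributes
y = (X[:, 0] + 0.5 * X[:, 1] ** 2 > 0.5).astype(int)    # synthetic object codes

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
net = MLPClassifier(hidden_layer_sizes=(10,), max_iter=500, random_state=0)
net.fit(X_train, y_train)                               # back-propagation on the training set
print("test misclassification:", round(1.0 - net.score(X_test, y_test), 3))

X_application = rng.normal(size=(5, 10))                # application set: input only
print("object probability:", net.predict_proba(X_application)[:, 1])
```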
3 object detection: train neural network
Conceptual image of a trained neural network (inputs X1, X2 mapped to output Y). The display shows the multi-dimensional and non-linear character of neural networks: the input variables are fed into a multi-dimensional, non-linear transform function for each node in the hidden layer. These transform functions are then recombined (using a weighted summation) in the output node(s). In effect, almost any multi-dimensional, non-linear transform between input and output variables can be modelled.
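In formula form, a network with one hidden layer of the kind sketched here computes (standard feed-forward notation, not specific to OpendTect):

```latex
y = f_{\mathrm{out}}\Big( b_0 + \sum_{j} w^{(2)}_{j}\,
      \sigma\big( b_j + \sum_{i} w^{(1)}_{ji}\, x_i \big) \Big)
```

where the x_i are the input attributes, σ is a non-linear transfer function for each hidden node, and the weights w and biases b are the parameters adjusted during training.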
3 object detection: train neural network
Neural network training guidelines:
– train until the test-set error has reached its minimum or the train-set error levels out
– it is often educational to train past the point of overtraining, but be sure to clear the neural network training and retrain, to avoid applying an overtrained neural network
– a normalized RMS error of 0.7 and a misclassification of 0.25 can be considered good, but this also depends on how difficult the picked examples are (a sketch of these two measures follows below)
– if you trim away uneconomical attributes, realize that neural network prediction is non-linear and attributes may have unexpected contributions to the result, so compare error levels before and after trimming the neural network
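A minimal sketch of the two quality measures, assuming the RMS error is normalised by the spread of the target (the exact normalisation used by the software may differ); all numbers are invented.

```python
import numpy as np

def normalized_rms(y_true, y_pred):
    rms = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return rms / (np.std(y_true) + 1e-12)      # normalisation choice is an assumption

def misclassification(y_true, y_class):
    return float(np.mean(y_true != y_class))

y_true = np.array([1, 1, 0, 0, 1, 0, 1, 0])                    # picked classes (test set)
y_prob = np.array([0.8, 0.6, 0.3, 0.4, 0.7, 0.2, 0.4, 0.1])    # network output
print("normalized rms:   ", round(normalized_rms(y_true, y_prob), 2))                 # ~0.69
print("misclassification:", misclassification(y_true, (y_prob > 0.5).astype(int)))    # 0.125
```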
Train a neural network
3 object detection: train neural network
Define the neural network model for training:
– any sub-selection of attributes from the active attribute set may be chosen as input to the neural network; these attributes are calculated "on the fly" in both the training and the application phase. In addition, any number of stored volumes can be used as input.
– at least two target picksets must be defined. Typically one trains a neural network on an object pickset and a background pickset, but a neural network can also be trained to discriminate between three or more objects/picksets.
– to highlight seismic objects, use the supervised method.
– part of the examples is set apart as a test set to monitor overtraining; this is done automatically by the software according to a user-defined ratio.
3 object detection: train neural network
Neural network training display: the left window shows the training performance, the right window the attribute input nodes; a hotter color means a more important input node. Input nodes with minor impact may be removed, but to do so one should redo the neural network training.
Stopping points for neural network training (graphs of error vs iterations for the train set and the test set): stop when the error levels off; at the onset of overtraining the test-set error starts to rise while the train-set error keeps falling. Further optimization after initial overtraining is sometimes possible (see below).
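A sketch of how such a stopping point can be found by monitoring the train- and test-set error per iteration, again with scikit-learn's MLPClassifier as a stand-in (warm_start continues training one pass per fit call); everything here is synthetic.

```python
import warnings
import numpy as np
from sklearn.exceptions import ConvergenceWarning
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(10,), warm_start=True, max_iter=1, random_state=0)
best_test_err, best_iter = np.inf, 0
with warnings.catch_warnings():
    warnings.simplefilter("ignore", ConvergenceWarning)
    for it in range(1, 201):
        net.fit(X_tr, y_tr)                        # one extra training pass per call
        test_err = 1.0 - net.score(X_te, y_te)
        if test_err < best_test_err:               # candidate stopping point ("STOP")
            best_test_err, best_iter = test_err, it
print("test-set error minimum after", best_iter, "iterations:", round(best_test_err, 3))
```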
3 object detection: apply to keylines/QC
Apply the neural network to the key lines.
Evaluate the result and note points that are not predicted satisfactorily.
Decide between accepting or rejecting the neural network; note that, especially for more difficult problems, the result will never be 100% flawless.
Reject: iterate the previous object-detection steps and fine-tune the attributes, picksets and neural network (see the next two slides).
Accept: apply to the volume (see the third slide ahead).
3 object detection: apply to keylines/QC
Training locations in the picksets that could not be correctly predicted by the neural network are stored in a separate pickset named "misclassified" (red dots; see the sketch below). Analyzing these locations may lead to better or additional attributes and thus increased resolution.
Evaluation of the chimney cube on a key line (annotated figure): a true chimney; noise in the chimney cube due to the low S/N ratio of the seismic input; and features that may be real chimneys or a mud slide resulting in hummocky seismic. For many seismic objects a time-slice or horizon is best for QC and interpretation, as it allows a better discrimination of signal from artifact.
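A toy sketch of what the misclassified pickset represents: training locations where the network's prediction disagrees with the picked class. All arrays are invented stand-ins.

```python
import numpy as np

pick_locations = np.array([[120, 340, 1.250],    # inline, crossline, time (s)
                           [121, 342, 1.262],
                           [200, 410, 0.980],
                           [201, 415, 0.992]])
y_picked = np.array([1, 1, 0, 0])                # classes assigned by the interpreter
y_predicted = np.array([1, 0, 0, 1])             # network prediction at the same picks

misclassified = pick_locations[y_predicted != y_picked]
print(misclassified)                             # locations worth re-inspecting
```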
3 object detection: iterate object detection steps
Update attributes:
– design attributes that provide better discrimination in areas where the neural network does not yet perform satisfactorily
– fine-tune parameterizations, add attributes to the set, design special-purpose attributes using math, logic and volume statistics, or attributes on attributes (see the sketch below)
Update picksets: overlay the results on the key lines and add extra picks in the problem areas to emphasize these during neural network training. Also study the misclassified picks (red dots) and remove picks where needed.
Neural network training:
– experiment with different sets of input attributes
– train longer and consider applying a slightly overtrained network: if, after a short period of overtraining, the test-set error starts decreasing again, the benefit of the extra optimization of the neural network might outweigh the error introduced by overtraining
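As one example of an "attribute on attribute" built with volume statistics, the sketch below computes the local standard deviation of dip as a measure of structural unrest; the window size and array layout are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

dip = np.random.default_rng(2).normal(scale=0.05, size=(50, 60, 200))   # stand-in dip cube
mean = uniform_filter(dip, size=(3, 3, 9))
mean_sq = uniform_filter(dip * dip, size=(3, 3, 9))
dip_std = np.sqrt(np.maximum(mean_sq - mean ** 2, 0.0))                 # local std of dip
```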
3 object detection: apply to volume
OpendTect supports multi-platform distributed computing. The system needs to be set up properly to use this scheme; see the Administrator's manual.
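A sketch of what "apply to volume" amounts to, processing the attribute volume chunk by chunk (e.g. one inline range per job); the batch/distributed dispatch itself is handled by OpendTect and is not reproduced here, and the network and data below are synthetic stand-ins.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_attr = 10
X_train = rng.normal(size=(500, n_attr))
y_train = (X_train[:, 0] > 0).astype(int)             # stand-in training examples
net = MLPClassifier(hidden_layer_sizes=(10,), max_iter=300, random_state=0)
net.fit(X_train, y_train)

attributes = rng.normal(size=(40, 60, 200, n_attr))   # [inline, crossline, sample, attribute]
probability = np.empty(attributes.shape[:3])
for il0 in range(0, attributes.shape[0], 10):          # one chunk of 10 inlines per "job"
    chunk = attributes[il0:il0 + 10]
    flat = chunk.reshape(-1, n_attr)
    probability[il0:il0 + 10] = net.predict_proba(flat)[:, 1].reshape(chunk.shape[:3])
```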
4 integration of results
OpendTect:
– salt cube
– chimney cube (fluid migration cube)
– fault cube
– fracture detection using curvature, anisotropy or other attributes
– Reservoir-Waste-Seal analysis (lithofacies plus fluid analysis)
– frequency (high-frequency attenuation, low-frequency shadows)
– AVO/AVA fluid indicators
– well data
– …
Outside information:
– geological interpretation and studies
– regional knowledge (kitchens, charging mechanisms)
– pressure data
– basin modeling
– horizons and interpretation (e.g. pockmarks and paleo mud volcanoes)
– gas sniffing surveys
– …