AUTOMATON: A Fuzzy Logic Automatic Picker Paul Gettings¹ UTAM 2003 Annual Meeting ¹Thermal Geophysics Research Group, University of Utah
Outline Why an automatic picking algorithm? Description of AUTOMATON algorithm Testing on field data Summary of field data testing What next?
Why an automatic picking algorithm? Tomographic inversions are very useful in many settings Tomography requires determination of the first-arrival travel time of seismic energy –2-D and 3-D seismic surveys can generate thousands to millions of arrivals to be picked by hand Manual picking is the rate-limiting step
AUTOMATON Description of algorithm Try to replicate how a human picks arrivals Better to not choose any pick than choose a “bad” pick! Basic assumptions: –Waveforms of the first arrival slowly vary across a survey –Some information on apparent velocities is known –Geometry of the survey is known
Algorithm (2) Pick an arrival time based on 3 factors: –Waveform correlation (r) –Fit amplitude vs. predicted amplitude (ΔA) –Fit time vs. predicted time (Δt) Start with a human pick to compute empirical wavelet –Wavelet scaled to [-1,1] –User defines length of wavelet (in ms)
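The wavelet setup above can be sketched as follows; the function name, signature, and the max-abs scaling choice are illustrative assumptions, not the talk's exact code:

```python
import numpy as np

def extract_wavelet(trace, pick_index, dt_ms, wavelet_len_ms):
    """Cut an empirical wavelet from a seed-picked trace and scale it
    to [-1, 1] (hypothetical helper; names are illustrative).

    trace          : 1-D array of samples for the seed trace
    pick_index     : sample index of the human first-arrival pick
    dt_ms          : sample interval in milliseconds
    wavelet_len_ms : user-chosen wavelet length in milliseconds
    """
    n = int(round(wavelet_len_ms / dt_ms))
    w = np.asarray(trace[pick_index:pick_index + n], dtype=float)
    # Scale so the largest absolute amplitude is 1, i.e. range [-1, 1].
    peak = np.max(np.abs(w))
    return w / peak if peak > 0 else w
```

Scaling by the peak absolute amplitude keeps the wavelet shape while making the later amplitude fit a single scalar per trace.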
Algorithm (3) For each trace: –Window trace data by a max & min velocity –For each time in window: Linear least-squares fit of wavelet to trace data starting at that time –Keep fit with highest correlation coefficient Compute fuzzy membership function M(r, Δt, ΔA) for best fit If M > threshold, keep the pick
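The per-trace search can be sketched as below; the function name, the scalar amplitude fit, and the velocity-window arithmetic are assumptions for illustration:

```python
import numpy as np

def pick_trace(trace, wavelet, offset_m, dt_s, vmin, vmax):
    """Search one trace for the best wavelet fit (hypothetical sketch).

    Returns (r, sample_index, A): best correlation coefficient,
    its sample index, and the fitted amplitude scale.
    """
    n = len(wavelet)
    # Velocity window: the fastest plausible arrival comes earliest,
    # the slowest plausible arrival comes latest.
    i0 = int(offset_m / vmax / dt_s)
    i1 = min(int(offset_m / vmin / dt_s), len(trace) - n)
    best = (-2.0, None, 0.0)
    for i in range(i0, i1 + 1):
        seg = trace[i:i + n]
        # Linear least-squares amplitude fit: seg ≈ A * wavelet.
        A = np.dot(seg, wavelet) / np.dot(wavelet, wavelet)
        r = np.corrcoef(seg, wavelet)[0, 1]
        if r > best[0]:
            best = (r, i, A)
    return best
```

The fit with the highest correlation coefficient is kept, matching the slide; thresholding on the fuzzy membership would happen afterward.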
Algorithm (4) Fuzzy membership function M(r, Δt, ΔA) maps into [0,1] –Need all 3 terms of M to ensure a good pick –Weights and constants (C, D) set by user
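The slide does not reproduce the exact functional form of M, so the sketch below is only a plausible stand-in: a weighted geometric mean of three terms, with Gaussian falloffs in the time and amplitude misfits. The product form, the Gaussian terms, and the use of the weights as exponents are all assumptions:

```python
import math

def membership(r, dt_err, dA_err, C=1.0, D=1.0, w=(1.0, 1.0, 1.0)):
    """Hypothetical fuzzy membership M(r, Δt, ΔA) in [0, 1].

    C and D scale the time and amplitude misfits; w holds the
    user weights (used here as exponents). Not the talk's formula.
    """
    m_r = max(r, 0.0)                   # correlation term
    m_t = math.exp(-(dt_err / C) ** 2)  # time-misfit term
    m_A = math.exp(-(dA_err / D) ** 2)  # amplitude-misfit term
    # A weighted geometric mean stays in [0, 1] and requires all
    # three terms to be high: one bad term vetoes the pick.
    s = sum(w)
    return (m_r ** w[0] * m_t ** w[1] * m_A ** w[2]) ** (1.0 / s)
```

A multiplicative combination captures the slide's point that all three terms are needed for a good pick, since any single near-zero term drives M toward zero.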
Algorithm Testing Algorithm works flawlessly on synthetic data, as expected –Waveform clean for all traces –Velocities well-known Test on field data: – m spacing –250 µs time sample interval –Single shot point for testing and simplicity
Field Data & Picks
Pick Comparison: channel 64 (32 m) as known seed pick
Pick Comparison: channel 144 (72 m) as known seed pick
Summary of Testing Fuzzy logic algorithm works flawlessly on synthetic data Algorithm needs oversight with real data as currently implemented –Waveform changes across the survey –Poor starting velocity model Current implementation is fast: thousands of channels picked per second
What next? Interactive interface –Graphical, with real-time plotting –Allow user to break data set into pieces with manual picks, velocities, etc. Adaptive waveforms: rebuild wavelet at each good pick Try multiple passes to find a good pick –Re-window trace data each pass Use cross-correlation between adjacent traces Different trace fitting schemes: neural nets?
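The proposed cross-correlation between adjacent traces could look like the sketch below, which finds the sample lag that best aligns one channel with its neighbor (this helper and its lag convention are hypothetical, not part of the current implementation):

```python
import numpy as np

def adjacent_lag(trace_a, trace_b, max_lag):
    """Find the lag (in samples) maximizing corr(trace_a[t], trace_b[t+lag]).

    A positive lag means trace_b's arrival comes later than trace_a's;
    the lag could shift the pick window from one channel to the next.
    """
    best_lag, best_r = 0, -2.0
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a = trace_a[:len(trace_a) - lag]
            b = trace_b[lag:]
        else:
            a = trace_a[-lag:]
            b = trace_b[:len(trace_b) + lag]
        r = np.corrcoef(a, b)[0, 1]
        if r > best_r:
            best_lag, best_r = lag, r
    return best_lag, best_r
```

Because first-arrival waveforms vary slowly across a survey (the algorithm's basic assumption), the neighbor's lag is a cheap prior for re-windowing the next trace.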
This project sponsored by UTAM Latest code, revised paper, and this talk available on the web at: