USING PROBABILISTIC NEURAL NETWORKS FOR ATTRIBUTE ASSESSMENT AND GENERAL REGRESSION

David Lubo-Robles*, Thang Ha, S. Lakshmivarahan, and Kurt J. Marfurt
The University of Oklahoma
Introduction

The Probabilistic Neural Network (PNN) is an interpolation method in which different mathematical operations are organized into layers, forming a neural network structure (Specht, 1990; Hampson et al., 2001). Whereas Multi-layer Feedforward Networks (MLFNs) commonly use a sigmoid activation function, PNN uses a Gaussian activation function (Specht, 1990). PNN was designed fundamentally for classification between two or more classes; however, it can be modified into the General Regression Neural Network (GRNN) for mapping/regression tasks (Masters, 1995).

Objectives:
- Predict missing logs in well data.
- Assess seismic attributes.
- Predict reservoir properties using seismic attributes.
MLFN vs. PNN

A Multi-layer Feedforward Network (MLFN) consists of an input layer (e.g., inputs i_1 through i_4), one or more hidden layers, and an output layer. Each neuron first computes a linear weighted sum of its inputs, net = \sum_k w_k i_k (e.g., net = w_1 i_1 + w_2 i_2), and then applies a nonlinear sigmoid activation function:

f(net) = \frac{1}{1 + e^{-net}}
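As a point of comparison with PNN, here is a minimal sketch of a single MLFN neuron's forward pass; the two-input weighted sum matches the slide's example, while the specific weight and input values are illustrative:

```python
import numpy as np

def sigmoid(net):
    """Sigmoid activation used by a typical MLFN neuron."""
    return 1.0 / (1.0 + np.exp(-net))

def mlfn_neuron(inputs, weights):
    """Linear step: net = sum(w_k * i_k); nonlinear step: sigmoid(net)."""
    net = np.dot(weights, inputs)
    return sigmoid(net)

# Two-input example matching the slide: net = w1*i1 + w2*i2
print(mlfn_neuron(np.array([0.5, -1.2]), np.array([0.8, 0.3])))
```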
Probabilistic Neural Network (PNN)

A PNN consists of an input layer, a pattern layer, and a summation layer. Each pattern-layer neuron compares the M-dimensional input to one training pattern p_n through a Gaussian kernel, and the summation layer averages the kernels over the N training patterns of each class:

g(\mathbf{i}) = \frac{1}{N} \sum_{n=1}^{N} e^{-\sum_{m=1}^{M} (i_m - p_{nm})^2 / (2\sigma^2)}

The output layer assigns the input to the class with the largest summation-layer response (e.g., Class 1 vs. Class 2).
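A minimal sketch of the PNN pattern and summation layers for two classes, implementing the Gaussian kernel above; the array shapes, training patterns, and σ value are illustrative:

```python
import numpy as np

def class_activation(x, patterns, sigma):
    """Summation-layer output g: mean Gaussian kernel over the
    N training patterns of one class (patterns is an N x M array)."""
    sq_dist = np.sum((patterns - x) ** 2, axis=1)  # squared distance to each pattern
    return np.mean(np.exp(-sq_dist / (2.0 * sigma ** 2)))

def pnn_classify(x, class_patterns, sigma):
    """Output layer: pick the class whose summation response is largest."""
    g = [class_activation(x, p, sigma) for p in class_patterns]
    return int(np.argmax(g))

# Two classes, each with a few 2-attribute training patterns
class1 = np.array([[0.1, 0.2], [0.2, 0.1]])
class2 = np.array([[0.9, 0.8], [0.8, 0.9]])
print(pnn_classify(np.array([0.15, 0.18]), [class1, class2], sigma=0.5))
```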
General Regression Neural Network (GRNN)

The GRNN replaces PNN's class decision with a kernel-weighted average of the known property values v_n associated with the training patterns. The summation layer accumulates a numerator and a denominator, whose ratio is the predicted value:

\hat{v} = \frac{\sum_{n=1}^{N} v_n \, e^{-\sum_{m=1}^{M} (i_m - p_{nm})^2 / (2\sigma^2)}}{\sum_{n=1}^{N} e^{-\sum_{m=1}^{M} (i_m - p_{nm})^2 / (2\sigma^2)}}
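A minimal sketch of the GRNN predictor exactly as the equation reads: each training pattern contributes its known property value v_n, weighted by the Gaussian kernel. The variable names and example values are illustrative:

```python
import numpy as np

def grnn_predict(x, patterns, values, sigma):
    """GRNN: kernel-weighted average of training property values.
    patterns: N x M training attributes; values: N known properties v_n."""
    sq_dist = np.sum((patterns - x) ** 2, axis=1)
    w = np.exp(-sq_dist / (2.0 * sigma ** 2))  # pattern layer kernels
    return np.sum(w * values) / np.sum(w)      # numerator / denominator

patterns = np.array([[0.1, 0.2], [0.4, 0.5], [0.9, 0.8]])
values = np.array([0.05, 0.12, 0.20])          # e.g., porosity at each pattern
print(grnn_predict(np.array([0.3, 0.4]), patterns, values, sigma=0.3))
```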
GRNN Workflow

The training data (attributes and the measured property v at each depth/time sample) serve as the patterns. The validation data (the same attributes at a blind location) serve as the input, and the GRNN equation above returns the predicted property \hat{v} along depth/time.
Optimizing σ: Exhaustive Search

Cross-validation is used to train the neural network (Hampson et al., 2001). One well is left out, the GRNN is trained on the remaining wells, and each candidate σ in the range 0.6 ≤ σ ≤ 7 is scored by the validation error (see the sketch below):

e = \frac{\sum (v - \hat{v})^2}{N}

Leaving out | Sigma | Error
1           | σ1    | e1
2           | σ2    | e2
3           | σ3    | e3
4           | σ4    | e4
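A minimal sketch of the exhaustive σ search on a held-out well, under the assumption that the slide's range 0.6 ≤ σ ≤ 7 is sampled on a uniform grid (the step size and function name are assumptions; `grnn_predict` is the sketch from the GRNN slide):

```python
import numpy as np

def sigma_search(train_attrs, train_vals, val_attrs, val_vals,
                 sigmas=np.arange(0.6, 7.0, 0.1)):
    """Try each candidate sigma on the held-out well and return
    (best_sigma, best_error), using e = sum((v - v_hat)^2) / N."""
    best_sigma, best_error = None, np.inf
    for sigma in sigmas:
        preds = np.array([grnn_predict(x, train_attrs, train_vals, sigma)
                          for x in val_attrs])
        error = np.mean((val_vals - preds) ** 2)
        if error < best_error:
            best_sigma, best_error = sigma, error
    return best_sigma, best_error
```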
Attribute Assessment: Exhaustive GRNN

For each blind well (e.g., leaving out Well #2), every combination of input attributes is tested against the training data, and the optimal σ and validation error are recorded for each combination, as in the sketch after this list:

Attribute(s) | Sigma  | Error
1            | σ1     | e1
2            | σ2     | e2
3            | σ3     | e3
1,2          | σ1,2   | e1,2
1,3          | σ1,3   | e1,3
2,3          | σ2,3   | e2,3
1,2,3        | σ1,2,3 | e1,2,3
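A sketch of how the combination list might be generated and scored with `itertools.combinations`; it reuses the hypothetical `sigma_search` above, and the column-index selection of attribute subsets is an assumption:

```python
from itertools import combinations

def exhaustive_grnn(train_attrs, train_vals, val_attrs, val_vals):
    """Score every non-empty attribute subset; return results sorted
    by validation error, as in the slide's combination list."""
    n_attrs = train_attrs.shape[1]
    results = []
    for k in range(1, n_attrs + 1):
        for combo in combinations(range(n_attrs), k):
            cols = list(combo)
            sigma, error = sigma_search(train_attrs[:, cols], train_vals,
                                        val_attrs[:, cols], val_vals)
            results.append((combo, sigma, error))
    return sorted(results, key=lambda r: r[2])
```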
Case Study: Predicting Porosity Using Well Logs in Diamond M Field, TX
Geologic Background

Diamond M Field is located in Scurry County, TX, approximately 80 mi northeast of Midland, Texas. The trend is part of the Horseshoe Atoll Reef Complex, an arcuate chain of reef mounds made of mixed types of bioclastic debris that accumulated during the Late Paleozoic in the Midland Basin (Vest, 1970). The massive carbonates present in the Atoll are separated by correlative shale beds (Davogustto, 2013).

Wells used: Garnet, Emerald, Topaz, Jade.

[Figure: Top Reef structure map, modified from Walker et al., 1991]
Logs Used for Prediction

Logs used as input attributes: Gamma Ray (GR, API), Resistivity (RD, ohm-ft), Density (ρ, g/cm³), and Photoelectric Factor (PEFZ). Each log was crossplotted against porosity, and its Pearson correlation coefficient was computed:

corr_{x,y} = \frac{cov(x, y)}{\sigma_x \sigma_y}

Input Attribute             | Correlation with porosity
Gamma Ray (GR)              | 0.5202
Density (ρ)                 | -0.2926
Resistivity (RD)            | -0.2649
Photoelectric Factor (PEFZ) | -0.2424
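The correlations above can be reproduced with a direct implementation of Pearson's formula; the example arrays below are placeholders standing in for the actual GR and porosity logs:

```python
import numpy as np

def pearson_corr(x, y):
    """Pearson's correlation coefficient: cov(x, y) / (sigma_x * sigma_y)."""
    return np.cov(x, y)[0, 1] / (np.std(x, ddof=1) * np.std(y, ddof=1))

# Placeholder arrays standing in for a GR log and a porosity log
gr = np.array([45.0, 60.0, 80.0, 95.0])
porosity = np.array([0.04, 0.07, 0.10, 0.12])
print(pearson_corr(gr, porosity))
```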
Exhaustive GRNN Results

Combination lists were generated for each blind well and scored with e = \frac{\sum (v - \hat{v})^2}{N}:

- Leaving out well Garnet: Error = 0.00125
- Leaving out well Emerald: Error = 0.00122
- Leaving out well Jade: Error = 0.00256
- Leaving out well Topaz: Error = 0.000977
[Figure: predicted vs. measured porosity logs for each blind well, with the validation errors listed above]
Cross-plots

Pearson correlation between predicted and measured porosity for each blind well, corr_{x,y} = \frac{cov(x, y)}{\sigma_x \sigma_y}:

- Garnet: Corr = 0.7004
- Jade: Corr = 0.6998
- Emerald: Corr = -0.0881
- Topaz: Corr = 0.7020
Conclusions

- PNN provides faster learning than MLFN.
- PNN is less sensitive to outliers and can therefore be more accurate than MLFN.
- GRNN is a powerful technique for predicting missing data using known properties as input.
- The novel Exhaustive GRNN technique allows us to determine the best group of attributes to use for a regression task.
- Computation time increases as a larger range of σ is searched.

Future work:
- Apply the technique to seismic attributes in order to obtain a predicted rock-property survey.
- Extend the Exhaustive GRNN technique to classification tasks (Exhaustive PNN).
- Implement simulated annealing and the gradient descent method to obtain a different σ for each attribute and class.
References

Davogustto, O., 2013, Quantitative geophysical investigations at the Diamond M Field, Scurry County, Texas: Ph.D. dissertation, The University of Oklahoma.

Hampson, D. P., J. S. Schuelke, and J. A. Quirein, 2001, Use of multiattribute transforms to predict log properties from seismic data: Geophysics, 66, 220-236.

Hughes, D., and N. Correll, 2016, Distributed machine learning in materials that couple sensing, actuation, computation and communication: arXiv:1606.03508v1.

Masters, T., 1995, Advanced algorithms for neural networks: John Wiley & Sons.

Specht, D., 1990, Probabilistic neural networks: Neural Networks, 3, 109-118.

Walker, D. A., J. Golonka, A. M. Reid, and S. T. Reid, 1991, The effects of late Paleozoic paleolatitude and paleogeography on carbonate sedimentation in the Midland Basin, Texas: Permian Basin plays: Society of Economic Paleontologists and Mineralogists, Permian Basin Chapter, Tomorrow's Technology Today, 141-162.