
1 Well Log Data Inversion Using Radial Basis Function Network. Kou-Yuan Huang and Li-Sheng Weng, Department of Computer Science, National Chiao Tung University, Hsinchu, Taiwan (kyhuang@cs.nctu.edu.tw); Liang-Chi Shen, Department of Electrical & Computer Engineering, University of Houston, Houston, TX.

2 Outline: Introduction; Proposed Methods (modification of two-layer RBF; proposed three-layer RBF); Experiments (simulation using two-layer RBF; simulation using three-layer RBF; application to real well log data inversion); Conclusions and Discussion.

3 Real well log data: Apparent conductivity vs. depth

4 Inversion to get the true layer effect?

5 Review of well log data inversion. Lin, Gianzero, and Strickland used the least-squares technique, 1984. Dyos used maximum entropy, 1987. Martin, Chen, Hagiwara, Strickland, Gianzero, and Hagan used a two-layer neural network, 2001. Goswami, Mydur, Wu, and Heliot used a robust technique, 2004. Huang, Shen, and Chen used a higher-order perceptron, IEEE IGARSS, 2008.

6 Review of RBF. Powell, 1985, proposed the RBF for multivariate interpolation. Hush and Horne, 1993, used the RBF network for function approximation. Haykin, 2009, summarized the RBF network in his book Neural Networks and Learning Machines.

7 Conventional two-layer RBF (Hush and Horne, 1993).

8 Training in conventional two-layer RBF

9 Properties of RBF. The RBF network is a supervised training model. The 1st layer uses the K-means clustering algorithm to determine the K nodes (centers). The activation function of the 2nd layer is linear: f(s) = s, so f'(s) = 1. The 2nd layer is trained with the Widrow-Hoff learning rule.
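
A minimal sketch of this conventional two-layer RBF training, assuming Gaussian basis functions, a shared width heuristic, and an illustrative learning rate (none of these settings are given in the slides):

```python
import numpy as np
from sklearn.cluster import KMeans

def train_two_layer_rbf(X, D, K, lr=0.01, epochs=100):
    """Conventional two-layer RBF: K-means centers + Widrow-Hoff output layer."""
    # 1st layer: K-means clustering determines the K centers.
    centers = KMeans(n_clusters=K, n_init=10).fit(X).cluster_centers_
    # Width heuristic (an assumption here): sigma = d_max / sqrt(2K).
    d_max = max(np.linalg.norm(a - b) for a in centers for b in centers) or 1.0
    sigma = d_max / np.sqrt(2 * K)

    def phi(X):
        # Gaussian basis outputs of the 1st layer, shape (n_samples, K).
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-d2 / (2 * sigma ** 2))

    H = phi(X)
    W = np.zeros((K, D.shape[1]))             # linear output weights
    for _ in range(epochs):
        for h, d in zip(H, D):
            y = h @ W                          # linear activation: f(s) = s
            W += lr * np.outer(h, d - y)       # Widrow-Hoff (LMS) rule
    return centers, sigma, W
```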

10 Output of the 1 st layer of RBF
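
The slide's figure is not preserved in this transcript; the standard Gaussian first-layer output, following Hush and Horne (1993), is

```latex
\phi_k(\mathbf{x}) = \exp\!\left( -\frac{\lVert \mathbf{x} - \boldsymbol{\mu}_k \rVert^{2}}{2\sigma_k^{2}} \right), \qquad k = 1, \dots, K
```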

11 Training in the 2 nd layer
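
The slide's equations are likewise not preserved; with a linear output node, the Widrow-Hoff update takes the standard form (eta is the learning rate, d_j the desired output):

```latex
y_j = \sum_{k=1}^{K} w_{jk}\,\phi_k(\mathbf{x}), \qquad
w_{jk} \leftarrow w_{jk} + \eta\,(d_j - y_j)\,\phi_k(\mathbf{x})
```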

12 Outline: Introduction; Proposed Methods (modification of two-layer RBF; proposed three-layer RBF); Experiments (simulation using two-layer RBF; simulation using three-layer RBF; application to real well log data inversion); Conclusions and Discussion.

13 Modification of two-layer RBF

14 Training in modified two-layer RBF

15 Optimal number of nodes in the 1 st layer

16 Perceptron training in the 2 nd layer
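
The slide body is not preserved; below is a sketch of one plausible form of this perceptron training, assuming a sigmoidal 2nd layer trained by the delta rule on the first-layer outputs H (the exact activation and rule in the paper may differ):

```python
import numpy as np

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

def train_output_perceptron(H, D, lr=0.1, epochs=1000):
    """Delta-rule training of a sigmoidal 2nd layer.

    H: (n, K) first-layer basis outputs; D: (n, m) desired outputs in 0~1.
    """
    rng = np.random.default_rng(0)
    W = rng.uniform(-0.5, 0.5, (H.shape[1], D.shape[1]))
    for _ in range(epochs):
        for h, d in zip(H, D):
            y = sigmoid(h @ W)
            delta = (d - y) * y * (1.0 - y)   # error times sigmoid derivative
            W += lr * np.outer(h, delta)
    return W
```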

17 Outline: Introduction; Proposed Methods (modification of two-layer RBF; proposed three-layer RBF); Experiments (simulation using two-layer RBF; simulation using three-layer RBF; application to real well log data inversion); Conclusions and Discussion.

18 Proposed three-layer RBF

19 Training in proposed three-layer RBF

20 Generalized delta learning rule (Rumelhart, Hinton, and Williams, 1986).
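
In the notation of Rumelhart, Hinton, and Williams (1986), with o_i the output of node i and net_j the weighted input sum of node j, the rule is:

```latex
\Delta w_{ji} = \eta\,\delta_j\,o_i, \qquad
\delta_j =
\begin{cases}
(d_j - o_j)\,f'(\mathrm{net}_j) & \text{for an output node,}\\
f'(\mathrm{net}_j)\,\sum_k \delta_k w_{kj} & \text{for a hidden node.}
\end{cases}
```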

21 Outline: Introduction; Proposed Methods (modification of two-layer RBF; proposed three-layer RBF); Experiments (simulation using two-layer RBF; simulation using three-layer RBF; application to real well log data inversion); Conclusions and Discussion.

22 Experiments: system flow in simulation. Apparent resistivity (Ra) → apparent conductivity (Ca) → scale Ca to 0~1 (Ca') → radial basis function network (RBF) → true formation conductivity output (Ct'), trained against the desired scaled true formation conductivity (Ct'') → re-scale Ct' to Ct → true formation resistivity (Rt). A sketch of the scaling steps follows.
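
A minimal sketch of the scaling steps in this flow, assuming simple min-max bounds taken from the training logs (the exact scaling used in the paper is not specified in the transcript):

```python
import numpy as np

def scale01(x, lo, hi):
    """Scale Ca to 0~1 (Ca') using assumed bounds lo/hi from the training logs."""
    return (x - lo) / (hi - lo)

def rescale(x01, lo, hi):
    """Re-scale the network output Ct' (in 0~1) back to Ct."""
    return lo + x01 * (hi - lo)

# Resistivity and conductivity are reciprocals: Ca = 1/Ra, Rt = 1/Ct.
ra = np.array([2.0, 5.0, 10.0])          # illustrative apparent resistivities
ca = 1.0 / ra                            # apparent conductivities
ca_scaled = scale01(ca, ca.min(), ca.max())
```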

23 Experiments on simulated well log data. In the simulation there are 31 well logs, generated by theoretical calculation by Professor Shen at the University of Houston. Each well log has the apparent conductivity (Ca) as the input and the true formation conductivity (Ct) as the desired output. Well logs #1~#25 are for training; well logs #26~#31 are for testing.

24 Simulated well log data: examples. Simulated well log data #7.

25 Simulated well log data #13

26 Simulated well log data #26

27 What are the input and output data lengths? There are 200 records in each well log, with 25 well logs for training and 6 for testing. What input data length is best for the RBF? We cut the 200 records into segments of 1, 2, 4, 5, 10, 20, 40, 50, 100, and 200 data, segment by segment, to test the best input data length for the RBF model. For inversion, the output data length equals the input data length. In testing, we input n data to the RBF model to get n output data, then input the n data of the next segment to get the next n output data, and so on. A sketch of the segmentation follows.
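
A sketch of the segmentation, using an illustrative stand-in for one well log:

```python
import numpy as np

def make_segments(log, n):
    """Cut one 200-record well log into 200/n non-overlapping segments."""
    assert len(log) % n == 0
    return log.reshape(-1, n)             # each row is one pattern vector

log = np.zeros(200)                       # stand-in for one log's 200 Ca records
for n in (1, 2, 4, 5, 10, 20, 40, 50, 100, 200):
    print(n, len(make_segments(log, n)))  # e.g. n=10 -> 20 segments per log
```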

28 Example of input data length at well log #13. If each segment (pattern vector) has 10 data, the 200 records of each well log are cut into 20 segments (pattern vectors).

29 Input data length and number of training patterns from the 25 training well logs (number of patterns = 25 logs × 200 records ÷ input data length):
Input data length | Number of training patterns
1 | 5000
2 | 2500
4 | 1250
5 | 1000
10 | 500
20 | 250
40 | 125
50 | 100
100 | 50
200 | 25

30 Optimal cluster number of the training patterns. Example for input data length N = 10: PFS (pseudo F-statistic) vs. K. For input N = 10, the optimal cluster number is K = 27. A sketch of this selection follows.
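
A sketch of this selection, assuming the slide's PFS is the standard pseudo F-statistic (the Calinski-Harabasz index); the paper may define PFS somewhat differently:

```python
from sklearn.cluster import KMeans
from sklearn.metrics import calinski_harabasz_score

def best_k_by_pfs(X, k_range):
    """Return the K in k_range that maximizes the pseudo F-statistic."""
    scores = {}
    for k in k_range:
        labels = KMeans(n_clusters=k, n_init=10).fit_predict(X)
        scores[k] = calinski_harabasz_score(X, labels)
    return max(scores, key=scores.get), scores
```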

31 Optimal cluster number of the training patterns in the 10 cases. Set up 10 two-layer RBF models and compare the testing errors of the 10 models to select the optimal RBF model.
N features | Training patterns | K clusters
1 | 5000 | 2426
2 | 2500 | 14
4 | 1250 | 7
5 | 1000 | 44
10 | 500 | 27
20 | 250 | 7
40 | 125 | 2
50 | 100 | 2
100 | 50 | 2
200 | 25 | 4

32 Experiment: Training in modified two-layer RBF

33 Parameter setting in the experiment

34 Testing errors of the two-layer RBF models in simulation. The 10-27-10 RBF model gets the smallest testing error.
Network size | Training patterns | MAE at 20,000 iterations | Average MAE of 6 well log inversions | Training CPU time (H:M:S)
1-2426-1 | 5000 | 0.018645 | 0.123119 | 01:43:15
2-14-2 | 2500 | 0.045808 | 0.071876 | 00:22:25
4-7-4 | 1250 | 0.049006 | 0.069823 | 00:11:18
5-44-5 | 1000 | 0.032716 | 0.058754 | 00:11:07
10-27-10 | 500 | 0.031394 | 0.048003 | 00:05:26
20-7-20 | 250 | 0.048767 | 0.073768 | 00:03:22
40-2-40 | 125 | 0.164247 | 0.174520 | 00:01:28
50-2-50 | 100 | 0.160658 | 0.165190 | 00:01:45
100-2-100 | 50 | 0.190826 | 0.185587 | 00:01:33
200-4-200 | 25 | 0.191159 | 0.277741 | 00:01:13

35 Training result: error vs. iteration using 10-27-10 two-layer RBF

36 Inversion testing using the 10-27-10 two-layer RBF. Inverted Ct of log #26 by network 10-27-10 (MAE = 0.051753). Inverted Ct of log #27 by network 10-27-10 (MAE = 0.055537).

37 Inverted Ct of log #28 by network 10-27-10 (MAE = 0.041952). Inverted Ct of log #29 by network 10-27-10 (MAE = 0.040859).

38 Inverted Ct of log #30 by network 10-27-10 (MAE = 0.047587). Inverted Ct of log #31 by network 10-27-10 (MAE = 0.050294).

39 Outline: Introduction; Proposed Methods (modification of two-layer RBF; proposed three-layer RBF); Experiments (simulation using two-layer RBF; simulation using three-layer RBF; application to real well log data inversion); Conclusions and Discussion.

40 Experiment: Training in the proposed three-layer RBF. How many hidden nodes?

41 Determine the number of hidden nodes in the 2-layer perceptron

42 Hidden node number and optimal 3-layer RBF

43 Training result: error vs. iteration using 10-27-9-10 three-layer RBF

44 Inversion testing using the 10-27-9-10 three-layer RBF. Inverted Ct of log #26 by network 10-27-9-10 (MAE = 0.041526). Inverted Ct of log #27 by network 10-27-9-10 (MAE = 0.059158).

45 Inverted Ct of log #28 by network 10-27-9-10 (MAE = 0.046744). Inverted Ct of log #29 by network 10-27-9-10 (MAE = 0.043017).

46 Inverted Ct of log #30 by network 10-27-9-10 (MAE = 0.046546). Inverted Ct of log #31 by network 10-27-9-10 (MAE = 0.042763).

47 Testing error of each well log using the 10-27-9-10 three-layer RBF model. Average error: 0.046625.
Well log | MAE of inversion
#26 | 0.041526
#27 | 0.059158
#28 | 0.046744
#29 | 0.043017
#30 | 0.046546
#31 | 0.042763

48 Average testing error of each three-layer RBF model in simulation. Experiments use RBFs with different numbers of hidden nodes. The 10-27-9-10 model gets the smallest average testing error, so it is selected for the real-data application.
Network size | Training patterns | MAE at 20,000 iterations | Training CPU time (H:M:S) | Average MAE of 6 well log inversions
10-27-7-10 | 500 | 0.017231 | 00:31:02 | 0.050430
10-27-8-10 | 500 | 0.017714 | 00:29:40 | 0.050313
10-27-9-10 | 500 | 0.015523 | 00:29:45 | 0.046625
10-27-10-10 | 500 | 0.015981 | 00:30:37 | 0.048452
10-27-11-10 | 500 | 0.019848 | 00:29:59 | 0.048173
10-27-12-10 | 500 | 0.021564 | 00:29:27 | 0.053976

49 Outline: Introduction; Proposed Methods (modification of two-layer RBF; proposed three-layer RBF); Experiments (simulation using two-layer RBF; simulation using three-layer RBF; application to real well log data inversion); Conclusions and Discussion.

50 Real well log data: Apparent conductivity vs. depth

51 Application to real well log data inversion. Real well log data: depth from 5,577.5 to 6,772 feet, sampling interval 0.5 feet, 2,290 data in total in the well log. The optimal 10-27-9-10 RBF model is selected for the real data inversion. After the training converges, we input 10 real data to the RBF model to get 10 output data, then input the 10 data of the next segment to get the next 10 output data, and so on. A sketch of this segment-by-segment inversion follows.
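
A sketch of the segment-by-segment inversion; `predict` stands in for the trained 10-27-9-10 model (2,290 samples divide evenly into 229 segments of 10):

```python
import numpy as np

def invert_log(ca_scaled, predict, n=10):
    """Slide a non-overlapping length-n window over the real log and
    concatenate the RBF outputs; predict maps one length-n input vector
    to one length-n output vector."""
    segments = ca_scaled.reshape(-1, n)   # 2290 / 10 = 229 segments
    return np.concatenate([predict(seg) for seg in segments])
```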

52 Inversion of real well log data: Inverted Ct vs. depth

53 Outline: Introduction; Proposed Methods (modification of two-layer RBF; proposed three-layer RBF); Experiments (simulation using two-layer RBF; simulation using three-layer RBF; application to real well log data inversion); Conclusions and Discussion.

54 Conclusions and discussion. We modified the two-layer RBF and proposed a three-layer RBF for well log data inversion. The three-layer RBF gives better inversion than the two-layer RBF because more layers provide more nonlinear mapping. In the simulation, the optimal three-layer model is 10-27-9-10; it gets the smallest average mean absolute error in testing. The trained 10-27-9-10 RBF model is applied to the real well log data inversion; the result is good and acceptable, showing that the RBF model can work on well log data inversion. Errors differ between experiments because the initial network weights differ, but the ordering and relative size of the errors are valid for comparing RBF performance.

55 Thank you for your attention.

