Creating Trends for Reservoir Modelling Using ANN


Creating Trends for Reservoir Modelling Using ANN
Markus Lund Vevle*, Jon Magne Aagaard

Creating Trends for Reservoir Modelling Using ANN
Trends help describe large-scale reservoir characteristics
There can be both lateral and depth trends
Trend analysis can be time-consuming
Linear regression is one way of extracting trends
ANNs can also do regression modelling; how do they compare?
Emerson Confidential
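The linear regression baseline mentioned above can be sketched as an ordinary least-squares fit of porosity against depth. A minimal C++ sketch, assuming a simple one-variable depth trend; the function name and data are illustrative, not from the Gullfaks wells:

```cpp
#include <cmath>
#include <vector>

// Least-squares fit of porosity = a + b * depth.
// This is the kind of linear depth trend the ANN output is compared against.
struct Trend { double a, b; };

Trend fitDepthTrend(const std::vector<double>& depth,
                    const std::vector<double>& poro) {
    double n = static_cast<double>(depth.size());
    double sx = 0, sy = 0, sxx = 0, sxy = 0;
    for (size_t i = 0; i < depth.size(); ++i) {
        sx  += depth[i];
        sy  += poro[i];
        sxx += depth[i] * depth[i];
        sxy += depth[i] * poro[i];
    }
    double b = (n * sxy - sx * sy) / (n * sxx - sx * sx);  // slope
    double a = (sy - b * sx) / n;                          // intercept
    return {a, b};
}
```

Evaluating `a + b * depth` at each cell then gives the smooth trend cube that the ANN output is compared against in the following slides.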

Outline
Data used
Algorithm
Results
Summary

Data used
Gullfaks, offshore Norway:
151 wells in total
5 zones, with 49–118 wells per zone containing data
Average cell thickness ~2 m
Amplitude seismic
Maui, Taranaki basin, New Zealand:
5 wells with data (synthetic)
Acoustic impedance
Average cell thickness ~6.5 m

Algorithm
ANN structure and backpropagation algorithm implemented in C++
SoftSign activation function
Root mean squared error as the error measure
RMSProp for mini-batch training
Custom adaptive algorithm for epoch training
Network architecture restricted to equal-sized hidden layers
Training and verification data grouped according to the K-fold cross-validation methodology
3D output from one epoch, from the optimum number of epochs, and from linear regression
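The SoftSign activation named above, together with the derivative that backpropagation needs, can be sketched in a few lines of C++. A minimal sketch, not the authors' implementation:

```cpp
#include <cmath>

// SoftSign: x / (1 + |x|). Like tanh it squashes to (-1, 1),
// but with polynomial rather than exponential tails.
double softsign(double x) {
    return x / (1.0 + std::fabs(x));
}

// Derivative 1 / (1 + |x|)^2, used in the backpropagation pass.
double softsignPrime(double x) {
    double d = 1.0 + std::fabs(x);
    return 1.0 / (d * d);
}
```

The gentler saturation of SoftSign (its derivative decays as 1/x² rather than exponentially) is one common reason to prefer it over tanh for deeper hidden stacks.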

Gullfaks - settings
Input settings:
SoftSign activation function for hidden layers
Linear activation function for output layer
3 hidden layers, 3 neurons in each layer
10 cross-validation groups
Mini-batch size 128
Blocked well data: porosity and depth information
Runtime graph showing:
Linear regression prediction error (green)
NN prediction error (red)
NN optimum prediction error (purple/violet)
NN training error (black)
Emerson Confidential
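The RMSProp mini-batch training named in the settings updates each weight by scaling the step with a running average of squared gradients. A per-weight sketch; the hyperparameter values (learning rate, decay, epsilon) are illustrative defaults, not the settings used in the study:

```cpp
#include <cmath>

// One RMSProp state per weight: running mean of squared gradients.
struct RMSProp {
    double lr    = 0.001;  // learning rate (illustrative)
    double decay = 0.9;    // decay of the squared-gradient average
    double eps   = 1e-8;   // avoids division by zero
    double meanSq = 0.0;   // running average of grad^2

    // Returns the updated weight given the mini-batch gradient.
    double step(double w, double grad) {
        meanSq = decay * meanSq + (1.0 - decay) * grad * grad;
        return w - lr * grad / (std::sqrt(meanSq) + eps);
    }
};
```

Because the step is normalised by the recent gradient magnitude, parameters with consistently large gradients take smaller effective steps, which tends to stabilise mini-batch training.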

Gullfaks Tarbert – Well data vs trend output (Results)
Both wells used for training
Subtle difference between the NN (blue) and linear regression (black)

Gullfaks Tarbert – Well data vs trend output (Results)
Both log tracks show the same well
Left figure: the well was used for training
Right figure: the well was left out
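The trained-versus-left-out comparison above follows the K-fold grouping from the algorithm slide: wells in one fold form the verification set while the rest train the network. A minimal sketch of assigning wells to folds; the round-robin assignment is an assumption, as the slides do not describe the actual grouping scheme:

```cpp
#include <vector>

// Assign each well an index 0..k-1. Wells whose index equals the current
// fold are held out for verification; the rest are used for training.
std::vector<int> kfoldAssign(int nWells, int k) {
    std::vector<int> fold(nWells);
    for (int i = 0; i < nWells; ++i)
        fold[i] = i % k;  // round-robin keeps fold sizes balanced
    return fold;
}
```

With the 10 cross-validation groups from the settings slide, each well is left out exactly once across the ten training runs.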

Gullfaks Tarbert – Well data vs trend output (Results)
More variability in well data → potential for overfitting
The NN (blue) better captures the decreasing trend at the bottom (left figure)

Gullfaks Tarbert – Well data (Results)
Apparent trend: porosity first increases, then decreases

Gullfaks Tarbert – Well data and linear regression (Results)
The lower part of the trend is captured

Gullfaks Tarbert – Well data and ANN (one epoch) (Results)
Both the upper and lower parts of the trend are captured

Gullfaks Tarbert – Well data and ANN (one epoch + optimum) (Results)
Both the upper and lower parts of the trend are captured

Gullfaks Tarbert – Output in 3D (Results)
Blue dots: wells
Input data centered around the middle
Least knowledge about the east flank

Gullfaks Tarbert – Output in 3D Results

Gullfaks Tarbert – Cross section west-east Results Subtle differences where there is well control

Gullfaks – other zones (Results)
Is the NN always necessary?
The difference in prediction error between linear regression and the NN is often very small, ~1%
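The ~1% figure above compares prediction errors; the root mean squared error named on the algorithm slide is the natural measure for that comparison. A minimal sketch, with illustrative data:

```cpp
#include <cmath>
#include <vector>

// Root mean squared error between predicted and observed values,
// e.g. trend-predicted vs blocked-well porosity at the well cells.
double rmse(const std::vector<double>& pred,
            const std::vector<double>& obs) {
    double s = 0.0;
    for (size_t i = 0; i < pred.size(); ++i) {
        double d = pred[i] - obs[i];
        s += d * d;
    }
    return std::sqrt(s / static_cast<double>(pred.size()));
}
```

Computing this once with the linear-regression trend and once with the NN trend gives the per-zone error comparison the slide refers to.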

Gullfaks – other zones Results Sometimes you get strange effects

Gullfaks – other zones Results

Gullfaks – other zones Results

Maui (Results)
5 wells with synthetic data
Acoustic impedance (AI) parameter

Maui (Results) – AI and ANN output trend

Maui Results

Maui Results

Maui Results Overall trend captured

Maui (Results)
More details are captured; the network does not try to fit the biggest outliers

Summary and discussion
ANN can be used for extracting trends:
With only well data → smooth output
With a combination of well data and spatially distributed data, for better lateral understanding → more details emerge
Overfitting was not observed (?)
Runtime is from tens of seconds to a couple of minutes; more data means slower training
Defining the network takes time
Are the settings we used valid across many fields? More data are needed