
Uncertainty quantification in inverse problems: Prediction-Focused Inversion (PFI) vs. Model-Based Inversion (MBI)
Céline Scheidt, Jef Caers and Philippe Renard
SCRF 2013, 5/9/2013

Illustrative Example – Aquifer Analog
[Figure: training image from Herten, 20 m × 5 m domain.] A tracer is injected and its concentration is observed at 3 depths. Observed data: tracer concentration up to 3.5 days. Prediction: tracer concentration at 12 days. What is the uncertainty on this prediction?

Model-Based Inversion (MBI)
Rejection sampling: create 30 models matching the data (the posterior, drawn from a prior that does not match), then summarize the prediction by its P10-P50-P90 quantiles. Cost: 15,585 forward simulations to estimate uncertainty.
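For reference, a minimal sketch of what such a rejection sampler does; the model and simulator stand-ins, the tolerance, and all names below are illustrative assumptions, not the authors' code:

```python
# Rejection sampling sketch: keep only prior models whose simulated
# response is close enough to the observed data.
import numpy as np

rng = np.random.default_rng(0)

def sample_prior_model():
    # Stand-in for drawing an Earth model from the prior (illustrative).
    return rng.normal(size=10)

def forward_simulate(m):
    # Stand-in for the expensive forward simulation (illustrative).
    return m[:5] + 0.1 * rng.normal(size=5)

def rejection_sample(d_obs, n_keep=30, tol=0.5, max_sims=100_000):
    posterior, n_sims = [], 0
    while len(posterior) < n_keep and n_sims < max_sims:
        m = sample_prior_model()
        d = forward_simulate(m)
        n_sims += 1
        if np.linalg.norm(d - d_obs) / np.linalg.norm(d_obs) < tol:
            posterior.append(m)          # accept: model matches the data
    return posterior, n_sims             # n_sims easily reaches the thousands

d_obs = forward_simulate(sample_prior_model())
models, cost = rejection_sample(d_obs)
```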

Questions
- Diagnostic tool: does matching the data improve the uncertainty estimation of the prediction? Is there a relationship between the observable response and the prediction response?
- If the data are only partially informative about the prediction, does it make sense to create Earth models that match the data? Can we avoid time-consuming inverse modeling in certain applications?
- Can we go further and estimate the uncertainty directly, without generating new models that match the data?

Prediction-Focused Inversion (PFI)
Starting from a set of prior models, forward modeling produces both an observable response (d) and a prediction response (h) for each model. Is there a relationship between the two?

PFI: Creation of a Joint Space
How can the relationship between the observable response and the prediction response be analyzed? By creating a low-dimensional joint space (d*, h*) of data and prediction, where d* and h* are obtained with any dimensionality reduction technique: the first component of the observable response gives d*, that of the prediction response gives h*. In the joint space, one point = one model.

Joint Space (d*, h*)
Any dimensionality reduction technique can be used to construct (d*, h*); MDS, PCA and NLPCA all reveal a data-prediction relationship. [Three panels: MDS, PCA, NLPCA; x-axis: observable response d1*, y-axis: prediction response h1*.] High values of d1* go with high values of h1*, low values of d1* with low values of h1*.
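As a concrete illustration of building the joint space, here is a minimal sketch using PCA, the simplest of the three options; D and H are synthetic stand-ins with the dimensions quoted on the next slide (90 and 100), one row per prior model:

```python
# Build the joint space (d*, h*): first principal component of the data
# responses and of the prediction responses, paired per model.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_models = 200
D = rng.normal(size=(n_models, 90))    # observable responses (stand-in)
H = rng.normal(size=(n_models, 100))   # prediction responses (stand-in)

d_star = PCA(n_components=1).fit_transform(D).ravel()
h_star = PCA(n_components=1).fit_transform(H).ravel()

joint = np.column_stack([d_star, h_star])   # one point = one model
```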

Non-Linear PCA (NLPCA)
A dimensionality reduction technique: a neural network is trained to reproduce its inputs (identity mapping), and the middle bottleneck layer enforces a reduction of the dimension of the data. [Diagram: input layer → bottleneck layer (h → h*) → output layer.]
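A minimal NLPCA sketch along these lines, implemented here with scikit-learn's MLPRegressor trained as an identity mapping; the layer sizes and tanh activation are illustrative choices, not the authors' architecture:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def nlpca_1d(X, hidden=20, seed=0):
    """Reduce each row of X to one nonlinear score via a (hidden, 1, hidden) bottleneck."""
    net = MLPRegressor(hidden_layer_sizes=(hidden, 1, hidden),
                       activation="tanh", max_iter=5000, random_state=seed)
    net.fit(X, X)   # identity mapping: the network reproduces its inputs
    # Forward-propagate by hand up to the 1-unit bottleneck layer:
    a = np.tanh(X @ net.coefs_[0] + net.intercepts_[0])       # first hidden layer
    scores = np.tanh(a @ net.coefs_[1] + net.intercepts_[1])  # bottleneck output
    return scores.ravel(), net
```

Applied to the prediction responses H and the observable responses D, this yields h* and d* respectively.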

PFI: Use of NLPCA to construct (d*, h*)
NLPCA reduces the prediction response h from 100 dimensions to a single component h*, and the observable response d from 90 dimensions to a single component d*; together they define the joint map (d*, h*).

PFI: Use of NLPCA to construct (d*, h*)
[Joint map (d*, h*): models with early arrival times in the observable response also show early arrival times in the prediction response, and late arrivals likewise cluster together.]

PFI: Relationship between d* and h* – Diagnostic Tool
Given the observed data d*obs, what is the uncertainty in h*? Compare the prior PDF of h1* with its posterior conditioned on d*obs. [Joint map with two hypothetical observations: for d*obs_1 (Case 1) the posterior shows little reduction of uncertainty in h1*; for d*obs_2 (Case 2), a significant reduction.]

PFI: Relationship between d* and h* – Diagnostic Tool
Comparing the prior and posterior distributions of h* indicates whether inverse modeling would allow for a better characterization of uncertainty.
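A minimal sketch of this diagnostic, estimating the prior PDF of h1* and a posterior PDF conditioned on d*obs by kernel-weighting the prior samples near the observed d*; the bandwidth, the observed value and the synthetic (d*, h*) samples are illustrative assumptions:

```python
import numpy as np
from scipy.stats import gaussian_kde

# Synthetic joint-space samples (in practice: NLPCA scores, one per model).
rng = np.random.default_rng(1)
d_star = rng.normal(size=500)
h_star = 0.8 * d_star + 0.3 * rng.normal(size=500)

d_obs, bw = 0.5, 0.2                               # assumed observation, bandwidth
w = np.exp(-0.5 * ((d_star - d_obs) / bw) ** 2)    # weight models near d*obs

prior_pdf = gaussian_kde(h_star)
posterior_pdf = gaussian_kde(h_star, weights=w)    # requires scipy >= 1.2

grid = np.linspace(h_star.min(), h_star.max(), 200)
# A posterior_pdf(grid) much narrower than prior_pdf(grid) signals that the
# data are informative about the prediction (Case 2); similar widths, Case 1.
```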

PFI: Direct Estimation of Uncertainty
Can we go beyond this diagnostic? Can we sample the posterior distribution directly to obtain new prediction curves? Applying the inverse mapping NLPCA⁻¹: h1* → new responses creates prediction curves directly from the samples; no additional Earth models are constructed.
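Continuing the hypothetical nlpca_1d sketch above, the inverse mapping can be sketched by pushing sampled bottleneck scores through the decoder half of the trained network; this relies on MLPRegressor internals (coefs_, intercepts_) and is illustrative only:

```python
import numpy as np

def nlpca_inverse(scores, net):
    """Decode 1-D bottleneck scores back to full-dimensional responses."""
    b = np.asarray(scores, dtype=float).reshape(-1, 1)
    a = np.tanh(b @ net.coefs_[2] + net.intercepts_[2])   # decoder hidden layer
    return a @ net.coefs_[3] + net.intercepts_[3]         # linear output layer

# Usage (with net from nlpca_1d and posterior samples of h1*, e.g. a weighted
# resampling of the prior scores):
# new_curves = nlpca_inverse(posterior_scores, net)
```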

Example – Case 1
Observable response → d1*; prediction response → (h1*, h2*). Given d*obs, sampled (h1*, h2*) values are mapped back through NLPCA⁻¹ to new responses, summarized by P10-P50-P90.

Example – Case 1
The new responses created by PFI match the observed data poorly, yet the uncertainty assessment of the prediction (P10-P50-P90) is accurate. Cost: 200 forward simulations to estimate uncertainty.
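The P10-P50-P90 summaries quoted here are pointwise quantiles across the ensemble of prediction curves; a one-line sketch, assuming new_curves is an n_samples × n_timesteps array such as the output of nlpca_inverse above:

```python
import numpy as np
# Pointwise ensemble quantiles over time:
p10, p50, p90 = np.percentile(new_curves, [10, 50, 90], axis=0)
```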

Example – Case 2
Observable response → d1*; prediction response → (h1*, h2*). Sampled (h1*, h2*) values are mapped back through NLPCA⁻¹ to new predictions, summarized by P10-P50-P90.

Example – Case 2
The new responses created by PFI match the observed data only at late times, yet the uncertainty assessment of the prediction (P10-P50-P90) is accurate. For comparison, rejection sampling required 24,810 forward simulations to estimate the same uncertainty.

Summary
- The diagnostic tool indicates whether inverse modeling would allow for a better characterization of uncertainty.
- It is possible to obtain accurate predictions without creating Earth models that fully match the data.
- The PFI approach shows accuracy similar to rejection sampling for the tested cases.
- PFI is designed for cases where a prediction based on Earth models is needed, not the models themselves; the prediction responses must be known ahead of time.
- MBI has more appeal when Earth models need to be created for multiple purposes that are not necessarily known a priori.