LBNLGXTBR FY2001 Oil and Gas Recovery Technology Review Meeting, Diagnostics and Imaging: High Speed 3D Hybrid Elastic Seismic Modeling, Lawrence Berkeley National Laboratory.


LBNLGXTBR FY2001 Oil and Gas Recovery Technology Review Meeting, Diagnostics and Imaging: High Speed 3D Hybrid Elastic Seismic Modeling. Lawrence Berkeley National Laboratory, GX Technology, Burlington Resources. Contacts: Valeri Korneev, Mike Hoversten.

Why do we need 3D elastic modeling?
– Heterogeneous 3D media, complex topography, 3-component data, strong converted S-waves
– Survey design
– Hypothesis testing
– AVO evaluation
– Wave field interpretation
– Synthetic data sets for depth migration and full waveform inversion testing
– “Engine” for inversion
– It is cheaper and faster than real data acquisition

The Society of Exploration Geophysicists and European Association of Exploration Geophysicists ran a 3D modeling project for seismic imaging testing, costing many Gflop-hours of computation. It was acoustic only, was started years ago, and is still not completed. This project needs a larger model and an elastic code.


Deep water Gulf of Mexico regional seismic line. Sub-salt structures cannot be seen using acoustic inversion; an elastic propagator is needed to image the details.

Where 3D modeling stands today
– Industry primarily uses acoustic codes on uniform grids
– Need for “smart” users who are experts in the method
– Massively parallel computing is expensive and “slow”
– Model building is a problem
– Modeling results are difficult to interpret

Requirements
– Full elastic modeling
– Attenuation
– Anisotropy
– Topography
– Effective exploitation of computational resources
– High-fidelity numerical modeling
– Hybrid methodology: ray tracing coupled with finite differences
– Local resolution algorithms
– Massively parallel supercomputers and clusters

What is our goal? To build a 3D elastic modeling software tool capable of computing realistic (10 km x 10 km x 4 km) models at seismic exploration frequencies (up to 100 Hz) on local networks in reasonable time (overnight), usable by any geophysical software user with no special method-oriented training.


Issues of 3D modeling performance
– Accuracy. Improves with higher-order differencing and finer gridding.
– Model size. No less than 5 grid points per shortest wavelength. An acoustic code requires 3.5*N cells, where N = Nx*Ny*Nz is the number of cells needed to sample the model parameters. An elastic code needs 6 times more cells.
– CPU time. An acoustic code requires 5*K operations per grid cell, where K = 6*m and m is the order of the differential operator. An elastic code requires 5 times more operations per grid cell.
– Stability. Requires an integration time step dt <= 0.5*h/Vmax (Courant-type limit, with h the grid spacing and Vmax the maximum velocity).
– Optimization. Avoid oversampling and too-small time integration steps. Use parallel computing. Avoid computing in undisturbed cells.
– Numerical artifacts. Contrast contacts. Step-sampling noise of dipping interfaces. Boundary reflections. Liquid-elastic interfaces.
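The sizing rules above can be turned into a quick back-of-the-envelope estimator. The sketch below (illustrative only, not the LBNL code) uses the slide's rules of thumb: at least 5 grid points per shortest wavelength, about 3.5*N stored values for an acoustic code and 6 times that for elastic. The 1500 m/s water velocity and 4-byte values are assumptions, not figures from the slides.

```python
# Back-of-the-envelope cost model for uniform-grid FD seismic modeling.
# Rules of thumb from the slide: >= 5 grid points per shortest wavelength,
# ~3.5*N stored values (acoustic), ~6x that (elastic). The velocity and
# bytes-per-value below are illustrative assumptions.

def grid_points(extent_m, v_min, f_max, points_per_wavelength=5):
    """Grid points needed along one axis of length extent_m."""
    shortest_wavelength = v_min / f_max          # lambda_min = v_min / f_max
    dx = shortest_wavelength / points_per_wavelength
    return int(extent_m / dx) + 1

def memory_gb(nx, ny, nz, elastic=False, bytes_per_value=4):
    """Rough storage estimate: 3.5*N values acoustic, 6x more elastic."""
    n = nx * ny * nz
    values = 3.5 * n * (6 if elastic else 1)
    return values * bytes_per_value / 1e9

# Target model from the slides: 10 km x 10 km x 4 km at up to 100 Hz,
# with an assumed minimum velocity of 1500 m/s (water).
nx = grid_points(10_000, v_min=1500, f_max=100)
nz = grid_points(4_000, v_min=1500, f_max=100)
print(nx, nz, memory_gb(nx, nx, nz, elastic=True))
```

The result makes the slide's point concrete: at 100 Hz the elastic target model runs to thousands of gigacells' worth of stored values, far beyond a single year-2000 workstation.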

3D hybrid elastic seismic modeling
– Parallel variable-order 3D finite-difference code based on overlapping subdomain decomposition.
– Grid spacing depends on the model parameters to provide a locally optimal computational regime.
– Wave propagation in the water will be computed by an acoustic code.
– Contrast dipping interfaces will be handled with a Local Boundary Conditions approach.
– Computation is performed only for subdomains with a non-zero wave field.
– Option of conditional computation restarting at any given lapse time.
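The "compute only subdomains with non-zero wave field" idea can be sketched in a few lines. This is a hypothetical 1-D toy, not the LBNL implementation: a subdomain is flagged for updating if it, or a neighbor that could feed energy into it this step, contains significant field.

```python
# Toy sketch of skipping undisturbed subdomains, assuming a 1-D field
# split into equal chunks. Only flagged chunks would be time-stepped.

import numpy as np

def active_subdomains(field, n_sub, threshold=1e-12):
    """Return flags marking subdomains that need updating this step."""
    chunks = np.array_split(np.abs(field), n_sub)
    hot = np.array([c.max() > threshold for c in chunks])
    # A quiet subdomain next to an active one must also be updated,
    # because the wavefront can cross the interface during this step.
    grow = hot.copy()
    grow[:-1] |= hot[1:]
    grow[1:] |= hot[:-1]
    return grow

field = np.zeros(1000)
field[480:520] = 1.0            # a localized wavefront near the middle
flags = active_subdomains(field, n_sub=10)
print(flags)
```

Early in a simulation, when the wavefront occupies a small fraction of the model, most subdomains are skipped; the savings shrink as the field fills the volume.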


FY2000 Results
– Stair-step gridding problem resolved
– Nonuniform grid algorithm tested
– New stable topography algorithm tested
– Parallel interface library applied to 2D
– 4th-order-in-time scheme tested
– GXT ray tracer installed

Parallel Software Infrastructure: BoxLib Foundation Library
– Domain-specific library: supports PDE solvers on structured, hierarchical adaptive meshes (AMR)
– Support for heterogeneous workloads
– BoxLib-based programs run on serial, distributed-memory and shared-memory supercomputers and on SMP clusters
– Parallel implementation: based on the MPI standard, ensuring portability; dynamic load balance achieved using a dynamic-programming approach to distribute sub-grids
– Hybrid C++/Fortran programming model: C++ for flow control, memory management and I/O; Fortran for numerical kernels
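The load-balancing goal above is to spread sub-grids so every rank does similar work. BoxLib is described as using a dynamic-programming approach; as a stand-in, the sketch below shows the simpler greedy longest-processing-time heuristic, which pursues the same objective. The costs and rank count are hypothetical.

```python
# Hedged sketch of distributing sub-grids across MPI ranks by estimated
# cost (e.g. cell count). Greedy LPT heuristic, not BoxLib's actual
# dynamic-programming algorithm.

import heapq

def distribute(subgrid_costs, n_ranks):
    """Assign each sub-grid to the currently least-loaded rank."""
    heap = [(0.0, rank) for rank in range(n_ranks)]   # (load, rank)
    assignment = {}
    # Placing the biggest grids first keeps the final loads close together.
    for gid, cost in sorted(enumerate(subgrid_costs),
                            key=lambda kv: kv[1], reverse=True):
        load, rank = heapq.heappop(heap)
        assignment[gid] = rank
        heapq.heappush(heap, (load + cost, rank))
    return assignment

costs = [60, 40, 30, 30, 20, 20]   # hypothetical sub-grid cell counts
plan = distribute(costs, n_ranks=2)
loads = {}
for gid, rank in plan.items():
    loads[rank] = loads.get(rank, 0) + costs[gid]
print(loads)
```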

Discretization Methodology
Improved finite-difference schemes
– Fourth order in space and time
– Reduce computational time and memory requirements by more than an order of magnitude for realistic geologic models
– Improved parallel performance by reducing the communication-to-computation ratio
Absorbing boundary conditions
– Use non-local pseudo-differential operators to represent one-way wave propagation
– Expand with Padé approximations to obtain a local representation
– Add graded local damping to minimize evanescent reflections
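A minimal sketch of the "fourth order in space" ingredient: the standard five-point fourth-order central stencil for the second derivative, checked against sin(x), whose second derivative is -sin(x). The modified-equation time discretization and the absorbing boundaries from the slide are not reproduced here.

```python
# Fourth-order central-difference approximation of f''(x) using the
# standard five-point stencil (-1, 16, -30, 16, -1) / (12 h^2).

import math

def d2_fourth_order(f, x, h):
    """O(h^4) approximation of the second derivative of f at x."""
    return (-f(x - 2*h) + 16*f(x - h) - 30*f(x)
            + 16*f(x + h) - f(x + 2*h)) / (12 * h * h)

x, h = 0.7, 1e-2
approx = d2_fourth_order(math.sin, x, h)
print(approx, -math.sin(x))   # the two values agree to ~1e-10
```

Because the error falls as h^4 rather than h^2, the grid can be made coarser for the same accuracy, which is where the memory and run-time savings claimed above come from.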

Grating effect reduction by spatial filtering (figure panels: no correction vs. linear gradient correction).

Nonuniform grid FD computations: savings factors in memory and CPU time (2D tested, 3D projected). Test cases: two half-spaces with 100% velocity contrast, and a no-velocity-contrast test.

Numerically stable steep topography modeling.

LBNL has a fast and accurate wave propagation algorithm for moderately heterogeneous elastic media, and we are going to apply it.

The hybrid ray tracing and FD approach speeds up computations by up to 10 times: a fast ray-based code propagates the field through smooth, simple media, and the FD code takes over in complex media.
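A simple cost model shows why the hybrid split can reach that order of speedup. The fractions and the ray-to-FD cost ratio below are illustrative assumptions, not measured values from the project.

```python
# Rough cost model: FD everywhere vs. FD only in the complex region with
# a cheap ray-based propagator through the smooth overburden. All numbers
# here are assumed for illustration.

def hybrid_speedup(complex_fraction, ray_cost_fraction=0.01):
    """Speedup of (FD on complex part + rays elsewhere) over FD everywhere."""
    full_fd = 1.0
    hybrid = complex_fraction + (1 - complex_fraction) * ray_cost_fraction
    return full_fd / hybrid

# If only ~10% of the model volume needs FD, the speedup approaches 10x.
print(hybrid_speedup(0.10))
```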

LBNL PC cluster
– Vendor: SGI
– 8 nodes, each with dual 800 MHz Pentium III CPUs
– 512 MB SDRAM per node
– Myrinet LAN with fast network cards
– Linux based
– Price: $60K
– Completion by the end of 2000

Year 2001 efforts (requested budget: $250K)
– Parallel 3D elastic code
– Nonuniform grid for 3D
– 4th-order-in-time scheme for 3D
– Hybrid (ray tracing + FD) algorithm
– Topography for 3D

BoxLib Parallelism
– Hybrid C++/Fortran programming environment
– Library supports parallel PDE solvers on rectangular meshes
– MPI portability: distributed- and shared-memory supercomputers, clusters of engineering workstations

Discretization
– Fourth-order accuracy in space and time based on a modified-equation approach
– 2D: the fourth-order scheme gives 2 times the performance of conventional second-order schemes
– 3D: the fourth-order scheme is 4 times as efficient
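The 2x/4x figures are consistent with a simple argument: a fourth-order scheme tolerates a grid roughly 2x coarser for the same accuracy, which also doubles the stable time step, at the price of roughly 4x the work per point per step. The coarsening factor and operation overhead below are assumed round numbers, not measurements.

```python
# Why fourth order pays off more in 3D than 2D: coarsening the grid by a
# factor c cuts points by c**dims and time steps by another factor of c,
# while the wider stencil costs ~4x per point per step (assumed overhead).

def speedup(dims, coarsening=2.0, op_overhead=4.0):
    """Cost ratio: 2nd-order scheme vs. 4th-order scheme at equal accuracy."""
    return coarsening ** (dims + 1) / op_overhead

print(speedup(2), speedup(3))   # reproduces the slide's 2x (2D) and 4x (3D)
```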

Parallel Performance (plot of wall-clock run time vs. number of CPUs): the 4th-order scheme performs better as the number of CPUs increases.