
Forward modelling

The key to waveform tomography is the calculation of Green's functions (the point-source responses). A wide range of modelling methods is available. Very fast methods are limited (e.g., 1D only, no multiple scattering, or no turning waves), while very complete methods are prohibitively expensive (e.g., full 3D methods with anisotropy, attenuation, etc.). Our choice is the 2D, isotropic, acoustic, two-way wave equation, solved by frequency-domain finite differences.

Frequency domain finite differences

You don't always need all the frequencies for the inverse problem. There are easy savings for multiple-source problems. You don't always need a long time window. Inelastic attenuation is easy to model, and any dispersion law for attenuation / velocity is possible.

Frequency domain finite differences

Return to the frequency-domain acoustic wave equation, including an arbitrary source term. The velocity is complex, attenuating, and dispersive.
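The equations on this slide did not survive the transcript. A reconstruction of their likely form (the specific dispersion law is an assumption; a Kolsky–Futterman-type law with quality factor Q and reference frequency ω₀ is one common choice):

```latex
% Frequency-domain (Helmholtz) acoustic wave equation with source term s:
\[
\nabla^2 u(\mathbf{x},\omega) + \frac{\omega^2}{c(\mathbf{x},\omega)^2}\,u(\mathbf{x},\omega)
  = -s(\mathbf{x},\omega)
\]
% Complex, attenuating, dispersive velocity (exact form is an assumption):
\[
\frac{1}{c(\omega)} = \frac{1}{c_0}\left(1 + \frac{1}{\pi Q}\ln\frac{\omega}{\omega_0}\right)
  - \frac{i}{2 c_0 Q}
\]
```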

Frequency domain finite differences

Reducing (for now) to one dimension (imagine waves propagating on a string). On a 1D grid, the particle displacements are stored as a list of numbers, or vector.
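The 1D equation and grid vector referred to here presumably look like (a reconstruction consistent with the 2D equation above):

```latex
\[
\frac{\partial^2 u(x,\omega)}{\partial x^2} + \frac{\omega^2}{c(x)^2}\,u(x,\omega) = -s(x,\omega)
\]
\[
\mathbf{u} = \bigl(u_1,\; u_2,\; \dots,\; u_{N_x}\bigr)^{T},
\qquad u_i = u(i\,\Delta x)
\]
```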

Frequency domain finite differences

The first space derivative is approximated by a central difference over neighbouring grid points.
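The standard central-difference approximation (a reconstruction of the missing formula):

```latex
\[
\left(\frac{\partial u}{\partial x}\right)_{\!i} \approx \frac{u_{i+1} - u_{i-1}}{2\,\Delta x}
\]
```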

Frequency domain finite differences

An alternative way of representing the differencing operator is a differencing stencil. This generates the derivative at each point as we slide the stencil over the grid, multiplying and summing the corresponding displacement values.
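In stencil form, the central first-derivative operator is presumably:

```latex
\[
\frac{\partial}{\partial x} \;\longrightarrow\; \frac{1}{2\,\Delta x}
\begin{bmatrix} -1 & 0 & +1 \end{bmatrix}
\]
```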

Frequency domain finite differences

The second-derivative differencing stencil is built and applied in the same way.
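The missing stencil is presumably the standard three-point operator [1, −2, 1] / Δx². A quick numerical sketch (grid values are illustrative): applied to u(x) = x², whose exact second derivative is 2 everywhere, the stencil should return 2 at every interior point.

```python
import numpy as np

dx = 0.1
x = np.arange(0.0, 1.0 + dx / 2, dx)
u = x**2                                     # test function with u'' = 2

stencil = np.array([1.0, -2.0, 1.0]) / dx**2
# Slide the stencil over the interior of the grid, multiplying and
# summing the displacement values ('valid' mode skips the endpoints)
d2u = np.convolve(u, stencil, mode="valid")

print(d2u[0], d2u[-1])                       # both close to 2.0
```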

Frequency domain finite differences

The finite difference problem in the frequency domain: the solution must satisfy the wave equation simultaneously at all grid points. Each grid point generates one equation, so Nx grid points generate Nx simultaneous equations, which are most easily represented as a matrix equation.

Frequency domain finite differences

(figure: the Nx discretized equations assembled into a single matrix equation)

Notes on the finite differencing matrix:
1. Very large (order N x N)
2. Very sparse (e.g., 28 non-zero out of 100 elements for a 10 x 10 example)
3. Tri-diagonal structure
4. Not diagonally dominant
5. Complex valued (because of c, and later even ω will be complex valued)
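A minimal sketch of assembling and solving this tridiagonal system with SciPy. The grid size, frequency, velocity, and source position are made-up illustrative values; the boundaries are simple truncation (no absorbing layer), and a small imaginary part of the velocity stands in for attenuation:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

nx, dx = 100, 10.0                       # grid points and spacing (m)
omega = 2 * np.pi * 5.0                  # angular frequency (5 Hz)
c = np.full(nx, 2000.0 + 25.0j)          # complex (attenuating) velocity

# Tridiagonal matrix: second-derivative stencil plus the omega^2/c^2 term
main = -2.0 / dx**2 + omega**2 / c**2
off = np.full(nx - 1, 1.0 / dx**2, dtype=complex)
A = sp.diags([off, main, off], [-1, 0, 1], format="csc")

s = np.zeros(nx, dtype=complex)
s[nx // 2] = 1.0                         # point source mid-grid

u = spla.spsolve(A, -s)                  # one sparse solve per frequency
```

Each frequency (and, with the right solver, each source) costs one such sparse solve, which is where the savings mentioned earlier come from.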

Matrix differencing operators for 2D

The wavefield is now sampled on a 2D grid, and differencing stencils become differencing stars. For example, the second derivative in 2D could be approximated by a five-point star.
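The five-point star is presumably the standard 2D Laplacian stencil (a reconstruction, assuming equal spacing Δ in both directions):

```latex
\[
\bigl(\nabla^2 u\bigr)_{i,j} \approx
\frac{u_{i+1,j} + u_{i-1,j} + u_{i,j+1} + u_{i,j-1} - 4\,u_{i,j}}{\Delta^2}
\]
```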

Matrix differencing operators for 2D

We need to arrange the grid variables into a column vector. Assume (for now) that the ordering is row-ordered.
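Row ordering stacks the rows of the 2D grid into one long column vector; on an N × N grid (the index formula is a reconstruction):

```latex
\[
\mathbf{u} = \bigl(u_{1,1},\, u_{1,2},\, \dots,\, u_{1,N},\, u_{2,1},\, \dots,\, u_{N,N}\bigr)^{T},
\qquad k = (i-1)\,N + j
\]
```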

Matrix differencing operators for 2D

The differencing matrix for the five-point star is "tri-diagonal with outliers". It can be solved with band-diagonal solvers, but still requires O(N^(3/2)) storage in memory. If N = 10^6 (a 1000 x 1000 grid), this is 8 Gbytes of RAM: we need some tricks!
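A sketch of building this "tri-diagonal with outliers" matrix via Kronecker products (sizes and physical values are illustrative). With row ordering, the horizontal neighbours sit at offsets ±1 from the main diagonal, while the vertical neighbours become "outlier" bands at offsets ±n:

```python
import numpy as np
import scipy.sparse as sp

n, dx = 20, 10.0                         # n-by-n grid (illustrative)
omega, c = 2 * np.pi * 5.0, 2000.0

d2 = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / dx**2
I = sp.identity(n)

# 2D Laplacian by Kronecker products: kron(I, d2) gives the in-row
# neighbours, kron(d2, I) gives the cross-row ("outlier") neighbours
lap = sp.kron(I, d2) + sp.kron(d2, I)
A = (lap + (omega**2 / c**2) * sp.identity(n * n)).tocoo()

offsets = sorted({int(d) for d in A.col - A.row})
print(offsets)  # [-20, -1, 0, 1, 20], i.e. [-n, -1, 0, 1, n]
```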

Tricks: Part 1 – tricks in operators

The simple five-point star differencing operator is very simple-minded, but we cannot afford higher-order operators, since these create additional outlier bands in the matrix. Two tricks are available: i) rotated operators, and ii) lumped and consistent mass operators.

Tricks: Part 1 – tricks in operators

i) Rotated operators: a linear combination of two differencing schemes, each with a different orientation, can minimize numerical anisotropy.

Tricks: Part 1 – tricks in operators

ii) Lumped and consistent mass operators: a linear combination of the lumped and consistent mass schemes can minimize numerical dispersion.
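The two missing formulations presumably contrast where the ω²/c² ("mass") term is applied: at the node alone, or distributed over the stencil (a reconstruction; the weights a and b are scheme-dependent, with a + 4b = 1):

```latex
\[
\text{lumped:}\quad \frac{\omega^2}{c_{ij}^2}\, u_{ij}
\qquad
\text{consistent:}\quad \frac{\omega^2}{c_{ij}^2}
\Bigl[\,a\,u_{ij} + b\bigl(u_{i+1,j} + u_{i-1,j} + u_{i,j+1} + u_{i,j-1}\bigr)\Bigr]
\]
```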

Tricks: Part 1 – tricks in operators

Rotated operators and consistent mass matrices significantly reduce numerical dispersion (without adding outliers). (figure: dispersion for the original second-order operators)

Tricks: Part 1 – tricks in operators

(figure: dispersion for the rotated and consistent mass second-order operators)

Tricks: Part 2 – tricks in matrix solvers

Row ordering of the grid leads to a band structure, and permutation of the ordering changes the matrix structure. Nested dissection recursively breaks the grid into linked sub-grids. Storage is reduced from O(N^(3/2)) to O(N log sqrt(N)): for N = 10^6, storage drops from O(10^9) to O(3 x 10^6), roughly three orders of magnitude, and the 8 Gbyte storage requirement drops to 24 Mbyte (plus overhead).
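SciPy does not expose nested dissection directly, but its SuperLU wrapper can demonstrate the same point: a fill-reducing permutation of the unknowns (here COLAMD, as a stand-in for nested dissection) shrinks the LU factors of the five-point-star matrix relative to the natural band ordering (grid size illustrative):

```python
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 30                                            # 30-by-30 grid
d2 = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n))
I = sp.identity(n)
A = (sp.kron(I, d2) + sp.kron(d2, I)).tocsc()     # five-point-star matrix

natural = spla.splu(A, permc_spec="NATURAL")      # keep the band ordering
colamd = spla.splu(A, permc_spec="COLAMD")        # fill-reducing permutation

fill_natural = natural.L.nnz + natural.U.nnz
fill_colamd = colamd.L.nnz + colamd.U.nnz
print(fill_natural, fill_colamd)                  # reordered factors are smaller
```

Once factored, the same LU factors can be reused for every source at that frequency, which is where the multiple-source savings come from.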
