Lecture 22 - Exam 2 Review CVEN 302 July 29, 2002

Lecture’s Goals
Chapter 6 - LU Decomposition
Chapter 7 - Eigen-analysis
Chapter 8 - Interpolation
Chapter 9 - Approximation
Chapter 11 - Numerical Differentiation and Integration

Chapter 6 LU Decomposition of Matrices

LU Decomposition The LU decomposition is a modification of the elimination method. The technique rewrites the matrix as the product of two matrices: A = LU.

LU Decomposition There are variations of the technique, using different conventions for the diagonal:
–Crout’s reduction (U has ones on the diagonal).
–Doolittle’s method (L has ones on the diagonal).
–Cholesky’s method (the diagonal terms are the same value for the L and U matrices).

LU Decomposition Solving using the LU decomposition: [A]{x} = [L][U]{x} = [L]{[U]{x}} = {b}. First solve [L]{y} = {b} by forward substitution, then solve [U]{x} = {y} by back substitution.
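A minimal Python sketch of the two-stage solve, assuming [L] and [U] have already been computed with ones on the diagonal of [L] (Doolittle’s convention):

```python
import numpy as np

def lu_two_stage_solve(L, U, b):
    """Solve A x = b given A = L U (Doolittle: ones on L's diagonal)."""
    n = len(b)
    # Forward substitution: L y = b
    y = np.zeros(n)
    for i in range(n):
        y[i] = b[i] - L[i, :i] @ y[:i]
    # Back substitution: U x = y
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x
```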

LU Decomposition The matrices are represented by a lower triangular matrix [L] and an upper triangular matrix [U].

LU Decomposition (Crout’s reduction) Matrix decomposition with ones on the diagonal of [U].

LU Decomposition (Doolittle’s Method) Matrix decomposition with ones on the diagonal of [L].

Cholesky’s Method The matrix is decomposed into A = [L][U] with [U] = [L]ᵀ, where l_ii = u_ii (this requires [A] to be symmetric).

Tridiagonal Matrix Doolittle’s method simplifies considerably for a banded matrix, in particular a tridiagonal matrix (see the sketch below).
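A minimal sketch of the tridiagonal (Thomas) solver that Doolittle-style LU reduces to; the vectors a, b, c hold the sub-, main, and super-diagonals (a[0] and c[-1] are unused by convention):

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system: a = sub-diagonal, b = main, c = super-diagonal, d = RHS."""
    n = len(b)
    cp = np.zeros(n)
    dp = np.zeros(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]            # eliminate the sub-diagonal entry
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```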

Pivoting of the LU Decomposition Pivoting is still needed in the LU decomposition, but row exchanges disturb the order of [L]. What to do? Pivot both [L] and a permutation matrix [P]: initialize [P] as the identity matrix, and whenever [A] is pivoted, pivot [P] and [L] as well.

Pivoting of the LU Decomposition The permutation matrix [P] is a permutation of the identity matrix [I]; it performs the “bookkeeping” associated with the row exchanges. The LU factorization is then of the permuted matrix: [P][A] = [L][U].
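A quick check with SciPy’s built-in factorization; note that scipy.linalg.lu returns P such that A = P L U, so [P]ᵀ[A] = [L][U] in the slide’s notation:

```python
import numpy as np
from scipy.linalg import lu

A = np.array([[0., 2., 1.],
              [1., 1., 0.],
              [2., 1., 1.]])
P, L, U = lu(A)                     # A = P @ L @ U
print(np.allclose(P.T @ A, L @ U))  # True: the permuted matrix factors as L U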

Chapter 7 Eigen-analysis

Eigen-Analysis Matrix eigenvalues arise from discrete models of physical systems. Because a discrete model has a finite number of degrees of freedom, it has a finite number of eigenvalues and eigenvectors.

Eigenvalues Computing eigenvalues of a matrix is important in numerous applications.
–In numerical analysis, the convergence of an iterative sequence involving matrices is determined by the size of the eigenvalues of the iteration matrix.
–In dynamic systems, the eigenvalues indicate whether a system is oscillatory, stable (decaying oscillations), or unstable (growing oscillations).
–In oscillatory systems, the eigenvalues of the differential equations or of the coefficient matrix of a finite element model are directly related to the natural frequencies of the system.
–In regression analysis, the eigenvectors of the correlation matrix are used to select new predictor variables that are linear combinations of the original predictor variables.

General Form of the Equations The general form of the eigenvalue equations is [A]{x} = λ{x}, or ([A] − λ[I]){x} = {0}.

Power Method The basic computation of the power method is summarized as {w_k} = [A]{z_k}, followed by the normalization {z_k+1} = {w_k}/max(w_k). The largest element of {w_k} converges to the dominant eigenvalue, and {z_k} to the corresponding eigenvector.
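A minimal Python sketch of the iteration (the tolerance and iteration cap are illustrative choices):

```python
import numpy as np

def power_method(A, z0, tol=1e-10, max_iter=500):
    """Estimate the dominant eigenvalue/eigenvector of A by power iteration."""
    z = z0 / np.max(np.abs(z0))
    lam = 0.0
    for _ in range(max_iter):
        w = A @ z                          # w_k = A z_k
        lam_new = w[np.argmax(np.abs(w))]  # eigenvalue estimate: largest element of w
        z = w / lam_new                    # normalized z_{k+1}
        if abs(lam_new - lam) < tol:
            break
        lam = lam_new
    return lam_new, z
```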

Shift Method It is possible to obtain another eigenvalue from the set of equations by using a technique known as shifting the matrix. Subtracting s[I]{x} from each side shifts every eigenvalue by s, thereby changing which eigenvalue has the maximum magnitude.

Shift Method Take the shift s to be the maximum eigenvalue of [A], and rewrite the matrix in the form [B] = [A] − s[I]. Use the power method to obtain the largest eigenvalue of [B]; shifting back by s gives another eigenvalue of [A].

Inverse Power Method The inverse power method is similar to the power method, except that it finds the smallest eigenvalue. It applies the power iteration to [A]⁻¹, in practice by solving [A]{w_k} = {z_k} at each step.

Inverse Power Method The algorithm is the same as the power method, but the value it converges to is the largest eigenvalue of [A]⁻¹, which is 1/λ_min. To obtain the smallest eigenvalue of [A], take the reciprocal of the value produced by the iteration.

Accelerated Power Method The power method can be accelerated by using the Rayleigh quotient instead of the largest w_k value. The Rayleigh quotient is defined as: λ ≈ ({z_k}ᵀ{w_k}) / ({z_k}ᵀ{z_k}).

Accelerated Power Method The next z vector is obtained by normalizing {w_k} with the Rayleigh quotient estimate, and the power method proceeds with the new value.

QR Factorization Another form of factorization: A = Q*R. It produces an orthogonal matrix (“Q”) and a right upper triangular matrix (“R”). For an orthogonal matrix, the inverse is the transpose.

QR Factorization Why do we care? We can use Q and R to find eigenvalues:
1. Get Q and R (A = Q*R)
2. Let A = R*Q
3. Diagonal elements of A are eigenvalue approximations
4. Iterate until converged
Note: the QR eigenvalue method gives all eigenvalues simultaneously, not just the dominant one. A sketch follows below.
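A minimal sketch of the unshifted QR iteration (a fixed iteration count stands in for a real convergence test; convergence is guaranteed only for well-behaved, e.g. symmetric, matrices):

```python
import numpy as np

def qr_eigenvalues(A, n_iter=200):
    """Approximate all eigenvalues by the (unshifted) QR iteration."""
    Ak = np.array(A, dtype=float)
    for _ in range(n_iter):
        Q, R = np.linalg.qr(Ak)  # factor: A_k = Q R
        Ak = R @ Q               # re-multiply in reverse order
    return np.diag(Ak)           # diagonal entries -> eigenvalue estimates
```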

Householder Matrix The Householder matrix reduces the components z_k+1, …, z_n of a vector to zero.

Householder Matrix To achieve the above operation, v must be a linear combination of x and e_k.

Chapter 8 Interpolation

Interpolation Methods Interpolation uses the data to approximate a function that fits all of the data points. All of the data are used to approximate the values of the function inside the bounds of the data. We will look at polynomial and rational-function interpolation of the data, and at piecewise interpolation of the data.

Polynomial Interpolation Methods Lagrange interpolation polynomial - a straightforward, but computationally awkward, way to construct an interpolating polynomial. Newton interpolation polynomial - there is no difference between the Newton and Lagrange results; the difference between the two is the approach to obtaining the coefficients.
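A minimal sketch of direct Lagrange evaluation, the straightforward form referred to above:

```python
def lagrange_eval(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through (xs, ys) at x."""
    total = 0.0
    n = len(xs)
    for i in range(n):
        # Basis polynomial L_i(x): equals 1 at xs[i] and 0 at the other support points
        Li = 1.0
        for j in range(n):
            if j != i:
                Li *= (x - xs[j]) / (xs[i] - xs[j])
        total += ys[i] * Li
    return total

print(lagrange_eval([0, 1, 2], [1, 2, 5], 1.5))  # quadratic through the three points
```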

Hermite Interpolation The advantages: The segments of the piecewise Hermite polynomial have a continuous first derivative at the support points. The shape of the function being interpolated is better matched, because the tangent of this function and the tangent of the Hermite polynomial agree at the support points.

Rational Function Interpolation Polynomials are not always the best match for the data. A rational function, the ratio of two polynomials, can be used to represent steps. This is useful when dealing with complex functions z = x + iy. The Bulirsch-Stoer algorithm creates a function where the numerator is of the same order as the denominator, or one less.

Rational Function Interpolation For rational-function interpolation, only the locations and the function values need to be known.

Cubic Spline Interpolation Hermite polynomials produce a smooth interpolation, but they have the disadvantage that the slope of the input function must be specified at each breakpoint. Cubic spline interpolation uses only the data points, maintains the desired smoothness of the function, and is piecewise continuous.
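For illustration, SciPy’s cubic spline needs only the data points, with no slopes supplied, as noted above:

```python
import numpy as np
from scipy.interpolate import CubicSpline

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.sin(x)                # sample data: values only, no derivatives supplied
cs = CubicSpline(x, y)       # default 'not-a-knot' end conditions
print(cs(1.5), np.sin(1.5))  # interpolated vs. true value
```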

Chapter 9 Approximation

Approximation Methods Interpolation matches the data points exactly; with experimental data, that is often not appropriate. In approximation, we want to find the curve that fits the data with the smallest “error”. What is the difference between approximation and interpolation?

Least Square Fit Approximations The solution is the minimization of the sum of squares of the errors, which gives the least-squares solution. For normally distributed errors, this follows from the Maximum Likelihood Principle.

Least Square Error How do you minimize the error? Take the derivative of the error with respect to each coefficient and set it equal to zero.

Least Square Coefficients for Quadratic Fit For the quadratic model y = a₀ + a₁x + a₂x², the equations can be written as a 3×3 system of normal equations for the coefficients a₀, a₁, a₂.
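A minimal sketch of the normal equations for the quadratic fit (np.vander builds the columns 1, x, x²):

```python
import numpy as np

def quadratic_fit(x, y):
    """Least-squares coefficients a0, a1, a2 for y ~ a0 + a1*x + a2*x**2."""
    V = np.vander(x, 3, increasing=True)      # columns: 1, x, x^2
    return np.linalg.solve(V.T @ V, V.T @ y)  # normal equations: (V^T V) a = V^T y
```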

Polynomial Least Square The technique can be applied to polynomials of any degree, of the form y = a₀ + a₁x + … + a_n xⁿ.

Polynomial Least Square Solving large sets of linear equations is not a simple task. They can have the undesirable property known as ill-conditioning. The result is that round-off errors in solving for the coefficients cause unusually large errors in the curve fits.

Polynomial Least Square A measure of the variance of the fit is σ² = Σ (Y_k − y(x_k))² / (N − n − 1), where n is the degree of the polynomial, N is the number of data points, Y_k are the data values, and y(x_k) is the fitted polynomial evaluated at x_k.

Nonlinear Least Squared Approximation Method How would you handle a problem that is modeled by a function that is nonlinear in its coefficients, e.g. y = a·e^(bx)?

Nonlinear Least Squared Approximation Method Take the natural log of the equation, ln y = ln a + bx, and apply the linear least-squares technique to the transformed data.
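A minimal sketch for the exponential model named above (the model choice is illustrative, and the data must be positive for the log transform):

```python
import numpy as np

def exp_fit(x, y):
    """Fit y ~ a*exp(b*x) by linearizing: ln(y) = ln(a) + b*x."""
    b, ln_a = np.polyfit(x, np.log(y), 1)  # straight-line least squares on (x, ln y)
    return np.exp(ln_a), b
```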

Continuous Least Square Functions Instead of modeling a known complex function over a region, we would like to model the values with a simple polynomial. This technique uses least squares over a continuous region. The coefficients of the polynomial can be determined using the same technique that was used in the discrete method.

Continuous Least Square Functions The technique minimizes the error of the fit using an integral: E = ∫ [f(x) − p(x)]² dx over the region, where f(x) is the function being modeled and p(x) is the approximating polynomial.

Continuous Least Square Functions Take the derivative of the error with respect to the coefficients and set it equal to zero, then compute the components of the coefficient matrix. The right-hand side of the system contains integrals of the function we are modeling times powers of x.
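A minimal sketch of the continuous normal equations for a monomial basis; scipy.integrate.quad does the right-hand-side integrals numerically:

```python
import numpy as np
from scipy.integrate import quad

def continuous_lsq(f, a, b, degree):
    """Continuous least-squares polynomial coefficients c0..c_degree for f on [a, b]."""
    n = degree + 1
    A = np.empty((n, n))
    rhs = np.empty(n)
    for i in range(n):
        rhs[i] = quad(lambda x: f(x) * x**i, a, b)[0]          # integral of f(x) x^i
        for j in range(n):
            A[i, j] = (b**(i+j+1) - a**(i+j+1)) / (i + j + 1)  # integral of x^(i+j)
    return np.linalg.solve(A, rhs)

print(continuous_lsq(np.exp, 0.0, 1.0, 2))  # quadratic approximation to e^x on [0, 1]
```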

Continuous Least Square Functions There are other forms of equations that can be used to represent continuous functions. Examples of these functions are Legendre polynomials, Chebyshev polynomials, and cosines and sines.

Legendre Polynomial The Legendre polynomials are a set of orthogonal functions that can be used to represent a function as a sum of orthogonal components.

Legendre Polynomial These functions are orthogonal over the range [-1, 1]; the range can be scaled to fit the function. The first few orthogonal functions are P₀(x) = 1, P₁(x) = x, P₂(x) = (3x² − 1)/2, …

Continuous Functions Other forms of orthogonal functions are sines and cosines, which are used in Fourier approximation. The advantage of sines and cosines is that they can model large time scales. You will need to clip the ends of the series so that it has zeros at the ends.

Chapter 11 Numerical Differentiation and Integration

Numerical Differentiation A Taylor series or a Lagrange interpolation of points can be used to find the derivatives. The Taylor series expansion is defined as: f(x + Δx) = f(x) + Δx·f′(x) + (Δx²/2!)·f″(x) + (Δx³/3!)·f‴(x) + …

Numerical Differentiation Assume that the data points are equally spaced; the equations can be written as (see the sketch below):
forward difference: f′(x_i) ≈ [f(x_i+1) − f(x_i)] / Δx
backward difference: f′(x_i) ≈ [f(x_i) − f(x_i−1)] / Δx
central difference: f′(x_i) ≈ [f(x_i+1) − f(x_i−1)] / (2Δx)
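A minimal sketch of the three formulas (the step size is an illustrative choice):

```python
import numpy as np

def first_derivative_estimates(f, x, dx=1e-5):
    """Forward, backward, and central difference estimates of f'(x)."""
    fwd = (f(x + dx) - f(x)) / dx             # O(dx) error
    bwd = (f(x) - f(x - dx)) / dx             # O(dx) error
    ctr = (f(x + dx) - f(x - dx)) / (2 * dx)  # O(dx^2) error
    return fwd, bwd, ctr

print(first_derivative_estimates(np.sin, 1.0))  # all near cos(1.0) ~ 0.5403
```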

Differential Error Notice that the forward and backward first-derivative equations have an error of order O(Δx), while the central difference has an error of order O(Δx²). The central difference has better accuracy and lower error than the others. This can be improved by using more terms to model the first derivative.

Higher Order Derivatives To find higher derivatives, use the Taylor series expansions about several points and eliminate the unwanted terms from the sum of the equations. To improve the error in the problem, add additional terms.

Lagrange Differentiation Another form of differentiation is to use the Lagrange interpolation between three points. The values can be determined for unevenly spaced points, given the support points and their function values.

Lagrange Differentiation Differentiate the Lagrange interpolation; if a constant spacing is assumed, the formulas reduce to the finite-difference results above.

Richardson Extrapolation This technique uses the concept of variable grid sizes to reduce the error. It provides a simple method for eliminating the leading error term. Consider a second-order central difference technique and write the equation in the form f′(x) = D(Δx) + c₂Δx² + c₄Δx⁴ + …

Richardson Extrapolation The central difference can be defined as D(Δx) = [f(x + Δx) − f(x − Δx)] / (2Δx). Write the equation with two different grid sizes, Δx and Δx/2.

Richardson Extrapolation Combining the two grids so that the Δx² terms cancel, the equation can be rewritten in the form f′(x) ≈ [4·D(Δx/2) − D(Δx)] / 3, with error of order O(Δx⁴).

Richardson Extrapolation The technique can be extended to eliminate the higher-order error terms by using progressively finer grids. A sketch of one extrapolation step follows below.
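A minimal sketch of a single Richardson step applied to the central difference:

```python
import math

def richardson_derivative(f, x, h):
    """One Richardson extrapolation step on the central-difference derivative."""
    D = lambda h: (f(x + h) - f(x - h)) / (2 * h)  # O(h^2) central difference
    return (4 * D(h / 2) - D(h)) / 3               # h^2 term cancelled -> O(h^4)

print(richardson_derivative(math.sin, 1.0, 0.1), math.cos(1.0))
```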

Trapezoid Rule Integrate the linear interpolation to obtain the rule: ∫ₐᵇ f(x) dx ≈ [(b − a)/2]·[f(a) + f(b)].

Simpson’s 1/3-Rule Integrate the Lagrange interpolation through three equally spaced points: ∫ f(x) dx ≈ (h/3)·[f(x₀) + 4f(x₁) + f(x₂)].

Simpson’s 3/8-Rule Integrate the cubic interpolation through four equally spaced points: ∫ f(x) dx ≈ (3h/8)·[f(x₀) + 3f(x₁) + 3f(x₂) + f(x₃)].

Midpoint Rule A Newton-Cotes open formula: ∫ₐᵇ f(x) dx ≈ (b − a)·f(x_m), where x_m = (a + b)/2 is the midpoint of the interval.

Composite Trapezoid Rule Apply the trapezoid rule on each subinterval of width h between equally spaced points x₀, x₁, …, x_n: ∫ f(x) dx ≈ h·[f(x₀)/2 + f(x₁) + … + f(x_n−1) + f(x_n)/2]. A sketch follows below.
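A minimal sketch of the composite trapezoid rule:

```python
import numpy as np

def composite_trapezoid(f, a, b, n):
    """Composite trapezoid rule with n equal subintervals."""
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    return h * (y[0] / 2 + y[1:-1].sum() + y[-1] / 2)

print(composite_trapezoid(np.sin, 0.0, np.pi, 100))  # ~2.0
```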

Composite Simpson’s Rule Multiple applications of Simpson’s rule over pairs of subintervals (n must be even): ∫ f(x) dx ≈ (h/3)·[f(x₀) + 4·(sum of odd-index terms) + 2·(sum of even-index interior terms) + f(x_n)].
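And the corresponding sketch for the composite Simpson’s 1/3 rule:

```python
import numpy as np

def composite_simpson(f, a, b, n):
    """Composite Simpson's 1/3 rule; n must be even."""
    if n % 2:
        raise ValueError("n must be even")
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    return (h / 3) * (y[0] + 4 * y[1:-1:2].sum() + 2 * y[2:-1:2].sum() + y[-1])

print(composite_simpson(np.sin, 0.0, np.pi, 100))  # ~2.0, far more accurate than trapezoid
```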

Richardson Extrapolation Use the trapezoidal rule as an example, doubling the number of subintervals at each level: n = 2ʲ = 1, 2, 4, 8, 16, …

Richardson Extrapolation For the trapezoidal rule, the error expansion contains only even powers of h, so each level of extrapolation removes two orders of error.

Richardson Extrapolation At the k-th level of extrapolation: R_j,k = [4ᵏ·R_j,k−1 − R_j−1,k−1] / (4ᵏ − 1).

Romberg Integration The accelerated trapezoid rule: trapezoid estimates with doubled subinterval counts, combined by Richardson extrapolation. A sketch follows below.
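A minimal sketch of the Romberg table (the fixed number of levels is an illustrative choice):

```python
import numpy as np

def romberg(f, a, b, levels=5):
    """Romberg integration: trapezoid estimates accelerated by Richardson extrapolation."""
    R = np.zeros((levels, levels))
    R[0, 0] = (b - a) * (f(a) + f(b)) / 2
    for j in range(1, levels):
        n = 2**j
        h = (b - a) / n
        # Halve the step: reuse the previous trapezoid estimate, add the new midpoints
        R[j, 0] = R[j-1, 0] / 2 + h * sum(f(a + (2*i - 1) * h) for i in range(1, n // 2 + 1))
        for k in range(1, j + 1):
            R[j, k] = (4**k * R[j, k-1] - R[j-1, k-1]) / (4**k - 1)
    return R[levels - 1, levels - 1]

print(romberg(np.exp, 0.0, 1.0))  # ~ e - 1
```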

Gaussian Quadratures Newton-Cotes formulae use evenly spaced functional values. Gaussian quadratures:
–select functional values at non-uniformly distributed points to achieve higher accuracy;
–use a change of variables so that the interval of integration is [-1, 1];
–lead to the Gauss-Legendre formulae.

Gaussian Quadrature on [-1, 1] Two-point rule: ∫ f(x) dx ≈ c₁f(x₁) + c₂f(x₂). Require the integral to be exact for f = x⁰, x¹, x², x³: four equations for four unknowns.

Gaussian Quadrature on [-1, 1] Solving the four equations for exactness on f = x⁰, x¹, x², x³ gives c₁ = c₂ = 1 and x₂ = −x₁ = 1/√3.

Gaussian Quadrature on [-1, 1] Three-point rule: exact integral for f = x⁰, x¹, x², x³, x⁴, x⁵. The nodes are 0 and ±√(3/5), with weights 8/9 and 5/9.
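For illustration, NumPy supplies the Gauss-Legendre nodes and weights; the change of variables maps [-1, 1] to a general interval [a, b]:

```python
import numpy as np

def gauss_legendre(f, a, b, n):
    """n-point Gauss-Legendre quadrature of f over [a, b]."""
    x, w = np.polynomial.legendre.leggauss(n)  # nodes and weights on [-1, 1]
    t = 0.5 * (b - a) * x + 0.5 * (b + a)      # change of variables to [a, b]
    return 0.5 * (b - a) * np.dot(w, f(t))

print(gauss_legendre(np.exp, 0.0, 1.0, 3), np.e - 1)  # 3-point rule is already very accurate
```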

Summary The exam is open book and open notes, with 5-8 problems. For short-answer problems, use a table to differentiate between the techniques. The problems are not going to be excessive. Make a short summary of the material, and use your notes only when you have forgotten something; do not depend on them.