Asymptotic error expansion Example 1: Numerical differentiation –Truncation error via Taylor expansion
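The slide's formulas are not reproduced here, but the idea can be sketched in Python: a Taylor expansion of the centered difference (f(x+h) - f(x-h)) / (2h) gives f'(x) + (h^2/6) f'''(x) + O(h^4), so halving h should divide the error by about 4. (The centered stencil is an assumed choice; the slide may use a different one.)

```python
import math

def centered_diff(f, x, h):
    """Centered finite-difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

x, exact = 1.0, math.cos(1.0)          # f = sin, so f'(1) = cos(1)
errs = [abs(centered_diff(math.sin, x, h) - exact) for h in (0.1, 0.05, 0.025)]
print([e1 / e2 for e1, e2 in zip(errs, errs[1:])])  # ratios close to 4 (order h^2)
```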
Asymptotic error expansion Example 2: Numerical integration via midpoint rule –Truncation error via Taylor expansion
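As a sketch (the slide's Taylor-expansion derivation is not shown), the composite midpoint rule has a leading error term proportional to h^2, so halving the mesh width should again divide the error by about 4:

```python
import math

def composite_midpoint(f, a, b, n):
    """Composite midpoint rule with n subintervals of width h = (b - a)/n."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

exact = math.e - 1.0                      # integral of exp over [0, 1]
mid_errs = [abs(composite_midpoint(math.exp, 0.0, 1.0, n) - exact)
            for n in (4, 8, 16)]
print([e1 / e2 for e1, e2 in zip(mid_errs, mid_errs[1:])])  # ratios near 4
```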
Asymptotic error expansion In general, we assume the method produces an approximation A(h) of an exact quantity A, with the asymptotic error expansion A(h) = A + c1 h^p1 + c2 h^p2 + ..., where 0 < p1 < p2 < ... –Convergence: A(h) -> A as h -> 0 –Order of convergence: p1 Estimate the order of convergence numerically –By a log-log plot of the error versus h –By the quotient of errors on successive meshes
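The quotient estimate can be sketched as follows: if the error behaves like e(h) ~ c h^p, then p ~ log(e(h)/e(h/2)) / log 2. (The trapezoidal rule below is an assumed test case, not necessarily the slide's.)

```python
import math

def composite_trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n subintervals."""
    h = (b - a) / n
    return h * (0.5 * f(a) + sum(f(a + i * h) for i in range(1, n)) + 0.5 * f(b))

exact = math.e - 1.0
trap_errs = [abs(composite_trapezoid(math.exp, 0.0, 1.0, n) - exact)
             for n in (8, 16, 32)]
# quotient estimate of the order on successive halved meshes
orders = [math.log(e1 / e2) / math.log(2.0)
          for e1, e2 in zip(trap_errs, trap_errs[1:])]
print(orders)  # estimates close to p1 = 2
```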
Richardson extrapolation Suppose we have the asymptotic error expansion A(h) = A + c1 h^p1 + c2 h^p2 + ... With two different mesh sizes h1 < h2
Richardson extrapolation Eliminating the leading-order error term between A(h1) and A(h2) gives (h2^p1 A(h1) - h1^p1 A(h2)) / (h2^p1 - h1^p1) = A + O(h2^p2) Equivalently, we obtain a better approximation, with order of accuracy p2
Richardson extrapolation Specifically, if we choose h2 = 2 h1 = 2h, the extrapolated value is A1(h) = (2^p1 A(h) - A(2h)) / (2^p1 - 1) = A + O(h^p2) Similarly, the next error term can be eliminated in turn
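Richardson extrapolation can be sketched numerically: for a method with leading error c1 h^2 (the composite trapezoidal rule is an assumed example), the combination (4 A(h/2) - A(h)) / 3 cancels the h^2 term, so the extrapolated value is far more accurate than either input.

```python
import math

def trap(f, a, b, n):
    """Composite trapezoidal rule: A(h) = A + c1 h^2 + c2 h^4 + ..."""
    h = (b - a) / n
    return h * (0.5 * f(a) + sum(f(a + i * h) for i in range(1, n)) + 0.5 * f(b))

exact = math.e - 1.0
coarse = trap(math.exp, 0.0, 1.0, 8)     # A(h)
fine = trap(math.exp, 0.0, 1.0, 16)      # A(h/2)
extrapolated = (4.0 * fine - coarse) / 3.0   # eliminates the h^2 error term
print(abs(fine - exact), abs(extrapolated - exact))  # extrapolated error is much smaller
```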
Romberg algorithm Choose a sequence of meshes, e.g. h_k = h_0 / 2^k The Romberg algorithm is based on repeated Richardson extrapolation
An example Composite trapezoidal rule –Asymptotic error expansion –Richardson extrapolation
An example Romberg algorithm Exponential convergence rate
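A minimal sketch of the Romberg table, assuming the standard recurrence built on the composite trapezoidal rule (the slide's exact notation is not shown):

```python
import math

def romberg(f, a, b, m):
    """Romberg table with m+1 levels built on the composite trapezoidal rule."""
    R = [[0.0] * (m + 1) for _ in range(m + 1)]
    h = b - a
    R[0][0] = 0.5 * h * (f(a) + f(b))
    for k in range(1, m + 1):
        h /= 2.0
        # refined trapezoidal value, reusing all previous function evaluations
        R[k][0] = 0.5 * R[k - 1][0] + h * sum(
            f(a + (2 * i - 1) * h) for i in range(1, 2 ** (k - 1) + 1))
        # Richardson extrapolation across the row
        for j in range(1, k + 1):
            R[k][j] = R[k][j - 1] + (R[k][j - 1] - R[k - 1][j - 1]) / (4 ** j - 1)
    return R[m][m]

print(romberg(math.exp, 0.0, 1.0, 5))  # very close to e - 1
```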
Numerical result Compute: Result by Romberg algorithm
Order of convergence
Exercises Suppose an asymptotic error expansion is given –Design the corresponding Richardson extrapolation –Design the corresponding Romberg algorithm
Stability and conditioning Example 1. Linear system –Numerical solution with 3 digits –Perturbation errors: small & stable
Stability and conditioning Example 2. Linear system –Numerical solution with 3 digits –Perturbation errors: extremely large & unstable!
Stability and conditioning In many cases, inaccuracies in computed results are much larger than the round-off and/or truncation errors introduced in the computation. The reason may be that the errors were "amplified" by the algorithm. We say that a problem is stable if the solution depends continuously on the input parameters –If "small" changes are made to the input parameters, then the resulting changes in the solution will also be "small" –Mathematically,
Stability and conditioning If a problem is not stable, it is said to be unstable Condition number of a problem: –A measure of the sensitivity of its solution to small perturbations of the input parameters –The ratio of the relative change in the solution to the relative change in the input parameters –It is significant in many problems because round-off errors in the input may lead to large changes in the solution
Stability and conditioning –Small: well-conditioned problem –Large: ill-conditioned problem –It is a property of the problem to be solved itself, not of the numerical algorithm employed to solve it! –Whether a given condition number is harmful depends on the number of significant digits used in the computation: Single precision Double precision ---- adopted in most current scientific computations Quadruple precision Infinite precision
Perturbation analysis Solving the linear system Ax = b Vector and matrix norms Perturbation in the right-hand side
Perturbation analysis –The largest ratio of the relative change in the solution to the relative change in the data defines the Condition number k(A) = ||A|| ||A^-1|| –Two examples
Some comments on the condition number –k(A) is greater than or equal to 1! Its value depends on the norm –A is well-conditioned if k(A) = O(1) –A is ill-conditioned if k(A) >> 1 –The linear system Ax = b is well-conditioned (ill-conditioned) if A is well-conditioned (ill-conditioned) –For well-conditioned linear systems, the relative change in the solution is small if the relative change in the right-hand side is small –For ill-conditioned linear systems, the relative change in the solution can be very large even if the relative change in the right-hand side is small!
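A small illustration (the slides' matrices are not shown, so the nearly singular matrix below is a hypothetical stand-in): with A = [[1, 1], [1, 1.0001]], a relative change of about 5e-5 in b moves the solution from roughly (1, 1) to roughly (0, 2).

```python
def solve2x2(a11, a12, a21, a22, b1, b2):
    """Solve a 2x2 linear system by Cramer's rule."""
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

# hypothetical ill-conditioned system: nearly parallel rows
x = solve2x2(1.0, 1.0, 1.0, 1.0001, 2.0, 2.0001)  # solution about (1, 1)
y = solve2x2(1.0, 1.0, 1.0, 1.0001, 2.0, 2.0002)  # b changed by ~5e-5 relative
print(x, y)   # tiny perturbation of b, drastic change in the solution
```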
A simple example Solve –Condition number: ill-conditioned! –Perturb the right-hand side –Resulting perturbation error
Another example Consider –Plot the condition number of A for different degrees –Solve
Condition number of A
Error of the solution
Some observations A is not singular, since det(A) = 1! The condition number increases exponentially When n = 73, the condition number exceeds the limits of double precision! We solve the linear system by back substitution, so there is no truncation error; the errors are entirely due to round-off! The error is directly proportional to the condition number Round-off errors are important for large sparse matrices!
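The slide's matrix is not reproduced; as an assumed illustration, one classic family with det(A) = 1 and exponentially growing condition number is the unit upper-triangular matrix with -1 everywhere above the diagonal, whose infinity-norm condition number is n * 2^(n-1):

```python
def inv_upper(A):
    """Invert an upper-triangular matrix by back substitution on each
    column of the identity (mirroring the slide's use of back substitution)."""
    n = len(A)
    X = [[0.0] * n for _ in range(n)]
    for col in range(n):
        for i in range(n - 1, -1, -1):
            rhs = (1.0 if i == col else 0.0) - sum(
                A[i][k] * X[k][col] for k in range(i + 1, n))
            X[i][col] = rhs / A[i][i]
    return X

def norm_inf(M):
    """Matrix infinity norm: maximum absolute row sum."""
    return max(sum(abs(v) for v in row) for row in M)

for n in (4, 8, 16):
    # unit upper-triangular matrix: det(A) = 1, yet kappa_inf(A) = n * 2**(n-1)
    A = [[1.0 if i == j else (-1.0 if j > i else 0.0) for j in range(n)]
         for i in range(n)]
    print(n, norm_inf(A) * norm_inf(inv_upper(A)))
```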
Perturbation on b Result: if Ax = b and A(x + δx) = b + δb, then ||δx|| / ||x|| <= k(A) ||δb|| / ||b|| Proof
Perturbation on A Result: if Ax = b and (A + δA)(x + δx) = b, then, provided k(A) ||δA|| / ||A|| < 1, ||δx|| / ||x|| <= (k(A) ||δA|| / ||A||) / (1 - k(A) ||δA|| / ||A||) Proof: see details in class –Lemma 1 –Lemma 2
Perturbation on A & b Result: if Ax = b and (A + δA)(x + δx) = b + δb, then, provided k(A) ||δA|| / ||A|| < 1, ||δx|| / ||x|| <= k(A) / (1 - k(A) ||δA|| / ||A||) * ( ||δA|| / ||A|| + ||δb|| / ||b|| ) Proof: see details in class
Efficient computation: Fast algorithms Example 1: compute a power –Algorithm 1: multiply one factor at a time Computational cost: 254 multiplications
Efficient computation: Fast algorithms –Algorithm 2: Computational cost: 14 multiplications
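The slide's expressions are not shown; the quoted costs (254 vs. 14 multiplications) are consistent with computing x^255, since the naive loop needs 254 multiplications while 7 squarings plus 7 combining products need 14. A sketch under that assumption:

```python
def pow255(x):
    """Compute x**255 with 14 multiplications:
    7 squarings for x^2, x^4, ..., x^128, then 7 products to combine them."""
    powers = [x]                      # powers[k] = x**(2**k)
    mults = 0
    for _ in range(7):
        powers.append(powers[-1] * powers[-1])   # squaring
        mults += 1
    result = powers[0]
    for p in powers[1:]:
        result *= p                   # x * x^2 * x^4 * ... * x^128 = x^255
        mults += 1
    return result, mults

val, mults = pow255(2)
print(mults, val == 2 ** 255)   # 14 True
```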
Efficient computation Example 2: Evaluate a polynomial –Direct sum: cost is n(n+1)/2 multiplications –Fast algorithm: O(n)!
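The fast algorithm is presumably Horner's rule (an assumption; the slide's formula is not shown), which evaluates a degree-n polynomial with n multiplications instead of n(n+1)/2:

```python
def horner(coeffs, x):
    """Evaluate coeffs[0] + coeffs[1]*x + ... + coeffs[n]*x**n (Horner's rule)."""
    result = 0.0
    for a in reversed(coeffs):
        result = result * x + a   # one multiplication and one addition per step
    return result

print(horner([1.0, 2.0, 3.0], 2.0))  # 1 + 2*2 + 3*4 = 17.0
```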
Review of numerical integration and differentiation Numerical integration –Basic quadratures Midpoint rule Trapezoidal rule Simpson's rule –Composite techniques –Romberg algorithm & Richardson extrapolation –Gaussian quadratures --- high order Numerical differentiation –Finite differences & stencils
Review of function approximation & interpolation Function interpolation –Lagrange polynomial interpolation –Hermite polynomial interpolation –Cubic spline interpolation –Piecewise polynomial interpolation Function approximation –Orthogonal function approximation –Least-squares approximation