Numerical Analysis CC413 Propagation of Errors.


1 Numerical Analysis CC413 Propagation of Errors

2 Propagation of Errors In numerical methods, the calculations are not made with exact numbers. How do these inaccuracies propagate through the calculations?

3 Underflow and Overflow
Numbers occurring in calculations that have a magnitude smaller than the smallest floating-point number the machine can represent result in underflow and are generally set to zero. Numbers with a magnitude larger than the largest representable floating-point number result in overflow.

4 Significant Figures
The number of significant figures indicates precision. Significant digits of a number are those that can be used with confidence, i.e., the number of certain digits plus one estimated digit.
53,800 – how many significant figures? Written as 5.38 x 10^4 it has 3, as 5.380 x 10^4 it has 4, and as 5.3800 x 10^4 it has 5.
Zeros are sometimes used only to locate the decimal point and are then not significant figures.

5 Error Definitions
Numerical error – the use of approximations to represent exact mathematical operations and quantities:
true value = approximation + error
True error: et = true value - approximation (the subscript t denotes the true error).
Shortcoming: the true error gives no sense of magnitude, so normalize by the true value to get the true relative error.

6 Example 1: Find the bounds for the propagated error in adding two numbers. For example, if one is calculating X + Y where X = 1.5 ± 0.05 and Y = 3.4 ± 0.04.
Solution
Maximum possible values: X = 1.55 and Y = 3.44, so the maximum possible value of X + Y = 1.55 + 3.44 = 4.99.
Minimum possible values: X = 1.45 and Y = 3.36, so the minimum possible value of X + Y = 1.45 + 3.36 = 4.81.
Hence 4.81 ≤ X + Y ≤ 4.99.
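A minimal sketch of this interval bookkeeping in Python (the helper add_intervals and the printed bounds are illustrative, not part of the original slides):

    def add_intervals(x, dx, y, dy):
        """Return the (min, max) bounds of (x ± dx) + (y ± dy)."""
        lo = (x - dx) + (y - dy)
        hi = (x + dx) + (y + dy)
        return lo, hi

    lo, hi = add_intervals(1.5, 0.05, 3.4, 0.04)
    print(lo, hi)   # approximately 4.81 and 4.99 (up to round-off in the last digits)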

7 Example 2: The strain in an axial member of a square cross-section is given by a formula in the measured quantities [the expression and the given measurements appear as images in the original slide]. Find the maximum possible error in the measured strain.

8 Example 2: Solution [worked out as a figure in the original slide]

9 Error definitions cont.
True relative percent error: et = (true error / true value) x 100%

10 Example Consider a problem where the true answer is given (the value appears in the original slide). If you report the value as 7.92, answer the following questions: What is the true error? What is the relative error?

11 Example Determine the absolute and relative errors when approximating p by p∗ for the cases given in the original slide, in which the same digits are approximated at widely different orders of magnitude.

12 Solution This example shows that the same relative error occurs for widely varying absolute errors. The absolute error can therefore be misleading, while the relative error is more meaningful, because the relative error takes the size of the value into account.
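A small sketch of these definitions in code; the p, p∗ pairs below are hypothetical stand-ins for the slide's cases, chosen only so that the digits match while the magnitudes differ:

    def abs_rel_errors(p, p_star):
        """Absolute error |p - p*| and relative error |p - p*| / |p|."""
        abs_err = abs(p - p_star)
        return abs_err, abs_err / abs(p)

    # hypothetical cases: the same digits, shifted by powers of ten
    for p, p_star in [(0.3e1, 0.31e1), (0.3e-3, 0.31e-3), (0.3e4, 0.31e4)]:
        abs_err, rel_err = abs_rel_errors(p, p_star)
        print(f"p = {p:>8}: absolute error = {abs_err:.1e}, relative error = {rel_err:.4f}")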

13 Error Definitions cont.
Round-off error (symmetric rounding) – originates from the fact that computers retain only a fixed number of significant figures: the stored value is round(y), the error is y - round(y), and the relative error is error / y.
Truncation (chopping) errors – result from using an approximation in place of an exact mathematical procedure; the error and the relative error are defined in the same way.

14 Example Determine the five-digit (a) chopping and (b) rounding values of the irrational number π.
Solution The number π has an infinite decimal expansion of the form π = 3.14159265... Written in normalized decimal form, we have π = 0.314159265... x 10^1.
(a) The floating-point form of π using five-digit chopping is fl(π) = 0.31415 x 10^1 = 3.1415.
(b) The sixth digit of the decimal expansion of π is a 9, so the floating-point form of π using five-digit rounding is fl(π) = (0.31415 + 0.00001) x 10^1 = 0.31416 x 10^1 = 3.1416.
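A quick way to reproduce this with Python's decimal module (quantizing to four decimal places gives five significant digits here only because π has one digit before the decimal point):

    from decimal import Decimal, ROUND_DOWN, ROUND_HALF_UP

    pi = Decimal("3.14159265")                                      # enough digits of pi for this example
    print(pi.quantize(Decimal("0.0001"), rounding=ROUND_DOWN))      # 3.1415  (five-digit chopping)
    print(pi.quantize(Decimal("0.0001"), rounding=ROUND_HALF_UP))   # 3.1416  (five-digit rounding)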

15 Computations are repeated until stopping criterion is satisfied.
Use the absolute value of the approximate relative error, ea = |(present approximation - previous approximation) / present approximation| x 100%.
If the criterion ea < es = (0.5 x 10^(2-n))% is met, you can be sure that the result is correct to at least n significant figures.
The tolerance es is a pre-specified percent tolerance based on the knowledge of your solution.
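A sketch of this loop in Python, using the Maclaurin series for e^x as the iterative computation (e^x is my illustrative choice here, not this slide's example):

    import math

    def exp_series(x, n_sig=8):
        """Sum terms of the Maclaurin series of e**x until the approximate
        relative error e_a falls below e_s = (0.5 * 10**(2 - n_sig)) percent."""
        e_s = 0.5 * 10 ** (2 - n_sig)          # stopping tolerance in percent
        total, term, k = 1.0, 1.0, 0
        while True:
            k += 1
            term *= x / k                      # next series term x**k / k!
            previous, total = total, total + term
            e_a = abs((total - previous) / total) * 100
            if e_a < e_s:
                return total

    print(exp_series(0.5), math.exp(0.5))      # the two values agree to about 8 significant figures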

16 Numerical Stability Rounding errors may accumulate and propagate unstably in a badly designed algorithm. It can be proven that for Gaussian elimination the accumulated error is bounded.

17 Example Suppose that x = 5/7 and y = 1/3. Use five-digit chopping for calculating x + y, x − y, x × y, and x ÷ y.
Solution: Note that x = 5/7 = 0.714285... and y = 1/3 = 0.333333..., so the five-digit chopped values are fl(x) = 0.71428 x 10^0 and fl(y) = 0.33333 x 10^0.

18 Solution (Cont.)
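The table of results appears as an image in the original slides; here is a sketch that reproduces the computation with Python's decimal module (five-digit chopping arithmetic) and exact fractions for comparison:

    from decimal import Decimal, Context, ROUND_DOWN
    from fractions import Fraction

    ctx = Context(prec=5, rounding=ROUND_DOWN)     # five-digit chopping arithmetic

    x = ctx.divide(Decimal(5), Decimal(7))         # fl(5/7) = 0.71428
    y = ctx.divide(Decimal(1), Decimal(3))         # fl(1/3) = 0.33333
    cases = {
        "x + y": (ctx.add(x, y),      Fraction(5, 7) + Fraction(1, 3)),
        "x - y": (ctx.subtract(x, y), Fraction(5, 7) - Fraction(1, 3)),
        "x * y": (ctx.multiply(x, y), Fraction(5, 7) * Fraction(1, 3)),
        "x / y": (ctx.divide(x, y),   Fraction(5, 7) / Fraction(1, 3)),
    }
    for name, (approx, exact) in cases.items():
        rel_err = abs((exact - Fraction(approx)) / exact)
        print(f"{name}: fl result = {approx}, relative error = {float(rel_err):.1e}")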

19 Chopping Errors (Error Bounds Analysis)
Suppose the mantissa can support only n digits, so a number z = ±(0.a1 a2 ... an an+1 ...) x β^e is chopped to fl(z) = ±(0.a1 a2 ... an) x β^e.
Thus the absolute chopping error is |z - fl(z)| = (0.an+1 an+2 ...) x β^(e-n), and the relative chopping error is |z - fl(z)| / |z|.
Suppose β = 10 (base 10): what are the values of the ai such that the errors are the largest?

20 Chopping Errors (Error Bounds Analysis)
The errors are largest when the discarded digits an+1, an+2, ... are all as large as possible (all 9s for β = 10) and, for the relative error, when the retained leading digit a1 is as small as possible. This gives the bounds |z - fl(z)| < β^(e-n) and |z - fl(z)| / |z| < β^(1-n).

21 Round-off Errors (Error Bounds Analysis)
Round down: if the first discarded digit an+1 < β/2, then fl(z) = ±(0.a1 a2 ... an) x β^e (the extra digits are simply dropped).
Round up: if an+1 ≥ β/2, then fl(z) = ±(0.a1 a2 ... an + β^(-n)) x β^e (one unit is added to the last retained digit).
fl(z) is the rounded value of z.

22 Round-off Errors (Error Bounds Analysis) Absolute error of fl(z)
When rounding down (an+1 < β/2), the absolute error is |z - fl(z)| = (0.an+1 an+2 ...) x β^(e-n) ≤ (1/2) β^(e-n). Similarly, when rounding up (i.e., when an+1 ≥ β/2), |z - fl(z)| ≤ (1/2) β^(e-n). In either case the absolute round-off error is at most half the corresponding chopping bound.

23 Round-off Errors (Error Bounds Analysis) Relative error of fl(z)
Dividing the absolute bound by |z| = (0.a1 a2 ...) x β^e ≥ β^(e-1) gives the relative round-off error bound |z - fl(z)| / |z| ≤ (1/2) β^(1-n).

24 Summary of Error Bounds Analysis
Chopping errors:  absolute error ≤ β^(e-n),       relative error ≤ β^(1-n)
Round-off errors: absolute error ≤ (1/2) β^(e-n), relative error ≤ (1/2) β^(1-n)
Here β is the base, e is the exponent of the number, and n is the number of significant digits (digits in the mantissa).
Regardless of whether chopping or round-off is used, the absolute errors may increase as the numbers grow in magnitude, but the relative errors are bounded by the same magnitude.

25 Machine Epsilon Relative chopping error Relative round-off error
Relative chopping error: eps = β^(1-n). Relative round-off error: eps = (1/2) β^(1-n).
eps is known as the machine epsilon – the smallest number such that 1 + eps > 1 in floating-point arithmetic.
Algorithm to compute the machine epsilon:
    epsilon = 1
    while (1 + epsilon > 1)
        epsilon = epsilon / 2
    epsilon = epsilon * 2
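The same loop as runnable Python; the comparison against sys.float_info.epsilon is added here only as a sanity check:

    import sys

    epsilon = 1.0
    while 1.0 + epsilon > 1.0:
        epsilon /= 2.0
    epsilon *= 2.0

    print(epsilon)                  # 2.220446049250313e-16 for IEEE double precision
    print(sys.float_info.epsilon)   # the machine epsilon Python itself reports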

26 Exercise Discuss to what extent (a + b)c = ac + bc is violated in machine arithmetic.
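One way to explore the exercise numerically (the random-sampling approach below is my own illustration, not the slide's):

    import random

    random.seed(0)
    violations = 0
    for _ in range(10_000):
        a, b, c = (random.uniform(-1.0, 1.0) for _ in range(3))
        if (a + b) * c != a * c + b * c:        # exact equality of the two floating-point results
            violations += 1
    print(f"distributivity violated in {violations} of 10000 random triples")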

27 How does a CPU compute the following functions for a specific x value?
cos(x), sin(x), e^x, log(x), etc. Functions such as the trigonometric and exponential functions, which cannot be evaluated exactly with a finite number of arithmetic operations, are expressed in an approximate fashion using Taylor series when their values, derivatives, and integrals are computed. A Taylor series provides a means to predict the value of a function at one point in terms of the function value and its derivatives at another point.

28 Taylor Series (nth order approximation):
f(xi+1) = f(xi) + f'(xi)(xi+1 - xi) + [f''(xi)/2!](xi+1 - xi)^2 + ... + [f^(n)(xi)/n!](xi+1 - xi)^n + Rn
The remainder term Rn accounts for all terms from (n+1) to infinity: Rn = [f^(n+1)(ξ)/(n+1)!](xi+1 - xi)^(n+1), where ξ lies between xi and xi+1.
Defining the step size as h = xi+1 - xi, the series becomes:
f(xi+1) = f(xi) + f'(xi)h + [f''(xi)/2!]h^2 + ... + [f^(n)(xi)/n!]h^n + Rn

29 Any smooth function can be approximated as a polynomial.
Take x = xi+1. Then:
Zero-order approximation:   f(xi+1) ≈ f(xi)
First-order approximation:  f(xi+1) ≈ f(xi) + f'(xi)h
Second-order approximation: f(xi+1) ≈ f(xi) + f'(xi)h + [f''(xi)/2!]h^2
nth-order approximation:    f(xi+1) ≈ f(xi) + f'(xi)h + [f''(xi)/2!]h^2 + ... + [f^(n)(xi)/n!]h^n
Each additional term contributes some improvement to the approximation. Only if an infinite number of terms is added will the series yield an exact result. In most cases, only a few terms give an approximation that is close enough to the true value for practical purposes.

30 Taylor Series Expansion

31 Example Use zero- through fourth-order Taylor series expansions to approximate f(1) for f(x) = 1.2 - 0.25x - 0.5x^2 - 0.15x^3 - 0.1x^4, given f(0) = 1.2 and using xi = 0 (i.e., h = 1). Note: the true value is f(1) = 0.2.

32 Solution
n = 0: f(1) ≈ f(0) = 1.2; et = |(0.2 - 1.2)/0.2| x 100% = 500%
n = 1: f'(x) = -0.4x^3 - 0.45x^2 - x - 0.25, so f'(0) = -0.25; f(1) ≈ 1.2 - 0.25h = 0.95; et = |(0.2 - 0.95)/0.2| x 100% = 375%

33 Solution
n = 2: f''(x) = -1.2x^2 - 0.9x - 1, so f''(0) = -1; f(1) ≈ 0.95 - (1)(1^2)/2! = 0.45; et = |(0.2 - 0.45)/0.2| x 100% = 125%

34 Solution
n = 3: f'''(x) = -2.4x - 0.9, so f'''(0) = -0.9; f(1) ≈ 0.45 - 0.9(1^3)/3! = 0.3; et = 50%
n = 4: f''''(x) = -2.4, so f''''(0) = -2.4; f(1) ≈ 0.3 - 2.4(1^4)/4! = 0.2, which is EXACT.
Why does the fourth-order approximation give the exact result? Because the fifth (and every higher) derivative of a fourth-order polynomial is zero. In general, for an nth-order polynomial, we get an exact result with an nth-order Taylor series.
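A short check of these numbers in Python (the list of derivative values at xi = 0 is taken from the worked solution above):

    import math

    # f(x) = 1.2 - 0.25x - 0.5x^2 - 0.15x^3 - 0.1x^4; derivatives evaluated at xi = 0
    derivs_at_0 = [1.2, -0.25, -1.0, -0.9, -2.4]   # f(0), f'(0), f''(0), f'''(0), f''''(0)
    h, true_value = 1.0, 0.2

    approx = 0.0
    for n, d in enumerate(derivs_at_0):
        approx += d * h**n / math.factorial(n)      # add the nth-order Taylor term
        e_t = abs((true_value - approx) / true_value) * 100
        print(f"order {n}: f(1) ~ {approx:.2f}, true error = {e_t:.0f}%")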

35 Example Approximate the function f(x) = 1.2 - 0.25x - 0.5x^2 - 0.15x^3 - 0.1x^4 from xi = 0 with h = 1 and predict f(x) at xi+1 = 1.

36 Taylor Series Problem Use zero- through fourth-order Taylor series expansions to predict f(4) for f(x) = ln x using a base point at x = 2. Compute the percent relative error et for each approximation.
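A sketch of how these predictions could be checked; the derivative values are those of ln x (f' = 1/x, f'' = -1/x^2, f''' = 2/x^3, f'''' = -6/x^4) evaluated at the base point x = 2:

    import math

    x0, x1 = 2.0, 4.0
    h = x1 - x0
    true_value = math.log(x1)

    derivs_at_2 = [math.log(x0), 1 / x0, -1 / x0**2, 2 / x0**3, -6 / x0**4]

    approx = 0.0
    for n, d in enumerate(derivs_at_2):
        approx += d * h**n / math.factorial(n)
        e_t = abs((true_value - approx) / true_value) * 100
        print(f"order {n}: f(4) ~ {approx:.4f}, true error = {e_t:.1f}%")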

37 Example: computing f(x) = e^x using a Taylor series expansion
Choose x = xi+1 and xi = 0; then f(xi+1) = f(x) and (xi+1 - xi) = x.
Since every derivative of e^x is again e^x, i.e., (e^x)' = e^x, (e^x)'' = e^x, (e^x)''' = e^x, ..., (e^x)^(n) = e^x, and e^0 = 1, the expansion becomes
e^x = 1 + x + x^2/2! + x^3/3! + ... + x^n/n! + ...
Looks familiar? This is the Maclaurin series for e^x.
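A minimal sketch of the truncated series (twelve terms is an arbitrary cut-off chosen for illustration):

    import math

    def exp_taylor(x, n_terms=12):
        """Truncated Maclaurin series 1 + x + x^2/2! + ... for e**x."""
        return sum(x**k / math.factorial(k) for k in range(n_terms))

    print(exp_taylor(1.0), math.exp(1.0))   # the truncated series closely matches e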

38 Yet another example: computing f(x) = cos(x) using a Taylor series expansion
Choose x = xi+1 and xi = 0; then f(xi+1) = f(x) and (xi+1 - xi) = x.
Derivatives of cos(x): (cos x)' = -sin x, (cos x)'' = -cos x, (cos x)''' = sin x, (cos x)'''' = cos x, and the cycle repeats.
Evaluating at 0, where sin 0 = 0 and cos 0 = 1, the odd-order terms vanish and we get
cos(x) = 1 - x^2/2! + x^4/4! - x^6/6! + ...
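The corresponding sketch for cos(x) (again with an arbitrary number of terms):

    import math

    def cos_taylor(x, n_terms=8):
        """Truncated Maclaurin series 1 - x^2/2! + x^4/4! - ... for cos(x)."""
        return sum((-1)**k * x**(2 * k) / math.factorial(2 * k) for k in range(n_terms))

    print(cos_taylor(0.5), math.cos(0.5))   # the two values agree closely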

39 Error Propagation Let xfl refer to the floating-point representation of the real number x. Since the computer has a fixed word length, there is a difference Δx = x - xfl (the round-off error), and we would like to estimate the resulting error in the calculation of f(x), namely Δf = f(x) - f(xfl). Both x and f(x) are unknown. If xfl is close to x, then we can use a first-order Taylor expansion, f(x) ≈ f(xfl) + f'(xfl)(x - xfl), and compute |Δf| ≈ |f'(xfl)| |Δx|. Result: if f'(xfl) and Δx are known, then we can estimate the error using this formula.
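A tiny sketch of that estimate (the function x^2, the point 2.5, and the input error 0.01 are illustrative numbers, not from the slides):

    def propagated_error(f_prime, x_fl, dx):
        """First-order estimate |f'(x_fl)| * |dx| of the error in f(x)."""
        return abs(f_prime(x_fl)) * abs(dx)

    # e.g. f(x) = x**2, so f'(x) = 2x
    print(propagated_error(lambda x: 2 * x, 2.5, 0.01))   # about 0.05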

