Round-off Errors.

Key Concepts
- Round-off / chopping errors
- How floating-point arithmetic operations can introduce and amplify round-off errors
- What can be done to reduce the effect of round-off errors

Only discrete points on the number line can be represented by our computer. What about the space between them?

Implications of floating-point representation:
- Only a limited range of quantities can be represented: overflow and underflow.
- Only a finite number of quantities within that range can be represented: round-off or chopping errors.

Round-off / Chopping Errors (Error Bounds Analysis)
Let z be a real number we want to represent in a computer, and fl(z) be the representation of z in the computer. What is the largest possible value of |z − fl(z)| / |z|? That is, in the worst case, how much data do we lose to round-off or chopping errors?

Chopping Errors (Error Bounds Analysis)
Write z = ±(0.a1 a2 a3 …) × β^e with a1 ≠ 0, and suppose the mantissa can only support n digits, so chopping keeps a1 … an and discards the rest. The absolute and relative chopping errors are then bounded by
  |z − fl(z)| ≤ β^(e−n)
  |z − fl(z)| / |z| ≤ β^(1−n)
Suppose β = 10 (base 10): what values of the ai make the errors the largest?

Round-off Errors (Error Bounds Analysis)
fl(z) is the rounded value of z. If the first discarded digit a(n+1) < β/2, z is rounded down (chopped); otherwise z is rounded up.

Round-off Errors (Error Bounds Analysis)
Absolute error of fl(z): when rounding down, the discarded part is less than half a unit in the last kept digit, so |fl(z) − z| ≤ (1/2) β^(e−n). Similarly, when rounding up, |fl(z) − z| ≤ (1/2) β^(e−n). That is, in either case
  |fl(z) − z| ≤ (1/2) β^(e−n)

Round-off Errors (Error Bounds Analysis)
Relative error of fl(z):
  |fl(z) − z| / |z| ≤ (1/2) β^(1−n)

Summary of Error Bounds Analysis

            Chopping errors            Round-off errors
Absolute    |z − fl(z)| ≤ β^(e−n)      |z − fl(z)| ≤ (1/2) β^(e−n)
Relative    |z − fl(z)|/|z| ≤ β^(1−n)  |z − fl(z)|/|z| ≤ (1/2) β^(1−n)

β: base; n: number of significant digits (digits in the mantissa).
Regardless of whether chopping or round-off is used, the absolute errors may increase as the numbers grow in magnitude, but the relative errors are bounded by the same fixed quantity.

Machine Epsilon
The relative chopping error is bounded by eps = β^(1−n); the relative round-off error by eps = (1/2) β^(1−n). eps is known as the machine epsilon: the smallest number such that 1 + eps > 1.

Algorithm to compute the machine epsilon:

epsilon = 1;
while (1 + epsilon > 1)
    epsilon = epsilon / 2;
epsilon = epsilon * 2;

Propagation of Errors
Each number or variable value is represented with some error. These errors (Ex and Ey) are carried into the result of every arithmetic operation (+, −, ×, ÷). How much error is propagated to the result of each arithmetic operation?

Example #1
Assume a 4-digit decimal mantissa with rounding is used (final value after round-off). How many types of error, and how much error, are introduced into the final value?

Example #1
Propagated error: (xT + yT) − (xA + yA) = Ex + Ey
Propagated error = −0.4000 × 10^−1 + 0.3000 × 10^−3

Example #1 Rounding Error:

Example #1
Finally, the total error is the sum of the propagated error and the rounding error.

Propagation of Errors (In General)
Let ⊙ be the true operation between xT and yT, where ⊙ can be any of +, −, ×, ÷. Let * be the corresponding operation carried out by the computer. Note: xA ⊙ yA ≠ xA * yA.

Propagation of Errors (In General)
The error between the true result and the computed result is
  (xT ⊙ yT) − (xA * yA) = (xT ⊙ yT − xA ⊙ yA) + (xA ⊙ yA − xA * yA)
The first term is the error in x and y propagated by the operation; the second is the rounding error of the result:
  |xA ⊙ yA − xA * yA| = |xA ⊙ yA − fl(xA ⊙ yA)| ≤ |xA ⊙ yA| × eps

Analysis of Propagated Errors
Addition and subtraction: the absolute errors add,
  (xT ± yT) − (xA ± yA) = Ex ± Ey

Propagated Errors – Multiplication
  xT yT − xA yA = xA Ey + yA Ex + Ex Ey ≈ xA Ey + yA Ex
so the relative errors add: ε(xy) ≈ εx + εy. The term Ex Ey is very small and can be neglected.

Propagated Errors – Division
  ε(x/y) ≈ εx − εy
if εy is small and negligible compared with 1.

Example #2
Effects of rounding errors in arithmetic manipulations, assuming a 4-digit decimal mantissa. Round-off in simple multiplication or division. Results by the computer:

Danger of adding/subtracting a small number to/from a large number. Possible workarounds:
1) Sort the numbers by magnitude (if they have the same sign) and add them in increasing order.
2) Reformulate the formula algebraically.

Associativity does not necessarily hold for floating-point addition (or multiplication): the two answers are NOT the same! Note: in this example, if we simply sort the numbers by magnitude and add them in increasing order, we actually get a worse answer! The better approach is to analyze the problem algebraically.

Subtraction of two close numbers
The result will be normalized to 0.9550 × 10^1. However, note that the zero appended to the end of the mantissa is not significant. Note: 0.9550 × 10^1 implies the error is about ± 0.00005 × 10^1, but the actual error could be as big as ± 0.00005 × 10^2.

Subtractive Cancellation – subtraction of two very close numbers
The error bound is as large as the estimate of the result itself! Subtraction of nearly equal numbers is a major cause of errors. Avoid subtractive cancellation whenever possible.

Avoiding Subtractive Cancellations
Example 1: when x is large, is there a way to reduce the errors, assuming that we are using the same number of bits to represent numbers? Answer: one possible solution is rationalization.

Subtraction of nearly equal numbers
Example 2: compute the roots of ax^2 + bx + c = 0 using
  x = (−b ± sqrt(b^2 − 4ac)) / (2a)
Solve x^2 − 26x + 1 = 0.

Example 2 (continued)
Assume a 5-digit decimal mantissa. The computation implies that one solution is more accurate than the other.

Example 2 (continued)
Alternatively, a better solution uses the relation between the roots, x1 · x2 = c/a. That is, instead of computing the second root from the quadratic formula (which subtracts nearly equal numbers), we use x2 = c / (a · x1) as the solution for the second root.

Note: this formula does NOT give a more accurate result in ALL cases. We have to be careful when writing numerical programs. A prior estimate of the answer, and of the corresponding error, is needed first. If the error is large, we have to use alternative methods to compute the solution.

Assignment 1 (Problem 1)
Assume a 3-digit decimal mantissa with rounding.
(a) Evaluate f(1000) directly.
(b) Evaluate f(1000) as accurately as possible using an alternative approach.
(c) Find the relative errors of f(1000) in parts (a) and (b).

Propagation of Errors in a Series
Let the series be S = x1 + x2 + … + xm. Is there any difference between adding
  (((x1 + x2) + x3) + x4) + … + xm
and
  (((xm + xm−1) + xm−2) + xm−3) + … + x1 ?

Example:

#include <stdio.h>

int main() {
    float  sumx, x;
    float  sumy, y;
    double sumz, z;
    int i;

    sumx = 0.0; sumy = 0.0; sumz = 0.0;
    x = 1.0;
    y = 0.00001;
    z = 0.00001;

    for (i = 0; i < 100000; i++) {
        sumx = sumx + x;
        sumy = sumy + y;
        sumz = sumz + z;
    }
    printf("sumx = %f\n", sumx);
    printf("sumy = %f\n", sumy);
    printf("sumz = %.20f\n", sumz);
    return 0;
}

Output:
sumx = 100000.000000
sumy = 1.000990
sumz = 0.99999999999808375506

Exercise Discuss to what extent (a + b)c = ac + bc is violated in machine arithmetic.

Example: evaluate e^x as the series 1 + x + x^2/2! + x^3/3! + …

#include <stdio.h>
#include <math.h>

int main() {
    float x = 10, sum = 1, term = 1, temp = 0;
    int i = 0;
    while (temp != sum) {
        i++;
        term = term * x / i;
        temp = sum;
        sum = sum + term;
        printf("%2d %-12f %-14f\n", i, term, sum);
    }
    printf("exact value = %f\n", exp((double)x));
    return 0;
}

Output (when x = 10):
 i  term         sum
 1 10.000000    11.000000
 2 50.000000    61.000000
 3 166.666672   227.666672
 4 416.666687   644.333374
 5 833.333374   1477.666748
 6 1388.889038  2866.555664
 7 1984.127197  4850.682617
 8 2480.158936  7330.841797
 9 2755.732178  10086.574219
10 2755.732178  12842.306641
11 2505.211182  15347.517578
12 2087.676025  17435.193359
13 1605.904541  19041.097656
14 1147.074585  20188.171875
15 764.716431   20952.888672
16 477.947754   21430.835938
17 281.145752   21711.982422
18 156.192078   21868.173828
19 82.206360    21950.380859
20 41.103180    21991.484375
21 19.572943    22011.056641
22 8.896792     22019.953125
23 3.868171     22023.822266
24 1.611738     22025.433594
25 0.644695     22026.078125
26 0.247960     22026.326172
27 0.091837     22026.417969
28 0.032799     22026.451172
29 0.011310     22026.462891
30 0.003770     22026.466797
31 0.001216     22026.468750
32 0.000380     22026.468750
exact value = 22026.465795

Example: evaluate e^x as the series (same program as above). Note the arithmetic operations that introduce errors: term = term * x / i and sum = sum + term.

Output (when x = −10):
 i  term          sum
 1 -10.000000    -9.000000
 2 50.000000     41.000000
 3 -166.666672   -125.666672
 4 416.666687    291.000000
 5 -833.333374   -542.333374
 6 1388.889038   846.555664
 7 -1984.127197  -1137.571533
 8 2480.158936   1342.587402
 9 -2755.732178  -1413.144775
10 2755.732178   1342.587402
11 -2505.211182  -1162.623779
12 2087.676025   925.052246
13 -1605.904541  -680.852295
…
29 -0.011310     -0.002908
30 0.003770      0.000862
31 -0.001216     -0.000354
32 0.000380      0.000026
33 -0.000115     -0.000089
34 0.000034      -0.000055
35 -0.000010     -0.000065
36 0.000003      -0.000062
37 -0.000001     -0.000063
38 0.000000      -0.000063
39 -0.000000     -0.000063
40 0.000000      -0.000063
41 -0.000000     -0.000063
42 0.000000      -0.000063
43 -0.000000     -0.000063
44 0.000000      -0.000063
45 -0.000000     -0.000063
46 0.000000      -0.000063
exact value = 0.000045

Not just an incorrect answer — we get a negative value!

Errors vs. Number of Arithmetic Operations
Assume a 3-digit mantissa with rounding.
(a) Evaluate y = x^3 − 3x^2 + 4x + 0.21 for x = 2.73.
(b) Evaluate y = [(x − 3)x + 4]x + 0.21 for x = 2.73.
Compare and discuss the errors obtained in parts (a) and (b).

Summary
- Round-off/chopping errors: analysis
- Propagation of errors in arithmetic operations: analysis and calculation
- How to minimize the propagation of errors:
  - Avoid adding a huge number to a small number
  - Avoid subtracting numbers that are close
  - Minimize the number of arithmetic operations involved