A Digression On Using Floating Points


A Digression On Using Floating Points
01204111 Computers and Programming
Chalermsak Chatdokmaiprai
Department of Computer Engineering, Kasetsart University
Cliparts are taken from http://openclipart.org
Revised 2018-02-17

Task: Is it a root of the equation?
Write a function to check whether a given number is a root of the equation x² + 3x − 10 = 0.

Define a function to do the task:

def isroot(x):
    return x**2 + 3*x - 10 == 0

Call the function to check whether a given number is a root:

>>> print(isroot(2))
True
>>> isroot(-3)
False
>>> isroot(-5)
True
>>> isroot(0)
False

In interactive mode, print() can be omitted.

Let's try another equation
Write a function to check whether a given number is a root of the equation x² − 0.9x − 0.1 = 0.

Define a function to do the task:

def isroot(x):
    return x**2 - 0.9*x - 0.1 == 0

Test the function. The roots should be -0.1 and 1:

>>> isroot(2)
False
>>> isroot(-0.1)
True
>>> isroot(1)
False

Oh-O! Why False? It should be True.

Floating-point calculations are often inexact (but a close approximation).

def isroot(x):
    return x**2 - 0.9*x - 0.1 == 0

Let's investigate why this became False when it should be True:

>>> isroot(1)
False

Let's see the value of each term when x is 1:

>>> 1**2
1
>>> -0.9*1
-0.9
>>> 1-0.9
0.09999999999999998

Oh-O! This is not 0.1 as it should be, just a close approximation.

>>> 1**2 - 0.9*1 - 0.1
-2.7755575615628914e-17

So the result is not 0 but -0.000000000000000027755575615628914, which is a good approximation of zero but still not zero.

>>> 1**2 - 0.9*1 - 0.1 == 0
False

Now we know why this comparison yields False.

The reason behind floating-point inexactness:
Modern computers use binary, not decimal, representations of numbers. Python uses binary floating-point values (type float) to represent fractional numbers such as 0.1, -30.625, 3.1416, etc.

Some fractional numbers such as 0.1, 0.2, and 0.9, when converted into binary, become repeating binary fractions. For example:

Decimal: 0.1
Binary:  0.0001 1001 1001 1001 1001 1001 1001 1001 ...

That's why it's not possible to hold the exact value of some numbers, e.g. 0.1, in a fixed-size floating-point representation.
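You can observe this directly in Python: constructing a decimal.Decimal from a float converts the float's exact binary value digit for digit, exposing what the machine actually stores for the literal 0.1. A minimal sketch using only the standard library:

```python
from decimal import Decimal

# Decimal(float) reveals the exact value stored for the binary float 0.1.
print(Decimal(0.1))
# prints 0.1000000000000000055511151231257827021181583404541015625

# In contrast, Decimal('0.1') is built from the decimal text and is exactly 0.1.
print(Decimal('0.1'))
# prints 0.1
```

So the "0.1" we type is silently replaced by the nearest representable binary fraction.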

Example of floating-point inexactness
Let's see how the fractional decimal 0.1 is stored in computers as a floating point.

0.1

This is its binary equivalent, a repeating binary fraction:

0.0001 1001 1001 1001 1001 1001 ...

Converted into normalized binary scientific notation:

1.1001 1001 1001 1001 ... * 2^-4

which in turn is converted into a 64-bit floating point:

0 01111111011 1001100110011001100110011001100110011001100110011010

Notice that the repeating fraction has to be cut off to fit into the 64-bit limit; it is rounded to the nearest representable value, which here rounds the last bits up to ...1010. The stored value is equivalent to this decimal number:

0.1000000000000000055511151231257827021181583404541015625

which is not 0.1 but a pretty close approximation.

Some floating points are really exact. (Thank goodness!)
Let's see how the fractional decimal 5.375 is stored in computers as a floating point.

5.375

This is its exact binary equivalent, having a non-repeating binary fraction:

101.011

Converted into normalized binary scientific notation:

1.01011 * 2^2

which in turn is converted into a 64-bit floating point:

0 10000000001 0101100000000000000000000000000000000000000000000000

Padding the fraction with zeros to fill the 64-bit format has no effect on precision. The stored value is:

5.375

which is exactly 5.375 in decimal.
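The 64-bit patterns shown on these two slides can be inspected in Python with the standard struct module. A small sketch (bits_of is a helper name introduced here): the bit string concatenates the sign, exponent, and fraction fields.

```python
import struct

def bits_of(x):
    """Return the 64-bit IEEE 754 pattern of a float as a binary string."""
    (packed,) = struct.unpack('>Q', struct.pack('>d', x))
    return f'{packed:064b}'

# 0.1 is rounded to the nearest representable double (fraction ends ...1010).
print(bits_of(0.1))
# 0011111110111001100110011001100110011001100110011001100110011010

# 5.375 fits exactly; the fraction is just 01011 padded with zeros.
print(bits_of(5.375))
# 0100000000010101100000000000000000000000000000000000000000000000
```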

Rounding Errors
The discrepancy between an actual number and its approximated, rounded value is called a rounding error.

0.1
0.0001 1001 1001 1001 1001 1001 ...
1.1001 1001 1001 1001 ... * 2^-4
0 01111111011 1001100110011001100110011001100110011001100110011010
0.1000000000000000055511151231257827021181583404541015625

Accumulative Rounding Errors
All these expressions should have yielded the same value, 3.333, but they didn't:

>>> 0.1*33.33
3.3330000000000002
>>> 33.33/10
3.3329999999999997
>>> (1-0.9)*33.33
3.3329999999999993
>>> 333.3*0.1*0.1
3.3330000000000006
>>> 333.3*(1-0.9)*(1-0.9)
3.3329999999999984
>>> 3.333*(1-0.9)/0.1*(1-0.9)*10
3.332999999999999

The more calculations, the larger the rounding errors tend to grow. All these rounding errors are unsurprising consequences of floating-point inexactness.
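A classic way to watch errors accumulate is repeated addition, and for sums in particular the standard library offers a remedy: math.fsum tracks the intermediate sum exactly and rounds only once at the end. A minimal sketch:

```python
import math

# Adding 0.1 ten times with ordinary + lets ten tiny rounding errors pile up.
total = sum([0.1] * 10)
print(total)                          # 0.9999999999999999
print(total == 1.0)                   # False

# math.fsum keeps an exact running sum and rounds only the final result.
print(math.fsum([0.1] * 10) == 1.0)   # True
```

fsum does not make 0.1 itself exact; it only avoids piling a fresh rounding error onto every intermediate addition.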

Does a rounding error really matter?

>>> 0.1*33.33
3.3330000000000002
>>> 33.33/10
3.3329999999999997
>>> (1-0.9)*33.33
3.3329999999999993
>>> 333.3*0.1*0.1
3.3330000000000006
>>> 3.33*(1-0.9)/0.1
3.3299999999999987
>>> 3.333*(1-0.9)/0.1*(1-0.9)*10
3.332999999999999

Most of the time it does not (thank heaven!), because the rounding error is usually very small (at the 15th-16th decimal place in these examples).

Does a rounding error really matter?

a = 0.1*33.33
b = 33.33/10
c = (1-0.9)*33.33
d = 333.3*0.1*0.1
e = 333.3*(1-0.9)*(1-0.9)
f = 3.333*(1-0.9)/0.1*(1-0.9)*10
print(f'{a:.6f}')
print(f'{b:.6f}')
print(f'{c:.6f}')
print(f'{d:.6f}')
print(f'{e:.6f}')
print(f'{f:.6f}')

Output:
3.333000
3.333000
3.333000
3.333000
3.333000
3.333000

Also, most programs care to print only the first few digits of their results, so the rounding error is rarely visible or bothersome.

But the real perils are: tests for floating-point equality (or inequality).

a = 0.1*33.33
b = 33.33/10
c = (1-0.9)*33.33
d = 333.3*0.1*0.1
e = 333.3*(1-0.9)*(1-0.9)
f = 3.333*(1-0.9)/0.1*(1-0.9)*10
print(a == b, a == c, a == d, a == e, a == f)
print(b == c, b == d, b == e, b == f)
print(c == d, c == e, c == f)
print(d == e, d == f)
print(e == f)
print(a != b)
print(c != d)
print(e != f)

Output:
False False False False False
False False False False
False False False
False False
False
True
True
True

The test results are all mathematically wrong, but we're not really surprised because we know why.

Some more funny, useless floating-point equality tests

>>> 0.7+0.1 == 0.8
False
>>> 0.7+0.1 == 0.6+0.2
>>> 0.1*3 == 0.3
>>> 0.1+0.1+0.1 == 0.3
>>> 0.1*0.1 == 0.01
>>> 1-0.9 == 0.1
>>> 0.1*6 == 0.5+0.1
>>> 0.2*3 == 0.6
>>> 0.3*2 == 10-9.4
>>> 1.1+2.2 == 3.3
False
>>> 0.3/0.7 == 3/7
>>> (1/10)*3 == 0.3
>>> (1/10)*3 == 1*3/10
>>> 3.3/10 == 3.3*0.1
>>> 3.3/10 == 0.33
>>> 6*0.1 == 0.6
>>> 6*(1-0.9) == 0.6
>>> 6*0.1 == 6*(1-0.9)

Tests for float equality can render some programs nearly useless,
as we have seen in this implementation. The roots should be -0.1 and 1:

def isroot(x):
    return x**2 - 0.9*x - 0.1 == 0

It tests for floating-point equality, which is dangerous:

>>> isroot(2)
False
>>> isroot(-0.1)
True
>>> isroot(1)
False

We're lucky that in the first two cases the output is correct. But the last one is wrong, so the function is untrustworthy for its duty.

So, what should we do to deal with the problem?
Thou shalt sing the Mantra of Floating-Point Equality:

"Inexact" doesn't mean "wrong".
"Close enough" means "equal enough".

which leads to the following rule of thumb:

It's almost always more appropriate to ask whether two floating points are close enough to each other, not whether they are exactly equal.

Testing for "close enough" is much less perilous.
In general, instead of using the perilous x == y as the test for equality of two floats x and y, we'd better use the expression

|x - y| < epsilon

where epsilon is a number tiny enough for the task. For example, suppose the task we are solving needs precision of only 5 decimal places. Then two floats x and y that differ from each other by less than 0.000001 can be considered "equal" for our purpose. So we use the expression

|x - y| < 0.000001

to test whether x and y are equal.
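Since Python 3.5 the standard library packages this rule of thumb as math.isclose(), which uses a relative tolerance (rel_tol, default 1e-09) plus an optional absolute tolerance (abs_tol) that matters when comparing against values near zero. A small sketch:

```python
import math

x = 33.33 / 10
y = (1 - 0.9) * 33.33
print(x == y)              # False: bitwise equality fails
print(math.isclose(x, y))  # True: within the default rel_tol of 1e-09

# Near zero a relative tolerance is useless (any nonzero error is huge
# relative to 0), so supply abs_tol explicitly.
residual = 1**2 - 0.9*1 - 0.1   # about -2.8e-17, not exactly 0
print(math.isclose(residual, 0.0, abs_tol=1e-6))  # True
```

Note that math.isclose(residual, 0.0) without abs_tol would return False, since abs_tol defaults to 0.0.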

Testing for "close enough" is much less perilous.
Mathematically, x and y below are equal:

>>> x = 33.33/10
>>> y = (1-0.9)*33.33
>>> print(x, y)
3.3329999999999997 3.3329999999999993

But due to rounding errors, they become minutely different, and Python is honest enough to yield False for the equality test:

>>> x == y
False

Now we apply the mantra "close enough means equal" to make the test result more in line with mathematics:

>>> abs(x-y) < 0.000001
True

Let's fix our function isroot()
Apply the mantra here:

>>> def isroot(x):
        epsilon = 0.000001
        return abs(x**2 - 0.9*x - 0.1) < epsilon

>>> isroot(2)
False
>>> isroot(-0.1)
True
>>> isroot(1)
True

>>> 0.9-1
-0.09999999999999998
>>> isroot(0.9-1)
True
>>> 5.23*10/52.3
1.0000000000000002
>>> isroot(5.23*10/52.3)
True

Such a mathematician's delight! Now we're very pleased that our function works in perfect agreement with mathematics.
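The same fix can also be written with math.isclose, which reads almost exactly like the mantra. This is an alternative sketch, not the slide's version:

```python
import math

def isroot(x, epsilon=1e-6):
    """True if x is approximately a root of x**2 - 0.9*x - 0.1 = 0."""
    # Compare the residual against zero with an absolute tolerance;
    # a relative tolerance alone would never match a target of 0.
    return math.isclose(x**2 - 0.9*x - 0.1, 0.0, abs_tol=epsilon)

print(isroot(1))     # True
print(isroot(-0.1))  # True
print(isroot(2))     # False
```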

One more example

Task: Are they Pythagorean?
Write a function to check whether three given numbers a, b, and c satisfy the Pythagorean equation a² + b² = c².

Define a function to do the task:

def isPythagorean(a, b, c):
    return a*a + b*b == c*c

Test it:

>>> isPythagorean(3, 4, 5)
True
>>> isPythagorean(5, 12, 13)
True
>>> isPythagorean(1.5, 2, 2.5)
True
>>> isPythagorean(2.5, 6, 6.5)
True
>>> isPythagorean(3, 6, 8)
False
>>> isPythagorean(1.8, 2.75, 12)
False

Ho, Ho, Ho... I'm pleased with these results.

But this would make Pythagoras sad ...

>>> isPythagorean(0.33, 0.44, 0.55)
False
>>> isPythagorean(0.5, 1.2, 1.3)
False
>>> from math import pi
>>> isPythagorean(5*pi, 12*pi, 13*pi)
False
>>> isPythagorean(8*1.1, 15*1.1, 17*1.1)
False

Oh no! These are all outrageously wrong!

def isPythagorean(a, b, c):
    return a*a + b*b == c*c

But we know by now that the cause of the problem is the perilous test for floating-point equality here.

We know the healing mantra.
Apply the mantra "close enough is equal enough":

def isPythagorean(a, b, c):
    epsilon = 0.000001
    return abs((a*a + b*b) - c*c) < epsilon

And the results are such a Pythagorean delight!

>>> isPythagorean(3, 4, 5)
True
>>> isPythagorean(5, 12, 13)
True
>>> isPythagorean(0.33, 0.44, 0.55)
True
>>> isPythagorean(0.5, 1.2, 1.3)
True
>>> from math import pi
>>> isPythagorean(5*pi, 12*pi, 13*pi)
True
>>> isPythagorean(8*1.1, 15*1.1, 17*1.1)
True
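One caveat: epsilon = 0.000001 is an absolute tolerance, tuned for numbers of modest size. For very large triangles the rounding error in a*a + b*b can itself grow past any fixed epsilon. A more scale-independent variant (an alternative sketch, not the slide's code) compares the two sides with a relative tolerance via math.isclose:

```python
import math

def isPythagorean(a, b, c, rel_tol=1e-9):
    """True if a*a + b*b is approximately c*c, relative to their magnitude."""
    # abs_tol keeps degenerate near-zero triangles from failing the test,
    # where any nonzero error is huge in relative terms.
    return math.isclose(a*a + b*b, c*c, rel_tol=rel_tol, abs_tol=1e-12)

print(isPythagorean(3, 4, 5))                          # True
print(isPythagorean(0.33, 0.44, 0.55))                 # True
print(isPythagorean(3e9 * 1.1, 4e9 * 1.1, 5e9 * 1.1))  # True
print(isPythagorean(3, 6, 8))                          # False
```

The relative tolerance scales with the operands, so billion-unit triangles and unit triangles are judged by the same proportional standard.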

The End

Revision History
February 2018: originally created for Python by Chalermsak Chatdokmaiprai.

Constructive comments or error reports on this set of slides are welcome and highly appreciated. Please contact Chalermsak.c@ku.ac.th