Floating-Point and High-Level Languages

Floating-Point and High-Level Languages
Programming Languages, Fall 2003

Floating-Point, the Basics

Floating-point numbers are approximations of real numbers; they are not real numbers. The typical machine format is sign / exponent / mantissa: the exponent determines the available range, and the mantissa determines the precision. The base is usually 2 (rarely 16, never 10).
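
As a concrete illustration (a sketch added here, not from the original slides), the three fields of an IEEE single-precision value can be picked apart with bit operations:

    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>

    /* Split an IEEE 754 single-precision float into sign, exponent, mantissa. */
    static void decompose(float f)
    {
        uint32_t bits;
        memcpy(&bits, &f, sizeof bits);            /* reinterpret the bit pattern */

        unsigned sign     = bits >> 31;            /* 1 bit */
        unsigned exponent = (bits >> 23) & 0xFF;   /* 8 bits, biased by 127 */
        unsigned mantissa = bits & 0x7FFFFF;       /* 23 bits, plus a hidden leading 1 */

        printf("%g = sign %u, exponent %u (unbiased %d), mantissa 0x%06X\n",
               f, sign, exponent, (int)exponent - 127, mantissa);
    }

    int main(void)
    {
        decompose(1.0f);     /* sign 0, exponent 127 (unbiased 0), mantissa 0 */
        decompose(-0.1f);
        return 0;
    }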

The Notion of Precision

Precision is relative: large numbers have less absolute precision than small numbers. For example, with a 24-bit mantissa the relative precision is 1 in 2**24. For 1.0 that is an absolute precision of 1.0*2**(-24); for 100.0 it is 100.0*2**(-24).
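
The growing gap between neighbouring values can be observed directly; this sketch uses C99's nextafterf (link with -lm) and assumes 32-bit IEEE floats:

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        /* The gap to the next representable float grows with magnitude. */
        float one = 1.0f, hundred = 100.0f;

        printf("spacing near 1.0:   %g\n", nextafterf(one, 2.0f) - one);
        printf("spacing near 100.0: %g\n", nextafterf(hundred, 200.0f) - hundred);
        return 0;
    }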

Representing Numbers

Some numbers can typically be represented exactly, e.g. 1.0, 2**(-13), 2**(+20) [assuming a 24-bit mantissa]. Other numbers can be represented only approximately, or not at all.

Problems in Representation

2**(-9999999): too small; underflows to 0.0.
2**(+9999999): too large; an error, or infinity.
0.1: cannot be represented exactly (it is a repeating fraction in binary).
145678325785.25: representable in binary, but a 24-bit mantissa is too small to hold it exactly.
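
Each of these failure modes is easy to demonstrate; a small sketch (using double rather than the 24-bit example above):

    #include <stdio.h>
    #include <float.h>

    int main(void)
    {
        /* 0.1 has no exact binary representation; asking for more digits
           than the format stores reveals the nearest representable value. */
        printf("0.1 as double: %.20f\n", 0.1);

        /* Overflow saturates to infinity; underflow vanishes to zero. */
        printf("overflow:  %g\n", DBL_MAX * 2.0);
        printf("underflow: %g\n", DBL_MIN / 1e300);
        return 0;
    }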

Floating-Point Operations

The result may be representable exactly:

    r = 81.0; s = 3.0; x = r / s;

Machines typically have a floating-point division instruction, but it may not give the exact result.

Floating-Point Operations

The result may not be representable exactly:

    r = 1.0; s = 10.0; t = r / s;

The result cannot be precisely correct. Will it be rounded to the nearest bit, truncated towards zero, or something even less accurate? All are possible.
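
Both cases can be checked on any IEEE machine; a minimal sketch:

    #include <stdio.h>

    int main(void)
    {
        double r = 81.0, s = 3.0;
        double x = r / s;   /* 27.0 is representable, so the result is exact */
        printf("81.0 / 3.0 == 27.0? %s\n", x == 27.0 ? "yes" : "no");

        double a = 1.0, b = 10.0;
        double t = a / b;   /* 0.1 is not representable: the result is rounded */
        printf("1.0 / 10.0 stored as %.20f\n", t);
        return 0;
    }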

Unexpected Results

Let's look at this code:

    a = 1.0; b = 10.0; c = a / b;
    if (c == 0.1) printf("hello1");
    if (c == 1.0/10.0) printf("goodbye");
    if (c == a/b) printf("what the %$!");

We may get nothing printed!
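
Whether anything prints in practice depends on the compiler, the flags, and the target's evaluation precision; here is a runnable version of the experiment (a sketch, not part of the original deck):

    #include <stdio.h>

    /* On a machine that evaluates double arithmetic in exactly double
       precision, all three lines print; with extended-precision
       evaluation (e.g. classic x87), some or all comparisons may fail. */
    int main(void)
    {
        double a = 1.0, b = 10.0;
        double c = a / b;

        if (c == 0.1)      printf("hello1\n");
        if (c == 1.0/10.0) printf("goodbye\n");
        if (c == a/b)      printf("what the %%$!\n");
        return 0;
    }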

Why Was Nothing Printed?

    if (c == 0.1) ...

Here c holds the result of the run-time computation of 1.0/10.0, which is not quite precise. The other operand was converted to a constant by the compiler. Both are good approximations of 0.1, but neither is exact, and they may differ slightly from each other.

Why Was Nothing Printed?

    if (c == 1.0 / 10.0) ...

The compiler may compute 1.0/10.0 at compile time and treat it as though it had seen the literal 0.1, possibly getting a different result. This really ends up being the same as the previous case.

Why Was Nothing Printed?

    if (c == a/b) ...

Now surely we should get the same computation? Maybe not: the compiler may be clever enough to fold a/b to 0.1 in one place and not in the other.

Now Let's Get Something Printed!

Read in the value of a at run time, and likewise b, so the compiler knows nothing. Now we will get some output, or else!

    c = a / b;
    if (c == a/b) printf("This will print!");

Still Nothing Printed!!!

How can that be? First, a bit of background. Machines typically have two or more precisions of floating-point values, with different mantissa lengths. Some use only the higher-precision form in registers, expanding on a load and rounding on a store.

What Happened?

    c = a / b;
    if (c == a/b) printf("This will print!");

First a/b is computed in high precision, then rounded to fit the lower-precision c, losing significant bits. For the comparison, a/b is computed in high precision into a register, and c is loaded into a register, expanding. The comparison does not find them equal.
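
The usual defence is to force values through memory so they are rounded to a true double; a sketch (illustrative only; whether the "surprise" branch fires depends on compiler, target, and flags such as x87 vs SSE code generation):

    #include <stdio.h>

    int main(void)
    {
        volatile double a = 1.0, b = 10.0;  /* volatile defeats constant folding */
        volatile double c = a / b;          /* the store rounds to a true double */

        if (c != a / b)
            printf("surprise: a/b was held in an extended-precision register\n");

        volatile double q = a / b;          /* force the second quotient through */
        if (c == q)                         /* memory too: now both must match   */
            printf("after forcing both stores, the comparison succeeds\n");
        return 0;
    }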

Surprises in Precision

Let's compute x**4. Two methods:

    result = x*x*x*x;         (three multiplications)
    t = x*x; result = t*t;    (two multiplications)

The second has only two multiplications instead of three, so surely it must be more accurate. Nope, the first is more accurate!
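
The claim can be tested empirically; this sketch assumes long double is wider than double (true with x87 toolchains, but not everywhere), and the winner can vary with the value of x:

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        double x = 1.1;

        double m1 = x * x * x * x;              /* three rounded multiplies */
        double t  = x * x;
        double m2 = t * t;                      /* two rounded multiplies   */

        long double ref = (long double)x * x * x * x;   /* wider reference */

        printf("x*x*x*x     error: %Lg\n", fabsl((long double)m1 - ref));
        printf("(x*x)*(x*x) error: %Lg\n", fabsl((long double)m2 - ref));
        return 0;
    }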

Subtleties of Rounding

Suppose we insist on floating-point operations being properly rounded. What does "properly rounded" mean for a result exactly half way between representable values? The traditional rule is to always round up at the half-way point, but that introduces a bias, and some computations are sensitive to it: a computation of the orbit of Pluto was significantly off because of this problem.
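
IEEE 754's default remedy is round-to-nearest-even: exact halves go to the even neighbour, so the bias cancels on average. C's rint() follows the current rounding mode (round-to-nearest-even by default), which makes this easy to see:

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        printf("rint(0.5) = %g\n", rint(0.5));  /* 0, not 1: rounds to even */
        printf("rint(1.5) = %g\n", rint(1.5));  /* 2 */
        printf("rint(2.5) = %g\n", rint(2.5));  /* 2, not 3: rounds to even */
        printf("rint(3.5) = %g\n", rint(3.5));  /* 4 */
        return 0;
    }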

Moral of This Story

Floating-point is full of surprises. If you base your expectations on real arithmetic, you will be surprised. On any given machine, floating-point operations are well defined, if more or less peculiar, but the semantics will differ from machine to machine.

What to Do in High-Level Languages

We can punt: just say that floating-point numbers are some approximation of real numbers, and that the results of floating-point operations are some approximation of the real results. That is nice and simple from a language-definition point of view, and it is what Fortran and C historically did. It is not so simple for the poor application programmer.

Doing a Bit Better

Parametrize the machine model of floating-point: what exponent range does it have, and what precision of mantissa? Define the floating-point model in terms of these parameters, and insist on results being accurate where possible, or one of the two adjacent representable values if not. This is the approach of Ada.
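
C never went as far as Ada, but <float.h> exposes the same machine parameters such a model is built from; a sketch:

    #include <stdio.h>
    #include <float.h>

    int main(void)
    {
        printf("radix (base):           %d\n", FLT_RADIX);
        printf("double mantissa digits: %d\n", DBL_MANT_DIG);   /* 53 on IEEE */
        printf("double exponent range:  %d .. %d\n", DBL_MIN_EXP, DBL_MAX_EXP);
        printf("double epsilon:         %g\n", DBL_EPSILON);
        return 0;
    }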

Doing Quite a Bit Better

What if all machines had exactly the same floating-point model? IEEE floating-point heads in that direction: it precisely defines two floating-point formats (32-bit and 64-bit) and precisely defines the operations on them.

More on IEEE

We could define our language to require IEEE semantics for floating-point. But what if the machine does not implement IEEE efficiently? For example, the x86 implements both formats, but its x87 registers use an 80-bit format, so you get extra precision. That sounds good, but as we have seen it is a possible source of surprising behavior.
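
C99 lets a program ask how expressions are actually evaluated; FLT_EVAL_METHOD reports the scheme (a one-line probe, useful on exactly the x86 case above):

    #include <stdio.h>
    #include <float.h>

    int main(void)
    {
        /* 0 = each operand's own type, 1 = float promoted to double,
           2 = everything in long double (the classic x87 case),
          -1 = indeterminable. */
        printf("FLT_EVAL_METHOD = %d\n", FLT_EVAL_METHOD);
        return 0;
    }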

IEEE and High-Level Languages

Java and Python both expect/require IEEE semantics for arithmetic. Java also wants high efficiency, which causes a clash when the machine does not support IEEE in the "right" way; Java is potentially inefficient on x86 machines. Solution: cheat. Python requires IEEE too, but does not care so much about efficiency.