Floating-Point and High-Level Languages Programming Languages Spring 2004.

Presentation transcript:


Floating-Point, the Basics
- Floating-point numbers are approximations of real numbers, but they are not real numbers.
- Typical machine format: sign, exponent, mantissa.
- The exponent determines the available range.
- The mantissa determines the precision.
- The base is usually 2 (rarely 16, never 10).

The Notion of Precision
- Precision is relative: large numbers have less absolute precision than small numbers.
- For example, with a 24-bit mantissa, relative precision is 1 in 2**24.
- For 1.0, this is an absolute precision of 1.0*2**(-24).
- For 100, this is an absolute precision of 100*2**(-24).

Representing Numbers
- Some numbers can typically be represented exactly, e.g. 1.0, 2**(-13), 2**(+20) [assuming a 24-bit mantissa].
- But other numbers are represented only approximately, or not at all.

Problems in Representation
- 2**(very large negative exponent): too small, underflows to 0.0.
- 2**(very large positive exponent): too large, an error or infinity.
- 0.1: cannot be represented exactly in binary (it is a repeating fraction in binary).
- Some values are representable in binary, but a 24-bit mantissa is too small to represent them exactly.

Floating-Point Operations
- The result may be representable exactly:
  r = 81.0; s = 3.0; x = r / s;
- Machines typically have a floating-point division instruction.
- But it may not give the exact result!

Floating-Point Operations
- The result may not be representable exactly:
  r = 1.0; s = 10.0; t = r / s;
- The result cannot be precisely correct.
- Will it be rounded to the nearest bit, truncated towards zero, or something even less accurate? All are possible.

Unexpected Results
Let's look at this code:
  a = 1.0;
  b = 10.0;
  c = a / b;
  if (c == 0.1) printf("hello!");
  if (c == 1.0/10.0) printf("goodbye");
  if (c == a/b) printf("what the %$!");
We may get nothing printed!

Why Was Nothing Printed?
- if (c == 0.1) ...
- Here c holds the result of the run-time computation of 0.1, which is not quite precise.
- The other operand was converted to a constant by the compiler.
- Both are good approximations of 0.1.
- But neither is exact.
- And perhaps they are a little bit different from each other.

Why Was Nothing Printed?
- if (c == 1.0 / 10.0) ...
- The compiler may compute 1.0/10.0 at compile time, treat it as though it had seen 0.1, and get a different result.
- This really ends up being the same as the last case.

Why Was Nothing Printed?
- if (c == a/b) ...
- Now surely we should get the same computation.
- Maybe not: the compiler may be clever enough to know that a/b is 0.1, and use that knowledge in one case and not in the other.

Now Let's Get Something Printed!
- Read in the value of a at run time.
- Read in the value of b at run time.
- The compiler knows nothing.
- Now we will get some output, or else!
  c = a / b;
  if (c == a/b) printf("This will print!");

Still Nothing Printed!!!
- How can that be?
- First, a bit of background.
- Typically we have two or more different precisions of floating-point values, with different-length mantissas.
- In registers we use only the higher-precision form, expanding on a load and rounding on a store.

What Happened?
  c = a / b;
  if (c == a/b) printf("This will print!");
- First, a/b is computed in high precision.
- It is then rounded to fit in the low-precision c, losing significant bits.
- For the comparison, a/b is computed in high precision into a register, and c is loaded into a register, expanding.
- The comparison does not say equal.

Surprises in Precision
- Let's compute x**4.
- Two methods:
  Result = x*x*x*x;
  Result = (x*x)**2
- The second has only two multiplications instead of three, so it must be more accurate.
- Nope: the first is more accurate!

Subtleties of Rounding
- Suppose we insist on floating-point operations being properly rounded.
- What does "properly rounded" mean for 0.5?
- Typical rule: always round up when exactly halfway.
- This introduces a bias.
- Some computations are sensitive to this bias.
- A computation of the orbit of Pluto was significantly off because of this problem.

Moral of this Story
- Floating-point is full of surprises.
- If you base your expectations on real arithmetic, you will be surprised.
- On any given machine, floating-point operations are well defined.
- But they may be more or less peculiar.
- And the semantics will differ from machine to machine.

What to Do in High-Level Languages
- We can punt: just say that floating-point numbers are some approximation of real numbers, and that the results of floating-point operations are some approximation of the real results.
- Nice and simple from a language-definition point of view.
- Fortran and C historically did this.
- Not so simple for the poor application programmer.

Doing a Bit Better
- Parametrize the machine model of floating-point: what exponent range does it have, what precision of mantissa?
- Define the floating-point model in terms of these parameters.
- Insist on results being accurate where possible, or one of the two end points if not.
- This is the approach of Ada.

Doing Quite a Bit Better
- What if all machines had exactly the same floating-point model?
- IEEE floating-point heads in that direction.
- It precisely defines two floating-point formats (32-bit and 64-bit) and precisely defines the operations on them.

More on IEEE
- We could define our language to require IEEE semantics for floating-point.
- But what if the machine does not implement IEEE efficiently?
- For example, the x86 implements the two formats, but all its registers use an 80-bit format, so you get extra precision.
- Which sounds good, but, as we have seen, is a possible source of surprising behavior.

IEEE and High-Level Languages
- Java and Python both expect/require IEEE semantics for arithmetic.
- Java wants high efficiency, which causes a clash if the machine does not support IEEE in the "right" way.
- Java is potentially inefficient on x86 machines. Solution: cheat!
- Python requires IEEE too, but does not care so much about efficiency.