COMP201 Computer Systems Floating Point Numbers

Floating Point Numbers  Representations considered so far have a limited range dependent on the number of bits: 16 bits0 to for unsigned to for 2's complement. 32 bits0 to for unsigned to 's Complement  How do we represent the following numbers: mass of an electron = Kg Mass of the earth = Kg  Both these numbers have very few significant digits but are well beyond the range above.

Floating Point Numbers  We normally use scientific notation: Mass of electron = x Kg Mass of earth = x 1024 Kg These numbers can be split into two components:  Mantissa  Exponent e.g x Kg mantissa = exponent = -31

Floating Point Numbers  Also need to be able to represent negative numbers  Same approach can be taken with binary numbers i.e. a x re where a is the mantissa, r is the base( Radix) and e is the exponent  Notice, we are trading off precision and possibly making computation times longer, in exchange for being able to represent a larger range of numbers.

Defining a floating point number  Several things must be defined in order to define a floating point number:
Size of mantissa
Sign of mantissa
Size of exponent
Sign of exponent
Number base in use

Floating Point Numbers  Consider the following using eight digits and radix 10: MSDLSD Exponent (2 digits) Mantissa (5 digits) The high order bit is the sign bit (0 for + and 1 for - )

Floating Point Numbers  But this has serious limitations!
Range of numbers which can be represented is only 10^0 to 10^99, since only two digits are devoted to the exponent
Precision is only 5 digits
No provision for negative exponents

Floating Point Numbers (Continued)  One solution for the limitations of range and negative exponents is to decrease the positive range to 49 by applying an offset of 50 (excess-50 notation): stored exponents 00 to 99 then represent actual exponents of -50 to +49. The range of numbers which can be expressed becomes 0.00000 x 10^-50 to 0.99999 x 10^49. Often implementations require that the most significant digit of the mantissa not be zero, which restricts the smallest magnitude to 0.10000 x 10^-50; the all-0's pattern then represents 0.

From the text: Excess-50 notation (figure: range of represented numbers)

Examples (from textbook)
Problem: Convert 246.8035 to the standard format.
Write as 0.2468035 x 10^3
Truncate at 5 digits: 0.24680 x 10^3
Excess-50 exponent: 3 + 50 = 53
Final number: 0 53 24680
Problem: Convert -0.000012 to FP format.
Write as -0.12 x 10^-4
Zero fill to 5 digits: -0.12000 x 10^-4
Excess-50 exponent: -4 + 50 = 46 (the offset)
Final number: 5 46 12000
NOTE: Plus is represented by 0; minus by 5 in this format.
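The excess-50 procedure can be sketched in Python (a hypothetical helper, not from the slides; note it rounds the mantissa rather than truncating):

```python
def encode_excess50(x: float) -> str:
    # 8-digit decimal FP format: 1 sign digit (0 = plus, 5 = minus),
    # 2 excess-50 exponent digits, 5 mantissa digits.
    sign = 0 if x >= 0 else 5
    x = abs(x)
    exp = 0
    # Normalize so the mantissa lies in [0.1, 1)
    while x >= 1:
        x /= 10
        exp += 1
    while 0 < x < 0.1:
        x *= 10
        exp -= 1
    mantissa = int(round(x * 100000))  # 5 mantissa digits (rounds, not truncates)
    if mantissa == 100000:             # rounding overflowed; renormalize
        mantissa = 10000
        exp += 1
    return f"{sign}{exp + 50:02d}{mantissa:05d}"

print(encode_excess50(246.8035))   # 05324680
print(encode_excess50(-0.000012))  # 54612000
```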

Floating point in the computer  Within the computer, we work in base 2, and use a format such as: | sign | exponent | mantissa |  NOTE: Bit order depends upon the particular computer.

Numbers represented  Using 32 bits to represent a number, with 1 sign bit, 8 bits for exponent, and the remaining 23 bits for mantissa. In order to represent negative exponents, use excess-128 notation: stored exponents 0 to 255 represent actual exponents of -128 to +127. Range of numbers represented is approximately 10^-39 to 10^38 (in decimal terms).

But this is not the end!  The precision of the mantissa can be improved from 23 bits to 24 bits by noticing that the MSB of the mantissa in a normalized binary number is always 1, and so it can be implied rather than expressed directly. The added complication (see below) is felt to be a good tradeoff for the added precision. The complication is the fact that certain numbers are too small to be normalized, and the number 0.0 cannot be represented at all! The solution is to utilize certain codes for these special cases.

Floating point numbers -- the IEEE standard  IEEE Standard 754
Most (but not all) computer manufacturers use IEEE-754 format
Number represented: (-1)^S * (1.M) * 2^(E - Bias)
2 main formats: single and double
Single: | S (1 bit) | Exponent (8 bits) | Mantissa (23 bits) |  Bias = 127
Double: | S (1 bit) | Exponent (11 bits) | Mantissa (52 bits) |  Bias = 1023
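These fields can be pulled out of a Python float using the standard struct module (an illustrative sketch; `fp_fields` is a hypothetical helper name):

```python
import struct

def fp_fields(x: float):
    # Reinterpret the single-precision encoding of x as a 32-bit integer,
    # then slice out sign (1 bit), exponent (8 bits), mantissa (23 bits)
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    sign = bits >> 31
    exponent = (bits >> 23) & 0xFF
    mantissa = bits & 0x7FFFFF
    return sign, exponent, mantissa

print(fp_fields(1.0))  # (0, 127, 0): 1.0 = (-1)^0 * 1.0 * 2^(127 - 127)
```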

And there are special cases to be considered:
Exponent = 0, Mantissa = 0: represents +/- 0
Exponent = 0, Mantissa not 0: denormalized number, +/- 0.M x 2^-126 (no implied 1)
Exponent = 255, Mantissa = 0: +/- infinity
Exponent = 255, Mantissa not 0: NaN (not a number)

Special cases (continued)  The thing to remember about the special cases is that they are SPECIAL, that is, not expected to occur.  They cover numbers outside the range expected, by using some unlikely-to-be-used codes:
Special code for 0.0
Special codes for numbers too small to be normalized
Special code for infinity
What you are expected to remember is that these special codes exist, and they limit the range of numbers that can be represented by the standard.
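The special-case table can be turned into a small classifier (a minimal sketch, assuming the 32-bit pattern is supplied as an integer):

```python
def classify(bits: int) -> str:
    # Classify a 32-bit single-precision pattern per the IEEE-754 special cases
    exponent = (bits >> 23) & 0xFF
    mantissa = bits & 0x7FFFFF
    if exponent == 0:
        return "zero" if mantissa == 0 else "denormalized"
    if exponent == 255:
        return "infinity" if mantissa == 0 else "NaN"
    return "normal"

print(classify(0x00000000))  # zero
print(classify(0x00000001))  # denormalized
print(classify(0x7F800000))  # infinity
print(classify(0x7FC00000))  # NaN
```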

Final ranges represented  Single precision: approximately 10^-38 to 10^38. Similarly, the double-precision format uses sixty-four bits, and represents a range of approximately 10^-308 to 10^308.

Floating Point Numbers IEEE Standard 754  Example: What decimal number does the following IEEE floating point number represent?
0 10000001 01000000000000000000000
Sign = 0 (positive)
Exponent = 2 (0x81, or 129 - 127 = 2; 127 is the bias)
Mantissa = 1.01 (the leading 1 is implied)
Final answer: 101.0 in binary, or 5.0 in decimal

More…  What is the IEEE f.p. number Converted to decimal? Begin by dividing up the number into the fields of sign, exponent and mantissa: Then, convert the exponent ( = 131) and subtract the bias (127) to get the shift (4) Add in the “implied one” to the mantissa And shift 4 places, to get the final number , or in decimal. Mantissa (23 bits)

And more…  What is the decimal number converted to IEEE fp format? First, convert the number to binary: Then, normalize for 1.xxxx format: shift = 5 Add in the bias (127) and convert the total (132) to binary ( or 0x84) Assemble final number: I’ve added spaces for emphasis

Floating Point Applications  Floating point arithmetic is very important in certain types of application:
Primarily scientific computation
Many applications use no floating point arithmetic at all:
Communications
Operating systems
Most graphics

Floating point implementation  Floating point arithmetic may be implemented in software or hardware. It is much more complex than integer arithmetic.
Hardware implementations require more gates than equivalent integer instructions and are often slower
Software implementations are generally very slow
CPU floating point performance can therefore be very different from integer performance