Data Representation: Floating Point for Real Numbers
Computer Organization and Assembly Language: Module 11


Floating Point Representation
- The IEEE-754 Floating Point Standard is a widely used floating point representation from among the many alternative formats.
- The representation of floating point numbers contains:
  - a mantissa (a variant of a scaled, sign-magnitude integer)
  - an exponent (an 8-bit, biased-127 integer)
- In this way floating point representation resembles scientific notation: any positive number N can be represented as M * 10^e, where
  - e = floor(log10 N)
  - M = N / 10^e
  - 1 ≤ M < 10
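
The decimal analogy can be checked with a few lines of Python. This is an illustrative sketch added here; the function name decimal_sci is mine, not the slides'.

    import math

    def decimal_sci(n):
        """Decompose n into M * 10**e with 1 <= M < 10 (decimal scientific notation)."""
        e = math.floor(math.log10(abs(n)))   # e = floor(log10 N)
        m = n / 10**e                         # M = N / 10**e
        return m, e

    print(decimal_sci(22.625))   # (2.2625, 1)  i.e. 2.2625 * 10**1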

Floating Point Representation
- A number N represented in floating point is determined by a mantissa m, an exponent e, and a sign s:
  N = (-1)^s * m * 2^e
- If the sign is negative, s = 1; if the sign is positive, s = 0.
- The mantissa is normalized, i.e., 1 ≤ m < 2.
- In the IEEE-754 single precision format, the mantissa is represented with 23 bits (only the fractional part is stored):
  m = 1.f22 f21 ... f1 f0
- Double precision floating point works the same way, but the bit fields are larger: a 1-bit sign, an 11-bit exponent, and 52 bits for the fractional part of the mantissa.
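
A minimal sketch of this decomposition in Python, assuming math.frexp is acceptable here: frexp returns a mantissa in [0.5, 1), so it is rescaled to the 1 ≤ m < 2 convention used on this slide. The function name is illustrative.

    import math

    def sign_mantissa_exponent(x):
        s = 0 if x >= 0 else 1
        frac, exp = math.frexp(abs(x))    # abs(x) == frac * 2**exp, with 0.5 <= frac < 1
        return s, frac * 2, exp - 1       # rescale so that 1 <= m < 2

    print(sign_mantissa_exponent(22.625))   # (0, 1.4140625, 4): 1.4140625 * 2**4 == 22.625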

Conversion to base-2
1. Break the decimal number into two parts: an integer and a fraction.
2. Convert the integer into binary and place it to the left of the binary point.
3. Convert the fraction into binary and place it to the right of the binary point.
4. Write it in base-2 scientific notation and normalize.
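
The following sketch follows these steps, converting the integer and fraction parts separately. The helper name to_binary and the frac_bits limit are illustrative choices, not part of the slides.

    def to_binary(value, frac_bits=10):
        integer, fraction = divmod(value, 1)
        int_str = bin(int(integer))[2:]        # integer part, e.g. 22 -> '10110'
        frac_str = ''
        for _ in range(frac_bits):             # repeated doubling of the fraction part
            fraction *= 2
            bit, fraction = divmod(fraction, 1)
            frac_str += str(int(bit))
            if fraction == 0:
                break
        return int_str + '.' + frac_str

    print(to_binary(22.625))   # '10110.101'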

Example: convert 22.625 to floating point representation
1. Convert 22 to binary: 22 = 10110
2. Convert .625 to binary:
   2 * .625 = 1.25 -> 1
   2 * .25  = 0.5  -> 0
   2 * .5   = 1.0  -> 1
   Thus .625 = .101
3. Thus 22.625 = 10110.101
   In base-2 scientific notation: 10110.101 * 2^0
   Normalized form: 1.0110101 * 2^4
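
A quick positional check of the result, where fraction bit i contributes 2^-(i+1); this snippet is an added illustration.

    bits = '10110.101'
    int_part, frac_part = bits.split('.')
    value = int(int_part, 2) + sum(int(b) * 2**-(i + 1) for i, b in enumerate(frac_part))
    print(value)   # 22.625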

IEEE-754 SPFP Representation
- Given the floating point representation
  N = (-1)^s * m * 2^e, where m = 1.f22 f21 ... f1 f0,
- we can convert it to the IEEE-754 SPFP format using the relations:
  F = (m - 1) * 2^23 (hence F is an integer)
  E = e + 127
  S = s
  Fields: S | E | F
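
A sketch of these relations applied to the running example (m = 1.0110101 in binary = 1.4140625, e = 4, s = 0); the format calls just display each field at its IEEE-754 width.

    m, e, s = 1.4140625, 4, 0
    F = round((m - 1) * 2**23)     # fraction field as a 23-bit integer
    E = e + 127                    # biased exponent field
    S = s
    print(S, format(E, '08b'), format(F, '023b'))
    # 0 10000011 01101010000000000000000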

Single-Precision Floating Point
- The IEEE-754 single precision format has 32 bits distributed as a 1-bit sign S, an 8-bit exponent E, and a 23-bit fraction F:
  | S (1) | E (8) | F (23) |
- 0 ≤ E ≤ 255, thus the actual exponent e (interpreted as biased-127) is restricted so that -127 ≤ e ≤ 128.
- But e = -127 and e = 128 have special meanings.

Special values and the hidden bit
- In IEEE-754, zero is represented by setting E = F = 0 regardless of the sign bit; thus there are two representations for zero: +0 and -0.
- +∞ is represented by S = 0, E = 255, F = 0.
- -∞ is represented by S = 1, E = 255, F = 0.
- NaN (Not-a-Number) is represented by E = 255, F ≠ 0 (it may result from 0 divided by 0).
- The leading 1 of the normalized mantissa is not stored in F. It is the hidden bit.
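
A sketch that classifies a 32-bit pattern using the special cases above; the bit masks are the standard field positions, and the function name is illustrative.

    def classify(word):
        S = (word >> 31) & 0x1        # sign bit
        E = (word >> 23) & 0xFF       # 8-bit exponent field
        F = word & 0x7FFFFF           # 23-bit fraction field
        if E == 255:
            return 'NaN' if F != 0 else ('-infinity' if S else '+infinity')
        if E == 0 and F == 0:
            return '-0' if S else '+0'
        return 'ordinary number'

    print(classify(0x7F800000))   # +infinity
    print(classify(0xFF800000))   # -infinity
    print(classify(0x7FC00000))   # NaN
    print(classify(0x80000000))   # -0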

Converting to IEEE-754 SPFP
1. Convert into a normalized base-2 representation.
2. Bias the exponent. The result will be E.
3. Put the values into the correct fields. Note that only the fractional part of the mantissa is stored in F.
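
A sketch of the full three-step conversion, assuming the input is exactly representable (rounding corner cases are ignored). The function name is mine, and struct.pack is used only as a cross-check against the machine encoding.

    import math
    import struct

    def to_ieee754_single(x):
        s = 0 if math.copysign(1.0, x) > 0 else 1
        if x == 0:
            return s << 31                   # +0 or -0
        frac, exp = math.frexp(abs(x))       # abs(x) == frac * 2**exp, 0.5 <= frac < 1
        m, e = frac * 2, exp - 1             # step 1: normalize so that 1 <= m < 2
        E = e + 127                          # step 2: bias the exponent
        F = round((m - 1) * 2**23)           # step 3: fractional part of the mantissa
        return (s << 31) | (E << 23) | F

    print(hex(to_ieee754_single(22.625)))                          # 0x41b50000
    print(hex(struct.unpack('>I', struct.pack('>f', 22.625))[0]))  # 0x41b50000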

Example: convert 22.625 to IEEE-754 SPFP format
1. In scientific notation: 10110.101 * 2^0
   Normalized form: 1.0110101 * 2^4
2. Bias the exponent: 4 + 127 = 131 = 10000011 (binary)
3. Place the values into the correct fields:
   S = 0
   E = 10000011
   F = 01101010000000000000000

Example: convert to IEEE FPS format
1. In scientific notation: …
   Normalized form: …
2. Bias the exponent: …
3. Place the values into the correct fields:
   S = 0
   E = …
   F = …

Example: convert -83.7 to IEEE FPS format (single precision)
Convert .7 to binary by repeated doubling:
   2 * .7 = 1.4 -> 1
   2 * .4 = 0.8 -> 0
   2 * .8 = 1.6 -> 1
   2 * .6 = 1.2 -> 1
   2 * .2 = 0.4 -> 0
   2 * .4 = 0.8 -> 0
   2 * .8 = 1.6 -> 1
   2 * .6 = 1.2 -> 1
   2 * .2 = 0.4 -> 0
   ...
   Thus .7 = .101100110... (repeating)

1. In binary scientific notation: -1010011.101100110011... * 2^0
   Normalized: -1.010011101100110011... * 2^6
2. Bias the exponent: 6 + 127 = 133 = 10000101 (binary)
3. Place the values into the correct fields:
   S = 1
   E = 10000101
   F = 01001110110011001100110

Representing as hexadecimal
- It is difficult for people to read binary: one bit pattern looks much like another.
- Raw data, which is not being interpreted as representing a particular data type, is often displayed using hexadecimal instead of binary.
- The final step in many IEEE-754 SPFP problems will be to convert the result to hexadecimal:
  1100 0010 1010 0111 0110 0110 0110 0110 -> C2A76666
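
In Python this final step can be checked directly, since struct.pack produces the IEEE-754 single precision bytes; this snippet is an added illustration and assumes the worked example above is -83.7 (which matches the bit pattern shown).

    import struct

    print(struct.pack('>f', -83.7).hex().upper())    # C2A76666
    print(struct.pack('>f', 22.625).hex().upper())   # 41B50000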

- Given a single precision floating point number with bit fields S, E, and F (interpreted as unsigned integers), the value of the number is normally calculated as
  N = (-1)^S * (1 + F/2^23) * 2^(E-127)
- This interpretation is not used when:
  - E = 255 (+∞, -∞, or NaN)
  - E = 0, F = 0 (+0 or -0)
- What about E = 0, F ≠ 0? Graceful underflow.
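
A sketch of this interpretation, including the special cases listed above and the E = 0 case covered on the next slide; the function name is illustrative, and struct is used only to cross-check.

    import math
    import struct

    def decode(word):
        S = (word >> 31) & 0x1
        E = (word >> 23) & 0xFF
        F = word & 0x7FFFFF
        if E == 255:
            return math.nan if F else (-1)**S * math.inf
        if E == 0:
            return (-1)**S * (F / 2**23) * 2**-126       # graceful underflow (next slide)
        return (-1)**S * (1 + F / 2**23) * 2**(E - 127)  # normal interpretation

    print(decode(0x41B50000))   # 22.625
    print(decode(0xC2A76666))   # -83.69999694824219 (nearest single-precision value to -83.7)
    print(struct.unpack('>f', struct.pack('>I', 0xC2A76666))[0])   # same value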

Graceful underflow
- Given a single precision floating point number with bit fields S, E = 0, and F (interpreted as unsigned integers), the value of the number is calculated as
  N = (-1)^S * (0 + F/2^23) * 2^(-126)
- This allows representation of numbers as small as 2^(-149), though each further power of two below 2^(-126) costs one bit of precision.
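
A sketch of the E = 0 interpretation; F = 1 gives the smallest positive denormalized value, 2^(-149), and struct is used only to cross-check. The function name is illustrative.

    import struct

    def denormal_value(s, F):
        return (-1)**s * (F / 2**23) * 2**-126

    smallest = denormal_value(0, 1)           # F = 1: smallest positive denormal, 2**-149
    print(smallest)                           # 1.401298464324817e-45
    print(struct.unpack('>f', bytes.fromhex('00000001'))[0] == smallest)   # True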

Graceful underflow
- Normal interpretation (smallest normalized value): N = 2^(1-127) = 2^(-126)
  24 bits of precision (counting the hidden bit)
- E = 0 interpretation: N = (.1 in binary) * 2^(-126) = (.5) * 2^(-126) = 2^(-127)
  Only 23 bits of precision
- E = 0 interpretation: N = (.0001 in binary) * 2^(-126) = (.0625) * 2^(-126) = 2^(-130)
  Only 20 bits of precision
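
The three cases above can be checked by decoding the corresponding bit patterns; the words 0x00800000, 0x00400000, and 0x00080000 are my reconstructions of the patterns for 2^(-126), 2^(-127), and 2^(-130), and this snippet is an added illustration.

    import struct

    for word, label in [(0x00800000, '2**-126 (smallest normalized)'),
                        (0x00400000, '2**-127 (denormal, 23 bits of precision)'),
                        (0x00080000, '2**-130 (denormal, 20 bits of precision)')]:
        value = struct.unpack('>f', word.to_bytes(4, 'big'))[0]
        print(label, value)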