CHAPTER 3 Floating Point
Representation of Fractions (1/2)
With base 10, we use a decimal point to separate the integer and fractional parts of a number: xx.yyyy, where the digit positions carry the weights 10^1, 10^0, 10^-1, 10^-2, 10^-3, 10^-4. A number's value is the sum of each digit times the weight of its position.
Representation of Fractions (2/2)
A "binary point", like the decimal point, signifies the boundary between the integer and fractional parts: xx.yyyy, with digit weights 2^1, 2^0, 2^-1, 2^-2, 2^-3, 2^-4. Example 6-bit representation: 10.1010two = 1x2^1 + 1x2^-1 + 1x2^-3 = 2.625ten. If we assume a "fixed binary point", the range of 6-bit representations with this format is 0 to 3.9375 (almost 4).
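The place-value arithmetic above can be sketched in Python; this is a quick illustration (the 6-bit xx.yyyy format with 4 fraction bits is the one assumed in the slide):

```python
def fixed_to_decimal(bits: str, frac_bits: int) -> float:
    """Interpret a fixed-point bit string with frac_bits bits after the point."""
    return int(bits, 2) / (2 ** frac_bits)

# 10.1010two: integer bits "10", fraction bits "1010"
print(fixed_to_decimal("101010", 4))   # → 2.625 (= 1x2^1 + 1x2^-1 + 1x2^-3)
# largest value in this format, 11.1111two:
print(fixed_to_decimal("111111", 4))   # → 3.9375 (almost 4)
```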
Fractional Powers of 2

i    2^-i
0    1.0      = 1
1    0.5      = 1/2
2    0.25     = 1/4
3    0.125    = 1/8
4    0.0625   = 1/16
5    0.03125  = 1/32
Representation of Fractions with Fixed Pt.
What about addition and multiplication? Addition is straightforward: 01.100two (1.5ten) + 00.100two (0.5ten) = 10.000two (2.0ten). Multiplication is a bit more complex: 01.100two x 00.100two is computed on the raw bit patterns (01100 x 00100 = 0000110000), giving whole part 0000 and fraction 110000. Where's the answer, 0.11two (0.75ten)? The product of two numbers with 3 fraction bits each has 6 fraction bits; we need to remember where the point is.
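A minimal sketch of that fixed-point multiply, treating each operand as a raw integer bit pattern with 3 fraction bits (the helper name is ours, not the slides'):

```python
def fix_mul(a_bits: int, b_bits: int, frac: int) -> float:
    """Multiply two fixed-point numbers given as raw integer bit patterns.

    Each input has `frac` fraction bits, so the raw product has 2*frac:
    this is where 'remembering where the point is' comes in.
    """
    raw = a_bits * b_bits              # plain integer multiply
    return raw / (2 ** (2 * frac))     # point is now 2*frac bits from the right

a = 0b01100   # 01.100two = 1.5ten
b = 0b00100   # 00.100two = 0.5ten
print(fix_mul(a, b, 3))   # → 0.75
```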
Representation of Fractions
So far, our examples have used a "fixed" binary point. What we really want is to let the binary point "float". Why? A floating binary point makes the most effective use of our limited bits (and thus gives more accuracy in our number representation). Example: to put a small fraction into binary, represent its significant bits with 5 bits and choose where to put the binary point: store these bits and keep track of the binary point separately (here, 2 places to the left of the MSB). Any other solution would lose accuracy! With floating point representation, each number carries an exponent field recording the location of its binary point. The binary point itself is not stored, so very large and very small numbers can be represented.
Scientific Notation (in Decimal)
6.02ten x 10^23: here 6.02 is the significand, 23 is the exponent, "." is the decimal point, and 10 is the radix (base). Normalized form: no leading 0s (exactly one non-zero digit to the left of the decimal point). Alternatives for representing 1/1,000,000,000: Normalized: 1.0 x 10^-9. Not normalized: 0.1 x 10^-8, 10.0 x 10^-10.
Scientific Notation (in Binary)
1.0two x 2^-1: significand, exponent, "binary point", radix (base). Computer arithmetic that supports binary scientific notation is called floating point. It corresponds to the C++ data types float and double.
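As a side note (not from the slides), Python can display a float directly in binary scientific notation via float.hex(), which shows the significand in hex and the power-of-2 exponent:

```python
# Each hex digit after "1." is 4 significand bits; "p" introduces the exponent.
print((0.5).hex())    # → 0x1.0000000000000p-1   i.e. 1.0two  x 2^-1
print((0.75).hex())   # → 0x1.8000000000000p-1   i.e. 1.1two  x 2^-1 (hex 8 = 1000two)
print((6.0).hex())    # → 0x1.8000000000000p+2   i.e. 1.1two  x 2^2
```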
Floating Point Representation (1/2)
Normalized form: (-1)^S x 1.xxxx…xxxtwo x 2^(yyy…ytwo). Word sizes: 32 or 64 bits. Single-precision layout: bit 31 = S (1 bit); bits 30-23 = Exponent (8 bits); bits 22-0 = Significand (23 bits). S represents the sign, yyyy is the binary exponent, and xxxx is the binary significand/fraction. Represents numbers as small as 2.0 x 10^-38 to as large as 2.0 x 10^38.
Floating Point Representation (2/2)
What if a result is too large? (> 2.0x10^38 or < -2.0x10^38) Overflow! The positive exponent is larger than can be represented in the 8-bit exponent field. What if a result is too small? (>0 & < 2.0x10^-38, or <0 & > -2.0x10^-38) Underflow! The negative exponent is larger in magnitude than can be represented in the 8-bit exponent field. What would help reduce the chances of overflow and/or underflow? More bits. On the number line: overflow below -2x10^38, representable from -2x10^38 to -2x10^-38, underflow between -2x10^-38 and 2x10^-38 (excluding 0), representable from 2x10^-38 to 2x10^38, overflow above 2x10^38.
Double Precision Representation
Next multiple of word size (64 bits). First word: bit 31 = S (1 bit), bits 30-20 = Exponent (11 bits), bits 19-0 = Significand (20 bits); second word: Significand (cont'd), 32 more bits. Double precision (vs. single precision): a C++ variable declared as double. Represents numbers almost as small as 2.0 x 10^-308 to almost as large as 2.0 x 10^308. But the primary advantage is greater accuracy (# of digits) due to the larger significand (52 bits vs. 23).
IEEE 754 Floating Point Standard (1/2)
Single precision (DP similar). Sign bit: 1 means negative, 0 means positive. Significand: to pack more bits, a leading 1 is implied for normalized numbers (24 bits total in single, 53 in double); always true when normalized: 0 < stored significand < 1. Note: zero has no leading 1, so exponent value 0 is reserved just for the number ZERO. Layout: bit 31 = S (1 bit), bits 30-23 = Exponent (8 bits), bits 22-0 = Significand (23 bits).
IEEE 754 Floating Point Standard (2/2)
Biased exponent notation: the bias is subtracted to get the real exponent value. Single precision uses a bias of 127: subtract 127 from the Exponent field to get the actual value. Double precision uses a bias of 1023. In general, Bias = 2^(#expbits - 1) - 1. Value = (-1)^S x (1 + Significand) x 2^(Exponent - 127).
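The value formula above can be checked directly in Python; this sketch (function name ours) pulls the three fields out of a 32-bit pattern and applies Value = (-1)^S x (1 + Significand) x 2^(Exponent - 127). It handles only normalized numbers:

```python
def decode_single(bits: int) -> float:
    """Recover the value of a normalized single-precision bit pattern."""
    s = (bits >> 31) & 1              # sign, bit 31
    exponent = (bits >> 23) & 0xFF    # biased exponent, bits 30-23
    significand = bits & 0x7FFFFF     # 23 fraction bits, bits 22-0
    return (-1) ** s * (1 + significand / 2 ** 23) * 2.0 ** (exponent - 127)

# 1 | 1000 0001 | 111 0000...0 : (-1)^1 x (1 + .111two) x 2^(129-127)
print(decode_single(0xC0F00000))   # → -7.5
print(decode_single(0x3F800000))   # → 1.0  (exponent field 127, fraction 0)
```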
Example: Converting Binary FP to Decimal
Sign: 0 => positive. Exponent: 0110 1000two = 104ten. Bias adjustment: 104 - 127 = -23. Significand: 1 + 1x2^-1 + 0x2^-2 + 1x2^-3 + 0x2^-4 + 1x2^-5 + … ≈ 1.666115ten. Represents: 1.666115ten x 2^-23 ≈ 1.986 x 10^-7 (about 2/10,000,000).
Converting Decimal Fraction to FP (1/5)
Simple case: if the denominator of the fraction is a power of 2 (2, 4, 8, 16, etc.), then it's easy. Show the MIPS representation of -0.75: -0.75 = -3/4 = -11two/100two = -0.11two, normalized to -1.1two x 2^-1. Using (-1)^S x (1 + Significand) x 2^(Exponent - 127): (-1)^1 x (1 + .100 0000…0000) x 2^(126 - 127). So S = 1, Exponent field = 126 = 0111 1110two, Significand = 100 0000…0000two.
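As a sanity check on the -0.75 result, the stdlib struct module can produce the actual single-precision bit pattern:

```python
import struct

# -0.75 = (-1)^1 x (1 + .100...) x 2^(126-127):
# sign 1, exponent field 0111 1110 (126), significand 100 0000...0000
bits = struct.unpack(">I", struct.pack(">f", -0.75))[0]
print(f"{bits:032b}")   # → 10111111010000000000000000000000
print(hex(bits))        # → 0xbf400000
```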
Converting Decimal to FP (2/5)
Given a decimal fraction F, an algorithm to convert F to a binary fraction:

for (k = 1 to #bits needed) {
    d_k = int(2 x F);
    F = frac(2 x F);
}

Example, F = 0.845:
k=1: 2 x F = 1.690, d1 = 1, F = 0.690
k=2: 2 x F = 1.380, d2 = 1, F = 0.380
k=3: 2 x F = 0.760, d3 = 0, F = 0.760
k=4: 2 x F = 1.520, d4 = 1, F = 0.520
So 0.845ten = 0.1101…two
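The loop above translates almost directly into Python (note that F itself is stored as a binary float here, so for very many bits the trace can drift from exact decimal arithmetic):

```python
def frac_to_bits(f: float, nbits: int) -> list[int]:
    """Repeated doubling: each step peels off the next binary fraction digit."""
    bits = []
    for _ in range(nbits):
        f *= 2
        bits.append(int(f))   # d_k = int(2 x F)
        f -= int(f)           # F   = frac(2 x F)
    return bits

print(frac_to_bits(0.845, 4))   # → [1, 1, 0, 1]
print(frac_to_bits(0.2, 8))     # → [0, 0, 1, 1, 0, 0, 1, 1]  (repeating pattern)
```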
Converting Decimal to FP (3/5)
The number of binary digits may exceed the number of bits in the significand; the binary fraction is then truncated, with a loss of accuracy. Now normalize: 0.1101…two = 1.101…two x 2^-1. Now determine the exponent: bias = 127, so the exponent field = -1 + 127 = 126 = 0111 1110two. Finally, represent the number: sign 0, exponent 0111 1110, significand 101 1000… (the bits after the leading 1).
Converting Decimal to FP (4/5)
Fact: all rational numbers have a repeating pattern when written out in decimal. Fact: this still applies in binary. To finish the conversion: write out the binary number with its repeating pattern; cut it off after the correct number of bits (different for single vs. double precision); derive the Sign, Exponent and Significand fields.
Converting Decimal to FP (5/5)
Example: 2.32ten x 10^1. Denormalize: 23.2ten. Convert the integer part: 23 = 16 + 7 = 16 + 4 + 3 = 16 + 4 + 2 + 1 = 10111two. Convert the fractional part: .2 x 2 = 0.4 (digit 0), .4 x 2 = 0.8 (0), .8 x 2 = 1.6 (1), .6 x 2 = 1.2 (1), then the pattern repeats: .2ten = .0011 0011…two. Put the parts together and normalize: 10111.0011 0011…two = 1.0111 0011 0011…two x 2^4. Convert the exponent: 127 + 4 = 131 = 1000 0011two.
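A quick machine check of a decimal-to-FP conversion, using 23.2ten as the example value:

```python
import struct

bits = struct.unpack(">I", struct.pack(">f", 23.2))[0]
print(hex(bits))                     # → 0x41b9999a
print(f"{(bits >> 23) & 0xFF:08b}")  # exponent field → 10000011 (131 = 127 + 4)
# The repeating 9a/99 hex digits reflect the repeating 0011two pattern of .2ten,
# rounded into the 23-bit significand.
```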
Understanding the Significand (1/2)
Method 1 (fractions): In decimal: 0.340ten = 340ten/1000ten = 34ten/100ten. In binary: 0.110two = 110two/1000two = 6ten/8ten = 11two/100two = 3ten/4ten. Advantage: less purely numerical, more thought oriented; this method usually helps people understand the meaning of the significand better.
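The fractions view can be checked with the stdlib fractions module (a small aside, not from the slides):

```python
from fractions import Fraction

# 0.110two = 110two/1000two = 6/8 = 3/4
print(Fraction(0b110, 0b1000))   # → 3/4
# 0.340ten = 340/1000 (= 34/100, shown here fully reduced)
print(Fraction(340, 1000))       # → 17/50
```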
Understanding the Significand (2/2)
Method 2 (place values): convert from scientific notation. In decimal: 1.6732 = (1x10^0) + (6x10^-1) + (7x10^-2) + (3x10^-3) + (2x10^-4). In binary: 1.1001two = (1x2^0) + (1x2^-1) + (0x2^-2) + (0x2^-3) + (1x2^-4). The interpretation of the value in each position extends beyond the decimal/binary point. Advantage: good for quickly calculating the significand value; use this method for translating FP numbers.
Peer to Peer Exercise
What is the decimal equivalent of: S = 1, Exponent = 1000 0001two, Significand = 111 0000 0000 0000 0000 0000two?
(-1)^S x (1 + Significand) x 2^(Exponent - 127) = (-1)^1 x (1 + .111) x 2^(129 - 127) = -1 x 1.111two x 2^2 = -111.1two = -7.5ten
Precision and Accuracy
Don't confuse these two terms! Precision is the number of bits in a computer word used to represent a value. Accuracy is a measure of the difference between the actual (decimal) value and its computer representation. High precision permits high accuracy but doesn't guarantee it; it is possible to have high precision but low accuracy. Example: float pi = 3.14; pi will be represented using all 24 bits of the significand (highly precise), but is only an approximation (not accurate): pi is actually an infinite, non-repeating (transcendental) number, not 3.14.
Representation for 0
How do we represent 0? Exponent all zeroes, significand all zeroes too. What about the sign? Both work: +0 is 0 00000000 00000000000000000000000; -0 is 1 00000000 00000000000000000000000. Why two zeroes? It helps in some limit comparisons. Ask the math majors.
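The two zeroes are easy to see with struct; only the sign bit differs, and the two patterns still compare equal:

```python
import struct

pos = struct.pack(">f", 0.0)
neg = struct.pack(">f", -0.0)
print(pos.hex())     # → 00000000
print(neg.hex())     # → 80000000  (only the sign bit differs)
print(0.0 == -0.0)   # → True  (the two zeroes compare equal)
```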
Representation for ± ∞
In FP, dividing by 0 should produce ± ∞, not overflow. Why? It is OK to do further computations with ∞; e.g., X/0 > Y may be a valid comparison. Ask the math majors. IEEE 754 represents ± ∞ with the most positive exponent (255) reserved for ∞ and the significand all zeroes.
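A small demonstration of the ∞ encoding and behavior (note Python spells the value math.inf; its own float division by zero raises an exception rather than following the IEEE convention described above):

```python
import math, struct

inf = math.inf
print(struct.pack(">f", inf).hex())   # → 7f800000: exponent 255, significand 0
print(inf > 1e308)                    # → True: comparisons with ∞ remain valid
print(1e308 * 10)                     # double-precision overflow quietly yields inf
```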
Special Numbers
What have we defined so far? (single precision)

Exponent   Significand   Object
0          0             0
0          nonzero       ???
1-254      anything      +/- fl. pt. #
255        0             +/- ∞
255        nonzero       ???
Representation for Not a Number
What is sqrt(-4.0) or 0/0? If ∞ is not an error, these shouldn't be either. Called Not a Number (NaN): Exponent = 255, Significand nonzero. Why is this useful? It is a symbol for typical invalid operations, and the hope is that NaNs help with debugging: programmers may postpone tests or decisions to a later time. NaNs contaminate: op(NaN, X) = NaN.
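NaN's encoding and its contaminating behavior can be observed directly (the exact significand bits of a NaN vary, so only the field pattern is checked here):

```python
import math, struct

nan = float("nan")
print(struct.pack(">f", nan).hex())   # exponent field 255, significand nonzero
print(math.isnan(nan + 1.0))          # → True: op(NaN, X) = NaN
print(nan == nan)                     # → False: NaN compares unequal, even to itself
```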
Representation for Denorms (1/2)
Problem: there's a gap between 0 and the smallest representable FP number, and the numbers in it cannot be normalized (doing so overflows the exponent field). Smallest representable positive normalized number: a = 1.000…0two x 2^-126 = 2^-126. Second smallest: b = 1.000…1two x 2^-126 = 2^-126 + 2^-149. The gap between 0 and a (2^-126) is far larger than the gap between a and b (2^-149). Gaps!
Representation for Denorms (2/2)
Solution: consider Exponent = 0, Significand != 0. This is a denormalized number: no leading 1, implicit exponent = -126. The smallest representable positive number becomes a = 0.000…1two x 2^-126 = 2^-149, and denorms fill the gap between 0 and the smallest normalized number evenly.
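Both boundary values can be verified by packing them into single precision:

```python
import struct

# Smallest positive single-precision denorm: 0.000...1two x 2^-126 = 2^-149
print(struct.pack(">f", 2 ** -149).hex())   # → 00000001
# Smallest positive normalized number: 1.0two x 2^-126 (exponent field = 1)
print(struct.pack(">f", 2 ** -126).hex())   # → 00800000
```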
Summary of Special Numbers
Reserved exponents and significands:

Exponent   Significand   Object
0          0             0
0          nonzero       Denorm
1-254      anything      +/- fl. pt. #
255        0             +/- ∞
255        nonzero       NaN
Rounding
When doing math on real numbers, we worry about rounding to fit the result in the significand field. FP hardware carries 2 extra bits of precision and rounds to get the proper value. Rounding also occurs when converting double to single precision, when converting a floating point # to an integer, and in intermediate steps if necessary.
IEEE Four Rounding Modes
Round towards +∞: ALWAYS round "up": 2.1 → 3, -2.1 → -2. Round towards -∞: ALWAYS round "down": 1.9 → 1, -1.9 → -2. Round towards 0 (i.e., truncate): just drop the extra bits. Round to (nearest) even (the default): normal rounding, almost: 2.5 → 2, 3.5 → 4. Ties go to the even neighbor, so half the time we round up and half the time down; this ensures fairness in calculations, and the mode is also called unbiased.
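The four modes map onto the stdlib decimal module's rounding constants, which makes them easy to compare on the slide's examples:

```python
from decimal import Decimal, ROUND_CEILING, ROUND_FLOOR, ROUND_DOWN, ROUND_HALF_EVEN

one = Decimal("1")
for x in ("2.5", "3.5", "-2.1", "1.9"):
    d = Decimal(x)
    print(x,
          d.quantize(one, rounding=ROUND_CEILING),    # towards +∞
          d.quantize(one, rounding=ROUND_FLOOR),      # towards -∞
          d.quantize(one, rounding=ROUND_DOWN),       # towards 0 (truncate)
          d.quantize(one, rounding=ROUND_HALF_EVEN))  # nearest even (default)
```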
Floating Point Addition & Subtraction
Algorithm: 1. Shift smaller number right to match exponents 2. Add significands 3. Normalize result, checking for over/underflow 4. Round result significand 5. Normalize result again, checking for over/underflow Subtraction essentially the same
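The align-add-round steps can be imitated with a 4-digit decimal context; the second operand here (9.876 x 10^-1) is a hypothetical value chosen for illustration, not taken from the slides:

```python
from decimal import Decimal, localcontext

with localcontext() as ctx:
    ctx.prec = 4   # 4 significant decimal digits, like the slide's significand
    # The context aligns exponents, adds exactly, then rounds to 4 digits:
    total = Decimal("9.234E-2") + Decimal("9.876E-1")
print(total)   # → 1.080  (1.07994 normalized and rounded)
```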
Floating Point Addition & Subtraction
Example (4 decimal digit significand): adding 9.234 x 10^-2 to a number of the form x.xxx x 10^-1. 1. Shift the smaller number right to match exponents: 9.234 x 10^-2 = 0.9234 x 10^-1 (the extra position is kept to round accurately). 2. Add significands, giving a sum x 10^-1. 3-5. Normalize (here the sum carries into x 10^0) and round to 4 digits.
Floating Point Multiplication
Algorithm: 1. Multiply significands (unsigned integer mult & fix sign) 2. Add exponents (signed biased integer add) 3. Normalize result, checking for over/underflow 4. Round result significand 5. Normalize result again, checking for over/underflow
Floating Point Multiplication
Example (4 decimal digit significand): multiplying 3.501 x 10^-3 by a number of the form x.xxx x 10^6. 1. Multiply significands. 2. Add exponents: -3 + 6 = 3, giving a product x 10^3. 3-5. The product here is ≥ 10, so normalize to x 10^4 and round to 4 digits, giving a result x 10^4.
Floating Point Division
Algorithm: 1. Divide significands (unsigned integer div & fix sign) 2. Subtract exponents (signed biased integer sub) 3. Normalize result, checking for over/underflow 4. Round result significand 5. Normalize result again, checking for over/underflow
Floating Point Division
Example (4 decimal digit significand): dividing 4.010 x 10^-3 by a number of the form x.xxx x 10^6. 1. Divide significands. 2. Subtract exponents: -3 - 6 = -9, giving (4.010/x.xxx) x 10^-9. 3. The quotient here is < 1, so normalize to x 10^-10. 4. Rounding the significand carries it up to 10.00 x 10^-10. 5. Normalize again: 1.000 x 10^-9 (this is why the algorithm normalizes twice).
MIPS Floating Point Architecture (1/4)
Separate floating point instructions. Single precision: add.s, sub.s, mul.s, div.s. Double precision: add.d, sub.d, mul.d, div.d. These are far more complicated than their integer counterparts, so they can take much longer to execute.
MIPS Floating Point Architecture (2/4)
Problems: Inefficient to have different instructions take vastly differing amounts of time. Generally, a particular piece of data will not change from float to int, or vice versa, within a program. Only one type of instruction will be used on it. Some programs do no floating point calculations It takes lots of hardware relative to integers to do floating point fast
MIPS Floating Point Architecture (3/4)
1990 solution: make a completely separate chip that handles only FP. Coprocessor 1: the FP chip contains 32 32-bit registers: $f0, $f1, …. Most registers specified in .s and .d instructions refer to this set. Separate load and store: lwc1 and swc1 ("load word coprocessor 1", "store word coprocessor 1"). Double precision: by convention, an even/odd pair contains one DP FP number: $f0/$f1, $f2/$f3, …, $f30/$f31; the even register is the name.
MIPS Floating Point Architecture (4/4)
1990 Computer actually contains multiple separate chips: Processor: handles all the normal stuff Coprocessor 1: handles FP and only FP; more coprocessors?… Yes, later Today, FP coprocessor integrated with CPU, or cheap chips may leave out FP HW Instructions to move data between main processor and coprocessors: mfc0, mtc0, mfc1, mtc1, etc. Appendix A contains many more FP ops
Things to Remember
Floating point numbers approximate the values that we want to use. The IEEE 754 Floating Point Standard is the most widely accepted standard for FP numbers. New MIPS registers ($f0-$f31) and instructions:
– Single precision (32 bits, 2x10^-38 … 2x10^38): add.s, sub.s, mul.s, div.s
– Double precision (64 bits, 2x10^-308 … 2x10^308): add.d, sub.d, mul.d, div.d
Type is not associated with the data; bits have no meaning unless given in context (by the instruction).