Computer Arithmetic Multiplication, Floating Point

Computer Arithmetic: Multiplication, Floating Point
(Based on the text: David A. Patterson & John L. Hennessy, Computer Organization and Design: The Hardware/Software Interface, 3rd ed., Morgan Kaufmann, 2007)

COURSE CONTENTS
- Introduction
- Instructions
- Computer Arithmetic
- Performance
- Processor: Datapath
- Processor: Control
- Pipelining Techniques
- Memory
- Input/Output Devices

COMPUTER ARITHMETIC
- Multiplication
- Floating Point Numbers
- Floating Point Arithmetic

Multiplication
More complicated than addition: it is accomplished via shifting and addition, so it takes more time and more chip area.
Let's look at the pencil-and-paper algorithm (both numbers unsigned):

        0111    (multiplicand)
      x 1011    (multiplier)
        ----
        0111
       0111
      0000
     0111
    --------
    01001101    (product)

An n-bit multiplicand times an m-bit multiplier gives an (n+m)-bit product.
Negative numbers: convert to positive form, multiply, then adjust the sign of the result.
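The pencil-and-paper algorithm above can be sketched in a few lines of Python. This is an illustrative model, not the hardware datapath; the function name multiply_unsigned is hypothetical.

```python
def multiply_unsigned(multiplicand: int, multiplier: int, n: int = 4) -> int:
    """Shift-and-add multiplication of two n-bit unsigned integers.

    Mirrors the pencil-and-paper method: for each multiplier bit,
    LSB first, add the correspondingly shifted multiplicand into
    the (up to 2n-bit) product.
    """
    product = 0
    for i in range(n):
        if (multiplier >> i) & 1:            # multiplier bit i is 1?
            product += multiplicand << i     # add shifted multiplicand
    return product

# The slide's example: 0111 (7) x 1011 (11) = 01001101 (77)
assert multiply_unsigned(0b0111, 0b1011) == 0b01001101
```

Note how each 1 bit in the multiplier contributes one shifted copy of the multiplicand, exactly like the rows of the worked example.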

Multiplier: First Version
[Figure: first-version multiplier datapath: 64-bit ALU, 64-bit multiplicand register (shifted left), 32-bit multiplier register (shifted right), 64-bit product register, and control. Each of 32 repetitions: 1. test multiplier bit 0; if 1, add multiplicand to product; 2. shift multiplicand left; 3. shift multiplier right.]
Example: 0010 x 0011 = 0000 0110

Multiplier: Second Version
[Figure: second-version multiplier datapath: 32-bit ALU and 32-bit multiplicand register (stationary), 32-bit multiplier register (shifted right), 64-bit product register (shifted right), and control. Each of 32 repetitions: 1. test multiplier bit 0; if 1, add multiplicand to the left half of the product; 2. shift product right; 3. shift multiplier right.]
Example: 0010 x 0011 = 0000 0110

Multiplier: Final Version
[Figure: final-version multiplier datapath: 32-bit ALU, 32-bit multiplicand register, and a 64-bit product register whose right half initially holds the multiplier, with control. Each of 32 repetitions: 1. test product bit 0; if 1, add multiplicand to the left half of the product; 2. shift product right.]
Example: 0010 x 0011 = 0000 0110

Signed Multiplication
Negative numbers: convert to positive form, multiply, then adjust the sign of the result.
It turns out that our final-version multiplier also works for signed numbers, provided that sign extension is applied when shifting right (arithmetic right shift).

Multiplication: Booth Algorithm
If the multiplier contains long strings of 1s, the shift-and-add process is slow.
Booth's idea: replace each string of 1s with an initial subtract when we first see a 1, and then an add when we see the bit after the last 1.
Example: replace 0011110 by 01000(-1)0 (i.e., 0011110 = 0100000 - 0000010).
For long strings of 1s the Booth algorithm is faster, since it reduces the number of additions.
The Booth algorithm handles signed numbers as well, in the same way.
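The recoding rule can be sketched in Python (booth_recode is a hypothetical helper name): digit i is bit(i-1) minus bit(i), with an implied 0 appended to the right of the multiplier.

```python
def booth_recode(multiplier: int, n: int) -> list[int]:
    """Booth-recode an n-bit multiplier into digits in {-1, 0, +1}, LSB first.

    Scanning LSB to MSB with an implied bit(-1) = 0: a 0->1 transition
    yields -1 (subtract: we first see a 1), a 1->0 transition yields +1
    (add: the bit after the last 1), otherwise 0 (no-op).
    """
    digits = []
    prev = 0                                   # the implied 0 to the right
    for i in range(n):
        bit = (multiplier >> i) & 1
        digits.append(prev - bit)
        prev = bit
    return digits

# Slide example: 0011110 recodes to 0 1 0 0 0 (-1) 0 (written MSB first)
digits = booth_recode(0b0011110, 7)
assert digits == [0, -1, 0, 0, 0, 1, 0]                       # LSB first
assert sum(d << i for i, d in enumerate(digits)) == 0b0011110  # same value
```

The final assertion confirms the recoded digits still sum to the original multiplier value: the string of four 1s has become one subtract and one add.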

Booth Algorithm: Example 1
Both operands positive: 0010 x 0110 = 0000 1100
- Append a 0 to the right of the multiplier at the start
- At each stage, examine 2 bits and perform a subtract, add, or no-op according to the Booth algorithm
- The multiplicand is subtracted from or added to the most significant 4 bits of the product
- Note the sign extension in the right shift (arithmetic right shift)

Booth Algorithm: Example 2
One operand positive, one negative: 0010 x 1101 = 1111 1010 --> same procedure (nice!):
- Append a 0 to the right of the multiplier at the start
- At each stage, examine 2 bits and perform a subtract, add, or no-op according to the Booth algorithm
- The multiplicand is subtracted from or added to the most significant 4 bits of the product
- Note the sign extension in the right shift (arithmetic right shift)
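Both examples can be checked with a short Python model (booth_multiply is a hypothetical name; the helper interprets 4-bit two's-complement patterns, not real register hardware):

```python
def booth_multiply(multiplicand: int, multiplier: int, n: int = 4) -> int:
    """Booth multiplication of two n-bit two's-complement bit patterns,
    returning the 2n-bit two's-complement pattern of the product."""
    def signed(x: int, bits: int) -> int:      # read bit pattern as signed
        return x - (1 << bits) if x & (1 << (bits - 1)) else x

    m = signed(multiplicand, n)
    product, prev = 0, 0
    for i in range(n):
        bit = (multiplier >> i) & 1
        product += (prev - bit) * (m << i)     # 0->1: subtract; 1->0: add
        prev = bit
    return product & ((1 << 2 * n) - 1)        # mask to a 2n-bit pattern

# The two slide examples
assert booth_multiply(0b0010, 0b0110) == 0b00001100   # 2 x 3 = 6
assert booth_multiply(0b0010, 0b1101) == 0b11111010   # 2 x (-3) = -6
```

Note that the negative multiplier needs no special case: two's-complement weights make the recoded digits come out right automatically, which is why the slide calls the same procedure "nice".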

MIPS Multiplication Instructions
Special-purpose registers Hi and Lo, each 32 bits, together hold the 64-bit product.
Instructions (pseudoinstructions):
mult: signed multiplication
    mult $s2, $s3    # Hi, Lo <- $s2 x $s3
multu: unsigned multiplication
mflo: fetch the 32-bit integer product (move from Lo)
    mflo $s1         # $s1 <- Lo
mfhi: used with mflo to fetch the full 64-bit product into registers
MIPS multiply instructions ignore overflow.
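The Hi/Lo split can be modeled in Python (a hypothetical mips_mult helper, not real hardware: Python integers are unbounded, so the product is masked to a 64-bit two's-complement pattern and split as mfhi/mflo would see it):

```python
def mips_mult(rs: int, rt: int) -> tuple[int, int]:
    """Model of MIPS mult: signed 32 x 32 -> 64-bit product in (Hi, Lo)."""
    product = (rs * rt) & ((1 << 64) - 1)   # 64-bit two's-complement pattern
    hi = product >> 32                      # what mfhi fetches
    lo = product & 0xFFFFFFFF               # what mflo fetches
    return hi, lo

# 2^16 x 2^16 = 2^32: the product no longer fits in Lo alone
assert mips_mult(0x00010000, 0x00010000) == (1, 0)
# (-1) x 1 = -1: the sign extends across both Hi and Lo
assert mips_mult(-1, 1) == (0xFFFFFFFF, 0xFFFFFFFF)
```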

Floating Point Numbers
We need a way to represent:
- numbers with fractions, e.g., 3.1416
- very small numbers, e.g., 0.000000001
- very large numbers, e.g., 3.15576 x 10^9
Hence we need floating-point representation, in addition to the fixed-point representation we have already studied.
Floating-point representation: sign, exponent, significand (fraction): (-1)^sign x fraction x 2^exponent
- more bits for the fraction gives more accuracy
- more bits for the exponent increases the range
IEEE 754 floating-point standard (in virtually every computer):
- single precision: 1-bit sign, 8-bit exponent, 23-bit fraction (24-bit significand = 1 + fraction; the leading "1" is implied)
- double precision: 1-bit sign, 11-bit exponent, 52-bit fraction (53-bit significand)

IEEE 754 Floating-Point Standard
Single precision:
  bit 31: sign (s) | bits 30-23: 8-bit exponent (E) | bits 22-0: 23-bit fraction
Double precision (two 32-bit words):
  word 1 -- bit 31: sign (s) | bits 30-20: 11-bit exponent (E) | bits 19-0: upper 20 bits of fraction
  word 2 -- bits 31-0: lower 32 bits of fraction
(continued)

IEEE 754 Floating-Point Standard
The leading "1" bit of the significand is implicit (normalized form), so the significand is actually 24 bits (53 bits in double precision).
The fractional portion of the significand represents a value between 0 and 1 (each bit from left to right has weight 2^-1, 2^-2, ...).
Since 0 has no leading 1, it is given the reserved exponent value 0 so that the hardware won't attach a leading 1 to it.
The exponent is "biased" (biased notation) to make sorting easier:
- all 0s is the smallest exponent (most negative); all 1s is the largest (most positive)
- bias of 127 for single precision, 1023 for double precision
Summary: (-1)^sign x (1 + fraction) x 2^(exponent - bias)
Example: -0.75 (decimal)
- decimal: -0.75 = -3/4 = -3/2^2
- binary: -0.11 = -1.1 x 2^-1
- exponent field: -1 + 127 = 126 = 01111110
- sign: 1 (negative)
- IEEE single precision: 1 01111110 10000000000000000000000

IEEE 754 Floating-Point Standard
Example: 5.0
- decimal: 5
- binary: 101 = 1.01 x 2^2
- exponent field: 2 + 127 = 129 = 10000001
- sign: 0 (positive)
- IEEE single precision: 0100 0000 1010 0000 0000 0000 0000 0000
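Both worked encodings can be verified directly with Python's struct module, which packs a float into its IEEE 754 single-precision bytes (float_to_bits is a hypothetical helper name):

```python
import struct

def float_to_bits(x: float) -> str:
    """Return the IEEE 754 single-precision encoding of x as a 32-bit string."""
    (word,) = struct.unpack(">I", struct.pack(">f", x))   # reinterpret bytes
    return f"{word:032b}"

# The two slide examples: sign | biased exponent | fraction
assert float_to_bits(-0.75) == "1" + "01111110" + "1" + "0" * 22
assert float_to_bits(5.0)   == "0" + "10000001" + "01" + "0" * 21
```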

Floating-Point Addition Steps
Example: 0.5 + (-0.4375) = 1.000_2 x 2^-1 + (-1.110_2 x 2^-2)
(assume 4 bits of precision in the significand for the sake of simplicity)
Step 1: Shift right the significand of the number with the lesser exponent until its exponent matches the larger number
  -1.110_2 x 2^-2 --> -0.111_2 x 2^-1
Step 2: Add the significands
  1.000_2 x 2^-1 + (-0.111_2 x 2^-1) = 0.001_2 x 2^-1
Step 3: Normalize the sum, checking for overflow or underflow
  0.001_2 x 2^-1 --> 1.000_2 x 2^-4
  Since 127 >= -4 >= -126, there is no overflow or underflow. (Recall that the all-0s exponent pattern is reserved for 0, and the all-1s pattern represents values and situations outside the scope of normal floating-point numbers; hence for single precision the maximum exponent is 127 and the minimum is -126.) The biased exponent would be -4 + 127 = 123.
Step 4: Round the sum
  The sum already fits in 4 bits, so rounding changes nothing.
Final result: 1.000_2 x 2^-4 (which is 0.0625)
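The four steps can be sketched in Python. This is a simplified model of the slide's example, not IEEE hardware: fp_add is a hypothetical name, significands are signed integers scaled by 2^(prec-1) so that 1.000_2 is the integer 8, and exponent range checks and guard bits are omitted.

```python
def fp_add(s1: int, e1: int, s2: int, e2: int, prec: int = 4):
    """Four-step floating-point addition on (significand, exponent) pairs."""
    # Step 1: shift right the significand with the lesser exponent to align
    if e1 < e2:
        s1 >>= e2 - e1                 # Python's >> is an arithmetic shift
        e = e2
    else:
        s2 >>= e1 - e2
        e = e1
    # Step 2: add the significands
    s = s1 + s2
    # Step 3: normalize (range checking omitted in this sketch)
    while abs(s) >= (1 << prec):       # significand overflowed: shift right
        s >>= 1
        e += 1
    while s != 0 and abs(s) < (1 << (prec - 1)):   # bring leading 1 into place
        s <<= 1
        e -= 1
    # Step 4: rounding omitted; the slide's example already fits in prec bits
    return s, e

# Slide example: 1.000_2 x 2^-1 + (-1.110_2 x 2^-2) = 1.000_2 x 2^-4 = 0.0625
s, e = fp_add(8, -1, -14, -2)
assert (s, e) == (8, -4) and s / 8 * 2.0**e == 0.0625
```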

Floating-Point Adder Hardware
[Figure: FP adder datapath for sign, exponent, and significand fields: a small ALU computes the exponent difference; a shifter shifts the smaller significand right; the big ALU adds the significands; normalization shifts left or right while incrementing or decrementing the exponent; rounding hardware produces the result. Control sequences the steps: compare exponents, shift smaller number right, add, normalize, round.]

Floating-Point Multiplication Steps
Example: 0.5 x (-0.4375) = (1.000_2 x 2^-1) x (-1.110_2 x 2^-2)
(assume 4 bits of precision in the significands for the sake of simplicity)
Step 1: Add the exponents without bias, then bias the result
  -1 + (-2) = -3 --> -3 + 127 = 124
Step 2: Multiply the significands
  1.000_2 x 1.110_2 = 1.110000_2 --> limited to 4 bits: 1.110_2
  Hence the product is 1.110_2 x 2^-3
Step 3: Normalize the product, checking for overflow or underflow
  1.110_2 x 2^-3 is already normalized; since 127 >= -3 >= -126 (or 254 >= 124 >= 1 in biased form), there is no overflow or underflow
Step 4: Round the product
  The product already fits in 4 bits, so rounding changes nothing.
Step 5: Produce the sign bit, based on the signs of the operands
  Since the signs of the operands differ, the sign of the product is negative.
Final result: -1.110_2 x 2^-3 (which is -0.21875)
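The five steps can be sketched the same way as addition. Again a simplified model of the slide's example, not IEEE hardware: fp_mul is a hypothetical name, significands are signed integers scaled by 2^(prec-1), and the truncation in Step 2 stands in for proper rounding.

```python
def fp_mul(s1: int, e1: int, s2: int, e2: int, prec: int = 4):
    """Five-step floating-point multiplication on (significand, exponent) pairs."""
    # Step 5 (computed up front): sign of the product from the operand signs
    negative = (s1 < 0) != (s2 < 0)
    s1, s2 = abs(s1), abs(s2)
    # Step 1: add the exponents (working unbiased, so no re-biasing needed here)
    e = e1 + e2
    # Step 2: multiply the significands, truncating back to prec bits
    s = (s1 * s2) >> (prec - 1)
    # Step 3: normalize (exponent range checking omitted in this sketch)
    while s >= (1 << prec):
        s >>= 1
        e += 1
    # Step 4: rounding omitted; the slide's example already fits in prec bits
    return (-s if negative else s), e

# Slide example: (1.000_2 x 2^-1) x (-1.110_2 x 2^-2) = -1.110_2 x 2^-3
assert fp_mul(8, -1, -14, -2) == (-14, -3)     # i.e. -0.21875
```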

MIPS Floating-Point Instructions
MIPS supports the IEEE 754 single- and double-precision formats.
MIPS has a separate set of floating-point registers $f0 - $f31, each 32 bits (a double-precision value uses an even-odd pair, named by the even register number).
Instructions:
add.s, add.d: floating-point addition (single, double)
    add.s $f2, $f4, $f6    # $f2 <- $f4 + $f6
sub.s, sub.d: floating-point subtraction
mul.s, mul.d, div.s, div.d: floating-point multiplication and division
c.x.s, c.x.d, bc1t, bc1f: floating-point compare and branch
lwc1, swc1: load/store a 32-bit floating-point number
    lwc1 $f1, 100($s2)     # $f1 <- Memory[$s2 + 100]

Floating Point Complexities
Operations are somewhat more complicated than those of fixed point.
In addition to overflow we can have "underflow".
Accuracy concerns: IEEE 754 keeps two extra bits, guard and round, and defines four rounding modes.
A positive number divided by zero yields "infinity"; zero divided by zero yields "not a number" (NaN); there are other complexities as well.
Implementing the standard can be tricky:
- the Pentium bug (floating-point divide flaw) in 1994, discovered by a math professor, cost Intel $500 million!
- another bug, in the floating-point-to-integer store instruction of the Pentium Pro and Pentium II in 1997; this time Intel corrected it quickly
IEEE 754 / 854 (practically the same)
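The special values mentioned above can be inspected directly from Python (using the math module's IEEE 754 constants, since plain float division by zero raises an exception in Python rather than returning infinity):

```python
import math
import struct

# "Positive divided by zero yields infinity; zero divided by zero yields NaN"
inf, nan = math.inf, math.nan

assert math.isinf(inf) and inf > 1e308
assert math.isnan(nan) and nan != nan        # NaN is unequal even to itself

# Both use the reserved all-1s exponent: fraction 0 -> infinity, nonzero -> NaN
(word,) = struct.unpack(">I", struct.pack(">f", inf))
assert f"{word:032b}" == "0" + "1" * 8 + "0" * 23
```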

Data Types

Special Symbols in Floating Point

Chapter Summary
- Additional MIPS instructions
- The design of an ALU; carry-lookahead adder; multiplication
- Computer arithmetic is constrained by limited precision (accuracy vs. range)
- Bit patterns have no inherent meaning, but standards do exist: two's complement, IEEE 754 floating point
- Computer instructions determine the "meaning" of the bit patterns
- Performance and accuracy are important, so there are many complexities in real machines (i.e., algorithms and implementation)