1 CONSTRUCTING AN ARITHMETIC LOGIC UNIT CHAPTER 4: PART II
2 An ALU (arithmetic logic unit)
Let's build a 32-bit ALU to support the and and or instructions
–we'll just build a 1-bit ALU and use 32 of them
–one implementation (Figure A): use a mux to choose between an AND gate and an OR gate
–another possible implementation (Figure B): one sum-of-products formula per result bit r0 … r31
–why isn't that a good solution?
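A minimal sketch of the Figure A idea in plain Python, assuming the usual control convention (0 selects AND, 1 selects OR); illustrative only, not the book's circuit:

```python
# 1-bit ALU for and/or: both gates compute in parallel and a 2-to-1 mux,
# driven by the Operation control, picks which output becomes the Result.
def alu1_and_or(a: int, b: int, operation: int) -> int:
    """a and b are single bits; operation: 0 selects AND, 1 selects OR."""
    and_out = a & b
    or_out = a | b
    return and_out if operation == 0 else or_out   # the multiplexor
```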
3 Review: The Multiplexor (mux)
Selects one of the inputs to be the output, based on a control input
Let's build our ALU using a MUX
–note: we call this a 4-input mux even though it has 6 input wires (4 data inputs plus 2 select lines)!
4 ALU Implementations
We will build a 1-bit ALU for some instructions, from which we will build a 32-bit ALU for the selected instructions.
Let's first look at a 1-bit adder:
–c_out = a·b + a·c_in + b·c_in
–sum = a xor b xor c_in
How could we build a 1-bit ALU for add, and, and or?
How could we build a 32-bit ALU?
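A small Python sketch of the 1-bit adder equations above, just to make them concrete:

```python
# 1-bit full adder: sum = a xor b xor c_in, c_out = a.b + a.c_in + b.c_in
def full_adder(a: int, b: int, carry_in: int):
    s = a ^ b ^ carry_in
    c_out = (a & b) | (a & carry_in) | (b & carry_in)
    return s, c_out
```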
5 Building a 32-bit ALU
–1-bit ALU: Result = a operation b, with CarryIn and CarryOut signals and an Operation control input
–32-bit ALU: replicate the 1-bit ALU 32 times; a, b, and Result are 32 bits each, and each cell's CarryOut feeds the next cell's CarryIn
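A sketch of that ripple organization, with an assumed control encoding (0 = and, 1 = or, 2 = add); the actual MIPS control-line encoding appears on slide 10:

```python
# 32-bit ALU from 32 one-bit cells: each cell's CarryOut feeds the next CarryIn.
def alu32(a: int, b: int, operation: int):
    result, carry = 0, 0
    for i in range(32):
        ai, bi = (a >> i) & 1, (b >> i) & 1
        s = ai ^ bi ^ carry                                   # 1-bit sum
        carry_out = (ai & bi) | (ai & carry) | (bi & carry)   # 1-bit CarryOut
        bit = {0: ai & bi, 1: ai | bi, 2: s}[operation]       # mux selects the Result bit
        result |= bit << i
        carry = carry_out                                     # ripple to the next cell
    return result, carry
```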
6 What about subtraction (a – b)?
Two's complement approach: just negate b and add.
How do we negate? A very clever solution: invert every bit of b and set the low-order CarryIn to 1, so the adder computes a + (NOT b) + 1 = a – b.
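A sketch of that trick (my assumption of the invert-b-plus-CarryIn scheme, not the book's figure):

```python
# Subtraction reuses the adder: invert every bit of b and set the low-order
# CarryIn to 1, so the hardware adds a + (NOT b) + 1 = a - b.
def sub32(a: int, b: int) -> int:
    b_inv = ~b & 0xFFFFFFFF            # invert b (32 bits wide)
    result, carry = 0, 1               # the clever part: initial CarryIn = 1
    for i in range(32):
        ai, bi = (a >> i) & 1, (b_inv >> i) & 1
        result |= (ai ^ bi ^ carry) << i
        carry = (ai & bi) | (ai & carry) | (bi & carry)
    return result                      # a - b as a 32-bit two's complement pattern
```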
7 Tailoring the ALU to the MIPS
Need to support the set-on-less-than instruction (slt)
–remember: slt is an arithmetic instruction
–produces a 1 if rs < rt and 0 otherwise
–use subtraction: (a – b) < 0 implies a < b
Need to support test for equality (beq $t5, $t6, $t7)
–use subtraction: (a – b) = 0 implies a = b
8 Supporting slt: set on less than
Can we figure out the idea? We need a 1 when the result of the subtraction is negative:
–a 1 in the least significant bit and 0s in all other bits when a – b has a 1 in its most significant (sign) bit
–0s in all bits when a – b has a 0 in its most significant bit
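A sketch of that idea (ignoring overflow, which the real hardware also has to handle):

```python
# slt: route the sign bit of a - b into bit 0 of the Result; all other bits are 0.
def slt32(a: int, b: int) -> int:
    diff = (a - b) & 0xFFFFFFFF        # 32-bit two's complement of a - b
    sign = (diff >> 31) & 1            # most significant bit: 1 when a - b is negative
    return sign                        # Result = 000...0001 if a < b, else 000...0000
```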
9 Supporting slt: set on less than
10 Test for equality: beq
Notice the control lines:
–000 = and
–001 = or
–010 = add
–110 = subtract
–111 = slt
a – b is 0 only if every bit of a – b is 0. So with r = a – b:
–Zero = NOT (r31 OR r30 OR … OR r0)
–note: Zero is 1 when the result is zero!
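A sketch of the Zero output described above (illustrative Python, not the gate-level NOR):

```python
# Zero = NOT(r31 OR r30 OR ... OR r0), where r = a - b.
def zero_flag(a: int, b: int) -> int:
    r = (a - b) & 0xFFFFFFFF
    any_bit = 0
    for i in range(32):
        any_bit |= (r >> i) & 1        # OR together every result bit
    return 1 - any_bit                 # 1 exactly when a == b
```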
11 Interface Representation of the 32-bit ALU
12 Conclusion
We can build an ALU to support the MIPS instruction set
–key idea: use a multiplexor to select the output we want
–we can efficiently perform subtraction using two's complement
–we can replicate a 1-bit ALU to produce a 32-bit ALU
Important points about hardware
–all of the gates are always working
–the speed of a gate is affected by the number of inputs to the gate
–the speed of a circuit is affected by the number of gates in series (on the “critical path” or the “deepest level of logic”)
Our primary focus is comprehension; however:
–clever changes to organization can improve performance (similar to using better algorithms in software)
–we'll look at two examples, for addition and multiplication
13 Problem: the ripple carry adder is slow
Is a 32-bit ALU as fast as a 1-bit ALU? Is there more than one way to do addition?
–two extremes: ripple carry and sum-of-products
Ripple carry:
–c1 = b0·c0 + a0·c0 + a0·b0
–c2 = b1·c1 + a1·c1 + a1·b1
–c3 = b2·c2 + a2·c2 + a2·b2
–c4 = b3·c3 + a3·c3 + a3·b3
Can you see the ripple? How could you get rid of it?
Sum-of-products: substitute each carry into the next, so c2, c3, c4 are written directly in terms of the a's, b's, and c0 (the expansion for c2 alone is a1·b1 + a1·a0·b0 + a1·a0·c0 + a1·b0·c0 + b1·a0·b0 + b1·a0·c0 + b1·b0·c0)
–Not feasible! Why?
14 Carry-lookahead adder: an approach in between our two extremes
Motivation:
–if we didn't know the value of carry-in, what could we do?
–when would we always generate a carry? gi = ai·bi
–when would we propagate the carry? pi = ai + bi
Did we get rid of the ripple?
–c1 = g0 + p0·c0
–c2 = g1 + p1·c1
–c3 = g2 + p2·c2
–c4 = g3 + p3·c3
Substituting each carry into the next expresses every ci in terms of the g's, p's, and c0 only (next slide)
–Feasible! Why?
15 Fast Carry, First Level of Abstraction: Propagate and Generate
c_i+1 = bi·ci + ai·ci + ai·bi = ai·bi + (ai + bi)·ci = gi + pi·ci, where gi = ai·bi and pi = ai + bi
–c1 = g0 + p0·c0
–c2 = g1 + p1·c1 = g1 + p1·g0 + p1·p0·c0
–c3 = g2 + p2·c2 = g2 + p2·g1 + p2·p1·g0 + p2·p1·p0·c0
–c4 = g3 + p3·c3 = g3 + p3·g2 + p3·p2·g1 + p3·p2·p1·g0 + p3·p2·p1·p0·c0
c_i+1 is 1 if an earlier adder generates a carry and all intermediate adders propagate it
(Plumbing analogy for the first level)
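A sketch of a 4-bit carry-lookahead stage using the formulas above (illustrative Python only):

```python
# All four carries come directly from the generate/propagate signals and c0 -- no ripple.
def carry_lookahead_4(a_bits, b_bits, c0):
    """a_bits, b_bits: lists of 4 bits, index 0 = least significant."""
    g = [a & b for a, b in zip(a_bits, b_bits)]     # generate:  gi = ai.bi
    p = [a | b for a, b in zip(a_bits, b_bits)]     # propagate: pi = ai + bi
    c1 = g[0] | (p[0] & c0)
    c2 = g[1] | (p[1] & g[0]) | (p[1] & p[0] & c0)
    c3 = g[2] | (p[2] & g[1]) | (p[2] & p[1] & g[0]) | (p[2] & p[1] & p[0] & c0)
    c4 = (g[3] | (p[3] & g[2]) | (p[3] & p[2] & g[1])
          | (p[3] & p[2] & p[1] & g[0]) | (p[3] & p[2] & p[1] & p[0] & c0))
    return c1, c2, c3, c4
```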
16 Fast Carry, Second Level of Abstraction: Carry Lookahead
Super Propagate (one per 4-bit group)
–P0 = p3·p2·p1·p0
–P1 = p7·p6·p5·p4
–P2 = p11·p10·p9·p8
–P3 = p15·p14·p13·p12
Super Generate
–G0 = g3 + p3·g2 + p3·p2·g1 + p3·p2·p1·g0
–G1 = g7 + p7·g6 + p7·p6·g5 + p7·p6·p5·g4
–G2 = g11 + p11·g10 + p11·p10·g9 + p11·p10·p9·g8
–G3 = g15 + p15·g14 + p15·p14·g13 + p15·p14·p13·g12
Generate the final carries
–C1 = G0 + P0·c0
–C2 = G1 + P1·G0 + P1·P0·c0
–C3 = G2 + P2·G1 + P2·P1·G0 + P2·P1·P0·c0
–C4 = G3 + P3·G2 + P3·P2·G1 + P3·P2·P1·G0 + P3·P2·P1·P0·c0 = c16
(Plumbing analogy for the next level, using the carry-lookahead signals P0, G0)
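A sketch of that second level for a 16-bit adder, following the formulas above (illustrative only):

```python
# Each 4-bit group supplies a "super" propagate Pi and generate Gi; the group
# carries C1..C4 (C4 = c16) are then computed without any ripple between groups.
def second_level_carries(p, g, c0):
    """p, g: lists of 16 per-bit propagate/generate signals (bit 0 first)."""
    P, G = [], []
    for grp in range(4):
        p0, p1, p2, p3 = (p[4 * grp + k] for k in range(4))
        g0, g1, g2, g3 = (g[4 * grp + k] for k in range(4))
        P.append(p3 & p2 & p1 & p0)                                      # super propagate
        G.append(g3 | (p3 & g2) | (p3 & p2 & g1) | (p3 & p2 & p1 & g0))  # super generate
    C1 = G[0] | (P[0] & c0)
    C2 = G[1] | (P[1] & G[0]) | (P[1] & P[0] & c0)
    C3 = G[2] | (P[2] & G[1]) | (P[2] & P[1] & G[0]) | (P[2] & P[1] & P[0] & c0)
    C4 = (G[3] | (P[3] & G[2]) | (P[3] & P[2] & G[1])
          | (P[3] & P[2] & P[1] & G[0]) | (P[3] & P[2] & P[1] & P[0] & c0))
    return C1, C2, C3, C4
```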
17 Use this principle to build bigger adders: the 16-bit adder with carry lookahead
18 Multiplication
More complicated than addition
–accomplished via shifting and addition
–more time and more area
Let's look at 3 versions based on the grade-school algorithm:
  0010 (multiplicand)
x 1011 (multiplier)
Negative numbers: convert to positive, multiply, then fix the sign
–there are better techniques; we won't look at them
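A small Python sketch of the grade-school algorithm on this example, just to show the shifting and adding:

```python
# For each 1 bit in the multiplier, add the multiplicand shifted left by that
# bit's position -- exactly the partial products of the longhand method.
def gradeschool_multiply(multiplicand: int, multiplier: int) -> int:
    product, position = 0, 0
    while multiplier != 0:
        if multiplier & 1:
            product += multiplicand << position
        multiplier >>= 1
        position += 1
    return product

assert gradeschool_multiply(0b0010, 0b1011) == 0b10110   # 2 x 11 = 22
```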
19 Multiplication: Implementation: 1st Version
–Multiplier register = 32 bits, Multiplicand register = 64 bits, Product register = 64 bits
–initially the Product is all zeros
20 Multiplication: Implementation: 1st Version
Problem with the 1st version: it needs 64-bit arithmetic, and half of the Multiplicand register is 0!
21 Multiplication: Implementation: 2nd Version
–Multiplier register = 32 bits, Multiplicand register = 32 bits, Product register = 64 bits
–initially the Product is all zeros
22 Multiplication: Implementation: 2nd Version
Problem with the 2nd version: half of the Product register is always 0!
23 Multiplication: Implementation: Final Version
–Multiplier = 32 bits, Multiplicand register = 32 bits, Product register = 64 bits
–initially, put the Multiplier in the right half of the Product, i.e. left half of Product = 0, right half of Product = Multiplier
24 Multiplication: Implementation: Final Version
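A sketch of the final version's register behaviour, under the assumptions on the previous slide (64-bit Product with the multiplier loaded into its right half); unsigned operands only:

```python
# Each of the 32 steps: if the low bit of Product is 1, add the multiplicand into
# the left half of Product; then shift the whole Product register right by 1.
def multiply_final(multiplicand: int, multiplier: int) -> int:
    product = multiplier & 0xFFFFFFFF                # right half = multiplier, left half = 0
    for _ in range(32):
        if product & 1:                              # test the low bit of Product
            product += (multiplicand & 0xFFFFFFFF) << 32
        product >>= 1                                # shift right; the add's carry is retained
    return product                                   # 64-bit product

assert multiply_final(0b0010, 0b1011) == 22
```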
25 Floating Point (a brief look)
We need a way to represent
–numbers with fractions, e.g., 3.1416
–very small numbers, e.g., 0.000000001
–very large numbers, e.g., 3.15576 × 10^9
Representation:
–sign, exponent, significand: (–1)^sign × significand × 2^exponent
–more bits for the significand gives more accuracy
–more bits for the exponent increases range
IEEE 754 floating point standard:
–single precision [1 word]: 8-bit exponent, 23-bit significand
–double precision [2 words]: 11-bit exponent, 52-bit significand
26 IEEE 754 floating-point standard
Leading “1” bit of the significand is implicit
Exponent is “biased” to make sorting easier
–all 0s is the smallest exponent, all 1s is the largest
–bias of 127 for single precision and 1023 for double precision
–summary: (–1)^sign × significand × 2^(exponent – bias)
Example 1:
–decimal: –0.75 = –3/4 = –3/2^2
–binary: –0.11 = –1.1 × 2^–1
–floating point: exponent = bias + (–1) = 127 – 1 = 126 = 01111110
–IEEE single precision: 1 01111110 10000000000000000000000
Example 2:
–decimal: 0.356
–binary: 0.01011011001 (can be more accurate) = 1.011011001 × 2^–2
–floating point: exponent = 127 – 2 = 125 = 01111101
–IEEE single precision: 0 01111101 01101100100000000000000
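A quick check of Example 1 (a sketch using Python's standard struct packing, not part of the slides):

```python
import struct

# Pack -0.75 as an IEEE 754 single-precision value and pull the fields apart.
bits = struct.unpack(">I", struct.pack(">f", -0.75))[0]
sign        = bits >> 31               # 1
exponent    = (bits >> 23) & 0xFF      # 126 = 0b01111110 (bias 127, true exponent -1)
significand = bits & 0x7FFFFF          # 0b100...0, the leading 1 is implicit
assert format(bits, "032b") == "1" + "01111110" + "1" + "0" * 22
```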
27 Floating Point Complexities
Operations are somewhat more complicated (see text)
In addition to overflow we can have “underflow”
Accuracy can be a big problem
–IEEE 754 keeps two extra bits, guard and round
–four rounding modes
–positive divided by zero yields “infinity”
–zero divided by zero yields “not a number”
–other complexities
Implementing the standard can be tricky
Not using the standard can be even worse
–see text for a description of the 80x86 and the Pentium bug!
28 Chapter Four Summary
Numbers can be represented as bits. Computer arithmetic is constrained by limited precision
Bit patterns have no inherent meaning, but standards do exist
–two's complement
–IEEE 754 floating point
Computer instructions determine the “meaning” of the bit patterns
Performance and accuracy are important, so there are many complexities in real machines (i.e., algorithms and implementation)
Arithmetic algorithms have been presented, and logical representations of the hardware for some arithmetic operations have been given
We are ready to move on (and implement the datapath: ALU, registers)