1 CONSTRUCTING AN ARITHMETIC LOGIC UNIT CHAPTER 4: PART II

2 An ALU (arithmetic logic unit)
Let's build a 32-bit ALU to support the and and or instructions
– we'll just build a 1-bit ALU and use 32 of them
One implementation (Figure A): use a multiplexor to select between an AND gate and an OR gate.
Another possible implementation (a direct sum-of-products formula) is shown in Figure B. Why isn't this a good solution?
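A minimal Python sketch of the Figure A idea (the names mux2 and alu1_and_or are mine, not from the slides): both gates always compute their results, and a multiplexor selects which one becomes the ALU output.

    def mux2(d0, d1, select):
        """2-input multiplexor: output d0 when select is 0, d1 when select is 1."""
        return d1 if select else d0

    def alu1_and_or(a, b, operation):
        """1-bit ALU for AND (operation=0) and OR (operation=1).

        Both gates compute in parallel; the mux picks the result,
        mirroring the Figure A approach described above."""
        return mux2(a & b, a | b, operation)

    # 1 AND 0 = 0, 1 OR 0 = 1
    assert alu1_and_or(1, 0, operation=0) == 0
    assert alu1_and_or(1, 0, operation=1) == 1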

3 Review: The Multiplexor (mux)
A mux selects one of its data inputs to be the output, based on a control input.
Let's build our ALU using a mux.
Note: we call this a 4-input mux even though it has 6 inputs in total (4 data inputs plus 2 control lines).

4 ALU Implementations
We will build a 1-bit ALU for a few instructions and then replicate it to build a 32-bit ALU for the selected instructions.
Let's first look at a 1-bit adder:
c_out = a·b + a·c_in + b·c_in
sum = a XOR b XOR c_in
How could we build a 1-bit ALU for add, and, and or? How could we build a 32-bit ALU?
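A behavioral Python check of the 1-bit adder, written directly from the two equations above (the function name full_adder is mine):

    def full_adder(a, b, carry_in):
        """1-bit full adder: sum and carry-out from the slide's equations."""
        carry_out = (a & b) | (a & carry_in) | (b & carry_in)
        total = a ^ b ^ carry_in
        return total, carry_out

    # Exhaustive check: the two outputs really add three bits.
    for a in (0, 1):
        for b in (0, 1):
            for cin in (0, 1):
                s, cout = full_adder(a, b, cin)
                assert 2 * cout + s == a + b + cin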

5 Building a 32-bit ALU
A 1-bit ALU computes Result = a operation b for single bits. The 32-bit ALU does the same where a, b, and Result are each 32 bits wide.
Each 1-bit slice has inputs a, b, Operation, and CarryIn and outputs Result and CarryOut; the CarryOut of one slice feeds the CarryIn of the next.
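A Python sketch of the replication idea, assuming a 1-bit slice with a made-up operation encoding (0 = AND, 1 = OR, 2 = ADD); the CarryOut of each slice ripples into the CarryIn of the next:

    def alu1(a, b, carry_in, operation):
        """One 1-bit ALU slice: 0 = AND, 1 = OR, 2 = ADD (encoding assumed here)."""
        if operation == 0:
            return a & b, 0
        if operation == 1:
            return a | b, 0
        total = a ^ b ^ carry_in
        carry_out = (a & b) | (a & carry_in) | (b & carry_in)
        return total, carry_out

    def alu32(a, b, operation):
        """Chain 32 slices: each CarryOut becomes the next slice's CarryIn."""
        result, carry = 0, 0
        for i in range(32):
            bit, carry = alu1((a >> i) & 1, (b >> i) & 1, carry, operation)
            result |= bit << i
        return result

    assert alu32(6, 5, 2) == 11                  # 6 + 5
    assert alu32(0b1100, 0b1010, 0) == 0b1000    # bitwise AND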

6 What about subtraction (a – b)?
Two's complement approach: just negate b and add.
How do we negate? A very clever solution: invert every bit of b and set the ALU's CarryIn to 1, since -b equals (NOT b) + 1 in two's complement.
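A Python sketch of that trick (the control name b_invert is my label for the "invert b" input): when b_invert is 1, every bit of b is flipped and the initial CarryIn supplies the extra 1, so the adder computes a + (NOT b) + 1 = a - b.

    MASK32 = 0xFFFFFFFF  # keep values to 32 bits, like the hardware registers

    def add_sub32(a, b, b_invert):
        """a + b when b_invert is 0; a - b when b_invert is 1."""
        b_in = (b ^ MASK32) if b_invert else b   # optionally invert every bit of b
        carry_in = 1 if b_invert else 0          # the '+1' of two's complement negation
        return (a + b_in + carry_in) & MASK32

    assert add_sub32(7, 5, b_invert=1) == 2
    assert add_sub32(5, 7, b_invert=1) == (-2) & MASK32   # 0xFFFFFFFE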

7 Tailoring the ALU to the MIPS
Need to support the set-on-less-than instruction (slt)
– remember: slt is an arithmetic instruction
– it produces a 1 if rs < rt and 0 otherwise
– use subtraction: (a - b) < 0 implies a < b
Need to support the test for equality (beq $t5, $t6, $t7)
– use subtraction: (a - b) = 0 implies a = b

8 Supporting slt: set on less than
Can we figure out the idea? We need a 1 when the result of the subtraction is negative.
That is, the ALU must output a 1 in the least significant bit and 0s elsewhere when a - b has a 1 in its most significant (sign) bit, and all 0s when a - b has a 0 in its most significant bit.

9 Supporting slt: set on less than

10 Test for equality: beq
Notice the control lines:
000 = and
001 = or
010 = add
110 = subtract
111 = slt
a - b is 0 exactly when all bits of a - b are 0. That is, with r = a - b, Zero = NOT (r31 OR r30 OR … OR r0).
Note: Zero is 1 when the result is zero!
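Putting the pieces together, here is a behavioral Python model of the 32-bit ALU using the control encoding above (the function name mips_alu is mine; slt is taken from the sign bit of a - b, ignoring overflow, as the rule "(a - b) < 0 implies a < b" suggests):

    MASK32 = 0xFFFFFFFF

    def mips_alu(a, b, control):
        """control: 0b000=AND, 0b001=OR, 0b010=ADD, 0b110=SUB, 0b111=SLT.
        Returns (result, zero); zero is 1 exactly when result == 0,
        which is the signal beq checks after computing a - b."""
        a &= MASK32
        b &= MASK32
        if control == 0b000:
            result = a & b
        elif control == 0b001:
            result = a | b
        elif control == 0b010:
            result = (a + b) & MASK32
        elif control == 0b110:
            result = (a - b) & MASK32
        elif control == 0b111:
            result = ((a - b) & MASK32) >> 31    # sign bit of a - b
        else:
            raise ValueError("unsupported control value")
        zero = 1 if result == 0 else 0
        return result, zero

    assert mips_alu(3, 7, 0b111) == (1, 0)   # 3 < 7, so slt produces 1
    assert mips_alu(9, 9, 0b110) == (0, 1)   # equal operands: result 0, Zero = 1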

11 Interface Representation of the 32-bit ALU

12 Conclusion
We can build an ALU to support the MIPS instruction set
– key idea: use a multiplexor to select the output we want
– we can efficiently perform subtraction using two's complement
– we can replicate a 1-bit ALU to produce a 32-bit ALU
Important points about hardware
– all of the gates are always working
– the speed of a gate is affected by the number of inputs to the gate
– the speed of a circuit is affected by the number of gates in series (on the "critical path", or the "deepest level of logic")
Our primary focus is comprehension; however,
– clever changes to organization can improve performance (similar to using better algorithms in software)
– we'll look at two examples, for addition and multiplication

13 Problem: the ripple carry adder is slow
Is a 32-bit ALU as fast as a 1-bit ALU? Is there more than one way to do addition?
– two extremes: ripple carry and sum-of-products
Can you see the ripple? How could you get rid of it?
c1 = b0·c0 + a0·c0 + a0·b0
c2 = b1·c1 + a1·c1 + a1·b1
c3 = b2·c2 + a2·c2 + a2·b2
c4 = b3·c3 + a3·c3 + a3·b3
Substituting each carry into the next, so that c2, c3, c4 are written purely in terms of the a's, b's, and c0, gives a flat two-level circuit, but the number of product terms roughly doubles with each bit position. Not feasible! (Why? The expression for c32 would be enormous, with gates needing far too many inputs.)

14 Carry-lookahead adder: an approach in between our two extremes
Motivation:
– If we didn't know the value of the carry-in, what could we do?
– When would we always generate a carry? gi = ai·bi
– When would we propagate the carry? pi = ai + bi
Did we get rid of the ripple?
c1 = g0 + p0·c0
c2 = g1 + p1·c1
c3 = g2 + p2·c2
c4 = g3 + p3·c3
Expanding each carry in terms of the g's, p's, and c0 (next slide) removes the ripple, and the expressions stay small. Feasible! Why?

15 Fast Carry, First Level of Abstraction: Propagate and Generate
c(i+1) = bi·ci + ai·ci + ai·bi = ai·bi + (ai + bi)·ci = gi + pi·ci, where gi = ai·bi and pi = ai + bi.
c1 = g0 + p0·c0
c2 = g1 + p1·c1 = g1 + p1·g0 + p1·p0·c0
c3 = g2 + p2·c2 = g2 + p2·g1 + p2·p1·g0 + p2·p1·p0·c0
c4 = g3 + p3·c3 = g3 + p3·g2 + p3·p2·g1 + p3·p2·p1·g0 + p3·p2·p1·p0·c0
c(i+1) is 1 if some earlier adder generates a carry and all the adders in between propagate it.
(Plumbing analogy for the first level.)
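The flattened equations can be checked mechanically; this small Python sketch (names are mine) computes the four carries of a 4-bit carry-lookahead stage and compares the top carry against ordinary addition:

    def cla4_carries(a, b, c0):
        """Carries c1..c4 of a 4-bit carry-lookahead adder, straight from
        the expanded equations above (a and b are 4-bit integers)."""
        g = [(a >> i) & (b >> i) & 1 for i in range(4)]     # generate bits
        p = [((a >> i) | (b >> i)) & 1 for i in range(4)]   # propagate bits
        c1 = g[0] | (p[0] & c0)
        c2 = g[1] | (p[1] & g[0]) | (p[1] & p[0] & c0)
        c3 = (g[2] | (p[2] & g[1]) | (p[2] & p[1] & g[0])
              | (p[2] & p[1] & p[0] & c0))
        c4 = (g[3] | (p[3] & g[2]) | (p[3] & p[2] & g[1])
              | (p[3] & p[2] & p[1] & g[0]) | (p[3] & p[2] & p[1] & p[0] & c0))
        return c1, c2, c3, c4

    # c4 must agree with the carry out of an ordinary 4-bit addition.
    for a in range(16):
        for b in range(16):
            assert cla4_carries(a, b, 0)[3] == ((a + b) >> 4) & 1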

16 Fast Carry, Second Level of Abstraction: Carry Lookahead
Super propagate (one per 4-bit block):
P0 = p3·p2·p1·p0
P1 = p7·p6·p5·p4
P2 = p11·p10·p9·p8
P3 = p15·p14·p13·p12
Super generate (one per 4-bit block):
G0 = g3 + p3·g2 + p3·p2·g1 + p3·p2·p1·g0
G1 = g7 + p7·g6 + p7·p6·g5 + p7·p6·p5·g4
G2 = g11 + p11·g10 + p11·p10·g9 + p11·p10·p9·g8
G3 = g15 + p15·g14 + p15·p14·g13 + p15·p14·p13·g12
Generate the block carries:
C1 = G0 + P0·c0
C2 = G1 + P1·G0 + P1·P0·c0
C3 = G2 + P2·G1 + P2·P1·G0 + P2·P1·P0·c0
C4 = G3 + P3·G2 + P3·P2·G1 + P3·P2·P1·G0 + P3·P2·P1·P0·c0 = c16
(Plumbing analogy for the next level: carry-lookahead signals P0, G0.)
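A Python sketch of the two-level scheme for 16 bits (the names are mine): each 4-bit block reports a super generate G and super propagate P, and the block carries C1..C4 come from one more level of the same lookahead equations.

    def cla16_carries(a, b, c0):
        """Block carries C1..C4 (C4 = c16) of a 16-bit two-level
        carry-lookahead adder, following the equations on this slide."""
        g = [(a >> i) & (b >> i) & 1 for i in range(16)]
        p = [((a >> i) | (b >> i)) & 1 for i in range(16)]

        def block(j):
            """Super generate and super propagate for bits 4j .. 4j+3."""
            g0, g1, g2, g3 = g[4 * j:4 * j + 4]
            p0, p1, p2, p3 = p[4 * j:4 * j + 4]
            G = g3 | (p3 & g2) | (p3 & p2 & g1) | (p3 & p2 & p1 & g0)
            P = p3 & p2 & p1 & p0
            return G, P

        (G0, P0), (G1, P1), (G2, P2), (G3, P3) = (block(j) for j in range(4))
        C1 = G0 | (P0 & c0)
        C2 = G1 | (P1 & G0) | (P1 & P0 & c0)
        C3 = G2 | (P2 & G1) | (P2 & P1 & G0) | (P2 & P1 & P0 & c0)
        C4 = (G3 | (P3 & G2) | (P3 & P2 & G1)
              | (P3 & P2 & P1 & G0) | (P3 & P2 & P1 & P0 & c0))
        return C1, C2, C3, C4

    # C4 is c16, the carry out of the whole 16-bit addition.
    assert cla16_carries(0xFFFF, 0x0001, 0)[3] == 1
    assert cla16_carries(0x1234, 0x4321, 0)[3] == 0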

17 Use this principle to build bigger adders: the 16-bit adder with carry lookahead.

18 Multiplication
More complicated than addition
– accomplished via shifting and addition, so it takes more time and more area
Let's look at 3 versions based on the grade-school algorithm:
      0010  (multiplicand)
   x  1011  (multiplier)
      ----
      0010
     0010
    0000
   0010
   -------
   0010110  (product)
Negative numbers: convert to positive, multiply, then fix the sign
– there are better techniques, but we won't look at them

19 Multiplication: Implementation: 1st Version
Multiplier = 32 bits, Multiplicand = 64 bits, Product = 64 bits; initially the Product is a 64-bit zero.
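A behavioral Python sketch of this first version, assuming the usual control sequence the figure depicts (test the multiplier's low bit, add the 64-bit Multiplicand register into the Product, shift the Multiplicand left and the Multiplier right, 32 times):

    MASK64 = 0xFFFFFFFFFFFFFFFF

    def multiply_v1(multiplicand, multiplier):
        """First version: 64-bit Multiplicand register (shifts left each step),
        32-bit Multiplier register (shifts right), 64-bit Product (starts at 0).
        Assumes 32-bit unsigned operands."""
        product = 0
        mcand = multiplicand            # starts in the right half of the 64-bit register
        for _ in range(32):
            if multiplier & 1:          # low bit of the multiplier decides whether to add
                product = (product + mcand) & MASK64    # 64-bit add
            mcand = (mcand << 1) & MASK64               # shift multiplicand left
            multiplier >>= 1                            # shift multiplier right
        return product

    assert multiply_v1(0b0010, 0b1011) == 22   # the grade-school example: 2 x 11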

20 Multiplication: Implementation: 1st Version
Problem with the 1st version: it needs 64-bit arithmetic, and half of the bits in the Multiplicand register are always 0!

21 Multiplication: Implementation: 2nd Version
Multiplier = 32 bits, Multiplicand = 32 bits, Product = 64 bits; initially the Product is a 64-bit zero.

22 Multiplication: Implementation: 2nd Version
Problem with the 2nd version: half of the Product register is always 0!

23 Multiplication: Implementation: Final Version
Multiplier = 32 bits, Multiplicand = 32 bits, Product = 64 bits. Initially the Multiplier is placed in the right half of the Product, i.e., left half of Product = 0, right half of Product = Multiplier.

24 Multiplication: Implementation: Final Version
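A behavioral Python sketch of the final version, again assuming the standard control sequence the figure depicts (test the Product's low bit, add the Multiplicand into the left half of the Product when that bit is 1, then shift the whole Product right, 32 times):

    def multiply_final(multiplicand, multiplier):
        """Final version: the Multiplier starts in the right half of the 64-bit
        Product register and its bits are consumed from the right, while the
        result accumulates from the left. Assumes 32-bit unsigned operands."""
        product = multiplier & 0xFFFFFFFF
        for _ in range(32):
            if product & 1:                              # current multiplier bit
                upper = (product >> 32) + multiplicand   # add into the left half
                product = (upper << 32) | (product & 0xFFFFFFFF)
            product >>= 1                                # shift the whole product right
        return product

    assert multiply_final(0b0010, 0b1011) == 22
    assert multiply_final(123456, 654321) == 123456 * 654321

The separate Multiplier register disappears in this version: each right shift discards the multiplier bit that was just used and makes room for one more bit of the result.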

25 Floating Point (a brief look)
We need a way to represent
– numbers with fractions
– very small numbers
– very large numbers (e.g., on the order of 10^9)
Representation:
– sign, exponent, significand: (-1)^sign × significand × 2^exponent
– more bits for the significand gives more accuracy
– more bits for the exponent increases the range
IEEE 754 floating-point standard:
– single precision [1 word]: 8-bit exponent, 23-bit significand
– double precision [2 words]: 11-bit exponent, 52-bit significand

26 IEEE 754 floating-point standard
The leading "1" bit of the significand is implicit.
The exponent is "biased" to make sorting easier
– all 0s is the smallest exponent, all 1s is the largest
– bias of 127 for single precision and 1023 for double precision
– summary: (-1)^sign × (1 + significand) × 2^(exponent - bias)
Example 1:
– decimal: -0.75 = -3/4 = -3/2^2
– binary: -0.11 = -1.1 × 2^-1
– floating point: exponent = bias + (-1) = 127 - 1 = 126 = 01111110
– IEEE single precision: 1 01111110 10000000000000000000000
Example 2:
– binary: a repeating fraction of the form 1.xx… × 2^-2 (more significand bits make it more accurate)
– floating point: exponent = 127 - 2 = 125 = 01111101
– IEEE single precision: exponent field 01111101, followed by the 23 significand bits
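Example 1 can be double-checked with Python's struct module, which gives direct access to the IEEE 754 single-precision bit pattern (the helper name is mine):

    import struct

    def single_fields(x):
        """Pack x as an IEEE 754 single and split out sign, exponent, fraction."""
        bits = struct.unpack('>I', struct.pack('>f', x))[0]
        return bits >> 31, (bits >> 23) & 0xFF, bits & 0x7FFFFF

    sign, exponent, fraction = single_fields(-0.75)
    assert sign == 1
    assert exponent == 126         # 127 + (-1), the biased exponent
    assert fraction == 0x400000    # binary 100...0: the '1' after the implicit leading 1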

27 Floating Point Complexities
Operations are somewhat more complicated (see text).
In addition to overflow we can have "underflow".
Accuracy can be a big problem
– IEEE 754 keeps two extra bits, guard and round
– four rounding modes
– positive divided by zero yields "infinity"
– zero divided by zero yields "not a number"
– other complexities
Implementing the standard can be tricky; not using the standard can be even worse
– see the text for a description of the 80x86 and the Pentium bug!

28 Chapter Four Summary
Numbers can be represented as bits; computer arithmetic is constrained by limited precision.
Bit patterns have no inherent meaning, but standards do exist
– two's complement
– IEEE 754 floating point
Computer instructions determine the "meaning" of the bit patterns.
Performance and accuracy are important, so there are many complexities in real machines (in both algorithms and implementation).
Arithmetic algorithms have been presented, and logical representations of the hardware for some arithmetic operations have been given.
We are ready to move on and implement the datapath (ALU, registers).