
1 © 2004 Morgan Kaufmann Publishers Chapter Three

2 © 2004 Morgan Kaufmann Publishers
Numbers
Bits are just bits (no inherent meaning) — conventions define the relationship between bits and numbers
Binary numbers (base 2): 0000, 0001, 0010, 0011, ...
–decimal values: 0 to 2^n – 1 for an n-bit pattern
Of course it gets more complicated:
–numbers representable are finite (overflow)
–fractions and real numbers
–negative numbers; e.g., there is no MIPS subi instruction, but addi can add a negative number
How do we represent negative numbers? That is, which bit patterns will represent which numbers?
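
To make the "bits are just bits" point concrete, here is a minimal C sketch (not from the slides; it assumes a 32-bit int on a two's-complement machine) that reads the same 32-bit pattern under two different conventions:

    #include <stdio.h>

    int main(void) {
        unsigned int u = 0xFFFFFFFFu;    /* thirty-two 1 bits */
        int s = (int)u;                  /* conversion is implementation-defined;
                                            yields -1 on two's-complement machines */
        printf("as unsigned: %u\n", u);  /* 4294967295 */
        printf("as signed:   %d\n", s);  /* -1 */
        return 0;
    }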

3 © 2004 Morgan Kaufmann Publishers
Possible Representations
  Sign Magnitude      One's Complement      Two's Complement
  000 = +0            000 = +0              000 = +0
  001 = +1            001 = +1              001 = +1
  010 = +2            010 = +2              010 = +2
  011 = +3            011 = +3              011 = +3
  100 = -0            100 = -3              100 = -4
  101 = -1            101 = -2              101 = -3
  110 = -2            110 = -1              110 = -2
  111 = -3            111 = -0              111 = -1
Issues: balance, number of zeros, ease of operations
Which one is best? Why?
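
As a cross-check of the table, a short C sketch (an illustration only, not part of the deck) that decodes every 3-bit pattern under the three conventions; the two "-0" patterns simply decode to 0 here:

    #include <stdio.h>

    int sign_magnitude(int b)  { return (b & 4) ? -(b & 3) : b; }       /* sign bit + magnitude */
    int ones_complement(int b) { return (b & 4) ? -((~b) & 3) : b; }    /* negative = bitwise complement */
    int twos_complement(int b) { return (b & 4) ? b - 8 : b; }          /* negative = value - 2^3 */

    int main(void) {
        for (int b = 0; b < 8; b++)
            printf("%d%d%d   SM=%+d   1s=%+d   2s=%+d\n",
                   (b >> 2) & 1, (b >> 1) & 1, b & 1,
                   sign_magnitude(b), ones_complement(b), twos_complement(b));
        return 0;
    }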

4 © 2004 Morgan Kaufmann Publishers
MIPS
32-bit signed numbers:
0000 0000 0000 0000 0000 0000 0000 0000 two = 0 ten
0000 0000 0000 0000 0000 0000 0000 0001 two = +1 ten
0000 0000 0000 0000 0000 0000 0000 0010 two = +2 ten
...
0111 1111 1111 1111 1111 1111 1111 1110 two = +2,147,483,646 ten
0111 1111 1111 1111 1111 1111 1111 1111 two = +2,147,483,647 ten   (maxint)
1000 0000 0000 0000 0000 0000 0000 0000 two = -2,147,483,648 ten   (minint)
1000 0000 0000 0000 0000 0000 0000 0001 two = -2,147,483,647 ten
1000 0000 0000 0000 0000 0000 0000 0010 two = -2,147,483,646 ten
...
1111 1111 1111 1111 1111 1111 1111 1101 two = -3 ten
1111 1111 1111 1111 1111 1111 1111 1110 two = -2 ten
1111 1111 1111 1111 1111 1111 1111 1111 two = -1 ten

5 © 2004 Morgan Kaufmann Publishers
Two's Complement Operations
Negating a two's complement number: invert all bits and add 1
–remember: "negate" and "invert" are quite different!
Converting n-bit numbers into numbers with more than n bits:
–MIPS 16-bit immediate gets converted to 32 bits for arithmetic
–copy the most significant bit (the sign bit) into the other bits
  0010 -> 0000 0010
  1010 -> 1111 1010
–"sign extension" (lbu vs. lb)
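
Both operations are easy to see from C (a sketch assuming 32-bit types on a two's-complement machine; not from the original slides):

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        /* Negate: invert all bits and add 1. */
        int32_t x = 5;
        int32_t neg = ~x + 1;                  /* same result as -x */
        printf("negated: %d\n", (int)neg);     /* -5 */

        /* Sign extension: copy the sign bit when widening 16 -> 32 bits,
           as MIPS does for a 16-bit immediate (and lb does, unlike lbu). */
        int16_t  imm  = -6;                    /* 0xFFFA */
        int32_t  ext  = imm;                   /* sign-extended: 0xFFFFFFFA */
        uint16_t uimm = 0xFFFA;
        uint32_t zext = uimm;                  /* zero-extended: 0x0000FFFA */
        printf("sign-extended: 0x%08X   zero-extended: 0x%08X\n",
               (unsigned)ext, (unsigned)zext);
        return 0;
    }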

6 © 2004 Morgan Kaufmann Publishers
Addition & Subtraction
Just like in grade school (carry/borrow 1s)
Two's complement operations are easy
–subtraction using addition of negative numbers
Overflow (result too large for finite computer word):
–e.g., adding two n-bit numbers does not yield an n-bit number:
    0111
  + 0001        note that the term "overflow" is somewhat misleading;
    1000        it does not mean a carry "overflowed"
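
The overflow case above can be replayed mechanically; this tiny C sketch (illustrative only) masks the sum to 4 bits and reinterprets it in two's complement:

    #include <stdio.h>

    int main(void) {
        int a = 7, b = 1;                        /* 0111 + 0001 */
        int raw = (a + b) & 0xF;                 /* keep only 4 bits: 1000 */
        int val = (raw & 0x8) ? raw - 16 : raw;  /* 4-bit two's-complement value */
        printf("0111 + 0001 = %d%d%d%d, interpreted as %d\n",
               (raw >> 3) & 1, (raw >> 2) & 1, (raw >> 1) & 1, raw & 1, val);
        return 0;
    }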

7 © 2004 Morgan Kaufmann Publishers
Detecting Overflow
No overflow when adding a positive and a negative number
No overflow when signs are the same for subtraction
Overflow occurs when the value affects the sign:
–overflow when adding two positives yields a negative
–or, adding two negatives gives a positive
–or, subtract a negative from a positive and get a negative
–or, subtract a positive from a negative and get a positive
Consider the operations A + B and A – B
–Can overflow occur if B is 0?
–Can overflow occur if A is 0?
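
A minimal C sketch of that sign-based rule (an illustration under stated assumptions, not the MIPS hardware): the arithmetic is done in unsigned so the wraparound itself is well defined, and the cast back to int32_t is implementation-defined but behaves as expected on two's-complement machines.

    #include <stdio.h>
    #include <stdint.h>
    #include <stdbool.h>

    /* Addition overflows only when both operands have the same sign and
       the result has the opposite sign. */
    bool add_overflows(int32_t a, int32_t b) {
        int32_t sum = (int32_t)((uint32_t)a + (uint32_t)b);
        return ((a >= 0) == (b >= 0)) && ((sum >= 0) != (a >= 0));
    }

    /* Subtraction overflows only when the operands have different signs and
       the result's sign differs from the first operand's. */
    bool sub_overflows(int32_t a, int32_t b) {
        int32_t diff = (int32_t)((uint32_t)a - (uint32_t)b);
        return ((a >= 0) != (b >= 0)) && ((diff >= 0) != (a >= 0));
    }

    int main(void) {
        printf("%d %d\n", add_overflows(INT32_MAX, 1), add_overflows(5, -7));  /* 1 0 */
        return 0;
    }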

8 © 2004 Morgan Kaufmann Publishers
Effects of Overflow
An exception (interrupt) occurs
–Control jumps to a predefined address for the exception
–Interrupted address is saved for possible resumption
Details based on software system / language
–example: flight control vs. homework assignment
Don't always want to detect overflow — new MIPS instructions: addu, addiu, subu
–note: addiu still sign-extends!
–note: sltu, sltiu for unsigned comparisons

9 © 2004 Morgan Kaufmann Publishers
Multiplication
More complicated than addition
–accomplished via shifting and addition
More time and more area
Let's look at 3 versions based on a grade-school algorithm:
    1010   (multiplicand)
  x 1001   (multiplier)
Negative numbers: convert and multiply
–there are better techniques; we won't look at them
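
The grade-school idea maps directly onto a loop. This C sketch (not from the slides) mirrors the first hardware version: test the low bit of the multiplier, conditionally add, then shift the multiplicand left and the multiplier right.

    #include <stdio.h>
    #include <stdint.h>

    uint64_t shift_add_multiply(uint32_t multiplicand, uint32_t multiplier) {
        uint64_t product = 0;
        uint64_t mcand = multiplicand;          /* widened so left shifts don't lose bits */
        for (int i = 0; i < 32; i++) {
            if (multiplier & 1)                 /* low multiplier bit set? */
                product += mcand;               /* add the shifted multiplicand */
            mcand <<= 1;
            multiplier >>= 1;
        }
        return product;
    }

    int main(void) {
        printf("%llu\n", (unsigned long long)shift_add_multiply(10, 9));   /* 1010 x 1001 = 90 */
        return 0;
    }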

10 © 2004 Morgan Kaufmann Publishers Multiplication: Implementation (figure: first-version multiply hardware, showing the datapath and control)

11 © 2004 Morgan Kaufmann Publishers Final Version (figure: refined multiply hardware, with the callout "What goes here?"; the multiplier starts in the right half of the product register)
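
An illustrative C rendering of the refined design in the figure (a sketch of the idea, not the actual control logic): a single 64-bit product register whose right half initially holds the multiplier, with the adder's carry-out shifted back in on the right shift.

    #include <stdint.h>

    uint64_t multiply_final(uint32_t multiplicand, uint32_t multiplier) {
        uint64_t product = multiplier;                      /* multiplier starts in right half */
        for (int i = 0; i < 32; i++) {
            uint64_t carry = 0;
            if (product & 1) {                              /* test low bit of product */
                uint64_t addend = (uint64_t)multiplicand << 32;
                carry = product > UINT64_MAX - addend;      /* adder carry-out (33rd bit) */
                product += addend;                          /* add into left half */
            }
            product = (product >> 1) | (carry << 63);       /* shift right; carry shifts in */
        }
        return product;                                     /* full 64-bit product */
    }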

12 © 2004 Morgan Kaufmann Publishers
Division
Dividend = Quotient x Divisor + Remainder
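
That identity is exactly the invariant a restoring-division loop maintains. A C sketch for unsigned operands (illustrative only; the divisor is assumed nonzero):

    #include <stdint.h>

    void divide(uint32_t dividend, uint32_t divisor,         /* divisor must be nonzero */
                uint32_t *quotient, uint32_t *remainder) {
        uint32_t q = 0;
        uint64_t r = 0;                                      /* wide enough for the shifted remainder */
        for (int i = 31; i >= 0; i--) {
            r = (r << 1) | ((dividend >> i) & 1);            /* bring down the next dividend bit */
            if (r >= divisor) {                              /* divisor goes in: subtract, set quotient bit */
                r -= divisor;
                q |= 1u << i;
            }
        }
        *quotient = q;                                       /* dividend == q * divisor + r, with r < divisor */
        *remainder = (uint32_t)r;
    }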

13 © 2004 Morgan Kaufmann Publishers
Floating Point (a brief look)
We need a way to represent
–numbers with fractions, e.g., 3.1416
–very small numbers, e.g., 0.000000001
–very large numbers, e.g., 3.15576 x 10^9
Representation:
–sign, exponent, significand: (-1)^sign x significand x 2^exponent
–more bits for the significand gives more accuracy
–more bits for the exponent increases range
IEEE 754 floating point standard:
–single precision: 8-bit exponent, 23-bit significand
–double precision: 11-bit exponent, 52-bit significand

14 © 2004 Morgan Kaufmann Publishers
IEEE 754 floating-point standard
Leading "1" bit of significand is implicit
Exponent is "biased" to make sorting easier
–all 0s is the smallest exponent, all 1s is the largest
–bias of 127 for single precision and 1023 for double precision
–summary: (-1)^sign x (1 + significand) x 2^(exponent – bias)
Example:
–decimal: -0.75 = -(1/2 + 1/4)
–binary: -0.11 = -1.1 x 2^-1
–floating point: exponent = -1 + 127 = 126 = 0111 1110
–IEEE single precision: 1011 1111 0100 0000 0000 0000 0000 0000
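
The -0.75 example can be checked by pulling the fields out of a C float (a sketch that assumes the usual IEEE 754 single-precision layout):

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    int main(void) {
        float f = -0.75f;
        uint32_t bits;
        memcpy(&bits, &f, sizeof bits);             /* reinterpret the 32 bits */
        unsigned sign     = bits >> 31;
        unsigned exponent = (bits >> 23) & 0xFF;    /* 8-bit biased exponent */
        unsigned fraction = bits & 0x7FFFFF;        /* 23-bit fraction (implicit leading 1 not stored) */
        printf("sign=%u exponent=%u fraction=0x%06X\n", sign, exponent, fraction);
        /* prints: sign=1 exponent=126 fraction=0x400000 */
        return 0;
    }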

15 © 2004 Morgan Kaufmann Publishers Floating point addition (figure: flowchart of the floating-point addition algorithm)
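
Since the slide itself is a flowchart, here is a very reduced C sketch of the same steps for two positive, already-normalized operands: align exponents, add significands, renormalize. Rounding, subtraction, and special cases are omitted, and the fp_t layout (the 24-bit significand 1.f kept in bits 23..0) is an assumption made for illustration.

    #include <stdio.h>
    #include <stdint.h>

    typedef struct { int exp; uint32_t sig; } fp_t;         /* value = (sig / 2^23) * 2^exp */

    fp_t fp_add(fp_t a, fp_t b) {
        if (a.exp < b.exp) { fp_t t = a; a = b; b = t; }    /* make a the larger-exponent operand */
        uint32_t bsig = b.sig >> (a.exp - b.exp);           /* 1. align (assumes exponent gap < 32) */
        uint32_t sum  = a.sig + bsig;                       /* 2. add significands */
        int exp = a.exp;
        while (sum >= (1u << 24)) { sum >>= 1; exp++; }     /* 3. renormalize */
        fp_t r = { exp, sum };
        return r;
    }

    int main(void) {
        fp_t a = { 1, 0xC00000 };                           /* 1.5 x 2^1 = 3.0 */
        fp_t b = { 0, 0x800000 };                           /* 1.0 x 2^0 = 1.0 */
        fp_t s = fp_add(a, b);
        printf("exp=%d sig=0x%06X\n", s.exp, (unsigned)s.sig);   /* 1.0 x 2^2 = 4.0 */
        return 0;
    }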

16 © 2004 Morgan Kaufmann Publishers Floating point multiplication (figure: flowchart of the floating-point multiplication algorithm)
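
A companion sketch for the multiplication flowchart, using the same illustrative fp_t layout as above: add the (unbiased) exponents, multiply the significands, renormalize; signs and rounding are again omitted.

    #include <stdio.h>
    #include <stdint.h>

    typedef struct { int exp; uint32_t sig; } fp_t;         /* value = (sig / 2^23) * 2^exp */

    fp_t fp_mul(fp_t a, fp_t b) {
        int exp = a.exp + b.exp;                            /* 1. add the (unbiased) exponents */
        uint64_t prod = (uint64_t)a.sig * b.sig;            /* 2. multiply 24-bit significands */
        prod >>= 23;                                        /*    put the binary point back after bit 23 */
        if (prod >= (1u << 24)) { prod >>= 1; exp++; }      /* 3. renormalize */
        fp_t r = { exp, (uint32_t)prod };
        return r;
    }

    int main(void) {
        fp_t a = { 0, 0xC00000 };                           /* 1.5 */
        fp_t b = { 0, 0xC00000 };                           /* 1.5 */
        fp_t p = fp_mul(a, b);
        printf("exp=%d sig=0x%06X\n", p.exp, (unsigned)p.sig);   /* 1.125 x 2^1 = 2.25 */
        return 0;
    }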

17 © 2004 Morgan Kaufmann Publishers
Floating Point Complexities
Operations are somewhat more complicated (see text)
In addition to overflow we can have "underflow"
Accuracy can be a big problem
–IEEE 754 keeps two extra bits, guard and round
–four rounding modes
–positive divided by zero yields "infinity"
–zero divided by zero yields "not a number"
–other complexities
Implementing the standard can be tricky
Not using the standard can be even worse
–see text for description of 80x86 and Pentium bug!
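
Several of these behaviors are easy to observe from C on an IEEE 754 machine (a quick illustration, not from the slides):

    #include <stdio.h>

    int main(void) {
        double zero   = 0.0;
        double tiny   = 1e-300 * 1e-300;    /* underflows toward 0 */
        double inf    = 1.0 / zero;         /* positive divided by zero -> "infinity" */
        double notnum = zero / zero;        /* zero divided by zero -> "not a number" */
        double sum    = 0.1 + 0.2;          /* rounding: not exactly 0.3 */
        printf("%g %g %g %.17g\n", tiny, inf, notnum, sum);
        return 0;
    }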

18 © 2004 Morgan Kaufmann Publishers
Chapter Three Summary
Computer arithmetic is constrained by limited precision
Bit patterns have no inherent meaning, but standards do exist
–two's complement
–IEEE 754 floating point
Computer instructions determine the "meaning" of the bit patterns
Performance and accuracy are important, so there are many complexities in real machines
Algorithm choice is important and may lead to hardware optimizations for both space and time (e.g., multiplication)
You may want to look back (Section 3.10 is great reading!)