Division and Floats
CS/COE 0447, Jarrett Billingsley

Class announcements
- how was the project :^)
- last day of new material
- exam review Wednesday; exam next TUESDAY because of the 3-day weekend

A few more odds and ends

Doing multiplication faster
let's look at how we're doing it now:

     77
  ×  54
  -----
    308   (77 × 4)
 + 3850   (77 × 50)
  -----
   4158

our algorithm does this: (7×4 + (70×4 + (7×50 + (70×50))))
but we could go left-to-right and get the same answer. what do you know about the order of addition? addition is commutative.
so we could also group it as (7×4 + 70×4) + (7×50 + 70×50).
what if we had ten people working on the same 8-digit by 8-digit multiplication?
- we could assign one digit of the multiplier to each person to get 8 partial products
- then each pair of people could add their results to get 4 intermediates
- and then have 2 pairs of those add their results to get 2 intermediates
- and then add those 2 intermediates to get a final result.

Divide-and-conquer… in parallel
an n×n-digit multiplication can be broken into n n×1-digit ones
the partial products can be summed in any order (thanks, commutativity), and all operations in the same column can be done in parallel:

  1011 × 0101
  step 1: compute all four partial products at once: 1011×1, 1011×0 (shifted), 1011×100, 1011×0 (shifted)
  step 2: add the partial products pairwise to get two intermediate sums
  step 3: add the two intermediate sums to get the final product

now our multiplication takes only 3 steps instead of 4. but this is an O(log n) algorithm! so for 32 bits…
- the first column – calculating the partial products – can be done extremely quickly
- essentially each one is an "AND <all 0s>" or "AND <all 1s>" followed by a left shift
it takes 6 steps instead of 32!
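to make the dataflow concrete, here's a minimal C sketch of the same idea (not from the slides; tree_multiply is a hypothetical name). it computes all the partial products in one step, then sums them in log2(32) = 5 rounds of pairwise adds. in hardware the adds within a round happen simultaneously; the loop just models that order.

```c
#include <stdint.h>

uint64_t tree_multiply(uint32_t a, uint32_t b) {
    uint64_t partial[32];

    // "step 1": each partial product is a shifted copy of a, ANDed with
    // all-0s or all-1s depending on one bit of b -- all 32 are independent.
    for (int i = 0; i < 32; i++) {
        uint64_t mask = ((b >> i) & 1) ? ~0ULL : 0ULL;
        partial[i] = ((uint64_t)a << i) & mask;
    }

    // 5 rounds of pairwise sums; within a round, every addition is
    // independent of the others, so hardware can do them all at once.
    for (int width = 32; width > 1; width /= 2)
        for (int i = 0; i < width / 2; i++)
            partial[i] = partial[2*i] + partial[2*i + 1];

    return partial[0];  // 1 + 5 = 6 "steps" instead of 32
}
```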

But division…
if we try to do something similar, well… what's the difference between addition and subtraction? subtraction is not commutative.
you can do the steps in any order… but you can't do them at the same time.

  1011 ÷ 101:
  1011 ÷ 101000 = 0
  1011 ÷ 10100  = 0
  1011 ÷ 1010   = 1 R 1
  1011 ÷ 101    = 1 R 110??
  we cannot know the answer to this step… …until we know the answer to the previous one.

- the right answer is 10 R 1 (11 ÷ 5 = 2 R 1)
- you CAN, believe it or not, divide right-to-left, and you WILL get the same answer!
- but if you do, say, 999÷4 = 249 R 3… starting from the left, this is easy. starting from the right, you will end up with 903÷4, which is just as hard as the original division!
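for contrast, here's a C sketch of the grade-school algorithm (a hypothetical helper, not course code). notice that each iteration reads the remainder the previous iteration produced – exactly the serial dependency described above.

```c
#include <stdint.h>

// binary long division ("restoring division"): computes q and r such
// that dividend = divisor * q + r, producing one quotient bit per step.
uint32_t long_divide(uint32_t dividend, uint32_t divisor, uint32_t *rem) {
    uint32_t quotient = 0, remainder = 0;

    for (int i = 31; i >= 0; i--) {
        // bring down the next dividend bit -- this needs the remainder
        // left over from the previous step, so the steps can't overlap
        remainder = (remainder << 1) | ((dividend >> i) & 1);
        if (remainder >= divisor) {
            remainder -= divisor;
            quotient |= 1u << i;
        }
    }
    *rem = remainder;
    return quotient;   // 11 ÷ 5 gives 2 R 1, as on the slide
}
```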

Division is fundamentally slower
each step depends on the previous one. we cannot split it up into subproblems like with multiplication.
the only way to make division faster is to guess.
SRT division is a way of predicting quotient bits based on the next few bits of the dividend and divisor
but it can make mistakes, and they have to be corrected
the original Pentium CPU in 1994 messed this up, Intel pretended everything was OK, people got mad, they had to recall millions of chips, and Intel lost half a billion dollars. lol

Arithmetic right shift
shifting right is like dividing by powers of 2, right? a >> n = a ÷ 2^n
if we shift this pattern right with a logical (zero-filling) shift…

  Binary      Unsigned   Signed
  1011 0000   176        -80
  0101 1000   88         88
  0010 1100   44         44

well that's a little unfortunate.
Arithmetic Right Shift is used for signed numbers: it "smears" the sign bit into the top bits.

  1011 0000 = -80
  1101 1000 = -40
  1110 1100 = -20

- in C/C++, signed or unsigned shift is determined by the signedness of the operands
- which leads to awkward situations when you have mixed signedness (signed "wins")
- "Arithmetic Left Shift" exists too – but it is identical to regular left shift, so we don't really need it
- Java distinguishes the two: >> is arithmetic, >>> is logical. MIPS uses sra (Shift Right Arithmetic)
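a quick C demo of the difference (a toy example, not from the slides – and note C technically leaves right-shifting a negative value implementation-defined, though essentially every compiler makes it an arithmetic shift):

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint8_t u = 0xB0;          // 1011 0000 = 176 as unsigned
    int8_t  s = (int8_t)0xB0;  // the same bits = -80 as signed

    printf("%u\n", (uint8_t)(u >> 1));  // 88: logical shift, 0 comes in at the top
    printf("%d\n", (int8_t)(s >> 1));   // -40: arithmetic shift smears the sign bit
    return 0;
}
```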

It's division-esque…
let's look at the values we get as we divide and shift.

  n   Binary     Decimal   a ÷ 2^n
  0   10110000   -80       -80
  1   11011000   -40       -40
  2   11101100   -20       -20
  3   11110110   -10       -10
  4   11111011   -5        -5
  5   11111101   -3        -2
  6   11111110   -2        -1
  7   11111111   -1        0

well that's a little weird. you can never get zero by using arithmetic right shift on a negative number.
- we'll see why in a minute.
- AAAAAAAAACTUALLY that division column is one possible answer for division…

Doing modulo with AND
in decimal, dividing by powers of 10 is trivial: 53884 ÷ 1000 = 53 R 884
in binary, we can divide by powers of 2 easily with shifting, and we can get the remainder by masking!

  10010110 ÷ 1000 = 10010 R 110
  10010110 >> 11  = 10010    (the shift amount 11 is binary for 3)
  10010110 & 0111 = 110

more generally: a AND (2^n - 1) = a % 2^n
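in C this looks like the following (a toy snippet; it works for unsigned values, where shifting and division agree):

```c
#include <stdio.h>

int main(void) {
    unsigned x = 0x96;        // 1001 0110 = 150
    printf("%u\n", x >> 3);   // 18 = 150 / 8
    printf("%u\n", x & 0x7);  // 6  = 150 % 8, since 7 = 2^3 - 1
    return 0;
}
```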

Signed division

All roads lead to Rome… er, the Dividend
how did we extend our multiplication algorithm to signed numbers?
but how exactly do the rules work when you have two results? the four values are related as:

  Dividend = (Divisor × Quotient) + Remainder

  If you do…   Java says…   Python says…
  7 / 2        3 R 1        3 R 1
  7 / -2       -3 R 1       -4 R -1
  -7 / 2       -3 R -1      -4 R 1
  -7 / -2      3 R -1       3 R -1

mathematicians would expect the remainder to always be positive, so the last row would be 4 R 1!
- we did signed multiplication as abs(A) × abs(B) and dealt with the sign separately.
- all these answers are correct, if you go by the relation formula.
check out https://en.wikipedia.org/wiki/Modulo_operation for this travesty
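if you ever want Python-style answers in a truncating language, here's one way to build them on top of C's operators (hypothetical helpers; both satisfy the dividend relation above):

```c
// flooring division: rounds the quotient towards -infinity,
// matching the "Python says" column in the table above
int floor_div(int a, int b) {
    int q = a / b;   // C truncates (rounds towards 0)
    int r = a % b;
    // if there's a nonzero remainder and the operands' signs differ,
    // truncation rounded towards 0, which is one above the floor
    if (r != 0 && ((r < 0) != (b < 0))) q--;
    return q;
}

// the matching remainder, from Dividend = Divisor*Quotient + Remainder
int floor_mod(int a, int b) { return a - floor_div(a, b) * b; }
```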

Whaaaaaaaaaaaaaaaat
no, really, it's not well-defined. there's no "right" answer. watch out for this.
I think I never really ran into it because most of the time, when you're dealing with modulo, you're dealing with positive values
and most languages I had used did (-7 / 2) as -(7 / 2): this is "truncated division" (rounds towards 0)
and then I taught that??? for two years?? but then I tried it in Python and was totally confused: it uses "flooring division" (rounds towards -∞)
so which does arithmetic right shift do? it does flooring division.
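you can see the mismatch in two lines of C (again assuming a compiler where >> on a negative int is an arithmetic shift, which is the near-universal choice):

```c
#include <stdio.h>

int main(void) {
    printf("%d\n", -7 / 2);   // -3: C division truncates towards 0
    printf("%d\n", -7 >> 1);  // -4: arithmetic shift floors towards -infinity
    return 0;
}
```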

Floating-point number representation

This could be a whole unit itself...
floating-point arithmetic is COMPLEX STUFF
but it's not super useful to know unless you're either:
- doing lots of high-precision numerical programming, or
- implementing floating-point arithmetic yourself.
however...
- it's good to have an understanding of why limitations exist
- it's good to have an appreciation of how complex this is... and how much better things are now than they were in the 1970s and 1980s

Fixing the point
if we want to represent decimal places, one way of doing so is by assuming that the lowest n digits are the decimal places. this is called fixed-point representation.

   $12.34      1234
  +$10.81    + 1081
  -------    ------
   $23.15      2315
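money is often handled this way in practice. a toy C example (store cents in an integer and assume the decimal point sits two digits from the right):

```c
#include <stdio.h>

int main(void) {
    int price = 1234;          // $12.34, stored as cents
    int tax   = 1081;          // $10.81
    int total = price + tax;   // 2315 -- plain integer addition works
    printf("$%d.%02d\n", total / 100, total % 100);   // $23.15
    return 0;
}
```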

A rising tide
a 16.16 fixed-point number looks like this, with the binary point fixed in the middle:

  0011 0000 0101 1010 . 1000 0000 1111 1111

the largest (signed) value we can represent is +32767.999, and the smallest fraction we can represent is 1/65536
but if we let the binary point float around…

  0011 . 0000 0101 1010 1000 0000 1111 1111   …we can get much higher accuracy near 0…
  0011 0000 0101 1010 1000 0000 . 1111 1111   …and trade off accuracy for range further away from 0.

- the places after the binary point are 1/2s, 1/4ths, 1/8ths, etc.
- with fixed point, the "split" between whole number and fraction is stuck in place
- with floating point, that split is flexible
- usually we need either accurate numbers near 0, or a large but inaccurate number

IEEE 754
established 1985, updated as recently as 2008: the standard for floating-point representation and arithmetic that virtually every CPU now uses
floating-point representation is based around scientific notation, where each number has a sign, an exponent, and a significand:

  1,348      = +1.348 × 10^+3
  -0.0039    = -3.9 × 10^-3
  -1,440,000 = -1.44 × 10^+6

Binary Scientific Notation
scientific notation works equally well in any other base! (below, the exponents are written in base 10 for clarity)

  +1001 0101             = +1.001 0101 × 2^+7
  -0.001 010             = -1.010 × 2^-3
  -1001 0000 0000 0000   = -1.001 × 2^+15

what do you notice about the digit before the binary point?
- the digit before the binary point is ALWAYS 1!
- …except for 0.

IEEE 754 Single-precision
known as float in C/C++/Java etc.: a 32-bit format with 1 bit for the sign, 8 bits for the exponent, and 23 bits for the fraction
[field-layout illustration from user Stannered on Wikimedia Commons]
the fraction field only stores the digits after the binary point; the 1 before the binary point is implicit! in effect this gives us a 24-bit significand
the only number with a 0 before the binary point is 0!
the significand of floating-point numbers is in sign-magnitude! do you remember the downside(s)?
- two zeroes, and harder math.
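here's a small C helper (hypothetical, not course code) that pulls a float apart into those three fields, using memcpy to get at the raw bits:

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

void split_float(float f, uint32_t *sign, uint32_t *exp, uint32_t *frac) {
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);  // reinterpret the 32 bits safely
    *sign = bits >> 31;              // 1 sign bit
    *exp  = (bits >> 23) & 0xFF;     // 8-bit biased exponent
    *frac = bits & 0x7FFFFF;         // 23-bit fraction field
}

int main(void) {
    uint32_t s, e, m;
    split_float(-2.5f, &s, &e, &m);
    printf("sign=%u exp=%u frac=0x%06X\n", s, e, m);  // sign=1 exp=128 frac=0x200000
    return 0;
}
```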

The exponent field
the exponent field is 8 bits, and can hold positive or negative exponents, but... it doesn't use S-M, 1's, or 2's complement. it uses something called biased notation:

  biased representation = signed number + bias constant

single-precision floats use a bias constant of 127:

  Signed   Biased
  -127     0
  -10      117
  34       161

the exponent can range from -127 to 127 (0 to 254 biased)
why'd they do this? so you can sort floats with integer comparisons??

Encoding an integer as a float
you have an integer, like 2471 = 0000 1001 1010 0111₂
put it in scientific notation: 1.001 1010 0111₂ × 2^+11
get the exponent field by adding the bias constant: 11 + 127 = 138 = 1000 1010₂
copy the bits after the binary point into the fraction field, starting at the left side! the sign bit is 0, since 2471 is positive:

  s   exponent   fraction
  0   10001010   00110100111000000…000

- decoding is the opposite: put 1. before the fraction, and subtract 127 from the exponent field to get the signed exponent
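the same procedure as a C function (a sketch for positive integers only; encode_float is a hypothetical name, and it ignores 0, rounding, and values needing more than 24 significant bits):

```c
#include <stdint.h>
#include <stdio.h>

uint32_t encode_float(uint32_t n) {
    // the position of the highest set bit is the exponent
    int exp = 31;
    while (!((n >> exp) & 1)) exp--;

    // drop the implicit leading 1, left-align the rest in the 23-bit field
    uint32_t frac = n & ((1u << exp) - 1);
    frac = (exp <= 23) ? frac << (23 - exp) : frac >> (exp - 23);

    uint32_t biased = (uint32_t)exp + 127;  // add the bias constant
    return (biased << 23) | frac;           // sign bit stays 0: positive
}

int main(void) {
    printf("0x%08X\n", encode_float(2471));  // 0x451A7000, the bits of 2471.0f
    return 0;
}
```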

Other formats
the most common other format is double-precision (C/C++/Java double), which uses an 11-bit exponent and a 52-bit fraction
GPUs have driven the creation of a half-precision 16-bit floating-point format. it's adorable
[field-layout illustrations from user Codekaizen on Wikimedia Commons]

Check out this cool thing in MARS
go to Tools > Floating Point Representation: it's an interactive thing!