Chapter 2 Bits, Data Types & Operations Integer Representation


Chapter 2 Bits, Data Types & Operations Integer Representation Floating-point Representation Other data types

CS Reality #1: You've got to understand binary encodings.

Examples:
- Is x * x >= 0?
  - ints: 40000 * 40000 --> 1600000000, but 50000 * 50000 --> ??
  - floats: yes!
- Is (x + y) + z = x + (y + z)?
  - unsigned & signed ints: yes!
  - floats: (1e20 + -1e20) + 3.14 --> 3.14, but 1e20 + (-1e20 + 3.14) --> 0? 3? 3.1?
- Is 4/5 = 4/5.0? Maybe? (int) 4/5 = 0; what is 4/5.0?

Data Types

Our first requirement is to find a way to represent information (data) in a form that is mutually comprehensible by human and machine. Ultimately, we will have to develop schemes for representing all conceivable types of information: language, images, actions, etc.

Specifically, the devices that make up a computer are switches that can be on or off, i.e. at high or low voltage. Thus they naturally provide us with two symbols to work with: we can call them on & off, or (more usefully) 0 and 1.

We will start by examining different ways of representing:
- strings of characters
- integers
- floating point numbers

Why Do Computers Use Base 2?

Base 10 number representation:
- Natural representation for human transactions (a binary floating point number cannot exactly represent $1.20)
- Hard to implement electronically:
  - hard to store: ENIAC (the first electronic computer) used 10 vacuum tubes per digit
  - hard to transmit: need high precision to encode 10 signal levels on a single wire
  - messy to implement digital logic functions: addition, multiplication, etc.

Base 2 number representation:
- Easy to store with bistable elements
- Reliably transmitted on noisy and inaccurate wires

Examples:
- Represent 15213 (base 10) as 11101101101101 (base 2)
- Represent 1.20 (base 10) as 1.0011001100110011[0011]… (base 2)
- Represent 1.5213 × 10^4 as 1.1101101101101 × 2^13
  (1.20 = 1×2^0 + 0×2^-1 + 0×2^-2 + …)

Unsigned Binary Integers

Y = "abc" = a×2^2 + b×2^1 + c×2^0 (where the digits a, b, c can each take on the values 0 or 1 only)

value  3-bit  5-bit  8-bit
0      000    00000  00000000
1      001    00001  00000001
2      010    00010  00000010
3      011    00011  00000011
4      100    00100  00000100

With N = number of bits, the range is 0 ≤ i ≤ 2^N − 1:
Umin = 0
Umax = 2^N − 1

Problem: how do we represent negative numbers?

Signed Magnitude

Leading bit is the sign bit: Y = "abc" = (−1)^a × (b×2^1 + c×2^0)

-4  10100
-3  10011
-2  10010
-1  10001
-0  10000
+0  00000
+1  00001
+2  00010
+3  00011
+4  00100

Range is −(2^(N−1) − 1) ≤ i ≤ 2^(N−1) − 1:
Smin = −2^(N−1) + 1
Smax = 2^(N−1) − 1

Problems:
- How do we do addition/subtraction?
- We have two representations of zero (+0 and −0)!

Two's Complement

-16  10000
…
-3   11101
-2   11110
-1   11111
 0   00000
+1   00001
+2   00010
+3   00011
…
+15  01111

Transformation: to transform a into −a, invert all bits in a and add 1 to the result.

Range is −2^(N−1) ≤ i ≤ 2^(N−1) − 1:
Tmin = −2^(N−1)
Tmax = 2^(N−1) − 1

Advantages:
- Operations need not check the sign
- Only one representation for zero
- Efficient use of all the bits

Manipulating Binary Numbers - 1

Binary to decimal conversion & vice versa:

A 4-bit binary number A = a3 a2 a1 a0 corresponds to:
a3×2^3 + a2×2^2 + a1×2^1 + a0×2^0 = a3×8 + a2×4 + a1×2 + a0×1 (where each ai = 0 or 1 only)

A decimal number can be broken down by iteratively determining the highest power of two that "fits" in the number:
e.g. (13)10  => 1101 (8 + 4 + 1)
e.g. (63)10  => 111111 (32 + 16 + 8 + 4 + 2 + 1)
e.g. (0.63)10 => approx. 0.10100001… (non-terminating)

In the 2's complement representation, leading zeros do not affect the value of a positive binary number, and leading ones do not affect the value of a negative number. So:
01101 = 00001101 = 13 and 11011 = 11111011 = -5

  00001101    (13)
+ 11111011    (-5)
----------
  00001000 => 8 (as expected!)

Two's complement is a mathematical operation on binary numbers, as well as a representation of signed binary numbers based on this operation. The two's complement of an N-bit number is defined as the complement with respect to 2^N, in other words the result of subtracting the number from 2^N. This is also equivalent to taking the ones' complement and then adding one, since the sum of a number and its ones' complement is all 1 bits. The two's complement of a number behaves like the negative of the original number in most arithmetic, and positive and negative numbers can coexist in a natural way.

In two's-complement representation, negative numbers are represented by the two's complement of their absolute value; in general, negation (reversing the sign) is performed by taking the two's complement. This system is the most common method of representing signed integers on computers. An N-bit two's-complement numeral system can represent every integer in the range −2^(N−1) to +(2^(N−1) − 1), while ones' complement can only represent integers in the range −(2^(N−1) − 1) to +(2^(N−1) − 1).

The two's-complement system has the advantage that the fundamental arithmetical operations of addition, subtraction and multiplication are identical, regardless of whether the inputs and outputs are interpreted as unsigned binary numbers or two's complement (provided that overflow is ignored). This property makes the system both simpler to implement and capable of easily handling higher precision arithmetic. Also, zero has only a single representation, obviating the subtleties associated with negative zero, which exists in ones'-complement systems.

Manipulating Binary Numbers - 10

Overflow: if we add the two (2's complement) 4-bit numbers representing 7 and 5 we get:

  0111 => +7
+ 0101 => +5
------
  1100 => -4 (in 4-bit 2's comp.)

We get -4, not +12 as we would expect!! We have overflowed the range of 4-bit 2's complement (-8 to +7), so the result is invalid.

Note that if we add 16 to this result we get back 16 - 4 = 12; this is like "stepping up" to 5-bit 2's complement representation.

In general, if the sum of two positive numbers produces a negative result, or vice versa, an overflow has occurred, and the result is invalid in that representation.

Speeding Things Up With Hex

Conversion to/from hex is much faster than working in raw binary. A 32-bit register might contain the value:

01010011101010011011101100110111

It is much faster to view it like this: x53A9BB37

What if we want to add:

  01010011101010011011101100110111
+ 00111100001011111010011110011100

We can do it this way:

  x53A9BB37
+ x3C2FA79C
-----------
  x8FD962D3

Representing Strings

Strings in C:
- Represented by an array of characters
- Each character encoded in ASCII format
  - standard 7-bit encoding of the character set
  - other encodings exist, but are uncommon
  - UNICODE: 16-bit superset of ASCII that includes characters for non-English alphabets
  - character '0' has code 0x30; digit i has code 0x30 + i
- String should be null-terminated: final character = 0
- Line termination and other special characters vary

char S[6] = "15213";

char  hex  dec
'1'   x31  #49
'5'   x35  #53
'2'   x32  #50
'1'   x31  #49
'3'   x33  #51
'\0'  x00  #00

Strings are really just arrays of numbers! How can we represent numbers using bits? "Hey, how do I know it's a string in the first place?"

Interpreting Data

If some memory contains x4869206D6F6D2100, is it:
- Two signed or unsigned integers: x4869206D = 1,214,849,133 and x6F6D2100 = ??
- A 64-bit floating point number?
- A string: "Hi mom!" ?

There is no way to know without:
- seeing the source code,
- talking to the programmer, or
- reverse engineering.

Remember this: memory and registers contain BINARY, and only binary, at all times.

C Integer Puzzles

Assume a machine with 32-bit word size and two's complement integers. For each of the following C expressions, either argue that it is true for all argument values, or give an example where it is not:

Initialization:
  int x = foo();
  int y = bar();
  unsigned ux = x;
  unsigned uy = y;

- x * x >= 0
- ux >= 0
- ux > -1
- x < 0        ⇒  (2*x) < 0
- x > y        ⇒  -x < -y
- x > 0 && y > 0  ⇒  x + y > 0
- x >= 0       ⇒  -x <= 0
- x <= 0       ⇒  -x >= 0

Problems Patt & Patel, Chapter 2: 2.4, 2.8, 2.10, 2.11, 2.17, 2.21

Real Numbers

Most numbers are not integers!

Range: the magnitude of the numbers we can represent is determined by how many bits we use. E.g. with 32 bits the largest number we can represent is about ±2 billion, far too small for many purposes.

Precision: the exactness with which we can specify a number. E.g. a 32-bit number gives us 31 bits of precision, or roughly 9-figure precision in decimal representation.

Our decimal system handles non-integer real numbers by adding yet another symbol, the decimal point (.), to make a fixed point notation:
e.g. 3,456.78 = 3×10^3 + 4×10^2 + 5×10^1 + 6×10^0 + 7×10^-1 + 8×10^-2

The floating point, or scientific, notation allows us to represent very large and very small numbers (integer or real), with as much or as little precision as needed.

Real Numbers in a Fixed Space

What if you only have 8 digits to represent a real number? Do you prefer:
- a low-precision number with a large range?
- or a high-precision number with a smaller range?

Wouldn't it be nice if we could "float" the decimal point to allow for either case?

Scientific Notation

In binary we only have 0 and 1, so how can we represent the decimal point? In scientific notation, it's implicit. In other words:

425000 (× 10^0) = 425 × 10^3 = 42.5 × 10^4 = 4.25 × 10^5

We can represent any number with the decimal point after the first digit by "floating" the decimal point to the left (or right):

0.00125 = 1.25 × 10^-3

Real Numbers in Binary

We mimic the decimal floating point notation to create a "hybrid" binary floating point number.

We first use a "binary point" to separate whole numbers from fractional numbers, making a fixed point notation:
e.g. 25.75 (base 10) = 1×2^4 + 1×2^3 + 1×2^0 + 1×2^-1 + 1×2^-2 = 11001.110 (base 2)
(2^-1 = 0.5 and 2^-2 = 0.25, etc.)

We then "float" the binary point:
00011001.110 => 1.1001110 × 2^4
mantissa = 1.1001110, exponent = 4

Now we have to express this without the extra symbols ( ×, 2, . ): by convention, we divide the available bits into three fields: sign, mantissa, exponent.

IEEE-754 FP Numbers - 1

32 bits: sign (1 bit) | biased exponent (8 bits) | fraction (23 bits)

N = (-1)^s × 1.fraction × 2^(biased exp. − 127)

Sign: 1 bit.

Mantissa: 23 bits. We "normalize" the mantissa by dropping the leading 1 and recording only its fractional part (why?).

Exponent: 8 bits. In order to handle both + and − exponents, we add 127 to the actual exponent to create a "biased exponent" (this provides a lexicographic order!):
2^-127 => biased exponent = 0000 0000 (= 0)
2^0    => biased exponent = 0111 1111 (= 127)
2^+127 => biased exponent = 1111 1110 (= 254)

IEEE-754 FP Numbers - 2

Example: 25.75 = 1.1001110 × 2^4
sign bit = 0 (+)
normalized mantissa (fraction) = 100 1110 0000 0000 0000 0000
biased exponent = 4 + 127 = 131 => 1000 0011
so 25.75 => 0 1000 0011 100 1110 0000 0000 0000 0000 => x41CE0000

Special values represented by convention:
- Infinity (+ and −): exponent = 255 (1111 1111) and mantissa = 0
- NaN (not a number): exponent = 255 and mantissa ≠ 0
- Zero (0): exponent = 0 and mantissa = 0
  (note: special case => fraction is de-normalized, i.e. no hidden 1)

IEEE-754 FP Numbers - 3

Double precision (64 bit) floating point:
64 bits: s (1 bit) | biased exp. (11 bits) | fraction (52 bits)

N = (-1)^s × 1.fraction × 2^(biased exp. − 1023)

Range & precision:
- 32 bit: mantissa of 23 bits + 1 => approx. 7 decimal digits; 2^±127 => approx. 10^±38
- 64 bit: mantissa of 52 bits + 1 => approx. 15 decimal digits; 2^±1023 => approx. 10^±308

Floating Point in C

C guarantees two levels:
- float: single precision
- double: double precision

Conversions: casting between int, float, and double changes numeric values.
- double or float to int: truncates fractional part (like rounding toward zero); not defined when out of range (generally saturates to TMin or TMax)
- int to double: exact conversion, as long as int has ≤ 53-bit word size
- int to float: will round according to rounding mode

Floating Point Puzzles in C

Assume neither d nor f is NaN.

  int x = …; float f = …; double d = …;

- x == (int)(float) x
- x == (int)(double) x
- f == (float)(double) f
- d == (float) d
- f == -(-f)
- 2/3 == 2/3.0
- d < 0.0  ⇒  (d*2) < 0.0
- d > f    ⇒  -f > -d
- d * d >= 0.0
- (d+f)-d == f

Practice Problems Patt & Patel 2.39, 2.40, 2.41, 2.42, 2.56

Other Data Types

- Other numeric data types, e.g. BCD
- Bit vectors & masks: sometimes we want to deal with the individual bits themselves
- Logic values: true (non-zero) or false (0)
- Instructions: output as "data" from a compiler
- Misc: graphics, sounds, …

Ariane 5

Danger!
- Computed horizontal velocity as a floating point number
- Converted it to a 16-bit integer
- Worked OK for Ariane 4
- Overflowed for Ariane 5, which used the same software

Result: exploded 37 seconds after liftoff; cargo worth $500 million.