Binary Arithmetic.

Binary Arithmetic

We will see:
- How numbers are stored in a computer
- The consequences of using fixed-length arithmetic
- How to convert to and from binary
- How to represent negative numbers
- How to represent fractional numbers: fixed-point and floating-point

Why Binary Arithmetic?

- Early computers often used decimal arithmetic
- Binary arithmetic is the most robust:
  - We only need to distinguish between two different voltage levels
  - A decimal computer would need to distinguish ten different levels
  - It would need a much lower noise level to operate reliably, which is expensive, although it could pack far more into a given word length
- Another possibility is an analogue computer, but the presence of noise means low accuracy

8 bits, 1 byte, 256 values

- 3 bits give 8 combinations; 4 bits give 16 combinations
- Each bit we add doubles the number of possible combinations
- With n bits, we have 2^n combinations
- Hence 8 bits (1 byte) gives 2^8 = 256 different combinations
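The doubling rule above can be checked with a short loop (a minimal sketch):

```python
# Each extra bit doubles the number of representable combinations:
# n bits give 2**n distinct bit patterns.
for n in (3, 4, 8):
    print(f"{n} bits -> {2 ** n} combinations")
# 8 bits (1 byte) gives 2**8 = 256 distinct values
```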

How many bits in a date?

- Using binary values means we must divide our world into powers of 2, usually picking the nearest power of 2
- For the date format "DD/MM/YYYY":
  - 29 to 31 days in a month: 2^5 = 32
  - 12 months in a year: 2^4 = 16
  - 9999 possible years: 2^14 = 16384
- So we would need 5 + 4 + 14 = 23 bits in total to represent a date like this
- This leaves many illegal bit combinations, e.g. "32/16/9999"
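A hypothetical packing of such a date into a single 23-bit integer could use shifts and masks; the field widths (5, 4, and 14 bits) follow the slide, and the function names are illustrative only:

```python
# Pack a DD/MM/YYYY date into 5 + 4 + 14 = 23 bits: day in the top
# 5 bits, month in the next 4, year in the low 14.

def pack_date(day: int, month: int, year: int) -> int:
    assert 0 <= day < 2 ** 5 and 0 <= month < 2 ** 4 and 0 <= year < 2 ** 14
    return (day << 18) | (month << 14) | year

def unpack_date(packed: int):
    return (packed >> 18) & 0b11111, (packed >> 14) & 0b1111, packed & 0x3FFF

print(unpack_date(pack_date(25, 12, 2006)))  # (25, 12, 2006)
```

Note that nothing stops us from packing an illegal combination such as day 31 in month 2; the encoding only guarantees the fields fit.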

Fixed-length Arithmetic

- We are used to working with numbers of any length
- Computers have to work with fixed-length numbers, which causes some interesting problems
- Suppose we only have 3 decimal digits: 000, 001, 002, ..., 999
- We cannot represent:
  - Negative numbers (no sign)
  - Fractional numbers (no decimal point)
  - Numbers larger than 999 (not enough digits)

Problems with Fixed-Length Arithmetic

- Fixed-length arithmetic is not closed. For example, using 3 decimal digits:
  - 600 + 600 = 1200 (too large)
  - 003 - 005 = -2 (negative)
  - 050 x 050 = 2500 (too large)
  - 007 / 002 = 3.5 (not an integer)
- We can divide the problems into three main classes:
  - Overflow (too large)
  - Underflow (too small)
  - Not a member of the set

Binary-to-Decimal

Binary   Decimal
000      0 = 0x2^2 + 0x2^1 + 0x2^0
001      1 = 0x2^2 + 0x2^1 + 1x2^0
010      2 = 0x2^2 + 1x2^1 + 0x2^0
011      3 = 0x2^2 + 1x2^1 + 1x2^0
100      4 = 1x2^2 + 0x2^1 + 0x2^0
101      5 = 1x2^2 + 0x2^1 + 1x2^0
110      6 = 1x2^2 + 1x2^1 + 0x2^0
111      7 = 1x2^2 + 1x2^1 + 1x2^0

- By convention, the right-most bit is the least significant
- Each subsequent bit is worth twice the one before
- We can build this idea into an algorithm
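The "each bit is worth twice the one before" idea gives a simple conversion algorithm: scan the bits left to right, doubling the running total at each step. A minimal sketch:

```python
# Convert a binary string to its decimal value by repeated doubling.

def binary_to_decimal(bits: str) -> int:
    value = 0
    for bit in bits:
        value = value * 2 + int(bit)  # shift left by one place, add the new bit
    return value

print(binary_to_decimal("101"))  # 5
print(binary_to_decimal("111"))  # 7
```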

Decimal-to-Binary

- This involves successive division of the decimal number by 2
- The remainder at each stage is the next digit of the binary expansion (least significant first)
- E.g. converting 58 into binary:
  - Divide 58 by 2 = 29 remainder 0
  - Divide 29 by 2 = 14 remainder 1
  - Divide 14 by 2 = 7 remainder 0
  - Divide 7 by 2 = 3 remainder 1
  - Divide 3 by 2 = 1 remainder 1
  - Divide 1 by 2 = 0 remainder 1
- So 58 is 111010 in binary
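The successive-division method above can be sketched directly in code; the remainders come out least-significant first, so they are reversed at the end:

```python
# Convert a non-negative decimal integer to a binary string by
# repeatedly dividing by 2 and collecting the remainders.

def decimal_to_binary(n: int) -> str:
    if n == 0:
        return "0"
    digits = []
    while n > 0:
        digits.append(str(n % 2))  # remainder is the next binary digit
        n //= 2
    return "".join(reversed(digits))  # remainders arrive least-significant first

print(decimal_to_binary(58))  # 111010
```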

Representations of Negative Integers

There are four main ways of representing negative m-bit binary numbers:
- Sign and magnitude
- 1's complement
- 2's complement
- Excess 2^(m-1)

All of them have problems, but all of them work well with binary addition and subtraction.

Sign and Magnitude

- We use one bit for the sign, and the rest for the magnitude of the number
- E.g. using an 8-bit binary representation:
  - 58 is 00111010
  - -58 is 10111010 (sign-magnitude)
- The range is therefore -127 to 127
- Problem: two representations of zero
  - 00000000 = zero
  - 10000000 = -zero

1's Complement

- Also has a sign bit
- To negate a number, we simply flip every 0 to a 1 and vice versa
- E.g. using an 8-bit binary representation:
  - 58 is 00111010
  - -58 is 11000101 (1's complement)
- The range is again -127 to 127
- We again have two representations of zero:
  - 00000000 = zero
  - 11111111 = -zero

2's Complement

- Like 1's complement, but to negate a number we flip all the bits and add 1 to the result
- E.g. using an 8-bit binary representation:
  - 58 is 00111010
  - -58 is 11000110 (2's complement)
- This gives us a range of -128 to 127, with only one representation of zero
- But we still have a problem: -(-128) = -(10000000) = 10000000 = -128
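Both complement schemes can be sketched with bit masks; Python integers are unbounded, so masking with 0xFF simulates an 8-bit register. Note how negating 10000000 (the pattern for -128) wraps straight back to itself, the anomaly described above:

```python
# 8-bit one's- and two's-complement negation, simulated with a bit mask.
BITS = 8
MASK = (1 << BITS) - 1  # 0xFF keeps only the low 8 bits

def ones_complement(x: int) -> int:
    return ~x & MASK          # flip every bit

def twos_complement(x: int) -> int:
    return (~x + 1) & MASK    # flip every bit, then add 1

print(format(ones_complement(0b00111010), "08b"))  # 11000101  (-58, 1's comp.)
print(format(twos_complement(0b00111010), "08b"))  # 11000110  (-58, 2's comp.)
print(format(twos_complement(0b10000000), "08b"))  # 10000000  (-(-128) = -128!)
```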

Excess 2^(m-1)

- We store each number as its value plus 2^(m-1)
- E.g. using an 8-bit binary representation, this is "excess 128":
  - 58 is represented as 186, or 10111010
  - -58 is represented as 70, or 01000110
- This is equivalent to 2's complement with the sign bit reversed
- The range is again -128 to 127, so it is still asymmetric

Fixed-Point Arithmetic

- It is useful to be able to represent real numbers, e.g. 1.23, 3.141, 0.0000123
- One way we can do this is to reserve a certain number of digits for the fractional part
- E.g. 1010.1100 = 10.75 (".1100" = 2^-1 + 2^-2 = 0.5 + 0.25 = 0.75)
- Note that we don't need any special hardware to deal with this; we just need to keep track of the binary point
- Given m bits before and n bits after the binary point:
  - m determines the range
  - n determines the precision
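A fixed-point value is just an integer scaled by 2^n; with n = 4 fractional bits, the slide's pattern 1010.1100 is the integer 10101100 divided by 16. A minimal sketch:

```python
# Fixed-point with 4 fractional bits: the stored integer is the real
# value multiplied by 2**4, so conversion is a multiply or divide by 16.
FRAC_BITS = 4
SCALE = 1 << FRAC_BITS  # 16

def to_fixed(x: float) -> int:
    return round(x * SCALE)

def from_fixed(raw: int) -> float:
    return raw / SCALE

raw = 0b10101100            # the bit pattern 1010.1100 from the slide
print(from_fixed(raw))      # 10.75
```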

Problems with Fixed-point Arithmetic

- Fixed-point arithmetic is useful, but limited: it is no good for extremely large or small numbers
  - The mass of the Sun is 2 x 10^33 grams
  - The mass of an electron is 9 x 10^-28 grams
- To use both these amounts in a calculation we would need over 60 decimal digits, most of which would be irrelevant
- We need floating-point arithmetic

Floating Point Arithmetic

- We borrow the familiar scientific notation to represent numbers:
  - 3.14 = 0.314 x 10^1
  - 0.0005 = 0.5 x 10^-4
- We use powers of 2 instead of 10: N = mantissa x 2^exponent (-1 < mantissa < 1)
  - The range depends on the exponent
  - The precision depends on the mantissa
- This allows a huge range of numbers to be covered, with some loss in accuracy
- IEEE floating-point standard 754:
  - Single precision: 1 sign bit, 8 exponent bits, 23 mantissa bits
  - Double precision: 1 sign bit, 11 exponent bits, 52 mantissa bits
  - Reserved values for infinity and NaN (not a number)
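The 1 + 8 + 23 field layout can be inspected directly by reinterpreting a single-precision float as a 32-bit integer; a minimal sketch using the standard `struct` module:

```python
import struct

# Split an IEEE 754 single-precision float into its sign, exponent,
# and mantissa fields (1 + 8 + 23 bits).

def float_fields(x: float):
    (bits,) = struct.unpack(">I", struct.pack(">f", x))
    sign = bits >> 31
    exponent = (bits >> 23) & 0xFF   # 8-bit exponent, stored in excess-127 form
    mantissa = bits & 0x7FFFFF       # 23-bit fraction
    return sign, exponent, mantissa

print(float_fields(1.0))   # (0, 127, 0): 1.0 = 1.0 x 2**(127 - 127)
print(float_fields(-2.0))  # (1, 128, 0)
```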

Problems with Floating-point Arithmetic

[Figure: the number line divided into seven regions: negative overflow, expressible negative numbers, negative underflow, zero, positive underflow, expressible positive numbers, positive overflow]

- The number line is divided into seven regions; using floating point we can only access three of them (the expressible negative numbers, zero, and the expressible positive numbers)
- The range is not continuous, nor equally sampled: the gap between 0.998 x 10^99 and 0.999 x 10^99 is vastly larger than the gap between 0.998 x 10^0 and 0.999 x 10^0
- We cannot express some numbers at all, e.g. 0.100 x 10^3 / 3, and need to round them to the nearest expressible number
- For a fixed number of bits, we must trade off the range against the precision of the representation
- It requires special hardware for fast performance
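The rounding problem is easy to demonstrate: 0.1 and 0.2 have no exact binary representation, so their sum is rounded to the nearest expressible number and is not exactly 0.3. A minimal sketch:

```python
# Decimal fractions like 0.1 are not exactly representable in binary
# floating point, so sums accumulate rounding error.
a = 0.1 + 0.2
print(a == 0.3)              # False: both sides were rounded differently
print(repr(a))               # 0.30000000000000004
print(abs(a - 0.3) < 1e-9)   # True: compare with a tolerance instead
```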

Summary

- Computers use a binary representation for numbers
- They are bound by the constraints of fixed-length arithmetic
- Representing negative numbers is something of a challenge; no system is ideal
- We can represent real numbers using fixed-point or floating-point arithmetic; again, neither system is ideal