Arithmetic for Computers


Chapter 3: Arithmetic for Computers

Floating Point (§3.5)
Morgan Kaufmann Publishers, 14 September 2018 — Chapter 3: Arithmetic for Computers
Representation for non-integral numbers, including very small and very large numbers
Like scientific notation: –2.34 × 10^56 (normalized), +0.002 × 10^–4 (not normalized), +987.02 × 10^9 (not normalized)
Normalized number: a number in floating-point notation that has no leading 0s
In binary: ±1.xxxxxxxtwo × 2^yyyy
Corresponds to the types float and double in C
Floating point: computer arithmetic that represents numbers in which the binary point is not fixed

LEGv8 floating-point number
Floating-point numbers are of the form (–1)^S × F × 2^E
S: sign bit (0 ⇒ non-negative, 1 ⇒ negative); E: exponent; F: fraction
Numbers representable in the computer: fractions as small as ≈ 2.0ten × 10^–38 and numbers as large as ≈ 2.0ten × 10^38
Overflow (floating-point): a positive exponent becomes too large to fit in the exponent field; raises an exception
Underflow (floating-point): a negative exponent becomes too large in magnitude to fit in the exponent field; also raises an exception
Bit layout (bits 31 down to 0): S (1 bit) | exponent (8 bits) | fraction (23 bits)

Floating Point Standard
Defined by IEEE Std 754-1985
Developed in response to divergence of representations: portability issues for scientific code; now almost universally adopted
Two representations: single precision (32-bit) and double precision (64-bit)
IEEE 754 makes the leading 1 bit of normalized binary numbers implicit: single precision has a 24-bit significand (implied 1 plus a 23-bit fraction); double precision has 53 bits (1 + 52)
Significand: the 24- or 53-bit number (1 plus the fraction)
Fraction: the 23- or 52-bit field. Since 0 has no leading 1, it is given the reserved exponent value 0 so that the hardware won't attach a leading 1 to it.

IEEE 754 Floating Point Standard
The pattern 00…00two represents 0; the rest of the numbers are represented as (–1)^S × (1 + Fraction) × 2^E
Special symbols represent unusual events: for example, instead of interrupting on a divide by 0, software can set the result to a bit pattern representing +∞ or –∞; the largest exponent is reserved for these special symbols
NaN (Not a Number): symbol for the result of invalid operations, such as 0/0 or subtracting infinity from infinity

IEEE 754 Floating Point Standard
The representation was designed so that floating-point numbers can be processed by integer comparisons; consequence: the sign is in the most significant bit
Placing the exponent before the significand simplifies sorting of floating-point numbers using integer comparison instructions: numbers with bigger exponents look larger than numbers with smaller exponents, as long as both exponents have the same sign
Negative exponents pose a challenge to simplified sorting if stored in two's complement: the single-precision representation of 1.0two × 2^–1 would have an exponent field that looks like a big number
0 | 1111 1111 | 000 0000 0000 0000 0000 0000
while the value 1.0two × 2^+1 would look like the smaller binary number
0 | 0000 0001 | 000 0000 0000 0000 0000 0000
Biased notation fixes this.

IEEE 754 Floating-Point Format
S | Exponent | Fraction
Exponent: 8 bits (single), 11 bits (double); Fraction: 23 bits (single), 52 bits (double)
Biased notation: stored (unsigned) exponent = actual exponent + Bias; this ensures the exponent field is unsigned, with the most negative exponent represented as 0000 0000 and the most positive as 1111 1111
Single precision: Bias = 127; double precision: Bias = 1023
Exponent –1 is represented by –1 + 127ten = 126ten = 0111 1110two, and exponent +1 by +1 + 127ten = 128ten = 1000 0000two
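The biased encoding can be checked directly by inspecting a float's bit pattern. A minimal C sketch, assuming IEEE 754 single precision (the helper name `biased_exponent` is ours, not from the slides):

```c
#include <stdint.h>
#include <string.h>

/* Return the 8-bit biased exponent field (bits 30..23) of a
   single-precision float; stored value = actual exponent + 127. */
static uint32_t biased_exponent(float f) {
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);  /* reinterpret the bytes safely */
    return (bits >> 23) & 0xFFu;
}
```

For 0.5 = 1.0two × 2^–1 the field holds –1 + 127 = 126; for 2.0 = 1.0two × 2^+1 it holds 128.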

Single-Precision Range
Exponent fields 0000 0000 and 1111 1111 are reserved
Smallest value: exponent field 0000 0001 ⇒ actual exponent = 1 – 127 = –126; fraction 000…00 ⇒ significand = 1.0; ±1.0 × 2^–126 ≈ ±1.2 × 10^–38
Largest value: exponent field 1111 1110 ⇒ actual exponent = 254 – 127 = +127; fraction 111…11 ⇒ significand ≈ 2.0; ±2.0 × 2^+127 ≈ ±3.4 × 10^+38
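These bounds correspond to the C limits FLT_MIN and FLT_MAX in <float.h>; a quick sketch that rebuilds them from the exponent arithmetic above:

```c
#include <float.h>
#include <math.h>

/* Smallest positive normalized single: 1.0 x 2^-126 (= FLT_MIN). */
static float smallest_normal(void) { return ldexpf(1.0f, -126); }

/* Largest finite single: significand (2 - 2^-23) times 2^127 (= FLT_MAX). */
static float largest_finite(void) {
    return (2.0f - ldexpf(1.0f, -23)) * ldexpf(1.0f, 127);
}
```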

Double-Precision Range
Exponent fields 000…00 and 111…11 are reserved
Smallest value: exponent field 000 0000 0001 ⇒ actual exponent = 1 – 1023 = –1022; fraction 000…00 ⇒ significand = 1.0; ±1.0 × 2^–1022 ≈ ±2.2 × 10^–308
Largest value: exponent field 111 1111 1110 ⇒ actual exponent = 2046 – 1023 = +1023; fraction 111…11 ⇒ significand ≈ 2.0; ±2.0 × 2^+1023 ≈ ±1.8 × 10^+308

Floating-Point Precision
Relative precision: all fraction bits are significant
Single: approx 2^–23, equivalent to 23 × log10 2 ≈ 23 × 0.3 ≈ 6 decimal digits of precision
Double: approx 2^–52, equivalent to 52 × log10 2 ≈ 52 × 0.3 ≈ 16 decimal digits of precision
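The relative-precision values 2^–23 and 2^–52 are exactly the machine epsilons that C exposes as FLT_EPSILON and DBL_EPSILON (the gap between 1.0 and the next representable number). A quick check:

```c
#include <float.h>
#include <math.h>

/* Machine epsilon = one unit in the last place of the significand at 1.0. */
static float  single_eps(void) { return ldexpf(1.0f, -23); }
static double double_eps(void) { return ldexp(1.0, -52); }
```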

Floating-Point Example
Represent –0.75: –0.75 = (–1)^1 × 1.1two × 2^–1
S = 1
Fraction = 1000…00two
Exponent = –1 + Bias: single: –1 + 127 = 126 = 0111 1110two; double: –1 + 1023 = 1022 = 011 1111 1110two
Single: 1 | 0111 1110 | 1000…00; double: 1 | 011 1111 1110 | 1000…00
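The single-precision pattern worked out above (sign 1, exponent 0111 1110, fraction 1000…0) is the word 0xBF400000, which can be confirmed by reading back the bits of –0.75f (the helper name `float_bits` is ours):

```c
#include <stdint.h>
#include <string.h>

/* Return the raw 32-bit pattern of a single-precision float. */
static uint32_t float_bits(float f) {
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);
    return bits;
}
```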

Floating-Point Example
What number is represented by the single-precision float 1 | 1000 0001 | 0100 0…00?
S = 1
Fraction = 01000…00two ⇒ fraction value = 1 × 2^–2 = ¼ = 0.25
Exponent field = 1000 0001two = 129
x = (–1)^1 × (1 + 0.25) × 2^(129 – 127) = (–1) × 1.25 × 2^2 = –5.0
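The decoding steps can be written out directly: split the word into sign, exponent, and fraction fields, re-attach the implicit leading 1, and scale by the unbiased exponent. A sketch for normalized singles only (the helper name `decode_single` is ours):

```c
#include <stdint.h>
#include <math.h>

/* x = (-1)^S * (1 + Fraction/2^23) * 2^(Exponent - 127); normals only. */
static float decode_single(uint32_t bits) {
    int      s = (int)(bits >> 31);
    int      e = (int)((bits >> 23) & 0xFFu);         /* biased exponent  */
    uint32_t f = bits & 0x7FFFFFu;                    /* 23 fraction bits */
    float  sig = 1.0f + (float)f / (float)(1u << 23); /* implicit 1       */
    return (s ? -1.0f : 1.0f) * ldexpf(sig, e - 127);
}
```

decode_single(0xC0A00000) reproduces the slide's –5.0.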

Denormal Numbers
Exponent field = 000…0 ⇒ hidden bit is 0
Denormals are smaller than normal numbers allow, giving gradual underflow with diminishing precision
A denormal with fraction = 000…0 is zero — so with the sign bit there are two representations of 0.0!
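The two zeros really are distinct bit patterns that compare equal, and the smallest positive denormal (exponent field 0, fraction 0…01) is 2^–23 × 2^–126 = 2^–149. A sketch, assuming IEEE 754 singles with gradual underflow enabled (the default in standard C):

```c
#include <stdint.h>
#include <string.h>
#include <math.h>

static uint32_t bits_of(float f) {
    uint32_t b;
    memcpy(&b, &f, sizeof b);
    return b;
}

/* Smallest positive denormal: hidden bit 0, fraction 0...01. */
static float smallest_denormal(void) { return ldexpf(1.0f, -149); }
```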

Infinities and NaNs
Exponent = 111…1, fraction = 000…0: ±infinity; can be used in subsequent calculations, avoiding the need for an overflow check
Exponent = 111…1, fraction ≠ 000…0: Not-a-Number (NaN); indicates an illegal or undefined result, e.g. 0.0 / 0.0; can be used in subsequent calculations
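These special values behave as described in C: dividing a nonzero value by zero yields ±infinity rather than trapping, 0.0/0.0 yields a NaN, and a NaN compares unequal to everything, including itself. A sketch:

```c
#include <math.h>

/* Take the operands as arguments so the division happens at run time
   instead of being folded (or rejected) by the compiler. */
static float div_f(float a, float b) { return a / b; }
```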

Floating-Point Addition
Consider a 4-digit decimal example: 9.999 × 10^1 + 1.610 × 10^–1
1. Align decimal points: shift the number with the smaller exponent: 9.999 × 10^1 + 0.016 × 10^1
2. Add significands: 9.999 × 10^1 + 0.016 × 10^1 = 10.015 × 10^1
3. Normalize the result and check for over/underflow: 1.0015 × 10^2
4. Round and renormalize if necessary: 1.002 × 10^2

Algorithm for binary floating-point addition

Binary Floating-Point Addition
Now consider a 4-digit binary example: 1.000two × 2^–1 + (–1.110two × 2^–2) (i.e., 0.5 + (–0.4375))
1. Align binary points: shift the number with the smaller exponent: 1.000two × 2^–1 + (–0.111two × 2^–1)
2. Add significands: 1.000two × 2^–1 + (–0.111two × 2^–1) = 0.001two × 2^–1
3. Normalize the result and check for over/underflow: 1.000two × 2^–4; since –126 ≤ –4 ≤ 127, no overflow/underflow
4. Round and renormalize if necessary: 1.000two × 2^–4 (no change) = 0.0625
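This worked example is exact in hardware: both operands and the sum 2^–4 = 0.0625 are representable, so no rounding occurs. A check that builds each operand from its significand and exponent with ldexpf:

```c
#include <math.h>

/* 1.000two x 2^-1 + (-1.110two x 2^-2), the slide's operands. */
static float example_sum(void) {
    float a = ldexpf(1.000f, -1);  /*  0.5                       */
    float b = ldexpf(-1.75f, -2);  /* -0.4375 (1.110two = 1.75)  */
    return a + b;
}
```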

FP Adder Hardware
Much more complex than an integer adder
Doing it in one clock cycle would take too long: much longer than integer operations, and a slower clock would penalize all instructions
An FP adder usually takes several cycles and can be pipelined

FP Adder Hardware (block diagram: hardware corresponding to steps 1–4 of the addition algorithm)

Floating-Point Multiplication
Consider a 4-digit decimal example: 1.110 × 10^10 × 9.200 × 10^–5
1. Add exponents (for biased exponents, subtract the bias from the sum): new exponent = 10 + (–5) = 5
2. Multiply significands: 1.110 × 9.200 = 10.212 ⇒ 10.212 × 10^5
3. Normalize the result and check for over/underflow: 1.0212 × 10^6
4. Round and renormalize if necessary: 1.021 × 10^6
5. Determine the sign of the result from the signs of the operands: +1.021 × 10^6

Floating-Point Multiplication

Binary Floating-Point Multiplication
Now consider a 4-digit binary example: 1.000two × 2^–1 × (–1.110two × 2^–2) (i.e., 0.5 × (–0.4375))
1. Add exponents: unbiased: –1 + (–2) = –3; biased: (–1 + 127) + (–2 + 127) – 127 = –3 + 127
2. Multiply significands: 1.000two × 1.110two = 1.110two ⇒ 1.110two × 2^–3
3. Normalize the result and check for over/underflow: 1.110two × 2^–3 (no change); since –126 ≤ –3 ≤ 127, no overflow/underflow
4. Round and renormalize if necessary: 1.110two × 2^–3 (no change)
5. Determine sign: positive × negative ⇒ negative: –1.110two × 2^–3 = –0.00111two = –7/2^5 = –7/32ten = –0.21875ten
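This product is also exact: the significands multiply to 1.110two with no bits lost, so hardware reproduces –0.21875 with no rounding. A check:

```c
#include <math.h>

/* 1.000two x 2^-1 times -1.110two x 2^-2: exponents add to -3,
   significands multiply (1.000 x 1.110 = 1.110), sign is negative. */
static float example_product(void) {
    return ldexpf(1.0f, -1) * ldexpf(-1.75f, -2);
}
```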

FP Arithmetic Hardware
An FP multiplier is of similar complexity to an FP adder, but uses a multiplier for significands instead of an adder
FP arithmetic hardware usually supports addition, subtraction, multiplication, division, reciprocal, square root, and FP ↔ integer conversion
Operations usually take several cycles and can be pipelined

FP Instructions in LEGv8
Separate FP registers: 32 single-precision (S0, …, S31) and 32 double-precision (D0, …, D31); Sn is stored in the lower 32 bits of Dn
FP instructions operate only on FP registers; programs generally don't do integer ops on FP data, or vice versa
More registers with minimal code-size impact
FP load and store instructions: LDURS, LDURD, STURS, STURD

FP Instructions in LEGv8
Single-precision arithmetic: FADDS, FSUBS, FMULS, FDIVS — e.g., FADDS S2, S4, S6
Double-precision arithmetic: FADDD, FSUBD, FMULD, FDIVD — e.g., FADDD D2, D4, D6
Single- and double-precision comparison: FCMPS, FCMPD — sets or clears the FP condition-code bits
Branch on FP condition code true or false: B.cond

FP Example: °F to °C
C code:
float f2c(float fahr) {
  return ((5.0/9.0) * (fahr - 32.0));
}
fahr in S12, result in S0, literals in global memory space
Compiled LEGv8 code:
f2c:
  LDURS S16, [X27,const5]   // S16 = 5.0 (5.0 in memory)
  LDURS S18, [X27,const9]   // S18 = 9.0 (9.0 in memory)
  FDIVS S16, S16, S18       // S16 = 5.0 / 9.0
  LDURS S18, [X27,const32]  // S18 = 32.0
  FSUBS S18, S12, S18       // S18 = fahr - 32.0
  FMULS S0, S16, S18        // S0 = (5/9)*(fahr - 32.0)
  BR LR                     // return
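The same routine in single precision, for reference. Note that 5.0/9.0 is itself rounded to 24 bits, so results are close to, but not exactly, the true conversion; 32 °F, however, maps to exactly 0 °C because fahr – 32.0 is exact and 0 times anything is 0.

```c
/* Fahrenheit to Celsius, mirroring the compiled LEGv8 sequence:
   divide 5.0 by 9.0, subtract 32.0 from fahr, multiply. */
static float f2c(float fahr) {
    return (5.0f / 9.0f) * (fahr - 32.0f);
}
```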