
Topic 2 – Introduction to Computer Codes

Computer Codes A code is a systematic use of a given set of symbols for representing information. As an example, a traffic light uses a code to signal what you need to do:
Red = Stop
Yellow = Caution; about to change to red
Green = Go
We will look at three important types of codes: numeric, character, and error detection & correction codes.

Fixed Point Numbers A fixed point number is used to represent either signed integers or signed fractions. In either case, sign-magnitude, 2's complement, or 1's complement systems can be used to represent the signed value. A fixed point integer has an implied radix (binary) point to the right of the least significant bit. A fixed point fraction has an implied radix point between the sign bit and the most significant magnitude bit.

Fixed Point Numbers (figure: bit layouts of the two formats. Top, fixed point integer: sign bit in position n-1, magnitude bits n-2 ... 1 0, implied binary point to the right of bit 0. Bottom, fixed point fraction: sign bit, implied binary point, then the magnitude bits.)

Excess Representations An excess-K representation of a code C is formed by adding the value K to each code word of C. This sort of code is used frequently to represent the exponents of floating-point numbers, so that the smallest exponent value is represented by all zeros. As an example, let's compute the excess-8 code for 4-bit 2's complement numbers (note that all we have to do is add 8 = (1000)2 to each code word).

Excess-8 4-bit 2's Complement Code
Decimal  2's Complement  Excess-8    Decimal  2's Complement  Excess-8
-8       1000            0000        0        0000            1000
-7       1001            0001        1        0001            1001
-6       1010            0010        2        0010            1010
-5       1011            0011        3        0011            1011
-4       1100            0100        4        0100            1100
-3       1101            0101        5        0101            1101
-2       1110            0110        6        0110            1110
-1       1111            0111        7        0111            1111
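The add-the-bias arithmetic can be sketched in a few lines of Python (the function names are illustrative, not part of any standard library):

```python
def to_excess_k(value, k=8, bits=4):
    """Encode a signed integer as an excess-K bit string by adding the bias K."""
    biased = value + k
    assert 0 <= biased < (1 << bits), "value out of range for this code"
    return format(biased, f"0{bits}b")

def from_excess_k(code, k=8):
    """Decode an excess-K bit string back to a signed integer by subtracting K."""
    return int(code, 2) - k
```

Note how the most negative value (-8) maps to all zeros, which is exactly why excess codes are used for floating-point exponents.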

Floating-Point Numbers Floating-point numbers are similar to numbers written in scientific notation. In general, the floating-point form of a number N is written as:
N = M x r^E
where M is the mantissa, a fixed-point number containing the significant digits of N; E is the exponent, a fixed-point integer; and r is the radix.

Mantissa Encoding In such a system, the mantissa and the exponent are generally coded separately; the radix is implied. The mantissa M is usually coded in sign-magnitude, usually as a fraction. It is usually written as:
M = (S_M . a_-1 a_-2 ... a_-m)rsm
The sign bit S_M is 0 for a positive number and 1 for a negative number.

Exponent Encoding The exponent E is often encoded in excess-K 2's complement notation. This representation is formed by adding a bias of K to the 2's complement integer value of the exponent. For binary floating-point numbers, K is usually selected to be 2^(e-1), where e is the number of bits in the exponent. Therefore,
-2^(e-1) <= E < 2^(e-1)
0 <= E + 2^(e-1) < 2^e

Exponent Encoding By examining this expression, we can see that the biased value of E is a number that ranges from 0 (at its most negative value) to 2^e - 1 (at its most positive value). The excess-K form of E can be written as:
E = (b_e-1 b_e-2 ... b_0)excess-K
The sign of E is indicated (in this form) by the bit b_e-1, which will be 0 if E is negative and 1 otherwise.

Normalization Note that more than one combination of mantissa and exponent can represent the same number. For example:
N = (0.1100)2 x 2^4 = (0.0110)2 x 2^5 = (0.0011)2 x 2^6
In a digital system, it is useful to have one unique representation for each number. A floating-point number is normalized if the exponent is adjusted so that the mantissa has a nonzero value in its most significant digit.
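The normalization step can be sketched as follows, representing the fractional mantissa as a bit string (this is an illustrative sketch, not production floating-point code):

```python
def normalize(bits, exponent):
    """Left-shift a fractional mantissa (given as a bit string such as '0011')
    until its most significant bit is 1, decrementing the exponent for each
    shift so the represented value is unchanged."""
    if '1' not in bits:          # a zero mantissa cannot be normalized
        return bits, exponent
    bits = list(bits)
    while bits[0] == '0':
        bits.pop(0)              # shift left by one bit position...
        bits.append('0')
        exponent -= 1            # ...and compensate in the exponent
    return ''.join(bits), exponent
```

For instance, normalize("0011", 6) yields ("1100", 4): two left shifts, so the exponent drops by two.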

Floating-Point Formats Floating-point formats used in computers often differ in the number of bits used to represent the mantissa and the exponent, and in the method of coding for each. Most formats store the sign in the leftmost bit, followed by the exponent and then the mantissa. Typically, floating-point numbers are stored in one-word or two-word formats.

Floating-Point Formats (figure: field layouts. A one-word format: sign bit S_M, exponent E, then mantissa M. A two-word format: the first word holds S_M, E, and the most significant part of M; the second word holds the least significant part of M.)

IEEE Floating-Point Standards The Institute of Electrical and Electronics Engineers (IEEE) has defined a set of floating-point standards. The single-precision IEEE standard calls for 32 bits: 1 sign bit, 23 mantissa bits, 8 exponent bits, and an exponent bias of 127. The double-precision IEEE standard calls for 64 bits: 1 sign bit, 52 mantissa bits, 11 exponent bits, and an exponent bias of 1023.
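The single-precision field layout can be inspected directly with Python's standard struct module, which reinterprets the 32 bits of a float as an unsigned integer (the function name is illustrative):

```python
import struct

def decompose_float32(x):
    """Round x to IEEE 754 single precision and split the 32-bit pattern
    into its sign (1 bit), biased exponent (8 bits), and mantissa (23 bits)."""
    (bits,) = struct.unpack(">I", struct.pack(">f", x))
    sign = bits >> 31
    exponent = (bits >> 23) & 0xFF      # biased by 127
    mantissa = bits & 0x7FFFFF          # fraction bits, hidden leading 1 omitted
    return sign, exponent, mantissa
```

For 1.0 this returns (0, 127, 0): the true exponent 0 is stored as 0 + 127.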

Binary Coded Decimal (BCD) The binary coded decimal (BCD) code is used for representing the decimal digits 0 through 9. It is a weighted code, which means each bit position in the code has a fixed numerical weight associated with it. The digit represented by a code word is found by summing the weights of the bits equal to 1. BCD uses four bits, with the weights equal to those of a 4-bit binary integer (BCD is sometimes called an 8421 code).

BCD Code Words
Digit  BCD Code    Digit  BCD Code
0      0000        5      0101
1      0001        6      0110
2      0010        7      0111
3      0011        8      1000
4      0100        9      1001

Use of BCD Codes BCD codes are used to encode numbers for output to numerical displays (such as seven-segment displays). They are also used to represent numbers in processors which perform decimal arithmetic directly (instead of binary arithmetic). As an example, let's encode the decimal number N = (9750)10 in BCD. 9 -> 1001, 7 -> 0111, 5 -> 0101, 0 -> 0000. Therefore (9750)10 = (1001 0111 0101 0000)BCD.
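The digit-by-digit conversion above can be sketched in one line of Python (the function name is illustrative):

```python
def to_bcd(n):
    """Encode a non-negative decimal integer as BCD: each decimal digit
    becomes its own 4-bit binary group."""
    return ' '.join(format(int(digit), '04b') for digit in str(n))
```

Note that this is a per-digit encoding, not binary: to_bcd(9750) produces the 16-bit pattern 1001 0111 0101 0000, whereas the pure binary form of 9750 is a different 14-bit number.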

ASCII The most widely used code for representing characters is ASCII. ASCII is a 7-bit code, frequently used with an 8th bit for error detection (more about that in a bit). A complete ASCII table is located in your textbook, or all around the web. As an example, let's encode the string "ASCIIcode" in ASCII.

ASCII Encoding
Character  ASCII (binary)  ASCII (hex)
A          1000001         41
S          1010011         53
C          1000011         43
I          1001001         49
I          1001001         49
c          1100011         63
o          1101111         6F
d          1100100         64
e          1100101         65
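Since Python exposes character codes through ord(), the table above can be reproduced with a short sketch (the function name is illustrative):

```python
def to_ascii_bits(text):
    """Encode each character of a string as a 7-bit ASCII codeword string."""
    return [format(ord(ch), '07b') for ch in text]
```

For example, to_ascii_bits("AS") yields ['1000001', '1010011'], matching the first two rows of the table.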

Gray Codes A Gray code is defined as a code in which consecutive codewords differ in only one bit. The distance between two code words is defined as the number of bits in which the two words differ; consecutive codewords in a Gray code therefore have a distance of one. Let's define a Gray code for the decimal numbers 0 through 15.

Gray Code Example
Decimal  Binary  Gray    Decimal  Binary  Gray
0        0000    0000    8        1000    1100
1        0001    0001    9        1001    1101
2        0010    0011    10       1010    1111
3        0011    0010    11       1011    1110
4        0100    0110    12       1100    1010
5        0101    0111    13       1101    1011
6        0110    0101    14       1110    1001
7        0111    0100    15       1111    1000
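The standard reflected Gray code shown above has a well-known closed form: XOR the number with itself shifted right by one. A minimal sketch:

```python
def gray(n):
    """Reflected binary Gray code of n: n XOR (n >> 1)."""
    return n ^ (n >> 1)
```

For example, gray(3) is 2 (binary 0010) and gray(8) is 12 (binary 1100); adjacent values always differ in exactly one bit.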

Codes and Weights An error in binary data is defined as an incorrect value in one or more bits. A single error is an incorrect value in one bit; a multiple error is an incorrect value in more than one bit. Errors may be caused by hardware failure, external interference (noise), or other unwanted events. Certain types of codes allow the detection, and sometimes the correction, of errors.

Terminology C will refer to a code. I and J will refer to n-bit binary codewords. The weight of I, w(I), will be defined as the number of bits of I equal to 1. The distance between I and J, d(I, J) is equal to the number of bit positions in which I and J differ.

Terminology Example
I = (0111100)
J = (0101010)
w(I) = 4
w(J) = 3
d(I, J) = 3
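Weight and distance, as defined on the previous slide, can be computed directly on codeword strings (a minimal sketch; function names are illustrative):

```python
def weight(word):
    """w(I): the number of bits of the codeword equal to 1."""
    return word.count('1')

def distance(a, b):
    """d(I, J): the number of bit positions in which two equal-length
    codewords differ."""
    return sum(x != y for x, y in zip(a, b))
```

With I = 0111100 and J = 0101010, these give weight 4, weight 3, and distance 3, as in the example.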

Error Detection and Correction Codes If the distance between any two code words in C is at least d_min, the code is said to have minimum distance d_min. For a given d_min, at least d_min errors are needed to transform one valid code word into another. If there are fewer than d_min errors, a detectable noncode word results. If this noncode word is closer to one valid codeword than to any other, the original code word can be deduced and the error corrected.

Error Detection and Correction Codes In general, a code provides t-error correction plus detection of s additional errors if and only if the following inequality is satisfied:
2t + s + 1 <= d_min

Error Detection and Correction Examining the inequality 2t + s + 1 <= d_min shows:
A single-error detection code (s = 1, t = 0) requires a minimum distance of 2.
A single-error correction code (s = 0, t = 1) requires a minimum distance of 3.
A single-error correction and double-error detection code (s = t = 1) requires a minimum distance of 4.

Parity Codes Parity codes are formed from a code C by concatenating (operator |) a parity bit P to each code word of C. In an odd-parity code, the parity bit is specified to be either 0 or 1 as necessary for w(P|C) to be odd. In an even-parity code, the parity bit is specified to be either 0 or 1 as necessary for w(P|C) to be even. (figure: codeword layout showing the information bits followed by the parity bit P)

Parity Code Example Concatenate a parity bit to the ASCII codes for the characters 0, X, and = to produce both odd-parity and even-parity codes (parity bit shown last).
Character  ASCII    Odd-Parity ASCII  Even-Parity ASCII
0          0110000  01100001          01100000
X          1011000  10110000          10110001
=          0111101  01111010          01111011
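The parity-bit computation can be sketched as follows, with the parity bit appended after the information bits as in the table above (the function name is illustrative):

```python
def add_parity(bits, odd=True):
    """Concatenate a parity bit to an information word so that the total
    number of 1s in the codeword is odd (odd=True) or even (odd=False)."""
    ones = bits.count('1')
    needs_one = (ones % 2 == 0) if odd else (ones % 2 == 1)
    return bits + ('1' if needs_one else '0')
```

For example, the ASCII code for '0' is 0110000 (two 1s), so the odd-parity codeword appends a 1 and the even-parity codeword appends a 0.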

Parity Code Effectiveness Error detection on a parity code is easily accomplished by checking to see if a codeword has the correct parity. For example, if the parity of an odd-parity codeword is actually even, an error has occurred in this codeword. Parity codes are minimum-distance-2 codes and thus can detect single errors. Unfortunately, errors in an even number of bits will not change the parity and are therefore not detectable using a parity code.

Two-out-of-Five Codes A two-out-of-five code is an error detection code having exactly two bits equal to 1 and three bits equal to 0. Error detection is accomplished by counting the number of ones in a code word. An error has occurred if this number is not equal to two. These codes permit the detection of single errors and multiple errors in adjacent bits.

Two-out-of-Five Code Example As an example, here is a two-out-of-five code for the decimal digits (one common assignment, based on the weights 7, 4, 2, 1, 0, with the digit 0 as a special case):
Digit  Codeword    Digit  Codeword
0      11000       5      01010
1      00011       6      01100
2      00101       7      10001
3      00110       8      10010
4      01001       9      10100
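The error check described above, counting the ones in a received word, is a one-liner (the function name is illustrative):

```python
def is_valid_two_out_of_five(code):
    """A received 5-bit word is a valid codeword iff exactly two bits are 1."""
    return len(code) == 5 and code.count('1') == 2
```

Any single error changes the count of ones to one or three, so it is always caught; an error in two bits can escape detection only if it flips one 1 and one 0.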

Hamming Codes A more complex error detection and correction system of codes was introduced by Richard Hamming in 1950. Hamming codes are similar to an extension of parity codes in that multiple parity, or check, bits are used. Each check bit is defined over a subset of the information bits in a codeword. These subsets overlap such that each information bit is in at least two subsets.

Error Detection and Correction with Hamming Codes The error detection and correction properties of a Hamming code are determined by the number of check bits used and how the check bits are defined over the information bits. The minimum distance d_min is equal to the weight of the minimum-weight nonzero code word (the number of ones in the codeword with the fewest ones). Hamming codes are complex in their formation and analysis. We will look at two different codes with different properties, but not discuss them in much depth.

Hamming Code Example #1 This code has d_min = 3, so it can provide single error correction. (table: information words and their corresponding Hamming codewords)

Hamming Code Example #1 Suppose a single error occurs in one bit of a transmitted codeword. Because d_min = 3, the received word has distance 1 from the codeword that was sent and distance at least 2 from every other codeword. Detecting the invalid word is therefore equivalent to correcting the error, since the only codeword that could have produced it by a single error is the one that was sent. So, this code offers single error correction.

Hamming Code Example #2 This code has d_min = 4, so it can provide single error correction and double error detection. (table: information words and their corresponding Hamming codewords)

Summary You should now understand:
Fixed-point and floating-point number representation
BCD and ASCII character codes
Gray codes and excess-K codes
Simple error detection and error correction codes, including parity codes, two-out-of-five codes, and Hamming codes