1 STATISTIC & INFORMATION THEORY (CSNB134) MODULE 8 INTRODUCTION TO INFORMATION THEORY

2 Recaps.. In the Course Overview, it was highlighted that this course is divided into two main parts: (1) understanding fundamental statistics, and (2) understanding basic information theory. The first seven modules (Module 1 through Module 7) covered the first part, fundamental statistics. The remaining modules are about understanding basic information theory. In this module, students are introduced to the fundamentals of Information Theory.

3 Model of Information Theory The most basic model of Information Theory is: information is generated at the source, sent through a channel, and consumed at the drain. Often information needs to go through several processes of coding before it can be sent through the channel (i.e. information is coded through several processes before it is transmitted through the channel). The ‘coded’ information then needs to be decoded at the receiving end in order to convert it back to its original form, so that the receiver can read or view its content.

4 Model of Information Theory (cont.) Thus, the previous basic model of Information Theory can be expanded further as follows: source coding is the coding mechanism that transforms information into its appropriate format of representation (e.g. as a text document, JPEG image, MP3 audio, etc.), while channel coding is the coding mechanism that transforms the information into a format appropriate for transportation, one that suits the capacity of the channel.

5 Basic Form of Digital Information In the digital world, information is represented in binary digits, known as bits. A bit can either be a ‘1’ or a ‘0’, i.e. base 2. Since information is often represented by a huge number of bits, we often quote sizes in terms of bytes, where one byte is 8 bits, and in larger units such as kilobytes and megabytes.
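For illustration, here is a minimal Python sketch of the bits/bytes relationship; it simply applies the 8-bits-per-byte convention mentioned above (the example size is made up):

    # Minimal sketch: one byte is 8 bits, so converting between the two
    # is a multiplication or division by 8.
    BITS_PER_BYTE = 8

    n_bytes = 1500                      # e.g. the size of a small file (made-up value)
    n_bits = n_bytes * BITS_PER_BYTE
    print(n_bits)                       # 12000 bits
    print(n_bits // BITS_PER_BYTE)      # back to 1500 bytes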

6 Channel Capacity We mentioned previously that information is generated at the source, sent through the channel, and consumed in the drain. The analogy of channel capacity is similar to a pipe channelling water to fill up a basin: the time it takes to fill the basin depends very much on the diameter of the pipe. Similarly, a channel with more capacity can transmit more information from the source to the drain within a specified period of time.

7 Channel Capacity (cont.) If a channel has the capacity to transmit N bits/s, and M bits have to be sent, it takes M/N seconds for the message to get through. If the message is continuous ('streaming audio' or 'streaming video'), the channel capacity must be at least as large as the data rate (bits/s). How long does it take to send an e-mail of 12.4 kByte across a channel of 56 kbit/s? (12.4 * 8 kbit) / (56 kbit/s) ≈ 1.8 s. When a channel can take 8 MByte/s, how many audio signals of 128 kbit/s can it carry? (8 * 1024 * 8 kbit/s) / (128 kbit/s) = 512.
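The two worked examples can be checked with a short Python sketch that follows the slide's own arithmetic (1 kByte taken as 8 kbit, and 1 MByte/s as 1024 * 8 kbit/s):

    # Check of the two channel-capacity examples above.
    email_kbit = 12.4 * 8                   # 12.4 kByte e-mail, in kbit
    channel_kbit_s = 56                     # channel capacity, kbit/s
    print(email_kbit / channel_kbit_s)      # ~1.77 s, i.e. about 1.8 s

    channel_kbit_s = 8 * 1024 * 8           # 8 MByte/s expressed in kbit/s
    audio_kbit_s = 128                      # one audio stream, kbit/s
    print(channel_kbit_s // audio_kbit_s)   # 512 streams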

8 Code Representation Previously, we learned that the basic form of digital information is in bits. If we have 2 bits, we can derive the following table, which implies that we can represent 4 symbols with 2 bits:

Symbol  Bit1  Bit0
S0      0     0
S1      0     1
S2      1     0
S3      1     1

9 Code Representation (cont.) How many symbols can we represent with 3 bits? We can represent 2^n symbols with n bits, so we can represent 8 symbols with 3 bits:

Symbol  Bit2  Bit1  Bit0
S0      0     0     0
S1      0     0     1
S2      0     1     0
S3      0     1     1
S4      1     0     0
S5      1     0     1
S6      1     1     0
S7      1     1     1

10 Code Representation (cont.) How many bits do we need to transmit 128 different symbols? How many bits are needed to represent 100 symbols? We can represent 2^n symbols with n bits. Since 2^7 = 128, n = 7 bits are enough to transmit either 128 or 100 symbols (2^6 = 64 is not enough for 100). The answer to the last question shows that a given number of bits can sometimes represent more symbols than we actually need.
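A minimal Python sketch of this rule; the helper name bits_needed is my own, and it simply computes the ceiling of log2 of the symbol count:

    # n bits can label 2**n symbols, so the number of bits needed for a
    # given number of symbols is ceil(log2(symbols)).
    import math

    def bits_needed(symbols):
        return math.ceil(math.log2(symbols))

    print(2 ** 3)            # 3 bits give 8 symbols
    print(bits_needed(128))  # 7
    print(bits_needed(100))  # 7 -- the same 7 bits, with 28 codes left unused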

11 Code Representation (cont.) Next we shall learn about several types of binary code representation which include: (i) Binary Code (ii) BCD Code (iii) Hex Code (iv) ASCII Code

12 Binary Code Binary code is a straightforward transformation of a number into its equivalent binary format. For example, 4 and 7 are represented as ‘100’ and ‘111’ in binary code format. What is the binary representation of 42819?

Power   2^15 2^14 2^13 2^12 2^11 2^10 2^9 2^8 2^7 2^6 2^5 2^4 2^3 2^2 2^1 2^0
Binary    1    0    1    0    0    1   1   1   0   1   0   0   0   0   1   1

42819 = 2^15 + 2^13 + 2^10 + 2^9 + 2^8 + 2^6 + 2^1 + 2^0
      = 32768 + 8192 + 1024 + 512 + 256 + 64 + 2 + 1
      = 42819
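The 42819 example can be verified with Python's built-in binary formatting; this is only a quick sketch, not anything prescribed by the slides:

    # Convert 42819 to plain binary code and back.
    value = 42819
    binary = format(value, '016b')   # 16-bit binary string
    print(binary)                    # 1010011101000011
    print(int(binary, 2))            # 42819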

13 BCD (Binary Coded Decimal) Code In BCD code, each digit of a number is independently converted into its own 4-bit binary representation (4 bits are needed to represent the digits 0 to 9). For example, the BCD representation of the integer 42819 is:

Integer  4    2    8    1    9
BCD      0100 0010 1000 0001 1001

BCD is sub-optimal: it requires 20 bits to represent 42819, compared to 16 bits of binary code!
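A minimal sketch of a BCD encoder in Python; the helper name bcd_encode is my own, not from the slides:

    # Each decimal digit maps independently to 4 bits.
    def bcd_encode(number):
        return ' '.join(format(int(digit), '04b') for digit in str(number))

    print(bcd_encode(42819))   # 0100 0010 1000 0001 1001 -- 20 bits vs 16 in binary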

14 BCD (Binary Coded Decimal) Code (cont.) Whichever number is represented in BCD, only 10 out of the 16 possible 4-bit patterns are used. This results in 37.5% ‘wastage’ (i.e. 6/16 * 100%):

Symbol   Bit3  Bit2  Bit1  Bit0
Num 0    0     0     0     0
Num 1    0     0     0     1
Num 2    0     0     1     0
Num 3    0     0     1     1
Num 4    0     1     0     0
Num 5    0     1     0     1
Num 6    0     1     1     0
Num 7    0     1     1     1
Num 8    1     0     0     0
Num 9    1     0     0     1
Unused   1     0     1     0
Unused   1     0     1     1
Unused   1     1     0     0
Unused   1     1     0     1
Unused   1     1     1     0
Unused   1     1     1     1

15 Hexadecimal Code The main limitation of BCD is that we are converting decimal numbers, which consist of only 10 symbols (i.e. 0 to 9, also known as base 10), into 4-bit groups, wasting 6 of the 16 possible patterns. This limitation is overcome by hexadecimal code, which is based on hexadecimal numbers consisting of 16 symbols (i.e. 0 to 15, also known as base 16). Thus, all 16 patterns of 4 bits are fully utilized!

16 Hexadecimal Code (cont.)

Decimal  Hexadecimal  Bit3  Bit2  Bit1  Bit0
0        0            0     0     0     0
1        1            0     0     0     1
2        2            0     0     1     0
3        3            0     0     1     1
4        4            0     1     0     0
5        5            0     1     0     1
6        6            0     1     1     0
7        7            0     1     1     1
8        8            1     0     0     0
9        9            1     0     0     1
10       A            1     0     1     0
11       B            1     0     1     1
12       C            1     1     0     0
13       D            1     1     0     1
14       E            1     1     1     0
15       F            1     1     1     1

Previously we have learned that there are 8 bits in a byte. Thus, we can represent 2 hexadecimal digits in a byte (i.e. 1 hexadecimal digit requires 4 bits). For example: 11111111 = FF in hex = (15 * 16^1) + (15 * 16^0) = 255 in decimal.
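The FF example can be checked with a short Python sketch:

    # One byte is two hex digits, and all 16 patterns of 4 bits are used.
    value = 0b11111111
    print(format(value, 'X'))          # FF
    print(15 * 16**1 + 15 * 16**0)     # 255
    print(value)                       # 255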

17 ASCII Code ASCII (American Standard Code for Information Interchange) is a character encoding based on the American alphabet. Here, 7 bits out of a byte are considered sufficient to represent 95 printable symbols, in addition to another 33 control characters (e.g. DELETE, etc.). The remaining (eighth) bit is used as a parity bit, which provides a single-bit error detection capability.
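A minimal sketch of adding a parity bit to a 7-bit ASCII code; even parity is assumed here purely for illustration, since the slide does not say which parity scheme is used, and the helper name with_even_parity is my own:

    # 7-bit ASCII plus an even-parity bit in the eighth (most significant) position.
    def with_even_parity(ch):
        code = ord(ch)                      # 7-bit ASCII code of the character
        parity = bin(code).count('1') % 2   # 1 if the count of 1-bits is odd
        return (parity << 7) | code         # parity bit placed above the 7 data bits

    for ch in 'Hi':
        print(ch, format(with_even_parity(ch), '08b'))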

18 ASCII Code (cont.)

19 Comparisons of Code Representation The decimal number 54276 can be represented as:

Code             Equivalent Representation
Decimal          54276
BCD              0101 0100 0010 0111 0110
Binary           1101010000000100
Hex              D404
Hex (Binary)     1101010000000100
ASCII (Hex)      35 34 32 37 36
ASCII (Dec)      53 52 50 55 54
ASCII (Binary)   0110101 0110100 0110010 0110111 0110110

Note: ASCII needs more data for transmission, but is valuable because it can represent many more symbols!
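The comparison table can be reproduced with a short Python sketch (the helper name bcd is my own, not from the slides):

    # Print the different representations of 54276 shown above.
    def bcd(n):
        return ' '.join(format(int(d), '04b') for d in str(n))

    n = 54276
    print('Decimal        :', n)
    print('BCD            :', bcd(n))
    print('Binary         :', format(n, '016b'))
    print('Hex            :', format(n, 'X'))
    print('ASCII (Hex)    :', ' '.join(format(ord(d), 'X') for d in str(n)))
    print('ASCII (Dec)    :', ' '.join(str(ord(d)) for d in str(n)))
    print('ASCII (Binary) :', ' '.join(format(ord(d), '07b') for d in str(n)))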

20 STATISTIC & INFORMATION THEORY (CSNB134) INTRODUCTION TO INFORMATION THEORY --END--

