Introduction to Information Theory: Entropy
Subject: Information Theory and Coding. By Professor Dr. Ayad Ghany Ismaeel, Dept. of Computer Technical Eng. / Computer Communication Networks.
Topics
- Model of a Digital Communication System
- Communication Channel
- Shannon's Definition of Communication
- Shannon wants to maximize the speed of the ADSL at your home
- "Information Theory" or "The Shannon Theory"
- In Terms of Information Theory Terminology
- Measurement of Information
- Entropy (Discrete and Continuous)
- References
Model of a Digital Communication System
Claude Shannon, the father of digital communication. The model of a digital communication system:
Information Source (message, e.g. English symbols) → Encoder (coding, e.g. English to a 0/1 sequence) → Communication Channel (can have noise or distortion) → Decoder (decoding, e.g. 0/1 sequence back to English) → Destination.
Communication Channel Includes
(image examples shown on the original slides)
Shannon’s Definition of Communication
"The fundamental problem of communication is that of reproducing at one point either exactly or approximately a message selected at another point."
"Frequently the messages have meaning"
"... [which is] irrelevant to the engineering problem."
Shannon Wants to…
Shannon wanted to find a way of transmitting data "reliably" through the channel at the "maximal" possible rate: Information Source → Coding → Communication Channel → Decoding → Destination. For example, maximizing the speed of the ADSL line at your home.
And he thought about this problem for a while…
He later found a solution and published it in his 1948 paper.
In his 1948 paper he built a rich theory for the problem of reliable communication, now called "Information Theory" or "The Shannon Theory" in his honor.
Shannon's Vision: Data → Source Encoding → Channel Encoding → Channel → Channel Decoding → Source Decoding → User.
Example: Disk Storage: Data → Zip → Add CRC → Channel → Verify CRC → Unzip → User.
In terms of Information Theory Terminology
- Zip = Source Encoding = Data Compression
- Unzip = Source Decoding = Data Decompression
- Add CRC = Channel Encoding = Error Protection (CRC, cyclic redundancy check; a CRC error indicates that some data in your Zip file (.zip or .zipx) is damaged)
- Verify CRC = Channel Decoding = Error Correction
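As an illustration of this mapping, here is a minimal sketch in Python using the standard-library zlib module; the toy message and variable names are my own, and note that a plain CRC only detects errors rather than correcting them.

```python
# A minimal sketch of the "Zip + CRC" pipeline: compression as source encoding,
# a CRC-32 checksum as (detection-only) channel encoding.
import zlib

message = b"information theory, information theory, information theory"

# Source encoding = data compression (the "Zip" step)
compressed = zlib.compress(message)

# Channel encoding = error protection: attach a CRC-32 checksum
crc = zlib.crc32(compressed)

# ... the pair (compressed, crc) is sent over the channel ...

# Channel decoding = error detection: verify the CRC on the received data
received = compressed  # here we assume the channel did not corrupt the data
assert zlib.crc32(received) == crc, "CRC error: received data is damaged"

# Source decoding = data decompression (the "Unzip" step)
recovered = zlib.decompress(received)
assert recovered == message
```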
Shannon Theory
The original 1948 Shannon theory contains:
- Measurement of Information
- Source Coding Theory
- Channel Coding Theory
Measurement of Information
Shannon's first question was: "How do we measure information in terms of bits?"
All events are probabilistic!
Using probability theory, Shannon showed that there is only one way to measure information in terms of a number of bits, given by the entropy function

H = - Σ pi log2(pi)

which is the expected value (average) of the information contained in each message.
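A minimal sketch of this entropy function in Python; the helper name entropy is my own choice, not from the slides.

```python
# Shannon entropy H = -sum(p_i * log2(p_i)) of a discrete distribution, in bits.
from math import log2

def entropy(probabilities):
    """Expected information (in bits) of a distribution given as a list of probabilities."""
    return -sum(p * log2(p) for p in probabilities if p > 0)

print(entropy([0.5, 0.5]))  # a fair coin carries exactly 1 bit per toss
```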
For example, tossing a die: the outcomes are 1, 2, 3, 4, 5, 6, each occurring with probability 1/6. The information provided by tossing the die is

H = - Σ (1/6) log2(1/6) = log2(6) ≈ 2.585 bits
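As a standalone numerical check of this value (my own snippet, not part of the slides):

```python
# The entropy of a fair six-sided die: six outcomes, each with probability 1/6.
from math import log2

print(-sum((1 / 6) * log2(1 / 6) for _ in range(6)))  # ≈ 2.585 bits
print(log2(6))                                        # the same value, log2(6)
```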
Wait! That seems like nonsense! The number 2.585 bits is not an integer!
What does that mean?
For example, consider the experiment of tossing a fair coin. The probability of getting tails equals the probability of getting heads, and therefore (using the logarithm to base 2):

p1 = 0.5, h1 = i(p1) = -log2(0.5) = 1
p2 = 0.5, h2 = i(p2) = -log2(0.5) = 1

Thus the average amount of information, the entropy, is:

H = p1·h1 + p2·h2 = 0.5(1) + 0.5(1) = 1 bit

In general, the average amount of information is computed from the relationship:

H = - Σ pi log2(pi)

Shannon proposed calling the unit a "bit" when the logarithm to base 2 is used to compute the entropy; a bit here means the amount of information obtained from one such equally likely binary outcome.
Example
Suppose you have two independent events:

p1 = 0.5, the chance of rain tomorrow
p2 = 0.001, the chance that dinner is on my account

Does the logarithm capture this idea of an amount of information? Yes, fortunately. It satisfies:

log(1) = 0
log(0) = -infinity

Thus it can be used as a measure of the value of information if multiplied by (-1), which gives zero when the event contains no surprise (an outcome we already know for certain):

i(p) = -log(p)

To see how it behaves in the previous example, we will temporarily use the natural logarithm:

h1 = i(p1) = -ln(0.5) = 0.693
h2 = i(p2) = -ln(0.001) = 6.9
h = h1 + h2 = 7.593
h' = i(p1·p2) = -ln(p1·p2) = -ln(p1) - ln(p2) = h1 + h2 = h

That is, the information of two independent events adds up.
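A small sketch of this self-information measure and its additivity for independent events; the function name and the exact (unrounded) figures are my own.

```python
# Self-information i(p) = -log(p), using the natural logarithm as in the slide.
from math import log

def self_information(p):
    """Information (surprise) of an event with probability p, in nats."""
    return -log(p)

p1, p2 = 0.5, 0.001
h1 = self_information(p1)            # ≈ 0.693
h2 = self_information(p2)            # ≈ 6.908 (the slide rounds this to 6.9)
h_joint = self_information(p1 * p2)  # information of both independent events together

# For independent events the information adds: i(p1 * p2) = i(p1) + i(p2)
print(h1 + h2)   # ≈ 7.60
print(h_joint)   # ≈ 7.60, the same value
```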
Continuous Entropy
For a continuous random variable X with probability density f(x), the continuous (differential) entropy is defined as

h(X) = - ∫ f(x) log2 f(x) dx
Continuous Entropy (Cont.)
An Example: The Uniform Distribution
Let f be the uniform distribution on [a, b]; that is, f(x) = 1/(b - a) for x in [a, b] and f(x) = 0 otherwise. Then

h(X) = - ∫ f(x) log2 f(x) dx = log2(b - a)

Informally, the continuous entropy of the uniform distribution is the log of the width of the interval. Note that h(X) can be negative! For example, if X is uniformly distributed on [0, 1/2], then h(X) = log2(1/2 - 0) = log2(1/2) = -1. So, unlike discrete entropy, continuous entropy can be negative.
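A small sketch of this formula; the helper name uniform_differential_entropy is my own, not from the slides.

```python
# Differential entropy of a uniform distribution on [a, b]: h(X) = log2(b - a).
from math import log2

def uniform_differential_entropy(a, b):
    """Continuous (differential) entropy, in bits, of X ~ Uniform[a, b]."""
    return log2(b - a)

print(uniform_differential_entropy(0, 8))    # 3.0 bits (wide interval)
print(uniform_differential_entropy(0, 1))    # 0.0 bits
print(uniform_differential_entropy(0, 0.5))  # -1.0 bits: continuous entropy can be negative
```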
References
Richard W. Hamming, "Coding and Information Theory", 2nd Edition, Prentice-Hall, New Jersey, USA, 1986.
Mauro Barni and Benedetta Tondi, "Lecture Notes on Information Theory and Coding", Università degli Studi di Siena, Facoltà di Ingegneria, 2012.