Subject Name: Information Theory and Coding    Subject Code: 10EC55

Subject Name: Information Theory and Coding   Subject Code: 10EC55   Prepared By: Shima Ramesh, Pavana H.   Department: ECE   Date: 10/11/2014

UNIT 3: Fundamental Limits on Performance

Topics to be covered: Source coding theorem, Huffman coding, Discrete memoryless channels, Mutual information, Channel capacity.

Discrete memoryless channel: A communication network may be described by the joint probability matrix (JPM) of its input and output symbols. The JPM of such a system can be written as shown below.
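In standard form, with input symbols x1, ..., xm and output symbols y1, ..., yn (the alphabet sizes m and n are placeholders), the JPM collects the joint probabilities p(xk, yj):

\[
P(X,Y) = \begin{bmatrix}
p(x_1,y_1) & p(x_1,y_2) & \cdots & p(x_1,y_n)\\
p(x_2,y_1) & p(x_2,y_2) & \cdots & p(x_2,y_n)\\
\vdots & \vdots & & \vdots\\
p(x_m,y_1) & p(x_m,y_2) & \cdots & p(x_m,y_n)
\end{bmatrix}
\]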

The sum of all the entries of the JPM is unity: Σ_k Σ_j p(xk, yj) = 1. The sum of the entries in the k-th row gives the input symbol probability, p(xk) = Σ_j p(xk, yj), and the sum of the entries in the j-th column gives the output symbol probability, p(yj) = Σ_k p(xk, yj).

Source coding theorem: Huffman's minimum redundancy code. Huffman suggested a simple procedure that guarantees an optimal code. It consists of a step-by-step reduction of the original source followed by code construction, starting with the final reduced source and working backward to the original source. The reduction takes α steps.

The procedure is as follows (a code sketch is given after the list):
1. List the source symbols in decreasing order of probability.
2. Check whether q = r + α(r - 1) is satisfied and find the integer α, where q is the number of source symbols and r is the code alphabet size; add dummy symbols of zero probability if needed.
3. Club the last r symbols into one composite symbol whose probability of occurrence equals the sum of the probabilities of those r symbols.
4. Repeat steps 1 and 3 until exactly r symbols are left.
5. Assign codes freely to the last r composite symbols and work backward to the original source to arrive at the optimum codes.
6. Discard the codes of the dummy symbols.
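As an illustration, here is a minimal sketch of binary (r = 2) Huffman construction; the symbol names and probabilities are hypothetical, and the general r-ary case with dummy symbols is not handled.

```python
# Minimal sketch of binary (r = 2) Huffman code construction using a priority queue.
import heapq
from itertools import count

def huffman_code(probabilities):
    """Return a dict {symbol: codeword} for a binary Huffman code."""
    tiebreak = count()  # keeps heap comparisons well-defined when probabilities are equal
    # Each heap entry: (probability, tiebreak, {symbol: partial codeword})
    heap = [(p, next(tiebreak), {sym: ""}) for sym, p in probabilities.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, _, codes1 = heapq.heappop(heap)  # two least probable nodes
        p2, _, codes2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in codes1.items()}
        merged.update({s: "1" + c for s, c in codes2.items()})
        heapq.heappush(heap, (p1 + p2, next(tiebreak), merged))
    return heap[0][2]

if __name__ == "__main__":
    source = {"s1": 0.4, "s2": 0.2, "s3": 0.2, "s4": 0.1, "s5": 0.1}  # illustrative source
    print(huffman_code(source))
```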

For simple communication networks there are five probability schemes of interest: P(X), P(Y), P(X,Y), P(X/Y) and P(Y/X). Accordingly there are five entropy functions:
H(X): average information per character or symbol transmitted by the source, or the entropy of the source.
H(Y): average information received per character at the receiver, or the entropy of the receiver.
H(X,Y): average information per pair of transmitted and received characters.
H(X/Y): average uncertainty remaining about the transmitted character after the received character is observed.
H(Y/X): average uncertainty about the received character given the transmitted character (a measure of the channel noise).
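In the notation above, these five entropies are given by the standard definitions (logarithms to base 2):

\[
H(X) = -\sum_{k} p(x_k)\log_2 p(x_k), \qquad H(Y) = -\sum_{j} p(y_j)\log_2 p(y_j),
\]
\[
H(X,Y) = -\sum_{k}\sum_{j} p(x_k, y_j)\log_2 p(x_k, y_j),
\]
\[
H(X/Y) = -\sum_{k}\sum_{j} p(x_k, y_j)\log_2 p(x_k/y_j), \qquad H(Y/X) = -\sum_{k}\sum_{j} p(x_k, y_j)\log_2 p(y_j/x_k).
\]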

Joint and conditional entropies: Since p(xk) = Σ_j p(xk, yj) and p(yj) = Σ_k p(xk, yj), H(X) and H(Y) can be written in terms of the JPM entries as H(X) = -Σ_k [Σ_j p(xk, yj)] log2 [Σ_j p(xk, yj)], and similarly H(Y) = -Σ_j [Σ_k p(xk, yj)] log2 [Σ_k p(xk, yj)].

The average conditional entropy H(X/Y) is called the equivocation. It specifies the average amount of information needed to specify an input character when we are allowed to observe the output produced by that input.

The relations between the entropies are given as H(X,Y) = H(X) + H(Y/X), and similarly H(X,Y) = H(Y) + H(X/Y).

Mutual information or transinformation: The mutual information between the symbols xk and yj can be written as I(xk; yj) = log2 [p(xk/yj) / p(xk)]. It is zero if and only if the input and output symbols are statistically independent of each other.

Mutual information is symmetrical with respect to its arguments: I(xk; yj) = I(yj; xk). The self-information of a symbol xk is I(xk) = log2 [1/p(xk)]. The entropy relations follow.

Entropy relations (Venn diagram view): The entropy of X is represented by the circle on the left and the entropy of Y by the circle on the right. The overlap between the two circles is the mutual information, so the remaining portions of H(X) and H(Y) represent the respective equivocations H(X/Y) and H(Y/X).

The joint entropy H(X,Y) is the sum of H(X) and H(Y), except that the overlap would otherwise be counted twice. Also observe the relations below.
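Written out, the relations indicated by the diagram are:

\[
I(X;Y) = H(X) - H(X/Y) = H(Y) - H(Y/X) = H(X) + H(Y) - H(X,Y),
\]
\[
H(X,Y) = H(X) + H(Y) - I(X;Y).
\]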

Shannon's theorem on channel capacity (channel coding theorem): Coding is a technique by which a communication system can be made to transmit information with an arbitrarily small probability of error, provided the information rate R is less than or equal to the channel capacity C.

The Shannon theorem is split into two parts.
Positive statement: Given a source of M equally likely messages with M >> 1, which has an information rate R, and a channel of capacity C, if R <= C then there exists a coding technique such that the output of the source may be transmitted with a probability of error of receiving the message that can be made arbitrarily small.
Negative statement: Given a source of M equally likely messages with M >> 1, which has an information rate R, and a channel of capacity C, if R > C then the probability of error of receiving the message is close to unity for every set of M transmitted symbols.

Shannon defines the channel capacity of a communication channel as the maximum value of the transinformation: C = max over all input probability distributions P(X) of I(X;Y).

A redundant source is one that produces dependent symbols. These redundant symbols do not convey any information, but they are required to facilitate error detection and correction. The redundancy of a sequence of symbols can be measured by noting the amount by which the entropy has been reduced. The difference H(X) - H(Y/X) is the net reduction in entropy and is called the absolute redundancy. Dividing by H(X) gives the relative redundancy, or simply the redundancy, E = (absolute redundancy)/H(X). Here R denotes the actual rate of transinformation and C the channel capacity.

The efficiency of the channel is given by η = R/C.

Muroga's theorem: For a channel whose noise characteristic P(Y/X) is square and non-singular, the channel capacity is given by
C = log2 [ 2^Q1 + 2^Q2 + ... + 2^Qn ],
where the Qi are the solutions of the matrix equations
Σ_j p(yj/xi) Qj = -hi,   i = 1, 2, ..., n,
and hi = -Σ_j p(yj/xi) log2 p(yj/xi) are the row entropies of P(Y/X).
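A small numerical sketch of this computation, assuming a square, non-singular channel matrix; the example (a binary symmetric channel with crossover probability 0.1) is illustrative only.

```python
# Sketch of Muroga's method for a square, non-singular channel matrix P(Y/X).
# Rows are inputs x_i, columns are outputs y_j.
import numpy as np

def muroga_capacity(P):
    """Channel capacity in bits/symbol via Muroga's method."""
    P = np.asarray(P, dtype=float)
    # Row entropies h_i = -sum_j p_ij * log2(p_ij), with 0*log(0) treated as 0
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = np.where(P > 0, P * np.log2(P), 0.0)
    h = -terms.sum(axis=1)
    # Solve P Q = -h for the auxiliary variables Q_j
    Q = np.linalg.solve(P, -h)
    return np.log2(np.sum(2.0 ** Q))

if __name__ == "__main__":
    p = 0.1  # crossover probability of a binary symmetric channel
    bsc = [[1 - p, p], [p, 1 - p]]
    # For a BSC this should match 1 + p*log2(p) + (1-p)*log2(1-p)
    print(round(muroga_capacity(bsc), 4))
```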

Symmetric channels: A channel is said to be symmetric or uniform if the second and subsequent rows of the channel matrix are permutations of the first row; that is, the elements of the second and subsequent rows are exactly the same as those of the first row except for their locations.

An important property of P(Y/X) is that the elements in any row sum to 1. The conditional entropy H(Y/X) of such a channel is given by H(Y/X) = -Σ_j p(yj/xk) log2 p(yj/xk), which is the same for every row and hence independent of the input distribution.

The mutual information of a symmetric channel is given by I(X;Y) = H(Y) - H(Y/X). H(Y) is maximum, equal to log2 n, if and only if all the n received symbols are equally probable. Thus for a symmetric channel we have C = log2 n - H(Y/X).
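A short sketch of this formula; the channel matrix below is hypothetical, and any row can be used since all rows have the same entropy.

```python
# Capacity of a symmetric (uniform) channel: C = log2(n) - H(row),
# where H(row) is the entropy of any row of P(Y/X).
import numpy as np

def symmetric_channel_capacity(P):
    """Capacity in bits/symbol of a symmetric channel with matrix P(Y/X)."""
    P = np.asarray(P, dtype=float)
    n = P.shape[1]                      # number of output symbols
    row = P[0]                          # all rows are permutations of this one
    h_row = -np.sum(row[row > 0] * np.log2(row[row > 0]))
    return np.log2(n) - h_row

if __name__ == "__main__":
    # Binary symmetric channel with crossover probability 0.2:
    print(round(symmetric_channel_capacity([[0.8, 0.2], [0.2, 0.8]]), 4))  # about 0.2781
```

For a binary symmetric channel this agrees with the Muroga computation sketched earlier.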

For the channel to be symmetric, the conditional probability matrix (CPM) P(X/Y) should also have the property that its second and subsequent columns are permutations of the first column.

UNIT 4: Channel Coding Theorem

Topics to be covered: Channel coding theorem, Differential entropy, Mutual information for continuous ensembles, Channel capacity theorem.

Differential entropy: The entropy H(X) of a continuous source is defined as H(X) = -∫ f(x) log2 f(x) dx, where f(x) is the probability density function (p.d.f.) of the continuous random variable X. Consider an example: suppose X is a uniform random variable over the interval (0, 2), so that f(x) = 1/2 for 0 < x < 2 and 0 elsewhere. Using the definition of entropy, the worked computation is shown below.
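Carrying out the integral for this uniform example:

\[
H(X) = -\int_{0}^{2} \frac{1}{2}\log_2\frac{1}{2}\,dx = \frac{1}{2}\cdot 1\cdot 2 = 1\ \text{bit}.
\]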

Maximization of entropy: In practical systems, sources (for example, radio transmitters) are constrained by either average power or peak power limitations. The objective then is to maximize the entropy under such restrictions. The general constraints include, among others, average power limitation with a unidirectional distribution.

Mutual information of a continuous channel: Consider a channel with input X, output Y and joint p.d.f. f(x, y). The mutual information is I(X;Y) = ∫∫ f(x, y) log2 [ f(x, y) / (f(x) f(y)) ] dx dy, and the channel capacity is given by C = max over f(x) of I(X;Y).

Amount of mutual information: Since the channel capacity is defined as the maximum of the mutual information over all admissible input distributions, it follows that I(X;Y) <= C for any particular input.

Capacity of band-limited channels with AWGN and average power limitation of signals (the Shannon-Hartley law): The received signal is composed of the transmitted signal X and noise n. The joint entropy at the transmitter end, assuming signal and noise are independent, is H(X, n) = H(X) + H(n).

The joint entropy at the receiver end is given by H(Y, n) = H(Y) + H(n/Y). Since the received signal is Y = X + n, we have H(Y/X) = H(n), and from the above two equations the channel capacity in bits per second is C = 2B · max[H(Y) - H(n)], the factor 2B arising because a signal band-limited to B Hz is specified by 2B samples per second.

If the additive noise is white and Gaussian with power N in a bandwidth of B Hz, then its entropy per sample is H(n) = (1/2) log2(2πeN). Further, if the input signal is limited to an average power S over the same bandwidth and X and n are independent, the received power is S + N. For a given mean-square value the entropy is maximum when the signal is Gaussian, and therefore the entropy of the output is at most H(Y) = (1/2) log2(2πe(S + N)).
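Putting these pieces together (a standard derivation consistent with the steps above, with 2B samples per second):

\[
C = 2B\left[\tfrac{1}{2}\log_2 2\pi e\,(S+N) - \tfrac{1}{2}\log_2 2\pi e\,N\right] = B\log_2\left(1 + \frac{S}{N}\right)\ \text{bits/second}.
\]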

Using all above equations If n is an AWGN ,Y will be gaussian if and only if Xi also gaussian this implies Using all above equations ………….1 Equation 1 is called Shannon-Hartley law. 11/22/2018

Bandwidth-SNR trade-off: The primary significance of the Shannon-Hartley law is that it is possible to transmit over a channel of bandwidth B Hz, perturbed by AWGN, at a rate of C bits/sec with an arbitrarily small probability of error, provided the signal is encoded in such a manner that the samples are all Gaussian signals.

For white noise with a two-sided power spectral density of N0/2, the noise power over (-B, B) is N = ∫ from -B to B of (N0/2) df = N0·B. That is, the noise power is directly proportional to the bandwidth B, so the noise power is reduced by reducing the bandwidth, and vice versa.

For a wideband system where (S/N) >> 1, using C = B log2(1 + S/N) for two systems of equal capacity leads to (S2/N2) ≈ (S1/N1)^(B1/B2). This equation predicts an exponential improvement in the (S/N) ratio with bandwidth for an ideal system.

Capacity of a channel of infinite bandwidth: The Shannon-Hartley formula predicts that a noiseless Gaussian channel with (S/N) = infinity has an infinite capacity. However, the channel capacity does not become infinite when the bandwidth is made infinite.

This is because the noise power N = N0·B also increases with bandwidth, which places an upper limit on the channel capacity as the bandwidth increases. Accordingly, C∞ = lim as B→∞ of B log2(1 + S/(N0·B)) = (S/N0) log2 e ≈ 1.44 (S/N0).
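A small numerical check of this limiting behaviour; the values of S and N0 below are arbitrary illustrations.

```python
# Shannon-Hartley capacity C = B*log2(1 + S/(N0*B)) approaches ~1.44*S/N0 as B grows.
import math

def capacity(B, S, N0):
    """Capacity in bits/second of an AWGN channel of bandwidth B Hz."""
    return B * math.log2(1 + S / (N0 * B))

if __name__ == "__main__":
    S, N0 = 1.0, 1e-3          # illustrative signal power and noise spectral density
    for B in (1e2, 1e3, 1e4, 1e5):
        print(f"B = {B:>8.0f} Hz  ->  C = {capacity(B, S, N0):10.2f} bits/s")
    print("Limit 1.44*S/N0 =", round(S / N0 * math.log2(math.e), 2), "bits/s")
```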

Bandwidth efficiency and the Shannon limit: The average transmitted power can be expressed as S = Eb·C, where Eb is the energy per bit. The Shannon formula then becomes C/B = log2(1 + Eb·C/(N0·B)), from which one can show that Eb/N0 = (2^(C/B) - 1)/(C/B). The ratio C/B is called the bandwidth efficiency of the system.

If C/B = 1, then it follows that Eb = N0. This implies that the signal power equals the noise power. Suppose B = Bo is the bandwidth for which S = N; then C∞ = 1.443·Bo. That is, "the maximum signaling rate for a given S is 1.443 bits/sec/Hz in the bandwidth over which the signal power can be spread without its falling below the noise level."

The limiting value Eb/N0 = ln 2 = 0.693 (about -1.6 dB), obtained as C/B → 0, is known as Shannon's limit for transmission capacity; reliable communication fails if Eb/N0 falls below this limit.