§4 Continuous source and Gaussian channel

§4.1 Continuous source. 1. Differential entropy
Definition: Let X be a random variable with cumulative distribution function F(x) = Pr(X ≤ x). If F(x) is continuous, the random variable is said to be continuous. Let p(x) = F'(x) when the derivative is defined. If ∫ p(x) dx = 1, then p(x) is called the probability density function of X. The set where p(x) > 0 is called the support set of X.

§4.1 Continuous source. 1. Differential entropy
Definition: The differential entropy h(X) of a continuous random variable X with density p(x) is defined as
h(X) = -∫_S p(x) log p(x) dx,
where S is the support set of the random variable.

§4.1 Continuous source. 1. Differential entropy
Example 4.1.1 Let X ~ N(m, σ²) (normal distribution). Calculate the differential entropy h(X).
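A worked sketch of the standard result for Example 4.1.1, computing in nats and then converting to bits:
h(X) = -∫ p(x) ln p(x) dx = ∫ p(x) [½ ln(2πσ²) + (x - m)²/(2σ²)] dx = ½ ln(2πσ²) + ½ = ½ ln(2πeσ²) nats = ½ log₂(2πeσ²) bits.
Note that the result depends only on the variance σ², not on the mean m.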

§4.1 Continuous source. 1. Differential entropy
Definition: The joint differential entropy of a pair of random variables X, Y with joint density p(xy) is defined as
h(XY) = -∫∫ p(xy) log p(xy) dx dy.
If X, Y have a joint density p(xy), we can define the conditional differential entropy h(X|Y) as
h(X|Y) = -∫∫ p(xy) log p(x|y) dx dy.

§4.1 Continuous source. 2. Properties of differential entropy
1) h(XY) = h(X) + h(Y|X) = h(Y) + h(X|Y)

§4.1 Continuous source. 2. Properties of differential entropy
2) h(X) can be negative.
Example 4.1.2 Consider a random variable X distributed uniformly from a to b, so that h(X) = log(b - a). If (b - a) < 1, then h(X) < 0.
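For a concrete instance, if X is uniform on (0, 1/2), then h(X) = log₂(1/2) = -1 bit.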

§4.1 Continuous source. 2. Properties of differential entropy
3) h(X) is a concave function of the density p(x), so it attains a maximum under suitable constraints.
Theorem 4.1 If the peak power of the random variable X is restricted (X is confined to a finite interval), the maximizing distribution is the uniform distribution. If the average power of the random variable X is restricted, the maximizing distribution is the normal distribution.
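As a numerical illustration of Theorem 4.1 (a rough sketch; the comparison densities Beta(2,2) and a unit-variance Laplace are illustrative choices, not from the slides), the following Python code estimates differential entropies by numerical integration:

    import numpy as np

    def diff_entropy_bits(pdf, xs):
        # Numerical differential entropy h(X) = -integral p log2 p, via a Riemann sum.
        dx = xs[1] - xs[0]
        p = pdf(xs)
        p = p[p > 0]
        return -np.sum(p * np.log2(p)) * dx

    # Peak-power (finite-support) constraint: densities on (0, 1).
    xs = np.linspace(1e-6, 1 - 1e-6, 200001)
    uniform = lambda x: np.ones_like(x)           # Uniform(0, 1)
    beta22  = lambda x: 6.0 * x * (1.0 - x)       # Beta(2, 2), same support
    print(diff_entropy_bits(uniform, xs))         # ~0.00 bits: the maximum on (0, 1)
    print(diff_entropy_bits(beta22, xs))          # ~-0.18 bits: strictly smaller

    # Average-power constraint: zero mean, unit variance.
    xs = np.linspace(-30.0, 30.0, 400001)
    gauss   = lambda x: np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)        # N(0, 1)
    laplace = lambda x: np.exp(-np.sqrt(2) * np.abs(x)) / np.sqrt(2)  # Laplace, variance 1
    print(diff_entropy_bits(gauss, xs))           # ~2.05 bits = 0.5*log2(2*pi*e)
    print(diff_entropy_bits(laplace, xs))         # ~1.94 bits: strictly smaller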

§4.1 Continuous source. 2. Properties of differential entropy
4) Let Y = g(X); the differential entropy of Y may differ from h(X).
Example 4.1.3 Let X be a random variable distributed uniformly from -1 to 1, and let Y = 2X. What are h(X) and h(Y)?
Theorem 4.2 h(X + c) = h(X): translation does not change the differential entropy.
Theorem 4.3 h(aX) = h(X) + log|a| for a ≠ 0.
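Worked answers for Example 4.1.3: X is uniform on (-1, 1), so h(X) = log₂ 2 = 1 bit; Y = 2X is uniform on (-2, 2), so h(Y) = log₂ 4 = 2 bits = h(X) + log₂ 2, consistent with the scaling property h(aX) = h(X) + log|a|.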

Review
KeyWords: differential entropy (may be negative); chain rule of differential entropy; conditioning reduces entropy; independence bound on differential entropy; concavity and maximum-entropy distributions; behavior under transformations.

Homework
Prove the following conclusion:
a) h(XY) = h(X) + h(Y|X) = h(Y) + h(X|Y)
(Instructor's note, 2009.03.23: covered to this point; session 12, right on schedule.)

§4 Continuous source and Gaussian channel

§4.2 Gaussian channel. 1. The model of Gaussian channel
(Block diagram: input X, additive noise Z, output Y.)
Y = X + Z, where Z is normal with mean 0 and variance σz², and X and Z are independent.
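A minimal simulation sketch of this model (the power values P = 1 and N = 0.25 are illustrative assumptions, not taken from the slides):

    import numpy as np

    rng = np.random.default_rng(0)
    P, N = 1.0, 0.25                       # input power E[X^2] and noise variance (illustrative)
    n = 1_000_000
    x = rng.normal(0.0, np.sqrt(P), n)     # channel input, here chosen Gaussian with power P
    z = rng.normal(0.0, np.sqrt(N), n)     # additive noise Z ~ N(0, N), independent of X
    y = x + z                              # channel output Y = X + Z

    print(np.mean(y**2))                   # ~P + N = 1.25, because X and Z are independent
    print(0.5 * np.log2(1 + P / N))        # ~1.16 bits/sample: I(X;Y) for Gaussian input (derived next)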

§4.2 Gaussian channel. 2. Average mutual information
Since Y = X + Z with Z independent of X,
I(X;Y) = h(Y) - h(Y|X) = h(Y) - h(Z|X) = h(Y) - h(Z).
Example 4.2.1 Let X ~ N(0, σx²), so that Y ~ N(0, σx² + σz²); compute I(X;Y).
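A worked sketch for Example 4.2.1, using h(N(0, σ²)) = ½ log₂ 2πeσ²:
I(X;Y) = h(Y) - h(Z) = ½ log₂ 2πe(σx² + σz²) - ½ log₂ 2πeσz² = ½ log₂(1 + σx²/σz²) bits per channel use.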

§4.2 Gaussian channel. 3. The channel capacity
Definition: The information capacity of the Gaussian channel with power constraint P is
C = max_{p(x): E[X²] ≤ P} I(X;Y) = max_{p(x): E[X²] ≤ P} [h(Y) - h(Z)].

§4.2 Gaussian channel. 3. The channel capacity
Since X and Z are independent, E[Y²] = P + N, where N = σz² is the noise power. For a fixed second moment the normal distribution maximizes differential entropy, so h(Y) ≤ ½ log₂ 2πe(P + N), with equality when X ~ N(0, P). With h(Z) = ½ log₂ 2πeN, the capacity is
C = ½ log₂(1 + P/N) bits per channel use.

§4.2 Gaussian channel. 3. The channel capacity
C = ½ log₂(1 + P/N) (bits/sample)
Considering band-limited channels with transmission bandwidth W, there are 2W samples per second, so the capacity per unit time is Ct = 2W · C.

§4.2 Gaussian channel. 4. Shannon's formula
Ct = W log₂(1 + P/N) = W log₂(1 + SNR) (bits/sec)
This is Shannon's famous expression for the capacity of a band-limited, power-limited Gaussian channel.
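A small Python sketch of the formula and of the bandwidth/SNR trade-off discussed in the remarks that follow (the 3 kHz bandwidth and 30 dB SNR are illustrative values, not from the slides):

    import numpy as np

    def shannon_capacity(W_hz, snr_db):
        # Ct = W * log2(1 + SNR), with the SNR given in dB.
        snr = 10.0 ** (snr_db / 10.0)
        return W_hz * np.log2(1.0 + snr)

    print(shannon_capacity(3000.0, 30.0))      # ~29.9 kbps for W = 3 kHz, SNR = 30 dB

    # Trading bandwidth for SNR: the SNR (in dB) needed to keep the same Ct at half the bandwidth.
    Ct = shannon_capacity(3000.0, 30.0)
    W2 = 1500.0
    snr_needed = 2.0 ** (Ct / W2) - 1.0
    print(10.0 * np.log10(snr_needed))         # ~60 dB: here halving W doubles the required SNR in dB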

§4.2 Gaussian channel. 4. Shannon's formula
Remarks:
1) Ct, W and SNR can be traded against one another: the same capacity can be reached with more bandwidth and lower SNR, or less bandwidth and higher SNR.
2) With white noise of one-sided power spectral density N0, the noise power is N = N0·W, so Ct = W log₂(1 + P/(N0·W)); increasing W increases Ct, but not without bound.

§4.2 Gaussian channel. 4. Shannon's formula
3) Shannon limit. For infinite-bandwidth channels, as W → ∞,
Ct → (P/N0) log₂ e = P/(N0 ln 2) ≈ 1.44 P/N0 (bps).
(Figure: Ct (bps) versus W, approaching this asymptote.)
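A quick numerical check of the infinite-bandwidth limit (P/N0 = 1000 is an illustrative value):

    import numpy as np

    P_over_N0 = 1000.0                              # illustrative P/N0
    for W in [1e2, 1e3, 1e4, 1e5, 1e6]:
        Ct = W * np.log2(1.0 + P_over_N0 / W)       # Ct = W log2(1 + P/(N0 W))
        print(W, Ct)                                # ~346, 1000, 1375, 1436, 1442 bps, ...
    print(P_over_N0 / np.log(2.0))                  # asymptote P/(N0 ln 2) ~ 1442.7 bps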

§4.2 Gaussian channel. 4. Shannon's formula
Let Eb be the energy per bit, so that P = Eb·Ct; then
Ct/W = log₂(1 + (Eb/N0)·(Ct/W)).
As W → ∞, Ct/W → 0 and Eb/N0 → ln 2 ≈ 0.693, i.e. about -1.59 dB: the Shannon limit on Eb/N0.

Review
KeyWords: information rate of the Gaussian channel; capacity of the Gaussian channel; Shannon's formula (band-limited, power-limited); Shannon limit.

Homework
1. In image transmission there are 2.25×10^6 pixels per frame, and reproducing an image needs 4 bits per pixel (assume each bit is equally likely to be '0' or '1'). Compute the channel bandwidth needed to transmit 30 image frames per second, given P/N = 30 dB.
2. Consider a power-limited Gaussian channel with bandwidth 3 kHz and (P + N)/N = 10 dB.
(1) Compute the maximum rate of this channel in bps.
(2) If the SNR decreases to 5 dB, find the channel bandwidth that gives the same maximum rate.