Mutual Information and Channel Capacity Multimedia Security.


1 Mutual Information and Channel Capacity Multimedia Security

2 Information Source: source symbols, source probabilities P_A, entropy H(A)
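As a quick numeric illustration (not from the original slides; the alphabet and probabilities below are made-up example values), the entropy of a discrete source can be computed directly from its symbol probabilities:

import math

# Hypothetical source alphabet with assumed symbol probabilities (example values only).
P_A = {"a1": 0.5, "a2": 0.25, "a3": 0.125, "a4": 0.125}

# Entropy H(A) = -sum_a P(a) * log2 P(a): the average information per source symbol, in bits.
H_A = -sum(p * math.log2(p) for p in P_A.values() if p > 0)
print(H_A)   # 1.75 bits per symbol for this particular distribution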

3 Source Encoder E(.)

4 Mutual Information: I(B;A) = I(A;B) = H(B) - H(B|A) = H(A) - H(A|B). [Block diagram: Information Source A feeds Source Encoder E(.); observer A watches the source output, observer B watches the encoder output.]
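As a small sketch (not from the slides; the joint distribution below is made up purely for illustration), both forms of the identity can be checked numerically from a joint probability table P(a, b):

import math

# Hypothetical joint distribution P(a, b) over source symbols a and code symbols b.
P_AB = {("a1", "b1"): 0.4, ("a1", "b2"): 0.1,
        ("a2", "b1"): 0.1, ("a2", "b2"): 0.4}

def marginal(joint, index):
    out = {}
    for pair, p in joint.items():
        out[pair[index]] = out.get(pair[index], 0.0) + p
    return out

def entropy(pmf):
    return -sum(p * math.log2(p) for p in pmf.values() if p > 0)

P_A, P_B = marginal(P_AB, 0), marginal(P_AB, 1)

# Conditional entropies computed from the joint distribution.
H_B_given_A = -sum(p * math.log2(p / P_A[a]) for (a, b), p in P_AB.items() if p > 0)
H_A_given_B = -sum(p * math.log2(p / P_B[b]) for (a, b), p in P_AB.items() if p > 0)

# Both forms agree (about 0.278 bits here), and both lie below H(B) = 1 bit.
print(entropy(P_B) - H_B_given_A)   # I(B;A) = H(B) - H(B|A)
print(entropy(P_A) - H_A_given_B)   # I(A;B) = H(A) - H(A|B)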

5 Mutual Information. Suppose we represent the information source and the encoder as "black boxes" and station two perfect observers at the scene to watch what happens. The first observer observes the symbols output from the source A, while the second observer watches the code symbols output from the encoder "E".

6 We assume that the first observer has perfect knowledge of source A and the symbol probabilities P_A, and that the second observer has equally perfect knowledge of the code alphabet B and the codeword probabilities P_B. Neither observer, however, has any knowledge whatsoever of the other observer's black box.

7 Now suppose that each time observer B observes a codeword, he asks observer A what symbol had been sent by the information source. How much information does observer B obtain from observer A? If the answer is "None", then all of the information presented to the encoder passed through it to reach observer B, and the encoder was information lossless.

8 On the other hand, if observer A's report occasionally surprises observer B, then some information was lost in the encoding process. A's report then serves to decrease the uncertainty observer B has concerning the symbols being emitted by black box "E". The reduction in uncertainty about B conveyed by the observation of A is called the mutual information, I(B;A).

9 The information presented to observer B by his observation is merely the entropy H(B). If observer B observes symbol b (∈ B) and then learns from his partner that the source symbol was a, observer A's report conveys information log2[1 / P(b|a)]

10 and, averaged over all of the observations, the average information conveyed by A's report will be H(B|A) = -Σ_a Σ_b P(a,b) log2 P(b|a). The amount by which B's uncertainty is therefore reduced is I(B;A) = H(B) - H(B|A).

11 Since I(B;A) = H(B) - H(B|A) and H(B|A) ≧ 0, it follows that I(B;A) ≦ H(B). That is, the mutual information is upper bounded by the entropy of the encoder output B.

12 I(B;A) = H(B) iff H(B|A) = 0. The conditional entropy H(B|A) is a measure of how much information loss occurs in the encoding process; if it is equal to zero, then the encoder is information lossless. Without loss of generality, the encoder can be viewed as a channel whose input alphabet is the source alphabet, whose output alphabet is the codeword alphabet, and whose symbol transition map is the encoding function.
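To illustrate this viewpoint (a minimal sketch with a made-up source distribution and a made-up one-to-one encoder, not taken from the slides), a deterministic, invertible encoder behaves like a noiseless channel: H(B|A) = 0, so I(B;A) = H(B):

import math

# Hypothetical source distribution and a deterministic, invertible encoder (example values).
P_A = {"a1": 0.5, "a2": 0.3, "a3": 0.2}
encode = {"a1": "00", "a2": "01", "a3": "10"}

# Joint distribution induced by the encoder: P(a, b) = P(a) if b = E(a), and 0 otherwise.
P_AB = {(a, encode[a]): p for a, p in P_A.items()}
P_B = {}
for (a, b), p in P_AB.items():
    P_B[b] = P_B.get(b, 0.0) + p

H_B = -sum(p * math.log2(p) for p in P_B.values() if p > 0)
H_B_given_A = -sum(p * math.log2(p / P_A[a]) for (a, b), p in P_AB.items() if p > 0)

# Since the encoding is deterministic, P(b|a) is 0 or 1, so H(B|A) = 0 and I(B;A) = H(B).
print(H_B_given_A)        # 0.0
print(H_B - H_B_given_A)  # equals H_B: the encoder is information lossless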

13 P(b|a): transition probability of the channel. [Channel diagram: a binary channel with input and output symbols 0 and 1, where ε is the bit-error (crossover) probability.]
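To make the transition probabilities concrete (a sketch assuming a binary symmetric channel and a made-up bit-error probability; none of the numbers come from the slides), the channel can be stored as a table Q[(a, b)] = P(b|a), and the output distribution follows from any choice of input distribution:

eps = 0.1   # assumed bit-error probability (example value only)

# Transition probabilities P(b|a) of a binary symmetric channel: the input bit is
# flipped with probability eps and delivered unchanged otherwise.
Q = {(0, 0): 1 - eps, (0, 1): eps,
     (1, 0): eps,     (1, 1): 1 - eps}

def output_distribution(P_A):
    # P_B(b) = sum_a P_A(a) * P(b|a)
    return {b: sum(P_A[a] * Q[(a, b)] for a in P_A) for b in (0, 1)}

print(output_distribution({0: 0.5, 1: 0.5}))   # {0: 0.5, 1: 0.5}
print(output_distribution({0: 0.9, 1: 0.1}))   # {0: 0.82, 1: 0.18}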

14 Each time the source (transmitter) sends a symbol, it is said to use the channel. The channel capacity is the maximum average information that can be sent per channel use. Notice that the mutual information is a function of the probability distribution of A: by changing P_A, we get a different I(A;B).

15 For a fixed transition probability matrix, a change in P_A also results in a different output symbol distribution P_B. The maximum mutual information achieved over all input distributions, for a given transition probability matrix [fixed channel characteristics], is the channel capacity C = max_{P_A} I(A;B).
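A sketch of this maximization for the binary symmetric channel assumed above (the grid search and the bit-error value are illustrative only; for the BSC the closed form C = 1 - h2(eps) is known and serves as a check):

import math

def h2(p):
    # Binary entropy function in bits.
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

eps = 0.1   # assumed bit-error probability (example value only)

def mutual_information(p0):
    # I(A;B) = H(B) - H(B|A) when P_A = {0: p0, 1: 1 - p0}; for the BSC, H(B|A) = h2(eps).
    q0 = p0 * (1 - eps) + (1 - p0) * eps   # output probability P_B(0)
    return h2(q0) - h2(eps)

# Sweep the input distribution; the largest I(A;B) over all P_A is the capacity.
best_I, best_p0 = max((mutual_information(p0 / 1000), p0 / 1000) for p0 in range(1001))
print(best_p0, best_I)    # maximized by the uniform input, p0 = 0.5
print(1 - h2(eps))        # closed-form BSC capacity agrees, about 0.531 bits per channel use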

16 The relative entropy (or Kullback-Leibler distance) between two probability mass functions p(x) and q(x) is defined as D(p || q) = Σ_x p(x) log [p(x) / q(x)]. The mutual information I(X;Y) is the relative entropy between the joint distribution and the product distribution: I(X;Y) = D( p(x,y) || p(x)p(y) ) = Σ_x Σ_y p(x,y) log [p(x,y) / (p(x)p(y))].
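A brief numerical check (the joint distribution below is made up for illustration) that computing I(X;Y) as the relative entropy D( p(x,y) || p(x)p(y) ) agrees with the earlier form H(Y) - H(Y|X):

import math

# Hypothetical joint distribution p(x, y) (example values only).
P_XY = {(0, 0): 0.3, (0, 1): 0.2, (1, 0): 0.1, (1, 1): 0.4}
P_X = {x: sum(p for (xx, _), p in P_XY.items() if xx == x) for x in (0, 1)}
P_Y = {y: sum(p for (_, yy), p in P_XY.items() if yy == y) for y in (0, 1)}

def kl(p, q):
    # Relative entropy D(p || q) in bits; assumes q(k) > 0 wherever p(k) > 0.
    return sum(pk * math.log2(pk / q[k]) for k, pk in p.items() if pk > 0)

# Product of the marginals, p(x)p(y), on the same support as the joint distribution.
P_prod = {(x, y): P_X[x] * P_Y[y] for (x, y) in P_XY}

I_kl = kl(P_XY, P_prod)   # I(X;Y) = D( p(x,y) || p(x)p(y) )
H_Y = -sum(p * math.log2(p) for p in P_Y.values() if p > 0)
H_Y_given_X = -sum(p * math.log2(p / P_X[x]) for (x, y), p in P_XY.items() if p > 0)
print(I_kl, H_Y - H_Y_given_X)   # both are about 0.125 bits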

