The Channel and Mutual Information
Chapter 7: The Channel and Mutual Information
Information Through a Channel
A = {a1, …, aq} is the alphabet of symbols sent; B = {b1, …, bs} is the alphabet of symbols received. Symbols can't be swallowed or randomly generated: every symbol sent produces exactly one symbol received. For example, in an error-correcting code over a noisy channel, s ≥ q; if two symbols sent are indistinguishable when received, s < q. A stationary channel is characterized by a q × s matrix P of conditional probabilities Pi,j = P(bj | ai), with one row per symbol sent and one column per symbol received. Compare: noise (randomness) versus distortion (permutation). [7.1, 7.2, 7.3]
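To make the matrix description concrete, here is a minimal Python sketch (not from the slides) of a hypothetical stationary channel with q = 2 sent symbols and s = 3 received symbols; the entries are assumptions chosen purely for illustration.

```python
import numpy as np

# Hypothetical stationary channel: q = 2 symbols sent, s = 3 symbols received.
# Row i holds the conditional probabilities P(b_j | a_i); since every sent
# symbol must be received as something, each row sums to 1.
P = np.array([
    [0.8, 0.1, 0.1],   # P(b1|a1), P(b2|a1), P(b3|a1)
    [0.1, 0.1, 0.8],   # P(b1|a2), P(b2|a2), P(b3|a2)
])

assert P.shape == (2, 3)                  # (q, s): rows = sent, columns = received
assert np.allclose(P.sum(axis=1), 1.0)    # "symbols can't be swallowed"
```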
[p(a1) … p(aq)]P = [p(b1) … p(bs)]
With p(ai) the probability of each source symbol, the probabilities p(bj) of the received symbols are given by
[p(a1) … p(aq)] P = [p(b1) … p(bs)].
No noise: P = I, so p(bj) = p(aj). All noise: Pi,j = 1/s, so p(bj) = 1/s. The probability that ai was sent and bj was received (the joint probability) is, by Bayes' theorem,
P(ai, bj) = p(ai) ∙ P(bj | ai) = p(bj) ∙ P(ai | bj).
(Intuition: given bj, some ai must have been sent.) So if p(bj) ≠ 0, the backwards conditional probabilities are
P(ai | bj) = p(ai) ∙ P(bj | ai) / p(bj). [7.1, 7.2, 7.3]
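A short sketch (my own numbers, assumed only for illustration) of the forward computation [p(a1) … p(aq)] P = [p(b1) … p(bs)] and the backwards conditional probabilities obtained from Bayes' theorem:

```python
import numpy as np

# Assumed source distribution and channel matrix P[i, j] = P(b_j | a_i).
p_a = np.array([0.6, 0.4])
P = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.1, 0.8]])

# Forward: [p(b1) ... p(bs)] = [p(a1) ... p(aq)] P
p_b = p_a @ P

# Joint probabilities P(a_i, b_j) = p(a_i) * P(b_j | a_i)
joint = p_a[:, None] * P

# Backwards conditionals P(a_i | b_j) = P(a_i, b_j) / p(b_j), defined when p(b_j) != 0
backward = joint / p_b[None, :]

print(p_b)                     # received-symbol probabilities
print(backward.sum(axis=0))    # each column sums to 1: given b_j, some a_i was sent
```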
Binary Symmetric Channel
Input probabilities: p(a = 0) = p, p(a = 1) = 1 − p. Channel probabilities: P0,0 = P1,1 = P (correct transmission), P0,1 = P1,0 = Q (error). The output probabilities follow from the matrix product: (p, 1 − p) times the channel matrix (P Q; Q P) gives (pP + (1 − p)Q, pQ + (1 − p)P), so
p(b = 0) = pP + (1 − p)Q and p(b = 1) = pQ + (1 − p)P. [7.4]
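As a numerical check (with assumed values P = 0.9, Q = 0.1, p = 0.7, chosen only for illustration), the binary symmetric channel's output probabilities can be computed directly:

```python
import numpy as np

# Assumed binary symmetric channel parameters: P = P(correct), Q = P(error).
P_, Q_ = 0.9, 0.1
p = 0.7                               # p(a = 0); p(a = 1) = 1 - p

channel = np.array([[P_, Q_],
                    [Q_, P_]])
p_b = np.array([p, 1 - p]) @ channel

assert np.isclose(p_b[0], p * P_ + (1 - p) * Q_)   # p(b = 0) = pP + (1 - p)Q
assert np.isclose(p_b[1], p * Q_ + (1 - p) * P_)   # p(b = 1) = pQ + (1 - p)P
print(p_b)                                          # [0.66, 0.34] for these values
```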
Backwards Conditional Probabilities

P(a = 0 | b = 0) = Pp / (Pp + Q(1 − p))        [no noise: 1;  all noise: p]
P(a = 1 | b = 0) = Q(1 − p) / (Pp + Q(1 − p))  [no noise: 0;  all noise: 1 − p]
P(a = 0 | b = 1) = Qp / (Qp + P(1 − p))        [no noise: 0;  all noise: p]
P(a = 1 | b = 1) = P(1 − p) / (Qp + P(1 − p))  [no noise: 1;  all noise: 1 − p]

Here "no noise" means P = 1, Q = 0 and "all noise" means P = Q = ½. If p = 1 − p = ½ (equiprobable inputs), then P(a = 0 | b = 0) = P(a = 1 | b = 1) = P and P(a = 1 | b = 0) = P(a = 0 | b = 1) = Q, so the backwards matrix equals the forwards matrix (P Q; Q P). Is this the only situation where the forwards and backwards matrices are the same? [7.4]
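The special cases in the table can be verified numerically. Below is a small sketch (the helper name and parameter values are my own) that builds the backwards matrix P(ai | bj) for a binary symmetric channel and checks the no-noise, all-noise, and equiprobable cases:

```python
import numpy as np

def backward_matrix(P_, Q_, p):
    """Backwards conditionals P(a_i | b_j) for a binary symmetric channel."""
    forward = np.array([[P_, Q_], [Q_, P_]])          # forward[a, b] = P(b | a)
    joint = np.array([p, 1 - p])[:, None] * forward   # joint[a, b]   = P(a, b)
    return joint / joint.sum(axis=0, keepdims=True)   # column b gives P(a | b)

print(backward_matrix(1.0, 0.0, 0.7))   # no noise (P = 1, Q = 0): identity matrix
print(backward_matrix(0.5, 0.5, 0.7))   # all noise (P = Q = 1/2): columns are [p, 1 - p]
print(backward_matrix(0.9, 0.1, 0.5))   # equiprobable input: equals the forward matrix
```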
System Entropies

H(A) is the input entropy and H(B) the output entropy. Condition on a received bj,
H(A | bj) = − Σi P(ai | bj) log P(ai | bj),
then average over all bj:
H(A | B) = Σj p(bj) H(A | bj) = − Σi,j P(ai, bj) log P(ai | bj).
Similarly, H(B | A) = − Σi,j P(ai, bj) log P(bj | ai). H(A | B) is the information lost in the channel, called the equivocation (or noise entropy): the average uncertainty about the symbol sent given the symbol received. H(B | A) is the corresponding average uncertainty about the symbol received given the symbol sent. [7.5]
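A sketch of the conditioning-and-averaging step (assumed example source and 2×2 channel, and an entropy helper of my own) that computes the input, output, and conditional entropies in bits:

```python
import numpy as np

def entropy(dist):
    """Shannon entropy in bits, ignoring zero-probability terms."""
    dist = np.asarray(dist, dtype=float).ravel()
    dist = dist[dist > 0]
    return -np.sum(dist * np.log2(dist))

# Assumed source and channel, for illustration only.
p_a = np.array([0.6, 0.4])
P = np.array([[0.9, 0.1],          # P[i, j] = P(b_j | a_i)
              [0.2, 0.8]])
joint = p_a[:, None] * P           # P(a_i, b_j)
p_b = joint.sum(axis=0)

H_A, H_B = entropy(p_a), entropy(p_b)

# Equivocation H(A | B): condition on each received b_j, then average over p(b_j).
H_A_given_B = sum(p_b[j] * entropy(joint[:, j] / p_b[j]) for j in range(len(p_b)))

# H(B | A): condition on each sent a_i, then average over p(a_i).
H_B_given_A = sum(p_a[i] * entropy(P[i]) for i in range(len(p_a)))

print(H_A, H_B, H_A_given_B, H_B_given_A)
```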
Joint Entropy

Define H(A, B) = − Σi,j P(ai, bj) log P(ai, bj). Then
H(A, B) = H(A) + H(B | A) = H(B) + H(A | B).
Intuition (taking snapshots of A and B together): the signal H(A) is the information you want; the signal plus noise H(B) is the information you have; H(B | A) is the information you don't want (from noise in the channel). [7.5]
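The two decompositions of the joint entropy can be checked numerically; the sketch below (same assumed illustrative source and channel as above) verifies H(A, B) = H(A) + H(B | A) = H(B) + H(A | B):

```python
import numpy as np

def entropy(dist):
    dist = np.asarray(dist, dtype=float).ravel()
    dist = dist[dist > 0]
    return -np.sum(dist * np.log2(dist))

# Assumed example source and channel.
p_a = np.array([0.6, 0.4])
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
joint = p_a[:, None] * P
p_b = joint.sum(axis=0)

H_A, H_B, H_AB = entropy(p_a), entropy(p_b), entropy(joint)
H_B_given_A = sum(p_a[i] * entropy(P[i]) for i in range(len(p_a)))
H_A_given_B = sum(p_b[j] * entropy(joint[:, j] / p_b[j]) for j in range(len(p_b)))

assert np.isclose(H_AB, H_A + H_B_given_A)   # H(A, B) = H(A) + H(B | A)
assert np.isclose(H_AB, H_B + H_A_given_B)   # H(A, B) = H(B) + H(A | B)
```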
Mutual Information

Information gain upon receiving bj:
I(ai ; bj) = I(ai) − I(ai | bj) = log [ P(ai | bj) / p(ai) ],
comparing the a posteriori probability P(ai | bj) with the a priori probability p(ai). I(A; B) is the amount of information A and B share: the reduction in the uncertainty of A due to the knowledge of B. By symmetry, I(ai ; bj) = log [ P(bj | ai) / p(bj) ] = I(bj ; ai). If ai and bj are independent (all noise), then P(ai, bj) = p(ai) ∙ p(bj), hence P(ai | bj) = p(ai) and I(ai ; bj) = 0: no information is gained in the channel. [7.6]
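The per-symbol information gain can be tabulated directly. In the sketch below (assumed example values, same style as above), entry [i, j] of the printed matrix is I(ai ; bj), and an all-noise channel gives zeros everywhere:

```python
import numpy as np

# Assumed source and channel, for illustration.
p_a = np.array([0.6, 0.4])
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
joint = p_a[:, None] * P
backward = joint / joint.sum(axis=0, keepdims=True)   # P(a_i | b_j)

# I(a_i ; b_j) = log2( P(a_i | b_j) / p(a_i) ); negative for "surprising" pairs.
print(np.log2(backward / p_a[:, None]))

# All-noise channel: b_j independent of a_i, so every information gain is 0.
P_noise = np.full((2, 2), 0.5)
joint_n = p_a[:, None] * P_noise
backward_n = joint_n / joint_n.sum(axis=0, keepdims=True)
print(np.log2(backward_n / p_a[:, None]))              # all zeros
```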
I(A; B) = H(A) + H(B) − H(A, B) ≥ 0

Average I(ai ; bj) over all ai and bj (similarly, from symmetry, in either order):
I(A; B) = Σi,j P(ai, bj) log [ P(ai, bj) / (p(ai) ∙ p(bj)) ].
By the Gibbs inequality, I(A; B) ≥ 0, with equality only if P(ai, bj) = p(ai) ∙ p(bj) for all i, j (independence). Since H(A, B) = H(A) + H(B | A) = H(B) + H(A | B), we get
I(A; B) = H(A) − H(A | B) = H(B) − H(B | A) = H(A) + H(B) − H(A, B) ≥ 0,
hence H(A | B) ≤ H(A), H(B | A) ≤ H(B), and H(A, B) ≤ H(A) + H(B). These inequalities can also be derived from the figure on the previous slide. [7.6]
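To close the chapter numerically, a sketch (same assumed example channel as before) that computes I(A; B) from the definition and checks the identity and the Gibbs bound:

```python
import numpy as np

def entropy(dist):
    dist = np.asarray(dist, dtype=float).ravel()
    dist = dist[dist > 0]
    return -np.sum(dist * np.log2(dist))

# Assumed example; the identities hold for any source p(a) and channel P.
p_a = np.array([0.6, 0.4])
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
joint = p_a[:, None] * P
p_b = joint.sum(axis=0)

# Definition: I(A; B) = sum over i, j of P(a_i, b_j) log2( P(a_i, b_j) / (p(a_i) p(b_j)) )
I_AB = np.sum(joint * np.log2(joint / (p_a[:, None] * p_b[None, :])))

H_A, H_B, H_AB = entropy(p_a), entropy(p_b), entropy(joint)
assert np.isclose(I_AB, H_A + H_B - H_AB)     # I(A; B) = H(A) + H(B) - H(A, B)
assert np.isclose(I_AB, H_A - (H_AB - H_B))   # = H(A) - H(A | B)
assert I_AB >= 0                              # Gibbs inequality
print(I_AB)
```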