
1 Information Theory MAP-Tele. José Vieira, IEETA, Departamento de Electrónica, Telecomunicações e Informática, Universidade de Aveiro, jnvieira@ua.pt. Carnegie Mellon University Accredited Course

2 Part III: Advanced Coding Techniques. José Vieira, IEETA, Departamento de Electrónica, Telecomunicações e Informática, Universidade de Aveiro, jnvieira@ua.pt. Carnegie Mellon University Accredited Course

3 Objectives
– To introduce the concept of rateless codes for erasure channels and the concept of digital fountains
– To give an introduction to the first rateless codes and their design
– Illustrative applications: network coding and distributed storage

4 Part III – Outline
– The Binary Erasure Channel (BEC)
– Codes for the BEC
– Fountain codes: rateless codes, the LT code, designing a rateless code, the rank of random binary matrices
– Applications of Fountain codes: network coding, distributed storage

5 The Binary Erasure Channel
Introduced by Elias in 1955 and long regarded as a purely theoretical model, until the Internet changed this notion 40 years later. On the Internet, due to router congestion and CRC errors, sent packets may not reach the destination. These packet losses can be regarded as erasures.

6 The Binary Erasure Channel
e: erasure symbol; ε: erasure probability. Capacity of the erasure channel: C = 1 - ε. Intuitive interpretation: since a proportion ε of the bits is lost in the channel, we can recover (at most) a proportion (1 - ε) of the bits.

7 Classical solutions
When a packet does not reach the destination, the receiver sends back a request for retransmission. Alternatively, the receiver can send back an acknowledgement message for each successfully received packet; the sender keeps track of the missing packets and retransmits them until all have been acknowledged.

8 Classical solutions
Both solutions guarantee the correct delivery of all the packets regardless of the rate of packet losses. However, if the rate of packet losses is high, both schemes are very inefficient and the full capacity of the channel is not reached. According to Shannon's theory, the feedback channel is not necessary: feedback does not increase the capacity of a memoryless channel.

9 Broadcast channel with erasures
On a broadcast channel with erasures, retransmission schemes are very inefficient and can lead to network congestion. An appropriate Forward Error Correction (FEC) code should achieve the theoretical channel capacity without a feedback channel. With classical codes, the fixed rate R = K/N must be designed for worst-case conditions, and this restriction also makes such coding inefficient.

10 Reed-Solomon codes for broadcast channels with erasures
An (N, K) Reed-Solomon code correctly decodes the K symbols of the message from any K received codeword symbols. However, Reed-Solomon codes are only practical for small values of N and K: the encoding/decoding cost is of order K(N - K) log2 N symbol operations.

11 Variable rate codes
If the erasure probability of a BEC varies, the ideal code should allow the encoding rate R = K/N to vary on the fly. With Reed-Solomon codes it is not possible to change R on the fly. Michael Luby (2002) invented a rateless code with this property.

12 Fountain Codes
A fountain code can generate a potentially infinite number of codewords. Fountain codes are near optimal for every erasure channel, regardless of the erasure probability ε. The message m with K symbols can be decoded from K' received codewords, with K' a little larger than K.

13 Fountain Codes
Consider a message m with K symbols. To generate the n-th codeword symbol, the encoder chooses the number d of symbols to combine from a degree distribution μ(d). Then the encoder chooses d symbols at random from m and computes their XOR sum.

14 Fountain Codes
The growing encoding matrix G is formed on the fly, one row at a time. The rows of G must be known at the receiver, so in principle they are transmitted as side information. In practice, a seed for a pseudo-random number generator can be sent instead, letting the receiver regenerate the same encoding rows of G.
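
To make the encoding rule concrete, here is a minimal Python sketch of a random linear fountain encoder (an illustration, not the course's code; the per-packet seed convention and the toy packet values are assumptions):

```python
import random

def fountain_encode(message, seed):
    """Generate one codeword symbol: the XOR of a random subset of message symbols.

    message: list of K integers (each an information symbol/packet)
    seed:    shared with the receiver so it can rebuild the same row of G
    """
    rng = random.Random(seed)
    K = len(message)
    # Random linear fountain: each message symbol enters the XOR with probability 1/2
    row = [rng.randint(0, 1) for _ in range(K)]
    codeword = 0
    for bit, symbol in zip(row, message):
        if bit:
            codeword ^= symbol
    return row, codeword

# The sender streams (seed, codeword) pairs; the receiver regenerates each
# row of G from the seed instead of receiving the whole row as side information.
message = [0b1010, 0b0111, 0b1100, 0b0001]            # K = 4 toy symbols
packets = [fountain_encode(message, seed=n) for n in range(6)]
```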

15 Fountain Codes
Figure: the transmitted generator matrix G and the received submatrix G(J), with J = {1, 3, 6, 7, 8, 10, 11, 15, 16}.

16 Fountain Codes
If N < K, the decoder does not have enough codeword symbols to recover the original information. If N ≥ K and the received matrix has an invertible K × K submatrix, then the receiver can recover the original information, using Gaussian elimination to solve the system and recover the message.

17 Fountain Codes
If it is possible to find an invertible K × K submatrix in the received N × K matrix, then the solution is unique. As the matrix is generated at random and we cannot predict which rows we are going to receive, the question is: what is the probability that a K × K random binary matrix is invertible?
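
A minimal sketch of this Gaussian-elimination step over GF(2), assuming packets are delivered as (row of G, codeword) pairs as in the encoder sketch above:

```python
def gf2_solve(rows, codewords, K):
    """Solve G(J) m = c over GF(2) by Gaussian elimination.

    rows:      list of received rows of G (each a bit-list of length K)
    codewords: the corresponding received codeword symbols (XOR-able ints)
    Returns the K message symbols, or None if the rank is still below K.
    """
    rows = [r[:] for r in rows]
    c = list(codewords)
    pivot_of = [None] * K                  # which row is the pivot for column k
    for i in range(len(rows)):
        for k in range(K):
            if not rows[i][k]:
                continue
            if pivot_of[k] is None:
                pivot_of[k] = i            # row i becomes the pivot for column k
                break
            p = pivot_of[k]                # eliminate column k using the pivot row
            rows[i] = [a ^ b for a, b in zip(rows[i], rows[p])]
            c[i] ^= c[p]
    if any(p is None for p in pivot_of):
        return None                        # rank < K: wait for more packets
    m = [0] * K
    for k in reversed(range(K)):           # back-substitution
        i = pivot_of[k]
        s = c[i]
        for j in range(k + 1, K):
            if rows[i][j]:
                s ^= m[j]
        m[k] = s
    return m
```

With the encoder sketch from slide 14, gf2_solve([r for r, _ in packets], [c for _, c in packets], K) returns the message as soon as the received rows reach rank K.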

18 Random matrices
Linear independence: a set of K vectors v_n in a vector space of dimension K is linearly independent if Σ_n α_n v_n = 0 only when all α_n = 0.

19 Probability of a K × K random binary matrix G being invertible
With a single vector, the probability of it being linearly independent is the probability of it being nonzero: (1 - 2^-K). With two vectors, we need the probability that the second vector is nonzero and different from the first: (1 - 2^-K)(1 - 2^-(K-1)). For K vectors, the probability of all vectors being linearly independent is the product (1 - 2^-K)(1 - 2^-(K-1)) ... (1 - 2^-1) = Π_{i=1..K} (1 - 2^-i).
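
A quick numerical check of this product (a sketch):

```python
def prob_invertible(K):
    """Probability that a random K-by-K binary matrix is invertible over GF(2):
    the product of (1 - 2**-i) for i = 1..K."""
    p = 1.0
    for i in range(1, K + 1):
        p *= 1.0 - 2.0 ** -i
    return p

for K in (1, 2, 10, 100):
    print(K, prob_invertible(K))   # tends to about 0.2888 as K grows
```

The product falls from 0.5 at K = 1 towards a limit of about 0.289, so a square random binary matrix is invertible roughly a third of the time.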

20 Probability of a random binary matrix G being invertible
If the number N of vectors is greater than K, with an excess E = N - K, what is the probability (1 - δ) that there is an invertible K × K submatrix in G? Here δ is the probability of failure and E is the number of redundant packets; the failure probability satisfies δ(E) ≤ 2^-E.

21 Probability of a random binary matrix G being invertible
The number of packets N = K + E needed in order to have (on average) a decoding guarantee of (1 - δ) is N = K + log2(1/δ). So an excess of E packets increases the probability of success to at least 1 - 2^-E; for example, E = 20 extra packets gives a failure probability below one in a million.

22 Computational cost
The encoding cost is K/2 symbol operations per codeword. The decoding cost has two components: the matrix inversion, with about K^3 operations by Gaussian elimination, and the application of the inverse matrix to the received symbols, which costs about K^2/2 operations. As K increases, random linear fountain codes approach the Shannon limit. Problem to solve: find a coding and decoding technique with lower cost, preferably linear.

23 Sparse random matrices
The coding and decoding computational cost can be reduced if the coding matrix G is sparse. Even for matrices with a small average number of ones per row, it is possible to find an invertible coding matrix.

24 Balls and Bins
Suppose that we throw N balls into K bins at random. Question: after throwing N = K balls, what fraction of the bins is empty? Answer: the probability that a given ball lands in a particular bin is 1/K, so the probability that it misses that bin is (1 - 1/K), and the probability that a bin is still empty after N balls is (1 - 1/K)^N ≈ e^(-N/K). For N = K, the probability that a given bin is empty is about 1/e, and the expected fraction of empty bins is 1/e as well.

25 Balls and Bins
After throwing N balls, the expected number of empty bins is K e^(-N/K). This expected number δ of empty bins is small only for large N: setting K e^(-N/K) = δ, we can say that all bins have a ball with probability (1 - δ) only if N > K ln(K/δ).
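
A small simulation of both claims (a sketch; K, δ and the trial count are arbitrary choices):

```python
import math, random

def empty_fraction(K, N, trials=1000):
    """Estimate the fraction of empty bins after throwing N balls into K bins."""
    total = 0
    for _ in range(trials):
        bins = [0] * K
        for _ in range(N):
            bins[random.randrange(K)] += 1
        total += sum(1 for b in bins if b == 0)
    return total / (trials * K)

K = 100
print(empty_fraction(K, K), 1 / math.e)        # about 0.3679 for N = K
delta = 0.01
N = math.ceil(K * math.log(K / delta))          # N > K ln(K/delta)
print(empty_fraction(K, N))                     # close to 0: all bins covered
```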

26 The LT code: Encoder
Consider a message m with K elements.
1. Choose at random the degree d_n of the codeword from a degree distribution μ(d).
2. Choose at random, uniformly, d_n distinct input symbols and sum them using the XOR operation.
This encoding defines a sparse and irregular encoding matrix.
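
A minimal sketch of this encoder; representing μ(d) as a list of probabilities indexed by degree is an illustrative choice:

```python
import random

def lt_encode_symbol(message, degree_dist, rng):
    """Produce one LT codeword: the XOR of d message symbols, d drawn from mu(d).

    degree_dist[d-1] is the probability of degree d (for d = 1..K).
    Returns the set of chosen indices (a row of the sparse G) and the codeword.
    """
    K = len(message)
    d = rng.choices(range(1, K + 1), weights=degree_dist, k=1)[0]
    neighbours = rng.sample(range(K), d)      # d distinct message symbols
    codeword = 0
    for i in neighbours:
        codeword ^= message[i]
    return set(neighbours), codeword
```

Drawing codewords repeatedly from this function, with μ set to the Robust Soliton distribution of slide 42, produces the LT packet stream.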

27 The LT code: Decoder
The decoder must recover m from c = Gm, with G known. If some of the codeword symbols are equal to one of the message symbols, then it is possible to decode with the following algorithm:
1. Find a codeword c_n with degree one. If none exists, halt and report failure.
2. Set m_i = c_n.
3. Add m_i (with XOR) to all codewords that are connected to m_i.
4. Remove all the edges connected to m_i.
5. Repeat steps 1 to 4 until all m_i are decoded.
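
A sketch of this peeling decoder, matching the (neighbour set, codeword) pairs produced by the encoder sketch above:

```python
def lt_decode(packets, K):
    """Peeling decoder for the LT code.

    packets: list of (neighbour_set, codeword) pairs, e.g. from lt_encode_symbol.
    Returns the K message symbols, or None if decoding stalls
    (no degree-one codeword is left before all symbols are recovered).
    """
    packets = [(set(nbrs), c) for nbrs, c in packets]
    m = [None] * K
    decoded = 0
    while decoded < K:
        # 1. find a codeword c_n with degree one; fail if there is none
        deg1 = next(((nbrs, c) for nbrs, c in packets if len(nbrs) == 1), None)
        if deg1 is None:
            return None
        nbrs, c = deg1
        i = next(iter(nbrs))
        if m[i] is None:                   # 2. set m_i = c_n
            m[i] = c
            decoded += 1
        # 3.+4. add m_i (XOR) to every codeword connected to m_i
        #        and remove the corresponding edges
        packets = [(ns - {i}, cw ^ m[i]) if i in ns else (ns, cw)
                   for ns, cw in packets]
    return m
```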

28 Decoding example – 1 (figure)

29 Decoding example – 2 (figure)

30 Decoding example – 3 (figure)

31 Decoding example – 4 (figure)

32 The Degree Distribution
Each codeword is a linear combination of d symbols from the message m, where the degree d is chosen at random from a degree distribution μ(d). There are two conflicting design goals:
– the degree of some codewords should be high, to guarantee that all the message symbols are covered;
– the degree of some codewords should be low, so that the decoding process can start and keep going.

33 Soliton distribution
Can we design a degree distribution that attains the Shannon limit, decoding the K symbols of the message after exactly K received codewords? We want a distribution that, on average, guarantees that exactly one codeword has degree one at each iteration, so that exactly one message symbol is recovered per step. Such a distribution is the Soliton distribution.

34 Soliton distribution
Step 0: the expected number of codeword symbols of degree one at step zero should be 1.
Step 1: one of the message symbols is decoded, which lowers the degree of some of the codeword symbols. At the end of step 1, exactly one degree-2 codeword should (in expectation) be connected to the decoded message symbol, so that its degree drops to one and the process continues.
Step n: the process continues, checking at each step that one of the codeword symbols has degree one.

35 Soliton distribution
The Soliton distribution is ρ(1) = 1/K and ρ(d) = 1/(d(d-1)) for d = 2, ..., K. The mean degree of this distribution is Σ_d d ρ(d) ≈ log_e K.
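
A sketch that tabulates the Soliton distribution and checks the mean degree:

```python
import math

def ideal_soliton(K):
    """rho(1) = 1/K, rho(d) = 1/(d(d-1)) for d = 2..K."""
    rho = [0.0] * (K + 1)                      # indexed by degree d = 1..K
    rho[1] = 1.0 / K
    for d in range(2, K + 1):
        rho[d] = 1.0 / (d * (d - 1))
    return rho

K = 10000
rho = ideal_soliton(K)
print(sum(rho))                                # 1.0: it is a distribution
mean = sum(d * rho[d] for d in range(1, K + 1))
print(mean, math.log(K))                       # mean degree is about ln K
```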

36 Soliton distribution
With the Soliton distribution, the expected number of edges from each message symbol is log_e K. (Figure: bipartite graph between codewords and message symbols.)

37 Soliton distribution
The decoding of m_0 from c_0 causes the degree of the connected codewords to decrease by 1. (Figure: bipartite graph between codewords and message symbols.)

38 Soliton distribution
Let h_t(d) be the expected number of codewords of degree d after the t-th iteration of the algorithm.
Step 0: h_0(d) = K ρ(d).
Step 1: h_1(d) = h_0(d)(1 - d/K) + h_0(d+1)(d+1)/K. The first term is the expected number of codewords of degree d that kept their degree after step 0; the second is the expected number of codewords of degree d+1 that reduced their degree after step 0; d/K is the probability that a degree-d codeword had an edge to the decoded message symbol.

39 Soliton distribution
Step 1 (cont.)

40 Soliton distribution
Step 2: we have shown (for the first 3 steps) that the expected number of degree-1 codeword symbols at each step will be 1 if we use the Soliton distribution.

41 Soliton distribution
Theorem: suppose that the expected degree distribution holds after t - 1 iterations, for all t. Then h_t(d) satisfies the two conditions.

42 Robust Soliton
Due to random fluctuations around the mean behaviour, the Soliton distribution behaves poorly in practice: if at some step there is no degree-one codeword, the decoding process stops. The Robust Soliton distribution tries to solve this problem by introducing two new parameters, c and δ, so that the expected number of degree-one codeword symbols at each step is S = c ln(K/δ) √K instead of 1.

43 Robust Soliton
Luby proved that there exist values of c and δ such that, given N = K + 2 S ln(S/δ) received codeword symbols, the algorithm recovers the K message symbols with probability at least (1 - δ).
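
A sketch of the Robust Soliton construction (the extra term τ and the normalisation follow the standard definition; the sample values of c and δ are arbitrary):

```python
import math

def robust_soliton(K, c, delta):
    """mu(d) = (rho(d) + tau(d)) / Z: ideal Soliton plus a low-degree boost
    and a spike at degree about K/S, renormalised."""
    S = c * math.log(K / delta) * math.sqrt(K)  # expected ripple size
    spike = int(round(K / S))                   # tau is zero above this degree
    rho = [0.0] * (K + 1)
    rho[1] = 1.0 / K
    for d in range(2, K + 1):
        rho[d] = 1.0 / (d * (d - 1))
    tau = [0.0] * (K + 1)
    for d in range(1, spike):
        tau[d] = S / (K * d)
    if 1 <= spike <= K:
        tau[spike] = (S / K) * math.log(S / delta)
    Z = sum(rho) + sum(tau)                     # normalising constant
    return [(r + t) / Z for r, t in zip(rho, tau)]

mu = robust_soliton(K=10000, c=0.2, delta=0.05)
print(sum(mu))                                  # 1.0: a proper distribution
```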

44 Comparing the distributions
The Robust Soliton does not have codewords of degree larger than K/S.

45 Performance of Fountain Codes: Online Code distribution from Maymounkov (figure)

46 Performance of Fountain Codes: Soliton distribution (figure)

47 Performance of Fountain Codes: Robust Soliton distribution (figure)

48 Applications
The same algorithms and coding techniques can be adapted to other applications, such as:
– network coding
– distributed storage

49 Network Coding
On traditional networks, each piece of information is transmitted by using time sharing. In the figure at right, the wireless station C receives the packets P1 and P2 almost simultaneously; it then sends the two packets in different time slots.

50 Network Coding
With network coding, the wireless station C sends the XOR sum of the two packets in a single time slot. As each of the nodes A and B already has half of the information, each of them can recover both P1 and P2.
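
The whole trick in a few lines (illustrative values):

```python
# Node C broadcasts the XOR of the two packets in a single time slot.
P1 = 0b10110100            # packet held by node A
P2 = 0b01101001            # packet held by node B
coded = P1 ^ P2            # the single packet C sends

# Each node XORs the coded packet with the packet it already has:
assert coded ^ P1 == P2    # node A recovers P2
assert coded ^ P2 == P1    # node B recovers P1
```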

51 Network Coding: multicast
Consider the following network with 6 nodes, where nodes C and D are just routers. Suppose that the transmitters T1 and T2 each need to send a packet to the two receivers at nodes E and F.

52 Network Coding: multicast
The router C is not able to transmit both packets at the same time and drops packet P2, so receiver R1 does not receive the packet P2.

53 Network Coding: multicast
With network coding, the two packets are added at node C using the XOR operator. Now both receivers have enough information to recover both packets P1 and P2.

54 Network Coding
Theorem (from Fragouli 2006): assume that the source rates are such that, without network coding, the network can support each receiver in isolation (i.e. each receiver can decode all sources when it is the only receiver in the network). Then, with an appropriate choice of linear coding coefficients, the network can support all receivers simultaneously.

55 Distributed Storage
Consider a data file m with K symbols. Perform N linear combinations c_n of the symbols of m and store the coded symbols c_n on several servers. To recover the original data file, it is enough to retrieve a little more than K coded symbols c_n from the servers.
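
A sketch of the scheme, reusing the fountain_encode and gf2_solve sketches from earlier slides (hypothetical file contents; N and the number of fetched symbols are arbitrary):

```python
import random

# Store N random linear combinations of the K file symbols across servers.
K, N = 4, 7
file_symbols = [0b0011, 0b0101, 0b1110, 0b1000]
stored = [fountain_encode(file_symbols, seed=n) for n in range(N)]  # (row, c_n)

# To read the file back, fetch a little more than K coded symbols from
# whichever servers answer, and solve G m = c over GF(2).
fetched = random.sample(stored, K + 2)
rows = [row for row, _ in fetched]
cs = [c for _, c in fetched]
recovered = gf2_solve(rows, cs, K)            # None if unlucky: fetch more
```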

56 Problem
Consider a RAID 5 storage system with 4 disks, as shown in the figure below. Compare this RAID 5 system with a four-disk storage system using a digital fountain code.

