Optimal Degree Distribution for LT Codes with Small Message Length
Esa Hyytiä, Tuomas Tirronen, Jorma Virtamo
IEEE INFOCOM mini-symposium, 2007



Outline
Introduction
Markov chain approach
Combinatorial approach
Simulation results
Conclusion

Introduction
Fountain codes provide an efficient way to transfer information over erasure channels. We give an exact performance analysis of a specific type of fountain code, called LT codes, when the message length N is small. Two different approaches are developed:
1) In the Markov chain approach, the state space explosion, even with a reduction based on permutation isomorphism, limits the analysis to very short messages, N ≤ 4.
2) An alternative combinatorial method allows recursive calculation of the probability of decoding after N received packets. The recursion can be solved symbolically for N ≤ 10 and numerically up to N ≈ 30.
These two approaches for finding the optimal degree distribution are the main contribution of this paper.

Notation
We denote the number of blocks (or input symbols) in the message by N and the degree distribution by ρ(d). The point probabilities are denoted by p_j, i.e., p_j = ρ(j).

Notation
Let the random variable Z_k denote the number of decoded input symbols after receiving the kth packet; initially Z_0 = 0, and at the end Z_k = N. The random variable T denotes the number of packets needed for decoding the original message. Let P_N denote the probability that a message consisting of N blocks is successfully decoded with exactly N received packets.

Markov chain approach
The decoding process can be studied as a Markov chain [8]. From the receiver's point of view, the set of received and either partially or fully decoded packets defines a state. State transition probabilities depend on the arrival probabilities of specific packets, which in turn depend on the degree distribution used in the encoding. The process ends when it has reached the absorbing state consisting of the original blocks. For example, consider a file consisting of three blocks a, b, and c. When the receiver has received a packet consisting of block a and another one consisting of blocks b and c, the process is in state {a, bc}. The state {a, b, c} is the absorbing state.
[8] S. M. Ross, Introduction to Probability Models, 7th ed. Academic Press, 2000.

Markov chain approach
The number of possible distinct packets is 2^N − 1 (i.e., the number of non-empty subsets of a set with N elements). The number of different sets of received distinct packets is then 2^(2^N − 1) (including the initial state). We call this the number of raw states. For N = 3 this number is 128, for N = 4 it is 32768, and the number grows very fast with N.
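The counts quoted above follow directly from the two formulas; a two-line check:

```python
# Raw state count of the decoding chain: 2^N - 1 distinct non-empty
# packets, and any subset of them may have been received.
def raw_state_count(N):
    distinct_packets = 2**N - 1
    return 2**distinct_packets   # includes the empty (initial) state

print(raw_state_count(3))  # 128
print(raw_state_count(4))  # 32768
```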

Markov chain approach
Reduction scheme
– The number of states can be brought down to 12 for N = 3 and to 192 for N = 4. For N = 5 the reduced state space is considerably larger still.
Macro state
– In Fig. 1, the four states in the box, {ab, bc}, {ab, ac, bc}, {ab, ac, abc}, and {ab, ac, bc, abc}, constitute such a macro state.
– With macro states, the corresponding reduced state space sizes for N = 3 and N = 4 are 9 and 87.
The 12 different states of the case N = 3 are shown in Fig. 1 as the darker blocks. The lighter blocks represent intermediate states that are immediately reduced.

Fig. 1. State transitions in the decoding Markov chain for N = 3 blocks.

We have specifically considered two optimization criteria, called MinAvg and MaxPr:
1) MinAvg: minimize the mean number of packets needed to successfully decode the message.
2) MaxPr: maximize the probability of successful decoding after reception of N packets.

P: the state transition probability matrix for the Markov process with the reduced state space; it can be constructed easily, e.g., by using Mathematica [9].
Q: the transition matrix between transient states.
R: the transitions from transient states to the absorbing states.
I: the identity matrix corresponding to the absorbing states.
[9] Wolfram Research, Inc., "Mathematica."

The fundamental matrix M = (I − Q)^(−1) is well-defined with all elements positive and represents all possible transition sequences in the transient states without going to the absorbing one. A specific element m_ij of M gives the mean number of visits to state j before absorption when starting in state i. Using the fundamental matrix, the average number of steps to absorption, starting from the initial state 1, is

E[T] = Σ_j m_1j.    (1)

We can also calculate the probability of success P_N after receiving N packets. This is given by the probability of the absorbing state after N steps,

P_N = (P^N)_{1, abs}.    (2)
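The fundamental-matrix computation can be illustrated on a toy absorbing chain (a hypothetical two-transient-state example for illustration, not the N = 3 LT chain, whose matrix is larger), using exact rational arithmetic:

```python
from fractions import Fraction as F

# Toy chain: transient states t1, t2 and one absorbing state.
# From t1: stay-in-transient part of the transitions; Q holds only
# transient-to-transient probabilities.
Q = [[F(0), F(1, 2)],
     [F(0), F(1, 3)]]

def inv2(A):
    """Exact inverse of a 2x2 matrix."""
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[ d / det, -b / det],
            [-c / det,  a / det]]

I2 = [[F(1), F(0)], [F(0), F(1)]]
IminusQ = [[I2[i][j] - Q[i][j] for j in range(2)] for i in range(2)]
M = inv2(IminusQ)        # fundamental matrix M = (I - Q)^-1
mean_steps = sum(M[0])   # (1): expected steps to absorption from t1
print(mean_steps)        # 7/4
```

The row sum reproduces the first-step analysis by hand: E2 = 1 + (1/3)E2 gives E2 = 3/2, and E1 = 1 + (1/2)E2 = 7/4.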

For N = 3 the reduced state space of the Markov chain consists of 9 states (with the additional state aggregation). Using Mathematica, or directly by inspection from Fig. 1, we can find the transition probability matrix.

Using (1), the optimal weights minimizing the mean number of steps to decode the message can be solved. Using (2), one obtains an expression for P_3, the probability of full decoding after 3 received packets.


Both the uniform and the degree-1 distributions perform rather poorly, the degree-1 distribution being the worst. In contrast, the binomial distribution performs reasonably well, followed by the soliton distribution (in fact, these distributions are similar).
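The poor performance of the degree-1 distribution is easy to check by simulation (a Monte Carlo sketch under our own packet model, not the authors' code): with only degree-1 packets, decoding with exactly N packets succeeds only if all N picks are distinct, so P_3 = 3!/3^3 = 2/9.

```python
import random

def peel(packets):
    """Iterative peeling decoder; returns the number of decoded blocks."""
    packets = [set(p) for p in packets]
    decoded = set()
    progress = True
    while progress:
        progress = False
        for p in packets:
            p -= decoded                 # strip already-decoded blocks
            if len(p) == 1:              # degree-1 packet reveals a block
                decoded |= p
                progress = True
    return len(decoded)

def simulate_P_N(N, degree_dist, trials=20000, seed=1):
    """Estimate P_N: probability that exactly N packets decode all N blocks.
    degree_dist is a list of (degree, probability) pairs."""
    rng = random.Random(seed)
    degrees = [d for d, _ in degree_dist]
    weights = [w for _, w in degree_dist]
    success = 0
    for _ in range(trials):
        packets = [frozenset(rng.sample(range(N),
                                        rng.choices(degrees, weights)[0]))
                   for _ in range(N)]
        if peel(packets) == N:
            success += 1
    return success / trials

est = simulate_P_N(3, [(1, 1.0)])   # degree-1 only; true value is 2/9
```

Plugging in other (degree, probability) lists lets one compare distributions the same way.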

Combinatorial approach
Recursive algorithm
In order to calculate the success probability, we condition on n − m of the n received packets having degree 1, which happens with a probability equal to the (n − m)th point probability of the binomial distribution Bin(n, p_1). For successful decoding one must necessarily have n − m ≥ 1, otherwise the decoding does not get started. Because successful decoding after n received packets requires that no packets are wasted, there must be no duplicates, and all the n − m degree-1 packets must be distinct. This happens with probability (n − 1)!/(m! · n^(n−m−1)).
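The distinctness probability above can be verified by brute force for small n (a check we add for illustration): n − m degree-1 packets each pick one of n blocks uniformly, and all picks must differ.

```python
from math import factorial
from itertools import product

def distinct_prob_formula(n, m):
    """(n-1)! / (m! * n^(n-m-1)), the probability quoted on the slide."""
    return factorial(n - 1) / (factorial(m) * n**(n - m - 1))

def distinct_prob_enum(n, m):
    """Exhaustive enumeration of all n^(n-m) choices of degree-1 packets."""
    k = n - m
    hits = sum(1 for choice in product(range(n), repeat=k)
               if len(set(choice)) == k)
    return hits / n**k

print(distinct_prob_formula(4, 2), distinct_prob_enum(4, 2))  # 0.75 0.75
```

Both routes agree, e.g. for n = 3, m = 1: two degree-1 packets over three blocks are distinct with probability 3·2/9 = 2/3 = 2!/(1!·3^1).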

Given the n − m distinct degree-1 packets, we have a remaining decoding problem for the m other packets, which originally are surely at least of degree 2, but whose degrees may be modified when the n − m degree-1 packets are removed from the other packets in the decoding process. This yields a recursion for the success probability.


Simulation results

Fig. 2. Optimal degree distribution for N = 16: the maximizing distribution has non-zero point probabilities only at p_1, p_2, p_4, p_8 and p_16, together with the resulting P_16.

Fig. 3. P_3 as a function of p_2 and p_3, with p_1 = 1 − p_2 − p_3. The principal directions at the maximum point are also shown.

Fig. 4. Maximized success probability P_N (left) and the relative overhead for N = 1, ..., 20 packets. For such small values of N the performance of LT codes is rather poor. The highest relative overhead, 44.6%, occurs at N = 9.

Conclusions
In this paper we have focused on optimizing the degree distribution of LT codes when the message length N is small. The decoding process constitutes a Markov chain, which allows determining the optimal degree distribution. An alternative combinatorial approach leads to recursive equations for the success probability (recursion on N).

References
[4] E. Hyytiä, T. Tirronen, and J. Virtamo, "Optimizing the degree distribution of LT codes with an importance sampling approach," in RESIM 2006, 6th International Workshop on Rare Event Simulation, Bamberg, Germany, Oct. 2006.
[5] M. Luby, "LT codes," in Proceedings of the 43rd Annual IEEE Symposium on Foundations of Computer Science, 2002, pp. 271–282.
[8] S. M. Ross, Introduction to Probability Models, 7th ed. Academic Press, 2000.
[9] Wolfram Research, Inc., "Mathematica."