On the Optimization of Degree Distributions in LT Code with Covariance Matrix Adaptation Evolution Strategy
Chih-Ming Chen, Student Member, IEEE, Ying-ping Chen, Member, IEEE, Tzu-Ching Shen, and John K. Zao, Senior Member, IEEE
2010 IEEE Congress on Evolutionary Computation (CEC)
Outline
Introduction
Optimization method
Decision Variables
Objectives
Experiments and results
Introduction
LT codes require an appropriate degree distribution: the soliton distribution.
Researchers have started to optimize the degree distribution [5][6], but these studies focus only on the parameters of the soliton distribution.
We instead consider the degree distribution itself directly as our decision variables.
[5] E. A. Bodine and M. K. Cheng, "Characterization of Luby transform codes with small message size for low-latency decoding," in IEEE International Conference on Communications (ICC '08), 2008, pp. 1195-1199.
[6] E. Hyytia, T. Tirronen, and J. Virtamo, "Optimal degree distribution for LT codes with small message length," in Proceedings of the 26th IEEE International Conference on Computer Communications (INFOCOM 2007), 2007, pp. 2576-2580.
Introduction: Raptor codes
Raptor codes integrate LT code with a pre-coding layer and require a special degree distribution, called the weakened LT distribution.
Several instances were given in [9] for certain particular sizes of source symbols.
We demonstrate the use of optimization techniques proposed in evolutionary computation for generating degree distributions with different, desired properties.
[9] A. Shokrollahi, "Raptor codes," IEEE Transactions on Information Theory, vol. 52, no. 6, pp. 2551-2567, 2006.
In this paper
We utilize evolutionary computation techniques to optimize the degree distribution of LT code and demonstrate the feasibility of customizing degree distributions for different purposes.
In particular, we adopt the covariance matrix adaptation evolution strategy (CMA-ES) [10] to directly optimize degree distributions for two goals: reducing the overhead and lowering the failure rate.
The experimental results are remarkably promising.
LT code: Soliton distribution
After k processing steps, the source data can ideally be recovered.
The overhead ε = K/k measures the performance of LT code, where k is the number of source symbols and K is the number of encoding symbols received by the receiver.
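As a reference point for what follows, the ideal soliton distribution has a closed form: ρ(1) = 1/k and ρ(d) = 1/(d(d−1)) for d = 2, …, k. The short Python sketch below is added for illustration and is not taken from the slides.

```python
def ideal_soliton(k):
    """Ideal soliton distribution rho(d) for degrees d = 1..k (index 0 unused)."""
    rho = [0.0] * (k + 1)
    rho[1] = 1.0 / k
    for d in range(2, k + 1):
        rho[d] = 1.0 / (d * (d - 1))
    return rho  # the entries sum to 1 by construction
```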
LT code: Robust soliton distribution
The robust soliton distribution ensures that only K = k + O(√k · ln²(k/δ)) encoding symbols are required to recover the source data with probability at least 1 − δ.
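The robust soliton distribution referred to here is Luby's standard construction: an extra "spike" term τ is added to the ideal soliton part and the result is normalized. The sketch below follows that construction; the parameter values c and δ are illustrative defaults, since the slide does not specify them.

```python
import math

def robust_soliton(k, c=0.1, delta=0.5):
    """Robust soliton distribution mu(d) for degrees d = 1..k (index 0 unused)."""
    # Ideal soliton part rho(d)
    rho = [0.0] * (k + 1)
    rho[1] = 1.0 / k
    for d in range(2, k + 1):
        rho[d] = 1.0 / (d * (d - 1))
    # Extra part tau(d) with a spike at d = k / R
    R = c * math.log(k / delta) * math.sqrt(k)
    tau = [0.0] * (k + 1)
    spike = min(int(k / R), k)
    for d in range(1, spike):
        tau[d] = R / (d * k)
    if spike >= 1:
        tau[spike] = R * math.log(R / delta) / k
    # Normalize; beta also gives the expected reception overhead K/k
    beta = sum(rho[d] + tau[d] for d in range(1, k + 1))
    return [(rho[d] + tau[d]) / beta for d in range(k + 1)]
```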
LT code
Optimization method
Evolution strategies (ES) evolve strategic parameters as well as decision variables and are well known to be quite capable of dealing with continuous optimization problems.
They use natural, problem-dependent representations, and primarily mutation and selection as search operators.
An iteration of the loop is called a generation; the sequence of generations is continued until a termination criterion is met.
ES
ES is a repeated interplay of variation (via mutation and recombination) and selection.
In each generation (iteration), new individuals (candidate solutions, denoted as x) are generated by variation, and then some individuals are selected for the next generation based on their fitness or objective function value.
Over the generation sequence, individuals with better and better fitness values are thus generated.
The simplest example is the (1+1)-ES, sketched below.
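For illustration (a toy sketch, not from the slides), the (1+1)-ES can be written in a few lines: one parent, one Gaussian-mutated offspring per generation, and survival of the better of the two.

```python
import random

def one_plus_one_es(f, x0, sigma=0.3, generations=200):
    """Minimal (1+1)-ES: mutate the single parent, keep the better individual."""
    parent, f_parent = list(x0), f(x0)
    for _ in range(generations):
        child = [xi + random.gauss(0.0, sigma) for xi in parent]  # variation
        f_child = f(child)
        if f_child <= f_parent:                                   # selection
            parent, f_parent = child, f_child
    return parent, f_parent

# Toy usage: minimize the sphere function f(x) = sum(x_i^2)
best, value = one_plus_one_es(lambda x: sum(v * v for v in x), [1.0, -2.0, 0.5])
```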
Covariance Matrix Adaptation Evolution Strategy (CMA-ES)
In this evolution strategy, new candidate solutions are sampled according to a multivariate normal distribution.
Pairwise dependencies between the variables in the multivariate normal distribution are represented by a covariance matrix.
Covariance matrix adaptation (CMA) is a method to update the covariance matrix of this distribution.
Fewer assumptions on the nature of the underlying objective function are made.
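The sampling step described above can be pictured in a few lines of NumPy: a population of candidates is drawn from N(m, σ²C), where m is the current mean, σ the step size, and C the adapted covariance matrix. The numbers below are toy values for illustration only; the adaptation rules for m, σ, and C are omitted.

```python
import numpy as np

mean = np.zeros(3)                      # current distribution mean m
sigma = 0.5                             # global step size
C = np.array([[1.0, 0.3, 0.0],          # adapted covariance matrix C
              [0.3, 1.0, 0.0],
              [0.0, 0.0, 2.0]])

# Sample lambda candidate solutions from N(m, sigma^2 * C)
lam = 8
candidates = np.random.multivariate_normal(mean, sigma**2 * C, size=lam)
```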
Decision Variables
We use the degree distribution to form a real-number vector; in the evaluation phase, a real-number vector of arbitrary values can be interpreted as a probability distribution.
Since we usually do not need a non-zero probability on every single degree, we choose some degrees, called tags, to form the vector v(i) of decision variables.
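One simple way to realize this encoding is sketched below. The abs-and-normalize mapping is an assumption for illustration; the slides only state that arbitrary real vectors are interpreted as probability distributions over the tag degrees.

```python
def tags_to_distribution(v, tag_degrees):
    """Interpret an arbitrary real vector v as a probability distribution over
    the tag degrees; every other degree gets probability zero.
    (The abs-and-normalize mapping is an illustrative assumption.)"""
    weights = [abs(x) for x in v]
    total = sum(weights) or 1.0            # guard against an all-zero vector
    return {d: w / total for d, w in zip(tag_degrees, weights)}

# Example: tags at degrees 1, 2, 3, 5, 8, 13 with uniform initial values 1/|v|
tags = [1, 2, 3, 5, 8, 13]
v0 = [1.0 / len(tags)] * len(tags)
print(tags_to_distribution(v0, tags))      # uniform over the tag degrees
```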
Objectives
We use two indicators to evaluate degree distributions for LT code.
The first is the efficiency of the LT code with the optimized degree distribution: ε denotes the expected rate of overhead to transmit data, and the objective is to obtain a degree distribution for a specific k with the smallest ε.
We provide infinite encoding symbols, in the form of a stream of encoding symbols, to simulate the decoding process until all source data are recovered.
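A Monte Carlo simulation of this kind could be implemented with a peeling decoder, as in the sketch below (my own simplified simulator, not the authors' code). With no symbol budget it keeps drawing encoding symbols until all k source symbols are recovered and reports the overhead K/k; with a fixed budget it reports how many source symbols remain unrecovered, which is the quantity used by the second objective on the next slide.

```python
import random

def _sample_degree(dist):
    """Draw a degree from a {degree: probability} mapping."""
    r, acc = random.random(), 0.0
    for d, p in sorted(dist.items()):
        acc += p
        if r <= acc:
            return d
    return max(dist)                              # numerical fallback

def lt_peel(k, dist, budget=None, max_symbols=None):
    """Peeling-decoder simulation of LT decoding.

    budget=None: receive symbols until full recovery, return the overhead K/k.
    budget=K:    receive exactly K symbols, return #unrecovered source symbols.
    """
    if max_symbols is None:
        max_symbols = 50 * k                      # safety cap for poor distributions
    recovered = [False] * k
    n_recovered, received = 0, 0
    pending = []                                  # unresolved neighbour sets
    while n_recovered < k:
        if received >= (budget if budget is not None else max_symbols):
            break
        # Fresh encoding symbol: pick a degree, then distinct source neighbours
        d = min(_sample_degree(dist), k)
        nb = {s for s in random.sample(range(k), d) if not recovered[s]}
        pending.append(nb)
        received += 1
        # Peel: release every buffered symbol with exactly one unknown neighbour
        progress = True
        while progress:
            progress = False
            for sym in pending:
                if len(sym) == 1:
                    s = sym.pop()
                    if not recovered[s]:
                        recovered[s] = True
                        n_recovered += 1
                    for other in pending:
                        other.discard(s)
                    progress = True
            pending = [sym for sym in pending if sym]
    if budget is None:
        return received / k if n_recovered == k else float("inf")
    return k - n_recovered
```

For example, `lt_peel(100, {d: p for d, p in enumerate(robust_soliton(100)) if p > 0})` would return one sampled overhead for the robust soliton distribution from the earlier sketch.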
Objectives
The second indicator is the number of source symbols that cannot be recovered when a constant ratio of encoding symbols is received.
In Raptor codes, a low-density parity-check (LDPC) code [15] is introduced as a second-layer pre-code for LT code; because LDPC can fix errors in the data, it is sufficient that most source symbols can be recovered with a small overhead.
We therefore try to minimize the number of unrecovered source symbols given a constant overhead ε.
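Under the same assumptions, this second objective amounts to calling the lt_peel helper from the previous sketch with a fixed symbol budget K = ⌈ε·k⌉ and averaging the number of unrecovered symbols over repeated runs (again an illustration, not the authors' code; `robust_soliton` and `lt_peel` come from the earlier sketches):

```python
import math

k, eps, runs = 100, 1.1, 100
dist = {d: p for d, p in enumerate(robust_soliton(k)) if p > 0}  # any candidate distribution
budget = math.ceil(eps * k)
avg_failures = sum(lt_peel(k, dist, budget=budget) for _ in range(runs)) / runs
print(f"average unrecovered source symbols at eps = {eps}: {avg_failures:.2f}")
```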
Experiments and results
Tags are encoded as an individual v(i); the initial values of the tags are set uniformly to 1/|v|.
CMA-ES is applied without any customization or modification, and each function evaluation averages one hundred independent runs of simulation.
Two experiments are conducted:
Minimizing the expected number of encoding symbols for full decoding.
Minimizing the average number of source symbols that cannot be recovered for a constant ε = 1.1.
A possible setup along these lines is sketched below.
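This sketch reuses tags_to_distribution and lt_peel from the earlier sketches together with an off-the-shelf CMA-ES implementation (the pycma package). The library choice, tag degrees, message size, and step size are my assumptions for illustration; the slides only state that CMA-ES was applied without modification.

```python
import cma                                      # pip install cma

TAGS = [1, 2, 3, 4, 5, 8, 13, 21, 34, 55]       # illustrative tag degrees
K_SOURCE = 100                                  # illustrative number of source symbols
RUNS_PER_EVAL = 100                             # 100 independent simulations per evaluation

def objective(v):
    """Average overhead of the degree distribution encoded by v (to be minimized)."""
    dist = tags_to_distribution(v, TAGS)        # helper from the earlier sketch
    return sum(lt_peel(K_SOURCE, dist) for _ in range(RUNS_PER_EVAL)) / RUNS_PER_EVAL

x0 = [1.0 / len(TAGS)] * len(TAGS)              # uniform initial tag values 1/|v|
es = cma.CMAEvolutionStrategy(x0, 0.1)          # initial step size is a guess
while not es.stop():
    solutions = es.ask()                        # sample candidate tag vectors
    es.tell(solutions, [objective(s) for s in solutions])
best_distribution = tags_to_distribution(es.result.xbest, TAGS)
```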
Overhead
We minimize the overhead ε for different sizes of k.
The expected overhead of the robust soliton distribution
Failure rate
In the second set of experiments, we are concerned with how many source symbols can be recovered.
The objective value is the average number of source symbols that cannot be recovered with a constant overhead ε.
ε = 1.1
Conclusion
We algorithmically optimized the degree distribution adopted in LT code with evolutionary computation.
CMA-ES was indeed capable of finding good degree distributions for different purposes without any guideline or human intervention.
Two sets of experiments were conducted: minimizing the overhead and reducing the decoding failure rate.
The optimized overhead was decreased by at least 10%, and the results of failure-rate minimization were also remarkably promising.