Code and Decoder Design of LDPC Codes for Gbps Systems Jeremy Thorpe Presented to: Microsoft Research
Talk Overview Introduction (13 slides) Wiring Complexity (9 slides) Logic Complexity (7 slides)
Reliable Communication over Unreliable Channels The channel is the means by which information is communicated from sender to receiver. The sender chooses X; the channel generates Y from the conditional probability distribution P(Y|X); the receiver observes Y. [Diagram: X → channel P(Y|X) → Y]
Shannon’s Channel Coding Theorem Using the channel n times, we can communicate k bits of information with probability of error as small as we like, as long as R = k/n < C and n is large enough. C (the capacity) is a number that characterizes any channel. The same is impossible if R > C.
The Coding Strategy The encoder chooses the m-th codeword in codebook C and transmits it across the channel. The decoder observes the channel output y and generates m’ based on its knowledge of the codebook C and the channel statistics. [Diagram: Encoder → Channel → Decoder]
Linear Codes A linear code C can be defined in terms of either a generator matrix G (k×n), whose rows span C, or a parity-check matrix H ((n−k)×n), whose rows are orthogonal to every codeword: C = {x : Hxᵀ = 0}.
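As a concrete illustration of the two descriptions agreeing, the small (7,4) Hamming code below is a hypothetical stand-in (a real LDPC code would be far larger and sparser): every row of G is orthogonal to every row of H over GF(2), so any encoded message has an all-zero syndrome.

```python
import numpy as np

# Illustrative stand-in: the (7,4) Hamming code, a small linear code.
# G is k x n (4 x 7); H is (n-k) x n (3 x 7).
G = np.array([[1, 0, 0, 0, 0, 1, 1],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 1, 1, 0],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[0, 1, 1, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [1, 1, 0, 1, 0, 0, 1]])

# The two descriptions agree: G @ H.T = 0 (mod 2).
assert ((G @ H.T) % 2).sum() == 0

# Encoding a message with G yields a word with an all-zero syndrome under H.
m = np.array([1, 0, 1, 1])
x = (m @ G) % 2
syndrome = (H @ x) % 2
```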
Regular LDPC Codes LDPC codes are linear codes defined in terms of H. The number of ones in each column of H is a fixed number λ; the number of ones in each row of H is a fixed number ρ. Typical parameters for regular LDPC codes are (λ,ρ) = (3,6).
Graph Representation of LDPC Codes H is represented by a bipartite graph. Nodes v (degree λ) on the left represent variables; nodes c (degree ρ) on the right represent equations: the variables adjacent to c must sum to 0 (mod 2). [Diagram: variable nodes on the left, check nodes on the right]
Message-Passing Decoding of LDPC Codes Message Passing (or Belief Propagation) decoding is a low-complexity algorithm which approximately answers the question “what is the most likely x given y?” MP recursively defines messages m_{v,c}^{(i)} from each variable node v to each adjacent check node c, and m_{c,v}^{(i)} in the reverse direction, for iterations i = 0, 1, ...
Two Types of Messages Likelihood ratio: λ(y) = P(y|x=0)/P(y|x=1). For y_1,...,y_n independent conditionally on x: λ(y_1,...,y_n) = λ(y_1)···λ(y_n). Probability difference: δ(x) = P(x=0) − P(x=1). For x_1,...,x_n independent: δ(x_1 ⊕ ··· ⊕ x_n) = δ(x_1)···δ(x_n).
...Related by the Bilinear Transform Definition: δ = B(λ) = (λ−1)/(λ+1). Properties: B maps a likelihood ratio to the corresponding probability difference, and its inverse λ = (1+δ)/(1−δ) is again a bilinear transform.
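A numeric sketch of the transform. The explicit formulas below are the standard ones relating these two quantities (sign conventions for the probability difference vary across references):

```python
# The bilinear transform relating the two message types. lam denotes a
# likelihood ratio P(y|x=0)/P(y|x=1); delta a probability difference
# P(x=0) - P(x=1).

def pdiff(lam):
    """Likelihood ratio -> probability difference."""
    return (lam - 1.0) / (lam + 1.0)

def lratio(delta):
    """Probability difference -> likelihood ratio (also bilinear)."""
    return (1.0 + delta) / (1.0 - delta)

# With normalized probabilities p0 + p1 = 1, pdiff(p0/p1) = p0 - p1.
p0, p1 = 0.9, 0.1
check = pdiff(p0 / p1)   # approximately 0.8 = p0 - p1
```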
Variable to Check Messages On iteration i, the message from v to c is the product of v's channel likelihood ratio with the messages from v's other neighboring checks: m_{v,c}^{(i)} = λ_v · Π_{c' ∈ N(v)\{c}} m_{c',v}^{(i−1)}, with m_{c',v}^{(−1)} = 1.
Check to Variable Messages On iteration i, the message from c to v is formed as a product in the probability-difference domain and mapped back: m_{c,v}^{(i)} = B^{−1}( Π_{v' ∈ N(c)\{v}} B(m_{v',c}^{(i)}) ).
Decision Rule After sufficiently many iterations, return the total likelihood ratio λ_v · Π_{c ∈ N(v)} m_{c,v}^{(i)}, deciding x_v = 0 exactly when the ratio is at least 1.
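Pulling the two update rules and the decision rule together gives a toy decoder sketch. The (7,4) Hamming matrix, BSC crossover probability p, and iteration count are illustrative assumptions, not parameters from the talk:

```python
# Toy message-passing decoder in the likelihood-ratio domain, combining the
# variable-to-check rule, the check-to-variable rule, and the decision rule.
# The (7,4) Hamming code and the BSC crossover probability p are illustrative
# stand-ins for a real LDPC code and channel.

def pdiff(lam):  # bilinear transform: likelihood ratio -> prob. difference
    return (lam - 1.0) / (lam + 1.0)

def lratio(d):   # inverse bilinear transform
    return (1.0 + d) / (1.0 - d)

def decode(H, lam, iters=20):
    m, n = len(H), len(H[0])
    edges = [(c, v) for c in range(m) for v in range(n) if H[c][v]]
    mv = {(c, v): lam[v] for (c, v) in edges}  # variable-to-check messages
    mc = {(c, v): 1.0 for (c, v) in edges}     # check-to-variable messages
    for _ in range(iters):
        for c, v in edges:   # check-to-variable update
            prod = 1.0
            for c2, v2 in edges:
                if c2 == c and v2 != v:
                    prod *= pdiff(mv[(c, v2)])
            mc[(c, v)] = lratio(prod)
        for c, v in edges:   # variable-to-check update
            prod = lam[v]
            for c2, v2 in edges:
                if v2 == v and c2 != c:
                    prod *= mc[(c2, v)]
            mv[(c, v)] = prod
    # decision rule: total likelihood ratio for each variable
    decisions = []
    for v in range(n):
        total = lam[v]
        for c, v2 in edges:
            if v2 == v:
                total *= mc[(c, v)]
        decisions.append(0 if total >= 1.0 else 1)
    return decisions

# All-zero codeword over a BSC with p = 0.1; the channel flips bit 0.
H = [[0, 1, 1, 1, 1, 0, 0],
     [1, 0, 1, 1, 0, 1, 0],
     [1, 1, 0, 1, 0, 0, 1]]
p = 0.1
y = [1, 0, 0, 0, 0, 0, 0]
lam = [p / (1 - p) if bit else (1 - p) / p for bit in y]
decoded = decode(H, lam)
```

This flooding schedule (all check messages, then all variable messages) is one common choice; serial schedules are also used in practice.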
Theorem about MP Algorithm If the algorithm stops after r iterations, then it returns the maximum a posteriori probability estimate of x_v given the y within radius r of v, provided that the variables within radius r of v are constrained only by the equations within radius r of v.
Wiring Complexity
Physical Implementation (VLSI) We have seen that the MP decoding algorithm for LDPC codes is defined in terms of a graph: nodes perform local computation; edges carry messages from v to c, and from c to v. Instantiate this graph on a chip! Edges → Wires, Nodes → Logic units.
Complexity vs. Performance Longer codes provide: more efficient use of the channel (e.g. less power used over the AWGN channel); faster throughput for fixed technology and decoding parameters (number of iterations). Longer codes demand: more logic resources, and far more wiring resources.
The Wiring Problem The number of edges in the graph grows like the number of nodes n. On a chip of area proportional to n, the length of a typical edge in a random graph also grows like √n, so the total wire length grows like n^(3/2). [Figure: a random graph]
Graph Construction? Idea: find an explicit construction that has low wire length and maintains good performance. Drawback: it is difficult to construct any graph that matches the performance of a random graph.
A Better Solution: Use an algorithm which generates a graph at random, but with a preference for: short edge length; quantities related to good code performance.
Conventional Graph Wisdom Short loops give rise to dependent messages (which are assumed to be independent) after a small number of iterations, and should be avoided.
Simulated Annealing! Simulated annealing approximately minimizes an energy function over a solution space. It requires a good way to traverse the solution space.
Generating LDPC graphs with Simulated Annealing Define an energy function with two components: wire length and loopiness. Traverse the space by picking two edges at random and swapping their endpoints, accepting or rejecting the swap according to the annealing rule.
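A minimal sketch of this procedure, under made-up assumptions: a toy 1-D placement, a wirelength-only energy (the loopiness term is omitted for brevity), and an arbitrary cooling schedule.

```python
import math
import random

# Sketch of annealing over LDPC graphs. The placement, energy function,
# schedule, and graph size are all made-up illustrative choices; a real
# design would add a loopiness term to the energy.
random.seed(0)

n_var, n_chk, deg = 12, 6, 2  # small variable-regular bipartite graph
edges = [(v, c) for v in range(n_var) for c in random.sample(range(n_chk), deg)]

def wirelength(es):
    # made-up 1-D placement: variable v sits at position v, check c at 2*c
    return sum(abs(v - 2 * c) for v, c in es)

def anneal(edges, steps=2000, temp=5.0, cooling=0.999):
    e = best_e = wirelength(edges)
    best = list(edges)
    for _ in range(steps):
        # move: pick two edges at random and swap their check endpoints;
        # this preserves the degree of every node
        i, j = random.sample(range(len(edges)), 2)
        (v1, c1), (v2, c2) = edges[i], edges[j]
        new_i, new_j = (v1, c2), (v2, c1)
        if new_i in edges or new_j in edges:
            continue  # skip swaps that would duplicate an existing edge
        trial = list(edges)
        trial[i], trial[j] = new_i, new_j
        te = wirelength(trial)
        # Metropolis rule: accept improvements always, uphill moves sometimes
        if te <= e or random.random() < math.exp((e - te) / temp):
            edges, e = trial, te
            if te < best_e:
                best, best_e = trial, te
        temp *= cooling
    return best, best_e

init_e = wirelength(edges)
best, best_e = anneal(list(edges))
```

The edge-swap move is what makes this work: it explores the space of graphs with a fixed degree sequence, so every intermediate graph is a valid (λ,ρ)-regular candidate.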
Results of Simulated Annealing The annealed graph has nearly identical performance to the random graph shown previously. [Figure: a graph generated by simulated annealing]
Logic Complexity
Complexity of Classical Algorithm The original algorithm defines messages in terms of arithmetic operations over real numbers. However, this implies floating-point addition, multiplication, and even division!
A Modified Algorithm We define a modified algorithm in which every message is the logarithm of the corresponding message in the original scheme. The channel message λ is similarly replaced by its logarithm.
Quantization Taking logarithms replaces a product by a sum, but now we need a transcendental function φ. However, if we quantize the messages, we can pre-compute φ for all quantized values!
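A sketch of the lookup-table idea. The function φ(x) = −ln tanh(x/2) is the usual transcendental function in log-domain LDPC decoding and is assumed here, as are the table width and range:

```python
import math

# phi(x) = -ln(tanh(x/2)): the transcendental function that appears when the
# check-node product is moved to the log domain. The quantizer width and
# range below are made-up design choices.
def phi(x):
    return -math.log(math.tanh(x / 2.0))

BITS, MAX = 4, 8.0
LEVELS = 2 ** BITS
STEP = MAX / LEVELS

# Pre-compute phi once, at design time, for every quantizer level.
TABLE = [phi((q + 0.5) * STEP) for q in range(LEVELS)]

def quantize(x):
    return min(int(x / STEP), LEVELS - 1)  # top level saturates

# At run time, evaluating phi is just an array lookup -- no transcendental
# hardware needed.
x = 1.7
approx = TABLE[quantize(x)]
exact = phi(x)
```

A convenient property: φ is its own inverse on (0, ∞), so the same table can serve both the forward and inverse transforms.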
Quantized MP Performance The graph on the following page shows the bit error rate for a regular (3,6) code of length n=10,000 using between 2 and 4 bits of quantization. (Some error floors are predicted by density evolution; some are not.)
Quantization Tradeoffs A quantizer is characterized by its range and granularity. For fixed channel quantization: a finely granulated quantizer (Q1) performs well at low SNR. However, the quantizer's range must be broadened (Q2) to avoid saturation and the resulting error floor.
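The tradeoff can be seen in a tiny made-up example: two quantizers with the same bit budget but different ranges, standing in for Q1 and Q2.

```python
# Two hypothetical quantizers with the same bit budget: q1 trades range for
# granularity, q2 trades granularity for range.
def make_quantizer(max_val, bits):
    levels = 2 ** bits
    step = max_val / levels
    def q(x):
        i = min(int(abs(x) / step), levels - 1)  # top level saturates
        return (i + 0.5) * step * (1 if x >= 0 else -1)
    return q

q1 = make_quantizer(2.0, 3)  # fine granularity, narrow range
q2 = make_quantizer(8.0, 3)  # coarse granularity, wide range

# Small messages (common at low SNR): q1 represents them more accurately.
# Large messages (confident bits at high SNR): q1 saturates near 2; q2 does not.
small, large = 0.3, 6.0
```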
Conclusion There is a tradeoff between logic complexity and performance. Nearly optimal performance (+.1 dB = ×1.03 power) is achievable with 4-bit messages. More work is needed to avoid error floors due to quantization.