Low Density Parity Check codes


Low Density Parity Check codes
- Performance similar to turbo codes
- Do not require a long interleaver to achieve good performance
- Better block error performance; the error floor occurs at a lower BER
- Decoding is not trellis based
- Iterative decoding: more iterations than turbo decoding, but simpler operations in each iteration step
- Not covered by patents!

Introduction
A binary parity check matrix H of dimensions (n-k) × n
C = { v ∈ GF(2)^n : v H^T = 0 }
(Regular) LDPC code: the null space of a matrix H which has the following properties:
- Its J rows are not necessarily linearly independent
- ρ 1s in each row
- γ 1s in each column
- No pair of columns has more than one common 1
- ρ and γ are small compared to the number of rows (J) and columns (n) in H
Density r = ρ/n = γ/J
Irregular: the number of 1s in rows/columns may vary
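These defining properties are easy to verify mechanically. The sketch below (a hypothetical example, not from the slides) checks the row weight ρ, the column weight γ, the one-common-1 condition, and the density r = ρ/n = γ/J for a small matrix: the incidence matrix of the complete graph K4, which satisfies all of them with γ = 2 and ρ = 3.

```python
import numpy as np

# Hypothetical example: incidence matrix of K4 (4 vertices, 6 edges).
# Each column (edge) has gamma = 2 ones, each row (vertex) has rho = 3 ones,
# and two edges of a simple graph share at most one vertex, so no pair of
# columns has more than one common 1.
H = np.array([
    [1, 1, 1, 0, 0, 0],
    [1, 0, 0, 1, 1, 0],
    [0, 1, 0, 1, 0, 1],
    [0, 0, 1, 0, 1, 1],
])
J, n = H.shape

rho = int(H[0].sum())       # ones per row
gamma = int(H[:, 0].sum())  # ones per column
assert all(int(r.sum()) == rho for r in H)       # every row has rho ones
assert all(int(c.sum()) == gamma for c in H.T)   # every column has gamma ones

# No pair of columns shares more than one common 1:
overlap = H.T @ H           # overlap[a, b] = number of common 1s of columns a, b
assert all(overlap[a, b] <= 1 for a in range(n) for b in range(n) if a != b)

# Density: r = rho/n = gamma/J = (number of 1s) / (J * n)
assert rho / n == gamma / J == H.sum() / (J * n)
```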

Example
J = n = 15, γ = ρ = 4

Gallager’s LDPC codes
- Select γ and ρ
- Form a γk × ρk matrix H consisting of γ submatrices H1,...,Hγ, each of dimensions k × ρk
- H1 is a matrix such that its ith row has its ρ 1s confined to columns (i-1)ρ+1 to iρ
- The other submatrices are column permutations of H1
- The actual column permutations define the properties of the code
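A sketch of this construction follows. The column permutations here are chosen at random, which does not by itself guarantee the one-common-1 property; as the slides note, the actual permutations must be chosen with care.

```python
import numpy as np

def gallager_parity_check(gamma, rho, k, seed=0):
    """Gallager-ensemble sketch: a (gamma*k) x (rho*k) matrix built from
    gamma stacked k x (rho*k) submatrices H1, ..., H_gamma."""
    n = rho * k
    # H1: row i has its rho ones confined to columns i*rho .. (i+1)*rho - 1
    H1 = np.zeros((k, n), dtype=int)
    for i in range(k):
        H1[i, i * rho:(i + 1) * rho] = 1
    rng = np.random.default_rng(seed)
    # The remaining submatrices are column permutations of H1.  NOTE: a
    # random permutation is only a sketch; in a real design the permutations
    # are selected so that no two columns share more than one common 1.
    blocks = [H1] + [H1[:, rng.permutation(n)] for _ in range(gamma - 1)]
    return np.vstack(blocks)

# The example parameters from the next slide: gamma = 3, rho = 4, k = 5
H = gallager_parity_check(gamma=3, rho=4, k=5)
assert H.shape == (15, 20)
assert all(int(r.sum()) == 4 for r in H)     # rho ones per row
assert all(int(c.sum()) == 3 for c in H.T)   # gamma ones per column
```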

Gallager’s LDPC codes: Example
Select γ = 3, ρ = 4
Column permutations for H2 and H3 selected so that no two columns have more than one 1 in common
The result is a (20,7,6) code

Rows and parity checks
Each row of H forms a parity check
Let Al be the set of rows {h1^(l),...,hγ^(l)} that have a one in position l, i.e. the set of parity checks on symbol l
Since no pair of columns has more than one common 1, any code bit other than bit l is checked by at most one of the rows in Al
Thus there are γ rows orthogonal on l, and any error pattern of weight ≤ ⌊γ/2⌋ can be decoded by one-step majority logic decoding
However (especially when γ is small), this gives poor performance
Gallager proposed two iterative decoding algorithms, one hard decision based and one soft decision based

Brief overview of graph theoretic notions
An undirected graph G(V,E):
- Degree of a vertex/node: the number of edges adjacent to it
- Adjacent edges: edges connected to the same vertex
- Path in a graph: an alternating sequence of vertices and edges (starting and ending in a vertex), where all vertices are distinct except possibly the first and the last
- Tree: a graph without cycles
- Connected graph: there is a path between any pair of vertices
- Bipartite graph: the set of vertices can be partitioned into two disjoint parts, so that each edge goes from a vertex in one part to a vertex in the other

Graphical description of LDPC codes
Tanner graph: a bipartite graph G(V,E), where V = V1 ∪ V2
- V1 is a set of n code-bit nodes v0,...,vn-1, each representing one of the n code bits
- V2 is a set of J check nodes s0,...,sJ-1, each representing one of the J parity checks
- There is an edge between v ∈ V1 and s ∈ V2 iff the code bit corresponding to v is contained in the parity check s
Because no pair of columns of H has more than one common 1, the Tanner graph of an LDPC code has no cycles of length 4
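Both the edge set of the Tanner graph and the no-4-cycle condition can be read off H directly: a length-4 cycle exists exactly when two check nodes (rows) share two bit nodes, i.e. when an off-diagonal entry of H·H^T exceeds 1. A minimal sketch with hypothetical example matrices:

```python
import numpy as np

def tanner_edges(H):
    """Edges of the Tanner graph: pairs (check node j, bit node l) with H[j, l] = 1."""
    return [(j, l) for j in range(H.shape[0]) for l in range(H.shape[1]) if H[j, l]]

def has_length4_cycle(H):
    """True iff two rows of H share more than one column containing a 1."""
    overlap = H @ H.T          # overlap[a, b] = columns shared by rows a and b
    J = H.shape[0]
    return any(overlap[a, b] > 1 for a in range(J) for b in range(J) if a != b)

# Hypothetical example: incidence matrix of K4; no two columns share >1 one,
# so its Tanner graph is free of length-4 cycles.
H = np.array([
    [1, 1, 1, 0, 0, 0],
    [1, 0, 0, 1, 1, 0],
    [0, 1, 0, 1, 0, 1],
    [0, 0, 1, 0, 1, 1],
])
assert len(tanner_edges(H)) == int(H.sum())  # one edge per 1 in H
assert not has_length4_cycle(H)

# Counterexample: two rows sharing two columns gives a length-4 cycle.
H_bad = np.array([
    [1, 1, 0],
    [1, 1, 0],
    [0, 0, 1],
])
assert has_length4_cycle(H_bad)
```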

Example

Decoding of LDPC codes
Several decoding techniques are available. Listed in order of increasing complexity (and improving performance):
- Majority logic decoding (MLG)
- Bit flipping (BF)
- Weighted bit flipping (WBF)
- Iterative decoding based on belief propagation (IDBP), also known as the sum-product algorithm (SPA)
- A posteriori probability decoding (APP)

Decoding notation
- Codeword = channel input: v = (v0,..., vn-1), binary
- Assume BPSK modulation on an AWGN channel
- Channel output: y = (y0,..., yn-1)
- Hard decisions: z = (z0,..., zn-1), where zj = 1 if yj > 0 and zj = 0 if yj < 0
- Rows of H: h1,..., hJ
- Syndrome: s = (s1,..., sJ) = z · H^T, with sj = z · hj = Σ_{l=0}^{n-1} zl hj,l
- Parity failure: at least one syndrome bit nonzero
- Error pattern: e = v + z
- Equivalently s = e · H^T, with sj = e · hj = Σ_{l=0}^{n-1} el hj,l
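The notation maps directly to code. A minimal sketch (the parity-check matrix and the received soft values are made up for illustration):

```python
import numpy as np

# Hypothetical parity-check matrix and BPSK/AWGN channel output.
H = np.array([
    [1, 1, 1, 0, 0, 0],
    [1, 0, 0, 1, 1, 0],
    [0, 1, 0, 1, 0, 1],
    [0, 0, 1, 0, 1, 1],
])
y = np.array([0.8, -1.1, 0.3, -0.2, -1.5, -0.7])  # made-up soft values

# Hard decisions: z_j = 1 if y_j > 0, else 0
z = (y > 0).astype(int)

# Syndrome: s = z * H^T (mod 2); parity failure iff any syndrome bit is nonzero
s = (H @ z) % 2
parity_failure = bool(s.any())
```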

Majority logic decoding
One-step MLG (see Chapter 8)
Let Al be the set of rows {h1^(l),...,hγ^(l)} that have a one in position l, i.e. the set of parity checks on symbol l
Set of γ parity checks orthogonal on code bit l: Sl = { s = z · h = e · h = Σ_{l=0}^{n-1} el hl : h ∈ Al }
If a majority of the parity checks on bit l are satisfied, conclude that el = 0; otherwise, assume that el = 1
This decoding is guaranteed to be correct if at most ⌊γ/2⌋ errors occur
Problem: γ is by assumption a small number
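One-step MLG on a single bit can be sketched as follows. The example matrix and codeword are hypothetical; for bit l, the checks in Al are simply the rows of H with a 1 in position l.

```python
import numpy as np

def mlg_decide(H, z, l):
    """One-step majority-logic decision for bit l:
    e_l = 1 iff a majority of the parity checks orthogonal on l fail."""
    A_l = H[H[:, l] == 1]           # the rows with a one in position l
    checks = (A_l @ z) % 2          # the gamma check sums orthogonal on l
    return int(checks.sum() > len(checks) / 2)

# Hypothetical example: K4 incidence matrix, one error in position 0.
H = np.array([
    [1, 1, 1, 0, 0, 0],
    [1, 0, 0, 1, 1, 0],
    [0, 1, 0, 1, 0, 1],
    [0, 0, 1, 0, 1, 1],
])
v = np.array([1, 0, 1, 0, 1, 0])    # a codeword: v * H^T = 0 (mod 2)
z = v.copy(); z[0] ^= 1             # single hard-decision error
e_hat = np.array([mlg_decide(H, z, l) for l in range(6)])
# Here gamma = 2, so floor(gamma/2) = 1 error is correctable.
assert ((z + e_hat) % 2 == v).all()
```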

Bit flipping decoding
Introduced by Gallager in 1961. Hard decision decoding:
First, form the hard decisions z = (z0,..., zn-1)
a) Compute the parity check sums s = (s1,..., sJ) = z · H^T, sj = z · hj = Σ_{l=0}^{n-1} zl hj,l
b) For each bit l, find fl = the number of failed parity checks on bit l
c) Let S = the set of bits for which fl is large
d) Flip the value of the bits in S
e) Repeat from a) until all parity checks are satisfied, or a maximum number of iterations has been reached

Bit flipping decoding (continued)
Let S = the set of bits for which fl is large. For example:
- S = the set of bits for which fl exceeds some threshold δ that depends on the code parameters γ, ρ, the minimum distance, and the channel SNR
- If decoding fails, try again with a reduced value of δ
- Simple alternative: S = the set of bits for which fl is maximum
Comments:
- If few errors occur, decoding should terminate in a few iterations
- On a poor channel, the number of iterations should be (allowed to grow) large
- Improvement: adaptive thresholds
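The loop above, using the simple "flip the bits where fl is maximum" rule, can be sketched as follows (the parity-check matrix and error pattern are hypothetical):

```python
import numpy as np

def bit_flip_decode(H, z, max_iter=20):
    """Gallager bit-flipping sketch: repeatedly flip the bits involved in
    the most failed parity checks until the syndrome is zero or we give up."""
    z = z.copy()
    for _ in range(max_iter):
        s = (H @ z) % 2              # a) parity check sums
        if not s.any():              # all checks satisfied
            return z, True
        f = H.T @ s                  # b) f[l] = failed checks on bit l
        z[f == f.max()] ^= 1         # c)+d) flip the set S of worst bits
    return z, False                  # e) iteration limit reached

H = np.array([
    [1, 1, 1, 0, 0, 0],
    [1, 0, 0, 1, 1, 0],
    [0, 1, 0, 1, 0, 1],
    [0, 0, 1, 0, 1, 1],
])
v = np.array([1, 0, 1, 0, 1, 0])    # a codeword of H
z = v.copy(); z[0] ^= 1             # one hard-decision error
decoded, ok = bit_flip_decode(H, z)
```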

Weighted MLG and BF decoding
Use weighting to include soft decision/reliability information in the decoding decision
First, form the hard decisions z = (z0,..., zn-1)
Weight: |y|min^(j) = min{ |yi| : 0 ≤ i ≤ n-1, hj,i = 1 }, the reliability of the jth parity check
Weighted MLG: hard decision on El = Σ_{sj ∈ Sl} (2sj - 1) |y|min^(j), where Sl = { s = z · h = e · h : h ∈ Al }
Weighted BF:
a) Compute the parity checks s = (s1,..., sJ) = z · H^T
b) For each bit l, compute El; flip the value of the bit for which El is largest
c) Repeat from a) until all parity checks are satisfied, or a maximum number of iterations has been reached
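A sketch of weighted bit flipping under these definitions. The soft values are made up; the per-check reliability is the smallest |yi| among the bits that check involves, and the bit with the largest El (most failed, least reliable checks) is flipped.

```python
import numpy as np

def weighted_bf_decode(H, y, max_iter=20):
    """Weighted bit-flipping sketch: flip the single bit with the largest
    E_l = sum over the checks on l of (2*s_j - 1) * |y|_min^(j)."""
    J, n = H.shape
    z = (y > 0).astype(int)
    # Reliability of check j: the smallest |y_i| among the bits it involves
    w = np.array([np.min(np.abs(y)[H[j] == 1]) for j in range(J)])
    for _ in range(max_iter):
        s = (H @ z) % 2
        if not s.any():
            return z, True
        E = H.T @ ((2 * s - 1) * w)  # E[l]: failed checks add w, satisfied subtract
        z[np.argmax(E)] ^= 1         # flip the least trustworthy bit
    return z, False

H = np.array([
    [1, 1, 1, 0, 0, 0],
    [1, 0, 0, 1, 1, 0],
    [0, 1, 0, 1, 0, 1],
    [0, 0, 1, 0, 1, 1],
])
# Made-up channel output: bit 4 is received wrong, but with low reliability.
y = np.array([0.9, -1.0, 1.1, -0.8, -0.2, -1.2])
decoded, ok = weighted_bf_decode(H, y)
```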

The sum-product decoding algorithm
Iterative decoding based on belief propagation
Symbol-by-symbol SISO: the goal is to compute P(vl | y), or equivalently L(vl) = log( P(vl = 1 | y) / P(vl = 0 | y) )
A priori values: pl^x = P(vl = x), x ∈ {0,1}
qj,l^{x,(i)} = P(vl = x | check sums in Al \ {hj} at the ith iteration)
hj = (hj,0, hj,1, ..., hj,n-1); support of hj: B(hj) = { l : hj,l = 1 }
σj,l^{x,(i)} = Σ_{ {vt : t ∈ B(hj)\{l}} } P(sj = 0 | vl = x, {vt : t ∈ B(hj)\{l}}) · Π_{t ∈ B(hj)\{l}} qj,t^{vt,(i)}
qj,l^{x,(i+1)} = αj,l^{(i+1)} pl^x Π_{ht ∈ Al \ {hj}} σt,l^{x,(i)}, with αj,l^{(i+1)} chosen so that qj,l^{0,(i+1)} + qj,l^{1,(i+1)} = 1
P^{(i)}(vl = x | y) = αl^{(i)} pl^x Π_{hj ∈ Al} σj,l^{x,(i-1)}, with αl^{(i)} chosen so that P^{(i)}(vl = 0 | y) + P^{(i)}(vl = 1 | y) = 1
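A probability-domain sketch of these updates. The channel model, noise variance, and example values are assumptions for illustration; the check-node message uses Gallager's identity that the probability a set of bits XORs to 1 equals (1 - Π(1 - 2q_t)) / 2, which is exactly the marginalised sum defining σ above.

```python
import numpy as np

def spa_decode(H, y, sigma2=0.5, max_iter=20):
    """Probability-domain sum-product sketch for BPSK (0 -> -1, 1 -> +1)
    on an AWGN channel with noise variance sigma2.  q[j, l] is the
    bit-to-check message P(v_l = 1 | other checks), sig1[j, l] the
    check-to-bit message; both are only used where H[j, l] = 1."""
    J, n = H.shape
    # A priori values p_l^1 = P(v_l = 1 | y_l) for the Gaussian channel
    p1 = 1.0 / (1.0 + np.exp(-2.0 * y / sigma2))
    q = np.tile(p1, (J, 1))              # initialise q with the priors
    for _ in range(max_iter):
        # Check-to-bit messages sigma_{j,l}^{1} via the product identity
        delta = 1.0 - 2.0 * q
        sig1 = np.zeros((J, n))
        for j in range(J):
            idx = np.flatnonzero(H[j])   # B(h_j)
            for l in idx:
                prod = np.prod(delta[j, [t for t in idx if t != l]])
                sig1[j, l] = 0.5 * (1.0 - prod)
        # Bit-to-check messages: prior times the other checks' messages
        for l in range(n):
            checks = np.flatnonzero(H[:, l])      # A_l
            for j in checks:
                others = [t for t in checks if t != j]
                a1 = p1[l] * np.prod(sig1[others, l])
                a0 = (1 - p1[l]) * np.prod(1 - sig1[others, l])
                q[j, l] = a1 / (a0 + a1)          # normalisation alpha
        # A posteriori probabilities and tentative hard decision
        post1 = np.empty(n)
        for l in range(n):
            checks = np.flatnonzero(H[:, l])
            a1 = p1[l] * np.prod(sig1[checks, l])
            a0 = (1 - p1[l]) * np.prod(1 - sig1[checks, l])
            post1[l] = a1 / (a0 + a1)
        v_hat = (post1 > 0.5).astype(int)
        if not ((H @ v_hat) % 2).any():  # stop once v_hat is a codeword
            return v_hat, True
    return v_hat, False

H = np.array([
    [1, 1, 1, 0, 0, 0],
    [1, 0, 0, 1, 1, 0],
    [0, 1, 0, 1, 0, 1],
    [0, 0, 1, 0, 1, 1],
])
y = np.array([0.9, -1.0, 1.1, -0.8, -0.2, -1.2])  # bit 4 weakly wrong
v_hat, ok = spa_decode(H, y)
```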

The SPA on the Tanner graph
The update equations have a natural message-passing interpretation on the Tanner graph: each code-bit node vl sends the messages qj,l^{x,(i)} to its neighbouring check nodes, and each check node sj returns the messages σj,l^{x,(i)} to its neighbouring bit nodes. Every outgoing message on an edge is computed from the messages received on all the other edges, which is why the products run over Al \ {hj} and B(hj) \ {l}.
[Figure: messages qj,l^{x,(i)} and σj,l^{x,(i)} exchanged between bit nodes vl, vm, vn and a check node sj]

Suggested exercises 17.1-17.11