Soft Decision Decoding of RS Codes Using Adaptive Parity Check Matrices Jing Jiang and Krishna R. Narayanan Wireless Communication Group Department of Electrical Engineering Texas A&M University

Reed Solomon Codes Consider an (n,k) RS code over GF(2^m), with n = 2^m - 1. It is a linear block code, e.g., the (7,5) RS code over GF(8). Let α be a primitive element in GF(8). A cyclic shift of any codeword is also a valid codeword. RS codes are MDS (d_min = n - k + 1), and the dual code is also MDS.

Introduction [Block diagram: RS coded turbo equalization system. source -> RS Encoder -> interleaving -> PR Encoder -> AWGN channel -> BCJR Equalizer <-> (de-)interleaving with a priori / extrinsic exchange <-> RS Decoder -> hard decision -> sink] Advantages: guaranteed minimum distance; an efficient bounded distance hard decision decoder (HDD); the decoder can handle both errors and erasures. Drawback: performance loss due to bounded distance decoding; soft input soft output (SISO) decoding is not easy!

Presentation Outline Existing Reed Solomon (RS) soft decision decoding techniques; iterative decoding of RS codes based on adaptive parity check matrices; variations of the generic algorithm; simulation results over different channel models; conclusion and future work.

Existing Soft Decoding Techniques

Enhanced Algebraic Hard Decision Decoding Generalized Minimum Distance (GMD) decoding (Forney 1966). Basic idea: erase some of the least reliable symbols and run algebraic hard decision decoding several times. Drawback: GMD has a limited performance gain. Chase decoding (Chase 1972). Basic idea: exhaustively flip some of the least reliable symbols and run algebraic hard decision decoding several times. Drawback: exponentially increasing complexity. The two ideas can also be combined (combined Chase & GMD, Tang et al. 2001). A sketch of Chase-style candidate generation is given below.
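As an editorial illustration, here is a minimal sketch of Chase-style candidate generation at the bit level (the symbol-level version is analogous). The `hard_decision_decode` argument is a hypothetical placeholder for a bounded distance HDD, not an API from the talk; the exponential 2^t trial count is exactly the drawback noted above.

```python
from itertools import product
import numpy as np

def chase_candidates(llr, t, hard_decision_decode):
    """Chase-style list decoding sketch (bit level).

    llr: channel LLRs, one per bit (sign = hard decision, magnitude = reliability)
    t: number of least reliable positions to flip exhaustively (2^t trials)
    hard_decision_decode: hypothetical bounded distance HDD; returns a 0/1
        codeword array or None on decoding failure.
    """
    llr = np.asarray(llr, dtype=float)
    hard = (llr < 0).astype(int)               # 0/1 hard decisions
    weak = np.argsort(np.abs(llr))[:t]         # t least reliable positions
    candidates = []
    for pattern in product([0, 1], repeat=t):  # all 2^t flip patterns
        trial = hard.copy()
        trial[weak] ^= np.array(pattern)       # flip the selected positions
        cw = hard_decision_decode(trial)
        if cw is not None:
            candidates.append(cw)
    return candidates                          # the most likely one is picked later
```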

Algebraic Soft Input Hard Output Decoding Algebraic SIHO decoding: algebraic interpolation based decoding (Koetter & Vardy 2003) and the reduced complexity KV algorithm (Gross et al., submitted 2003). Basic ideas: build on Guruswami and Sudan's algebraic list decoding; convert the reliability information into a set of interpolation points; generate a list of candidate codewords; pick the most likely codeword from the list. Drawback: the complexity increases with the maximum multiplicity.

Reliability based Ordered Statistic Decoding Reliability based decoding: Ordered Statistic Decoding (OSD) (Fossorier & Lin 1995) and the Box & Match Algorithm (BMA) (Valembois & Fossorier, to appear 2004). Basic ideas: order the received bits according to their reliabilities; make hard decisions on a set of independent, most reliable bits (the MR basis); re-encode to obtain a list of candidate codewords. Drawbacks: the complexity increases exponentially with the reprocessing order, and BMA must trade memory for complexity. An order-0 sketch is given below.
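A minimal order-0 OSD sketch, added for illustration: it re-encodes the hard decisions taken on the most reliable basis (MRB). The (7,4) Hamming generator matrix and the LLR values are hypothetical examples, not data from the talk.

```python
import numpy as np

def osd_order0(G, llr):
    """Order-0 OSD: re-encode hard decisions on the most reliable basis."""
    G = np.array(G, dtype=int) % 2
    llr = np.asarray(llr, dtype=float)
    k, n = G.shape
    order = np.argsort(-np.abs(llr))          # positions, most reliable first
    Gs = G.copy()
    basis, used_rows = [], []
    for col in order:                          # pick k independent reliable columns
        pivot = next((r for r in range(k)
                      if r not in used_rows and Gs[r, col] == 1), None)
        if pivot is None:                      # dependent on already-chosen columns
            continue
        for r in range(k):                     # clear this column from other rows
            if r != pivot and Gs[r, col] == 1:
                Gs[r] = (Gs[r] + Gs[pivot]) % 2
        used_rows.append(pivot)
        basis.append(col)
        if len(basis) == k:
            break
    hard = (llr < 0).astype(int)
    codeword = np.zeros(n, dtype=int)          # re-encode: basis bits select rows
    for i, col in enumerate(basis):
        if hard[col]:
            codeword = (codeword + Gs[used_rows[i]]) % 2
    return codeword

# Hypothetical example: one common generator matrix of the (7,4) Hamming code
G = [[1, 0, 0, 0, 0, 1, 1],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 1, 1, 0],
     [0, 0, 0, 1, 1, 1, 1]]
llr = [2.1, -0.3, 1.7, -2.5, 0.2, 1.1, -0.9]
print(osd_order0(G, llr))
```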

Trellis based Decoding using the Binary Image Expansion Maximum-likelihood SISO decoding and variations: trellis based decoding using the binary image expansions of RS codes over GF(2^m) (Vardy & Be'ery 1991) and a reduced complexity version (Ponnampalam & Vucetic 2002). Basic ideas: form the binary image expansion of the RS code and construct a trellis from it. Drawbacks: exponentially increasing complexity; works only for very short codes or codes with very small minimum distance. Iterative SISO alternatives include sub-trellis self-concatenation and iterative decoding of RS codes using their binary image (Ungerboeck 2003), stochastic shifting based iterative decoding using the binary image (Jiang & Narayanan, submitted 2003), and iterative decoding via sparse factor graph representations of RS codes based on a fast Fourier transform (FFT) (Yedidia, MERL report 2003).

Binary Image Expansion of RS Codes

Consider the (7,5) RS code. [Figure: binary image expansion of the parity check matrix of RS(7,5) over GF(2^3).]
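For illustration, a minimal sketch of the binary image expansion: each GF(2^3) entry of the symbol-level H is replaced by the 3x3 binary matrix that represents multiplication by that element. The primitive polynomial x^3 + x + 1 and the parity check rows built from the roots α and α^2 are conventional assumptions; the slide's matrix may use an equivalent but different convention.

```python
import numpy as np

# GF(2^3) generated by a root alpha of the primitive polynomial x^3 + x + 1.
# In the basis {1, alpha, alpha^2}, multiplication by alpha is the companion matrix A.
A = np.array([[0, 0, 1],
              [1, 0, 1],
              [0, 1, 0]], dtype=int)

def alpha_power(j):
    """3x3 binary matrix representing multiplication by alpha^j."""
    M = np.eye(3, dtype=int)
    for _ in range(j % 7):          # alpha has multiplicative order 7
        M = (A @ M) % 2
    return M

# Symbol-level parity check matrix of the (7,5) RS code with roots alpha, alpha^2:
# entry (i, j) is alpha^((i+1)*j), stored here as its exponent.
n, r = 7, 2
H_exp = [[(i + 1) * j for j in range(n)] for i in range(r)]

# Binary image expansion: a 6 x 21 binary parity check matrix for the binary image.
Hb = np.zeros((3 * r, 3 * n), dtype=int)
for i in range(r):
    for j in range(n):
        Hb[3*i:3*i+3, 3*j:3*j+3] = alpha_power(H_exp[i][j])

print(Hb.shape)   # (6, 21)
print(Hb)
```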

Recent Iterative Techniques Sub-trellis based iterative decoding (Ungerboeck 2003): a self-concatenation structure based on sub-trellises constructed from the parity check matrix (binary image expansion of the parity check matrix of RS(7,5) over GF(2^3)). Drawbacks: performance deteriorates due to the large number of short cycles; works only for short codes with small minimum distance; potential error floor problem in the high SNR region.

Recent Iterative Techniques (cont'd) Stochastic shifting based iterative decoding (Jiang & Narayanan, to appear 2004): due to the irregularity in the H matrix, iterative decoding favors some bits. Taking advantage of the cyclic structure of RS codes, a random symbol-level cyclic shift (e.g., a shift by 2) is applied before decoding; the stochastic shift prevents the iterative procedure from getting stuck. Best result: about 0.5 dB gain over HDD for RS(63,55). However, for long codes this algorithm still does not provide much improvement. A sketch of the shifting idea follows.
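A short editorial sketch of the shifting trick, under the assumption that bits are grouped m per symbol so that a symbol-level cyclic shift of an RS codeword is again a codeword. The `iterative_decode` argument is a hypothetical placeholder decoder, and the number of trials is illustrative.

```python
import numpy as np

def stochastic_shift_decode(llr, m, n, iterative_decode, trials=5, rng=None):
    """Stochastic shifting sketch: cyclically shift the received word at the
    symbol level before iterative decoding, so the irregular columns of the
    fixed binary-image H matrix do not always hit the same bits.

    llr: n*m bit LLRs of one RS codeword, m consecutive bits per symbol
    iterative_decode: hypothetical decoder on bit LLRs; returns a 0/1 codeword
        array or None.
    """
    rng = rng or np.random.default_rng()
    candidates = []
    for _ in range(trials):
        s = int(rng.integers(n))                    # random symbol-level shift
        shifted = np.roll(np.asarray(llr), s * m)   # cyclic shift keeps RS membership
        cw = iterative_decode(shifted)
        if cw is not None:
            candidates.append(np.roll(cw, -s * m))  # undo the shift
    return candidates
```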

Remarks on Existing Techniques Most SIHO algorithms are either too complex to implement or provide only marginal gain; moreover, SIHO decoders cannot generate soft output efficiently. Trellis-based ML decoders have exponentially increasing complexity. Iterative decoding algorithms do not work for long codes, since the parity check matrices of RS codes are not sparse, and there is no known technique to generate a sparse parity check matrix for long RS codes. "Soft decoding of large RS codes as employed in many standard transmission systems, e.g., RS(255,239), with affordable complexity remains an open problem" (Ungerboeck, ISTC 2003).

Questions Q: Why doesn't iterative decoding work for codes with non-sparse parity check matrices? Q: Can we get some insight from the failure of the iterative decoder?

How does the standard message passing algorithm work? [Figure: Tanner graph with bit nodes on top and check nodes below; some bits are erased.] If two or more of the incoming messages are erasures, the check is erased; otherwise, the check-to-bit message is the value of the bit that will satisfy the check.
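For illustration, a minimal sketch of this check-node rule on the erasure channel (standard peeling-style message passing, not code from the talk).

```python
def check_to_bit_erasure(incoming):
    """Check-to-bit message on the erasure channel (one check node).

    incoming: messages from all *other* bits on this check; each is 0, 1, or
    None (erasure). If any other bit is erased the check can say nothing;
    otherwise it returns the value that satisfies the parity check.
    """
    if any(msg is None for msg in incoming):
        return None                   # the check is "erased"
    return sum(incoming) % 2          # XOR of the known bits

# Tiny example: with one erased neighbour the check stays silent,
# with all neighbours known it pins down the remaining bit.
print(check_to_bit_erasure([1, None, 0]))   # None
print(check_to_bit_erasure([1, 1, 0]))      # 0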

How does the standard message passing algorithm work? (cont'd) [Figure: Tanner graph with bit nodes and check nodes.] Small values of v_j can be thought of as erasures, and hence two or more edges with small v_j's saturate the check.
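The soft-value analogue is easiest to see in the min-sum check-node update, sketched below for illustration: the outgoing magnitude is the smallest incoming magnitude, so a couple of near-zero ("erasure-like") inputs drive the message toward zero.

```python
import numpy as np

def check_to_bit_minsum(incoming_llrs):
    """Min-sum check-to-bit message from the LLRs of the *other* bits on a check."""
    v = np.asarray(incoming_llrs, dtype=float)
    sign = np.prod(np.sign(v)) if np.all(v != 0) else 0.0
    return sign * np.min(np.abs(v))

print(check_to_bit_minsum([4.0, -3.5, 2.8]))   # reliable inputs: magnitude 2.8
print(check_to_bit_minsum([4.0, -0.1, 0.2]))   # weak inputs saturate it: magnitude 0.1
```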

A Few Unreliable Bits "Saturate" the Non-sparse Parity Check Matrix Consider RS(7,5) over GF(2^3) and the binary image expansion of its parity check matrix. Unfortunately, RS codes possess no sparse parity check matrices, and there is no known operation to make the whole parity check matrix sparse. Consequently, as the codeword length grows, iterative decoding gets stuck at "pseudo-codewords": only a few unreliable bits "saturate" the whole non-sparse parity check matrix.

Sparse Parity Check Matrices for RS Codes Can we find an equivalent binary parity check matrix that is sparse? For RS codes, this is not possible! The H matrix is a generator matrix of the dual code, and the dual of an (N,K) RS code is also MDS, with minimum distance K+1, so every row of H has weight at least K+1.

Iterative Decoding Based on Adaptive Parity Check Matrix Idea: reduce the sub-matrix corresponding to the unreliable positions to a sparse form. For example, consider a (7,4) Hamming code. [Figure: transmitted codeword, received vector, and parity check matrix, showing where iterative decoding gets stuck.] After the adaptive update, iterative decoding can proceed.

Adaptive Decoding Procedure [Figure: Tanner graph after adaptation; each unreliable bit node is connected to a single check node.]

More Details about the Matrix Adaptation Scheme Consider the previous example, the (7,4) Hamming code. [Figure: transmitted codeword, received vector, and adapted parity check matrix.] We can always reduce some (n-k)m columns of the binary parity check matrix to degree 1, and we attempt to choose these to be the least reliable independent bits: the Least Reliable Basis (LRB). A sketch of this adaptation step follows.
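An illustrative sketch of the adaptation step: GF(2) Gaussian elimination that pivots on bit positions in order of increasing reliability, skipping positions that are linearly dependent on already-chosen ones. The Hamming parity check matrix is one common form and the LLR values are made up for the example.

```python
import numpy as np

def adapt_parity_check(H, llr):
    """Reduce H (over GF(2)) so the least reliable independent bits each touch
    exactly one check (degree-1 columns): the Least Reliable Basis."""
    H = np.array(H, dtype=int) % 2
    m, n = H.shape
    order = np.argsort(np.abs(llr))            # least reliable positions first
    used_rows = []
    for col in order:
        pivot = next((r for r in range(m)
                      if r not in used_rows and H[r, col] == 1), None)
        if pivot is None:                       # dependent on earlier LRB bits
            continue
        for r in range(m):                      # clear this column elsewhere
            if r != pivot and H[r, col] == 1:
                H[r] = (H[r] + H[pivot]) % 2
        used_rows.append(pivot)
        if len(used_rows) == m:                 # n - k pivots found
            break
    return H

# One common parity check matrix of the (7,4) Hamming code, hypothetical LLRs:
H = [[0, 1, 1, 1, 1, 0, 0],
     [1, 0, 1, 1, 0, 1, 0],
     [1, 1, 0, 1, 0, 0, 1]]
llr = np.array([3.2, -0.2, 1.9, 0.4, -2.7, 0.6, 2.1])
print(adapt_parity_check(H, llr))
```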

Interpretation as an Optimization Procedure The standard iterative decoding procedure can be interpreted as gradient descent optimization (Lucas et al. 1998). The proposed algorithm is a generalization: a two-stage optimization procedure. Bit reliability update stage (gradient descent): iterative decoding is applied to generate extrinsic information; the extrinsic information is scaled by a damping coefficient and fed back to update the bit-level reliabilities. Parity check matrix update stage (change of direction): all bit-level reliabilities are sorted by their absolute values, and the sub-matrix corresponding to the LRB is reduced to systematic form. The damping coefficient serves to control the convergence dynamics. A sketch of the two-stage loop is given below.
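A minimal sketch of the two-stage loop, added for illustration. It assumes an adaptation routine such as `adapt_parity_check` from the previous sketch (passed in as `adapt`); the min-sum extrinsic pass, the damping coefficient, and the iteration count are illustrative choices, not the exact settings from the talk.

```python
import numpy as np

def minsum_extrinsic(H, llr):
    """One min-sum pass: extrinsic LLR for every bit from all checks it touches."""
    m, n = H.shape
    ext = np.zeros(n)
    for r in range(m):
        bits = np.flatnonzero(H[r])
        for b in bits:
            others = llr[bits[bits != b]]
            if len(others) == 0:
                continue
            sign = np.prod(np.sign(others)) if np.all(others != 0) else 0.0
            ext[b] += sign * np.min(np.abs(others))
    return ext

def adp_decode(H0, channel_llr, adapt, damping=0.15, iters=20):
    """Two-stage ADP sketch: adapt H around the weakest bits, take one
    message-passing (gradient) step, then apply a damped reliability update."""
    channel_llr = np.asarray(channel_llr, dtype=float)
    llr = channel_llr.copy()
    hard = (llr < 0).astype(int)
    for _ in range(iters):
        H = adapt(H0, llr)               # stage 2: change direction
        ext = minsum_extrinsic(H, llr)   # stage 1: gradient step
        llr = llr + damping * ext        # damped bit-reliability update
        hard = (llr < 0).astype(int)
        if not np.any((H @ hard) % 2):   # stop once every check is satisfied
            break
    return hard, llr
```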

A Hypothesis Adaptation helps gradient descent converge: without it, the procedure gets stuck at a pseudo-equilibrium point.

Complexity Analysis [Table: binary and floating point operation counts for reliability ordering, matrix adaptation, check node update, and variable node update, together with the overall complexity; the expressions are not preserved in this transcript.] The complexity is polynomial in the code parameters, and can be reduced further when implemented in parallel.

Complexity Comparison [Table: dominant complexity of GMD, Chase, KV, OSD, trellis decoding, and the proposed ADP algorithm; the expressions are not preserved in this transcript.]

Variation 1: Symbol-level Adaptive Scheme Reducing the sub-matrix to systematic form involves undesirable Gaussian elimination. This can be avoided by exploiting the structure of RS codes: we adapt the parity check matrix at the symbol level, around the least reliable symbols. This step can be realized efficiently using Forney's algorithm (Forney 1965), followed by the binary mapping. A sketch of the symbol-level reliability grouping follows.
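For illustration, a small sketch of how bit LLRs can be collapsed into symbol reliabilities so that whole least reliable symbols are selected. Using the minimum bit |LLR| inside a symbol is an illustrative assumption, not necessarily the exact metric from the talk; Forney's algorithm itself is not sketched here.

```python
import numpy as np

def least_reliable_symbols(bit_llr, m, num_symbols):
    """Pick the least reliable symbols from bit-level LLRs (m bits per symbol).

    A simple symbol reliability is the smallest |LLR| inside the symbol; other
    choices (e.g. the sum of |LLR|s) are possible.
    """
    llr = np.asarray(bit_llr, dtype=float).reshape(-1, m)    # one row per symbol
    symbol_rel = np.min(np.abs(llr), axis=1)
    return np.argsort(symbol_rel)[:num_symbols]              # weakest symbol indices

# (7,5) RS over GF(8): 21 bit LLRs, pick the n - k = 2 least reliable symbols
bit_llr = np.random.default_rng(0).normal(0, 2, size=21)
print(least_reliable_symbols(bit_llr, m=3, num_symbols=2))
```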

Variation 2: Degree-2 Sub-graph in the Unreliable Part Reduce the "unreliable" sub-matrix to a sparse sub-graph rather than an identity in order to improve the asymptotic performance. [Figure: Tanner graph in which the unreliable bits form a weakly connected degree-2 sub-graph.]

Variation 2: Degree-2 Sub-graph in the Unreliable Part (cont'd) Q: How to adapt the parity check matrix?

Variation 3: Different Grouping of Unreliable Bits Some bits near the boundary between the reliable and unreliable parts may also have the wrong sign. Run the proposed algorithm several times, each time exchanging some "reliable" and "unreliable" bits at the boundary. [Figure: received LLRs of an RS(7,5) code with two different groupings, Group 1 and Group 2.] A list of candidate codewords is generated using the different groupings, and the most likely codeword is picked from the list (see the selection sketch below).
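A short illustrative sketch of the final selection step, under the usual BPSK assumption (0 mapped to +1, 1 mapped to -1): the candidate whose signs correlate best with the channel LLRs is the most likely one in the list.

```python
import numpy as np

def most_likely(candidates, llr):
    """Pick the candidate codeword that best matches the channel LLRs."""
    llr = np.asarray(llr, dtype=float)
    best, best_metric = None, -np.inf
    for cw in candidates:
        metric = np.sum((1 - 2 * np.asarray(cw)) * llr)   # correlation metric
        if metric > best_metric:
            best, best_metric = cw, metric
    return best
```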

Variation 4: Partial Updating Scheme The main complexity comes from updating the bits in the high density part, yet only a few bits near the boundary are actually affected. In the variable node update stage: update only the "unreliable" bits in the sparse sub-matrix and a few "reliable" bits near the boundary. In the check node update stage: approximate the check sum by taking advantage of the ordered reliabilities. The complexity of the floating point operation part is reduced accordingly. [Figure: bits ordered by ascending reliability.]

Applications Q: How do the proposed algorithm and its variations perform? Simulation results: the proposed algorithm and its variations over the AWGN channel; performance over a symbol-level fully interleaved slow fading channel; an RS coded turbo equalization (TE) system over the EPR4 channel; RS coded modulation over a fast fading channel. Simulation setup: a "genie aided" HDD is assumed over the AWGN and fading channels to speed up simulation; in the TE system, all coded bits are interleaved at random and a "genie aided" stopping rule is applied (iteration stops when all bits have converged to the correct values).

Additive White Gaussian Noise (AWGN) Channel

AWGN Channels

AWGN Channels (cont’d) Asymptotic performance is consistent with the ML upper-bound.

AWGN Channels (cont’d)

AWGN Channels (cont’d)

Remarks The proposed scheme performs near ML for medium length codes. The symbol-level adaptive updating scheme provides a non-trivial gain and performance comparable to the Chase and KV algorithms. The partial updating scheme incurs little penalty while greatly reducing the complexity. For long codes, e.g., RS(255,239), the proposed algorithms provide significant gain over hard decision decoding but are still quite far from ML performance. Q: How does the algorithm work over other channels?

Interleaved Slow Fading Channel

Fully Interleaved Slow Fading Channels

Fully Interleaved Slow Fading Channels (cont.)

Turbo Equalization Systems

Embed the Proposed Algorithm in the Turbo Equalization System [Block diagram: RS coded turbo equalization system. source -> RS Encoder -> interleaving -> PR Encoder -> AWGN channel -> BCJR Equalizer <-> (de-)interleaving with a priori / extrinsic exchange <-> RS Decoder -> hard decision -> sink]

Turbo Equalization over EPR4 Channels

Turbo Equalization over EPR4 Channels

RS Coded Modulation

RS Coded Modulation over Fast Rayleigh Fading Channels

RS Coded Modulation over Fast Rayleigh Fading Channels (cont’d)

Remarks For EPR4 channels, the proposed turbo equalization scheme mitigates the effect of ISI and performs almost identically to its performance over AWGN channels, while other SIHO algorithms cannot generate soft information efficiently. For symbol-level fully interleaved slow fading channels, a more noticeable gain is observed for RS codes of practical length, especially with the symbol-level adaptive scheme. For RS coded modulation over fast Rayleigh fading channels, the proposed algorithm also outperforms hard decision decoding by utilizing bit-level soft information more efficiently, as long as Gray mapping is applied. For long codes, the proposed algorithm has a potential "error floor" problem; simulation down to even lower FER is impractical, and the asymptotic performance analysis is still under investigation.

Conclusion and Future Work Iterative decoding of RS codes based on an adaptive parity check matrix works favorably for practical codes over various channels, including AWGN channels and TE systems. The proposed algorithm and its variations provide a wide range of complexity-performance tradeoffs for different applications. Work under investigation: an asymptotic performance bound; understanding how the algorithm works from an information theoretic perspective (e.g., the entropy or mutual information of ordered statistics); improving the generic algorithm using more sophisticated optimization schemes, e.g., the conjugate gradient method.

Thank you!