Error Correction and LDPC decoding

Presentation transcript:

Error Correction and LDPC decoding

Error Correction in Communication Systems [Block diagram: binary information → Transmitter → corrupted (noisy) channel → Receiver → corrected frame] Error: given the original frame k and the received frame k', how many corresponding bits differ? This count is the Hamming distance (Hamming, 1950). Example: Transmitted frame: 1110011. Received frame: 1011001. Number of errors: 3.
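A minimal sketch of that Hamming-distance count in Python (the frames are the ones from the example above):

```python
def hamming_distance(a: str, b: str) -> int:
    """Number of positions where two equal-length bit strings differ."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b))

print(hamming_distance("1110011", "1011001"))  # -> 3 bit errors
```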

Error Detection and Correction Add extra information to the original data being transmitted. Frame = k data bits + m bits for error control: n = k + m. Error detection: enough information to detect errors; requires retransmission. Error correction: enough information to detect and correct errors; forward error correction (FEC).

Error Correction in Communication Systems [Timeline, 1950-2000: Hamming codes, Reed-Solomon codes, BCH codes, convolutional codes, LDPC introduced, practical implementation of codes, turbo codes, renewed interest in LDPC, LDPC beats turbo and convolutional codes] Error correction plays a major role in communication systems: it increases transmission reliability and achieves better error-correcting performance with less power. There are several types of error correction methods; the timeline shows the history of the most popular codes. LDPC codes were first introduced by Gallager in the early 1960s, but they were impractical to implement with the hardware of the time and were forgotten.[11] LDPC was the first code to allow data transmission rates close to the theoretical maximum, the Shannon limit. Turbo codes, discovered in 1993, became the coding scheme of choice in the late 1990s, used for applications such as deep-space satellite communications. In the last few years, however, advances in low-density parity-check codes have carried them past turbo codes, and LDPC is currently among the most efficient schemes known. LDPC is finding use in practical applications such as 10GBASE-T Ethernet, which sends data at 10 gigabits per second over noisy Cat 6 cables. The explosive growth in information technology has produced a corresponding increase of commercial interest in highly efficient data transmission codes, as such codes impact everything from signal quality to battery life. Although implementation of LDPC codes has lagged that of other codes, notably turbo codes, the absence of encumbering software patents has made LDPC attractive, and LDPC codes are positioned to become a standard in the developing market for highly efficient data transmission methods. In 2003, an LDPC code beat six turbo codes to become the error-correcting code in the new DVB-S2 standard for the satellite transmission of digital television.[12] In 2008, LDPC beat convolutional turbo codes as the FEC scheme for the ITU-T G.hn standard.[13] G.hn chose LDPC over turbo codes because of its lower decoding complexity (especially when operating at data rates close to 1 Gbit/s) and because the proposed turbo codes exhibited a significant error floor at the desired range of operation.

Key terms Encoder: adds redundant bits to the sender's bit stream to create a codeword. Decoder: uses the redundant bits to detect and/or correct as many bit errors as the particular error-control code will allow. Communication channel: the part of the communication system that introduces errors, e.g. radio, twisted wire pair, coaxial cable, fiber-optic cable, magnetic tape, optical discs, or any other noisy medium. Additive white Gaussian noise (AWGN): larger noise makes the distribution wider.

Important metrics Bit error rate (BER): the probability of bit error. We want to keep this number small. Example: BER = 10^-4 means that if we have transmitted 10,000 bits, there is 1 bit in error. BER is a useful indicator of system performance that is independent of the particular channel. Signal-to-noise ratio (SNR): quantifies how much a signal has been corrupted by noise; defined as the ratio of signal power to the noise power corrupting the signal. A ratio higher than 1:1 indicates more signal than noise. SNR is often expressed on the logarithmic decibel scale: SNR(dB) = 10 log10(P_signal / P_noise). Important number: 3 dB means the signal power is two times the noise power.
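To make the two definitions concrete, here is a small Python sketch (nothing beyond the formulas above is assumed) that computes SNR in decibels and a measured BER:

```python
import math

def snr_db(signal_power: float, noise_power: float) -> float:
    """SNR in decibels: 10 * log10(P_signal / P_noise)."""
    return 10.0 * math.log10(signal_power / noise_power)

def bit_error_rate(bit_errors: int, total_bits: int) -> float:
    """Measured BER: bit errors observed divided by bits transmitted."""
    return bit_errors / total_bits

print(snr_db(2.0, 1.0))            # ~3.01 dB: signal power twice the noise power
print(bit_error_rate(1, 10_000))   # 1e-04: one bit error in 10,000 transmitted bits
```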

Error Correction in Communication Systems [Block diagram: Information (k bits) → Encoder (add parity) → Codeword (n bits) → noisy channel → Received word (n bits) → Decoder (check parity, detect errors) → Corrected information (k bits)] Goal: attain a lower BER at a smaller SNR. Error correction is a key component in communication and storage applications. Coding examples: convolutional, turbo, and Reed-Solomon codes. What can 3 dB of coding gain buy? A satellite can send data with half the required transmit power; a cellphone can operate reliably with half the required receive power. [Figure: bit error probability (10^0 down to 10^-4) versus signal-to-noise ratio (0-8 dB) for an uncoded system and a convolutionally coded system, showing a 3 dB coding gain; courtesy of B. Nikolic, 2003 (modified)] In recent years there has been an increasing demand for efficient and reliable digital data transmission and storage systems. A major concern is the control of errors that occur during transmission so that data can be reliably reproduced. By proper encoding of the information at the transmitter, the errors induced by a noisy channel or storage medium can be reduced (without sacrificing the rate of information transmission or storage, as long as the information rate is less than the capacity of the channel). Since then, the use of coding for error control has become an integral part of digital communications. Common coding techniques are convolutional coding, Reed-Solomon codes, and turbo codes. The plot on the right shows the probability of error for an uncoded system and a convolutionally coded system versus the energy of the transmitted information relative to the noise, i.e., the signal-to-noise ratio. The plot shows two things. First, with an error-correction scheme such as convolutional coding, the probability of error is many times lower than when not using any error correction at all. Second, there is a 3 dB SNR gain when using this coding system at the same error rate. What can we do with this 3 dB gain? A transmitter can send its signal with half the required power; as another example, we can speak on a cellphone reliably in a noisy environment with half the received power, so it is a large benefit. There are several types of error-correction codes, including convolutional, Reed-Solomon, turbo, and low-density parity-check (LDPC) codes; LDPC coding performs about 4 dB better than convolutional coding, and this very good error performance is the main reason LDPC codes have received so much attention. A 3 dB coding gain results in half the required transmit power or, equivalently, an improvement in receiver sensitivity.

LDPC Codes and Their Applications Low-density parity-check (LDPC) codes have superior error performance: about 4 dB of coding gain over convolutional codes. Standards and applications: 10 Gigabit Ethernet (10GBASE-T), Digital Video Broadcasting (DVB-S2, DVB-T2, DVB-C2), next-generation wired home networking (G.hn), WiMAX (802.16e), WiFi (802.11n), hard disks, and deep-space satellite missions. [Figure: bit error probability (10^0 down to 10^-4) versus signal-to-noise ratio (0-8 dB) for an uncoded system, a convolutional code, and an LDPC code, showing a 4 dB gain; courtesy of B. Nikolic, 2003 (modified)] Low-density parity-check codes have shown significant error-correction performance. The plot shows the error probability of LDPC: there is a 4 dB gain over convolutional coding, and this large coding gain is one main reason LDPC codes have received so much attention. Many recent communication standards, such as 10 Gigabit Ethernet, WiMAX, digital video broadcasting, and wireless personal area networks, have adopted LDPC coding, and LDPC codes have recently been used in hard disks and deep-space communication.

Future Wireless Devices Requirements Increased throughput: 1 Gbps for the next generation of WiMAX (802.16m) and LTE (LTE-Advanced); 2.2 Gbps WirelessHD UWB [Ref]. Power budget: current smart phones require 100 GOPS within 1 Watt [Ref]. Reconfigurability required for different environments: a rate-compatible LDPC code is proposed for 802.16m [Ref]. Reconfigurability required for different communication standards: for example, LTE/WiMAX dual-mode cellphones require both turbo codes (used in LTE) and LDPC codes (used in WiMAX), which requires hardware sharing to save silicon area.

Future Digital TV Broadcasting Requirements High-definition television for stationary and mobile users: DTMB/DMB-TH (terrestrial/mobile), ABS-S (satellite), CMMB (multimedia/mobile). Current digital TV (DTV) standards are not well suited for mobile devices; they require more sophisticated signal processing and correction algorithms, low power, a low error floor, and removal of multi-level coding. The recently proposed ABS-S LDPC codes (15,360-bit code length, 11 code rates) achieve FER < 10^-7 without concatenation [Ref].

Future Storage Devices Requirements Ultra-high-density storage: 2 terabits per square inch [ref]. Worsening intersymbol interference (ISI) [ref]. High throughput: greater than 5 Gbps [ref]. Low error floor: lower than 10^-15 [ref]. Removal of multi-level coding. Next generation of Hitachi IDRC read-channel technology [9].

Encoding Picture Example V = [image data bits, parity bits]. H × V^T = 0: this binary matrix multiplication is called the syndrome check.

Decoding Picture Example [Transmitter → noisy channel (Ethernet cable, wireless, or hard disk) → Receiver] Iterative message-passing decoding: snapshots of the recovered picture after iterations 1, 5, 15, and 16.

LDPC Codes—Parity Check Matrix Defined by a large binary matrix, called a parity check matrix or H matrix. Each row defines a parity equation; each parity bit is generated by a few (not all) information bits. Example: 6 × 12 H matrix for a 12-bit LDPC code. No. of columns = 12 (i.e., the received word V is 12 bits). No. of rows = 6. No. of ones per row = 4 (row weight). No. of ones per column = 2 (column weight). LDPC codes are defined by a binary matrix called the parity check matrix or H matrix. A small example of the matrix is shown here, which defines a 12-bit codeword from 6 information bits. The number of ones per row is the row weight, which is 4 here, and the number of ones per column is the column weight, which is 2 here. LDPC codes are also represented by a bipartite graph, or Tanner graph, which consists of two sets of nodes, check nodes and variable nodes, and is shown for this example. Each row of the matrix is mapped to one check node and each column of the matrix is mapped to one variable node. There is an edge between a check node and a variable node if there is a one in the matrix at that location. For example, row 1 in the matrix corresponds to C1 in the Tanner graph, which connects to V3, V5, V8, and V10. Now let's see how these codes are used in a system.
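The slide's actual H matrix is not reproduced here, but the sketch below builds a hypothetical regular 6 × 12 parity check matrix with the same parameters (column weight 2, row weight 4) and verifies the weights; it is an illustration, not the matrix from the slide:

```python
import numpy as np

# Hypothetical (column weight 2, row weight 4) regular 6 x 12 parity check matrix.
# This is NOT the matrix from the slide, just one with the same dimensions and weights.
H = np.zeros((6, 12), dtype=int)
for i in range(3):                               # first band: row i covers columns 4i .. 4i+3
    H[i, 4 * i:4 * i + 4] = 1
for j in range(3):                               # second band: row 3+j covers columns j, j+3, j+6, j+9
    H[3 + j, [j, j + 3, j + 6, j + 9]] = 1

print(H.sum(axis=1))    # row weights    -> [4 4 4 4 4 4]
print(H.sum(axis=0))    # column weights -> [2 2 2 2 2 2 2 2 2 2 2 2]
```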

LDPC Codes—Tanner Graph Interconnect representation of the H matrix. Two sets of nodes: check nodes and variable nodes. Each row of the matrix is represented by a check node; each column of the matrix is represented by a variable node. A message-passing method is used between the nodes to correct errors: (1) initialization with the received word, (2) message passing until the errors are corrected. Example: V3 to C1, V5 to C1, V8 to C1, V10 to C1; C2 to V1, C5 to V1. In the receiver we not only check the parities but also want to correct the errors. The graph shown here is the interconnect representation of the H matrix and is called the Tanner graph. It consists of two sets of nodes, check nodes and variable nodes: each row of the matrix is represented by a check node and each column by a variable node. For example, row 1 is shown by node C1 and connects to V3, V5, V8, and V10, which are exactly the parity check sums shown in the previous slide. In addition, the graph shows the connections from variable nodes to check nodes. For example, the matrix has a 1 in the 2nd and 5th rows of the column for the first bit of the codeword, V1; this means that V1 is checked by two check sums and is connected to C2 and C5 in the graph. To correct all errors, the goal is to make all the parity checks from the previous slide equal to zero. A common decoding method is based on message passing between these two groups of nodes: the check nodes send their decision, or vote, on each of the connected variable nodes; the variable nodes process these decisions and send updated values back to the check nodes. This process continues for all nodes until all parity checks are satisfied or a maximum number of iterations is reached. [Tanner graph: check nodes C1-C6 on one side, variable nodes V1-V12 on the other; the variable nodes are initialized with the received word from the channel]

Message Passing: Variable node processing In variable node processing, each variable node adds the check node messages (α) and the received information from the channel (λ) and passes the sum back to the check nodes. Again, it must exclude the message from the check node it is sending data back to. For example, V1 adds only the message from C5 to λ and passes the result to C2. The check node and variable node processing steps do not require very sophisticated operations; the major challenge is mapping these processing nodes into hardware so as to minimize the communication between the nodes while providing the smallest circuit area, highest throughput, and best energy efficiency. α: message from check node to variable node. β: message from variable node to check node. λ: the original received information from the channel.
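Written as a formula with the notation above (N(v) denotes the check nodes connected to variable node v; the message sent back to check node c excludes c's own message), the variable node update is:

```latex
\beta_{v \to c} = \lambda_v + \sum_{c' \in N(v) \setminus \{c\}} \alpha_{c' \to v}
```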

Message Passing: Check node processing (MinSum) After check node processing, the next variable node processing step begins a new iteration. Rather than the original decoding algorithm, a reduced-complexity decoding algorithm called MinSum is used here; it consists of two iterative steps, check node processing and variable node processing. Note that the information we receive is not zeros and ones: the values are real numbers, both negative and positive. With the inputs to the check node processors called β, each check node computes the sign and the minimum magnitude over the other connected variable nodes, excluding the node it is sending to, and sends the outputs (α) back to each of them. For example, check node 1 finds the sign and minimum value over V5, V8, and V10 and passes the result to V3, and it repeats this step for the other three variable nodes. To reduce the number of computations, instead of computing a minimum for each group, we find the first and second minimum among all four connected variable nodes and then pass the appropriate one to each variable node. Sign and magnitude are handled separately.
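A minimal MinSum decoding loop in Python along these lines (a sketch only: H is the parity check matrix, lam is the vector of channel values λ, and max_iters is assumed to be chosen by the caller; this illustrates the algorithm rather than the hardware-oriented implementation discussed in these slides):

```python
import numpy as np

def minsum_decode(H, lam, max_iters=20):
    """MinSum LDPC decoding: alternate check node and variable node processing."""
    lam = np.asarray(lam, dtype=float)
    m, n = H.shape
    alpha = np.zeros((m, n))            # alpha: check -> variable messages
    beta = np.tile(lam, (m, 1)) * H     # beta: variable -> check messages, start from channel values

    v_hat = (lam < 0).astype(int)       # initial hard decision (BPSK: negative -> 1, positive -> 0)
    for _ in range(max_iters):
        # Check node processing: product of signs and minimum magnitude, excluding the target edge.
        for i in range(m):
            cols = np.flatnonzero(H[i])
            for j in cols:
                others = beta[i, cols[cols != j]]
                alpha[i, j] = np.prod(np.sign(others)) * np.min(np.abs(others))

        # Variable node processing: channel value plus all incoming alphas except the target edge.
        total = lam + alpha.sum(axis=0)
        for j in range(n):
            for i in np.flatnonzero(H[:, j]):
                beta[i, j] = total[j] - alpha[i, j]

        # Hard decision and syndrome check: stop once all parity checks are satisfied.
        v_hat = (total < 0).astype(int)
        if not np.any(H.dot(v_hat) % 2):
            return v_hat, True
    return v_hat, False
```

The slide's trick of finding only the first and second minimum per check node avoids recomputing the minimum for every outgoing edge; the loop above recomputes it per edge for clarity.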

Example Encoded information: V = [1 0 1 0 1 0 1 0 1 1 1 1]. BPSK modulated (bit 0 → +1, bit 1 → -1): [-1 1 -1 1 -1 1 -1 1 -1 -1 -1 -1]. λ (received data from the channel) = [-9.1 4.9 -3.2 3.6 -1.4 3.1 0.3 1.6 -6.1 -2.5 -7.8 -6.8]. Estimated code (hard decision on the sign of each λ value): V̂ = [1 0 1 0 1 0 0 0 1 1 1 1]. Note that bit 7 was received with the wrong sign (0.3 instead of a negative value), so the estimate differs from V in that position.
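A quick check of the hard decision in this example (values copied from the slide; under the BPSK mapping above, a negative channel value maps to bit 1 and a positive one to bit 0):

```python
lam = [-9.1, 4.9, -3.2, 3.6, -1.4, 3.1, 0.3, 1.6, -6.1, -2.5, -7.8, -6.8]
v_true = [1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1]

# Hard decision on the sign of each channel value (negative -> 1, positive -> 0)
v_hat = [1 if x < 0 else 0 for x in lam]
print(v_hat)                                                               # [1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 1]
print([i + 1 for i, (a, b) in enumerate(zip(v_true, v_hat)) if a != b])    # [7]: bit 7 is in error
```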

Syndrome computation Step 1: syndrome check: is H · V̂^T = 0? For the estimated code above, the syndrome is [0 0 1 1 0 0]. The sum of the syndrome bits is 2, which is not zero, so an error is detected and decoding must continue.
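The syndrome check itself is just a binary matrix-vector product; a minimal sketch (H here stands for the code's parity check matrix, which is not reproduced on the slide):

```python
import numpy as np

def syndrome(H, v_hat):
    """Binary matrix-vector product H * v_hat^T (mod 2); all zeros means a valid codeword."""
    return H.dot(v_hat) % 2

# Example usage (with the code's own H and the estimated codeword v_hat):
# s = syndrome(H, v_hat)
# print(s, s.sum())   # e.g. [0 0 1 1 0 0], sum = 2 -> not zero, so an error is detected
```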