Error Correction and LDPC decoding


1 Error Correction and LDPC decoding

2 Error Correction in Communication Systems
[Block diagram: binary information enters the transmitter, passes through a noisy channel that corrupts the frame, and the receiver produces a corrected frame.]
Error: given the original frame k and the received frame k’, how many corresponding bits differ? This count is the Hamming distance (Hamming, 1950). Example: transmitted frame vs. received frame (shown on the slide); number of errors: 3.
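The bit-by-bit comparison described above is easy to sketch in Python; the two example frames below are invented for illustration (the frames pictured on the slide are not recoverable here):

```python
def hamming_distance(a: str, b: str) -> int:
    """Count the positions where two equal-length bit strings differ."""
    if len(a) != len(b):
        raise ValueError("frames must have equal length")
    return sum(x != y for x, y in zip(a, b))

sent = "1100110"   # transmitted frame (made-up example)
recv = "1010111"   # received frame, corrupted in 3 positions
print(hamming_distance(sent, recv))   # prints 3
```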

3 Error Detection and Correction
Add extra information to the original data being transmitted. Frame = k data bits + m bits for error control: n = k + m. Error detection: enough information to detect an error; retransmission is then needed. Error correction: enough information to detect and correct the error; this is forward error correction (FEC).
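As a minimal illustration of adding m error-control bits, here is a single even-parity bit (m = 1), which can detect, but not correct, any odd number of bit errors. The helper names are ours, not from the slides:

```python
def add_even_parity(data_bits: str) -> str:
    """Append one parity bit (m = 1) so the n = k + 1 frame has even weight."""
    parity = str(data_bits.count("1") % 2)
    return data_bits + parity

def check_even_parity(frame: str) -> bool:
    """Detection only: an odd number of flipped bits makes the check fail."""
    return frame.count("1") % 2 == 0

frame = add_even_parity("1011")           # k = 4 data bits -> n = 5 bit frame
print(check_even_parity(frame))           # clean frame passes
print(check_even_parity("0" + frame[1:])) # a single-bit error is detected
```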

4 Error Correction in Communication Systems
Timeline: Hamming codes (1950) · BCH codes · Reed-Solomon codes · convolutional codes · LDPC introduced (1960s) · practical implementation of codes · turbo codes (1993) · renewed interest in LDPC; LDPC beats turbo and convolutional codes (2000s).
Error correction plays a major role in communication systems: it increases transmission reliability and achieves better error-correcting performance with less power. There are several types of error-correction methods; the timeline shows the history of the most popular codes. LDPC codes were first introduced by Gallager in the early 1960s, but because they were impractical to implement at the time, they were forgotten.[11] LDPC was the first code to allow data transmission rates close to the theoretical maximum, the Shannon limit. Turbo codes, discovered in 1993, became the coding scheme of choice in the late 1990s, used for applications such as deep-space satellite communications. In the last few years, however, advances in low-density parity-check codes have carried them past turbo codes, and LDPC is now among the most efficient schemes known. LDPC is finding use in practical applications such as 10GBASE-T Ethernet, which sends data at 10 gigabits per second over noisy CAT6 cables. The explosive growth in information technology has produced a corresponding increase of commercial interest in highly efficient data transmission codes, since such codes affect everything from signal quality to battery life. Although implementation of LDPC codes has lagged that of other codes, notably the turbo code, the absence of encumbering software patents has made LDPC attractive, and LDPC codes are positioned to become a standard for highly efficient data transmission.
In 2003, an LDPC code beat six turbo codes to become the error-correcting code in the new DVB-S2 standard for the satellite transmission of digital television.[12] In 2008, LDPC beat convolutional turbo codes as the FEC scheme for the ITU-T G.hn standard.[13] G.hn chose LDPC over turbo codes because of its lower decoding complexity (especially when operating at data rates close to 1 Gbit/s) and because the proposed turbo codes exhibited a significant error floor in the desired range of operation.

5 Key terms Encoder: adds redundant bits to the sender's bit stream to create a codeword. Decoder: uses the redundant bits to detect and/or correct as many bit errors as the particular error-control code allows. Communication channel: the part of the communication system that introduces errors. Examples: radio, twisted wire pair, coaxial cable, fiber-optic cable, magnetic tape, optical discs, or any other noisy medium. A common channel model is additive white Gaussian noise (AWGN); larger noise power makes the noise distribution wider.
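An AWGN channel can be sketched with only the Python standard library. The BPSK mapping used here (0 maps to +1, 1 maps to -1) is an assumption, chosen to match the BPSK modulation mentioned later in the deck:

```python
import random

random.seed(0)  # reproducible noise for the demo

def awgn_channel(bits, sigma):
    """BPSK-map bits (0 -> +1, 1 -> -1) and add Gaussian noise of std dev sigma.

    A larger sigma widens the noise distribution and causes more bit errors.
    """
    symbols = [1.0 if b == 0 else -1.0 for b in bits]
    return [s + random.gauss(0.0, sigma) for s in symbols]

def hard_decision(received):
    """Threshold at zero: positive -> 0, negative -> 1."""
    return [0 if r >= 0 else 1 for r in received]

sent = [1, 0, 1, 1, 0, 0, 1, 0]
noisy = awgn_channel(sent, sigma=0.3)
print(hard_decision(noisy))
```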

6 Important metrics Bit error rate (BER): the probability of bit error.
We want to keep this number small. Example: BER = 10^-4 means that if we transmit 10,000 bits, there is on average 1 bit error. BER is a useful indicator of system performance independent of the channel. Signal-to-noise ratio (SNR): quantifies how much a signal has been corrupted by noise. It is defined as the ratio of signal power to the noise power corrupting the signal; a ratio higher than 1:1 indicates more signal than noise. It is often expressed on the logarithmic decibel scale: SNR(dB) = 10·log10(P_signal / P_noise). Important number: 3 dB means the signal power is two times the noise power.
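Both metrics are straightforward to compute; a minimal sketch (the function names are ours):

```python
import math

def bit_error_rate(sent, received):
    """Fraction of bit positions that differ between sent and received."""
    errors = sum(s != r for s, r in zip(sent, received))
    return errors / len(sent)

def snr_db(signal_power, noise_power):
    """Signal-to-noise ratio on the logarithmic decibel scale."""
    return 10 * math.log10(signal_power / noise_power)

# One error in 10,000 bits -> BER = 10^-4
sent = [0] * 10_000
received = [1] + [0] * 9_999
print(bit_error_rate(sent, received))   # prints 0.0001

# Signal power twice the noise power -> about 3 dB
print(round(snr_db(2.0, 1.0), 2))       # prints 3.01
```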

7 Error Correction in Communication Systems
Information (k-bit) → Encoder (add parity) → Codeword (n-bit) → noisy channel → Received word (n-bit) → Decoder (check parity, detect error) → Corrected information (k-bit).
Goal: attain a lower BER at a smaller SNR. Error correction is a key component in communication and storage applications. Coding examples: convolutional, turbo, and Reed-Solomon codes. What can 3 dB of coding gain buy? A satellite can send data with half the required transmit power; a cellphone can operate reliably with half the required receive power.
[Figure: bit error probability (10^0 down to 10^-4) versus signal-to-noise ratio (dB) for an uncoded system and a convolutional code; the coded curve reaches the same error rate at about 3 dB lower SNR. Figure courtesy of B. Nikolic, 2003 (modified).]
In recent years there has been an increasing demand for efficient and reliable digital data transmission and storage systems. A major concern is the control of errors that occur during transmission, so that data can be reliably reproduced. By proper encoding of the information at the transmitter, the errors induced by a noisy channel or storage medium can be reduced without sacrificing the rate of information transmission or storage, as long as the information rate is less than the capacity of the channel. Coding for error control has since become an integral part of digital communications. Common coding techniques are convolutional, Reed-Solomon, and turbo codes. The plot shows the probability of error for an uncoded and a convolutionally coded system versus signal-to-noise ratio, and it shows two things. First, with an error-correction scheme like convolutional coding, the probability of error is many times lower than with no error correction at all. Second, there is a 3 dB SNR gain when using this coding at the same error rate. With this 3 dB gain a transmitter can send its signal with half the required power, or a cellphone can operate reliably in a noisy environment with half the received power, which is a large benefit. A 3 dB coding gain thus halves the required transmit power or, equivalently, relaxes the receiver sensitivity.

8 LDPC Codes and Their Applications
Low-density parity-check (LDPC) codes have superior error performance: about 4 dB of coding gain over convolutional codes. Standards and applications: 10 Gigabit Ethernet (10GBASE-T), digital video broadcasting (DVB-S2, DVB-T2, DVB-C2), next-generation wired home networking (G.hn), WiMAX (802.16e), WiFi (802.11n), hard disks, and deep-space satellite missions.
[Figure: bit error probability (10^0 down to 10^-4) versus signal-to-noise ratio (dB) for uncoded, convolutional, and LDPC coding; the LDPC curve sits about 4 dB to the left of the convolutional curve. Figure courtesy of B. Nikolic, 2003 (modified).]
LDPC codes have shown significant error-correction performance. The plot shows a 4 dB gain over convolutional coding, and this large coding gain is one main reason LDPC codes have received so much attention. Many recent communication standards, such as 10 Gigabit Ethernet, WiMAX, and digital video broadcasting, have adopted LDPC coding, and LDPC codes have recently been used in hard disks and deep-space communication.

9 Future Wireless Devices Requirements
Increased throughput: 1 Gbps for the next generation of WiMAX (802.16m) and LTE (LTE-Advanced); 2.2 Gbps for WirelessHD UWB [Ref]. Tight power budget: current smart phones require 100 GOPS within 1 Watt [Ref]. Reconfigurability for different environments: a rate-compatible LDPC code has been proposed for 802.16m [Ref]. Reconfigurability for different communication standards: e.g., LTE/WiMAX dual-mode cellphones require both turbo codes (used in LTE) and LDPC codes (used in WiMAX), which requires hardware sharing to save silicon area.

10 Future Digital TV Broadcasting Requirements
High-definition television for stationary and mobile users: DTMB/DMB-TH (terrestrial/mobile), ABS-S (satellite), CMMB (multimedia/mobile). Current digital TV (DTV) standards are not well suited for mobile devices. Requirements: more sophisticated signal processing and correction algorithms, low power, a low error floor, and removal of multi-level coding. The recently proposed ABS-S LDPC codes (15,360-bit code length, 11 code rates) achieve FER < … without concatenation [Ref].

11 Future Storage Devices Requirements
Ultra-high-density storage: 2 Terabit per square inch [ref], with worsening inter-symbol interference (ISI) [ref]. High throughput: larger than 5 Gbps [ref]. Low error floor: lower than … [ref]. Removal of multi-level coding. Target: the next generation of Hitachi IDRC read-channel technology [9].

12 Encoding Picture Example
V = [information bits | parity bits]. [Image: an example image encoded into a codeword V.] A valid codeword satisfies H × V^T = 0; this binary (mod-2) multiplication is called the syndrome check.
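The syndrome check H × V^T = 0 is just a mod-2 matrix-vector product. A sketch with a small hypothetical H and codeword (the matrix pictured on the slide is not recoverable here):

```python
def syndrome(H, v):
    """Mod-2 product H . v^T; an all-zero syndrome means every parity check passes."""
    return [sum(h * x for h, x in zip(row, v)) % 2 for row in H]

# Hypothetical 3 x 6 parity-check matrix and a word that satisfies it:
H = [
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 1, 0, 0, 1],
]
v = [1, 1, 0, 0, 1, 1]
print(syndrome(H, v))   # prints [0, 0, 0]: v is a valid codeword
```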

13 Decoding Picture Example
[Figure: an image sent from transmitter to receiver over a noisy channel (Ethernet cable, wireless, or hard disk); iterative message-passing decoding progressively cleans the received image, shown after iterations 1, 5, 15, and 16.]

14 LDPC Codes—Parity Check Matrix
Defined by a large binary matrix, called a parity-check matrix or H matrix. Each row defines a parity equation; each parity is generated by a few (not all) of the bits. Example: a 6 × 12 H matrix for a 12-bit LDPC code. Number of columns = 12 (i.e., the received word V is 12 bits). Number of rows = 6. Number of ones per row = 4 (row weight). Number of ones per column = 2 (column weight). LDPC codes are defined by a binary matrix called the parity-check matrix or H matrix. The small example shown here defines a 12-bit codeword from 6 information bits. The number of ones per row is the row weight, which is 4 here, and the number of ones per column is the column weight, which is 2 here. LDPC codes are also represented by a bipartite graph, or Tanner graph, which consists of two sets of nodes, check nodes and variable nodes, and is shown for this example. Each row of the matrix is mapped to one check node and each column to one variable node; there is an edge between a check node and a variable node if there is a one in the matrix at that location. For example, row 1 of the matrix corresponds to C1 in the Tanner graph, which connects to V3, V5, V8, and V10. Now let's see how these codes are used in a system.
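A regular H matrix with the slide's dimensions (6 × 12, row weight 4, column weight 2) can be constructed and checked as below. The particular placement of ones is ours, for illustration only; it is not the matrix shown on the slide:

```python
# Build a hypothetical regular H: each column gets ones in two distinct rows,
# which works out to 4 ones per row (6 * 4 = 12 * 2 = 24 ones in total).
rows, cols = 6, 12
H = [[0] * cols for _ in range(rows)]
for j in range(cols):
    H[j % rows][j] = 1
    H[(j + 1) % rows][j] = 1

row_weights = [sum(r) for r in H]
col_weights = [sum(H[i][j] for i in range(rows)) for j in range(cols)]
print(row_weights)   # every row weight is 4
print(col_weights)   # every column weight is 2
```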

15 LDPC Codes—Tanner Graph
Interconnect representation of the H matrix. Two sets of nodes: check nodes and variable nodes. Each row of the matrix is represented by a check node; each column of the matrix is represented by a variable node. A message-passing method between the nodes is used to correct errors: (1) initialization with the received word; (2) message passing until the word is correct. Example: V3 to C1, V5 to C1, V8 to C1, V10 to C1; C2 to V1, C5 to V1.
In the receiver we care not only about checking the parities but also about correcting the errors. The graph shown here is the interconnect representation of the H matrix and is called a Tanner graph. It consists of two sets of nodes: check nodes and variable nodes. Each row of the matrix is represented by a check node and each column by a variable node. For example, row 1 is shown by node C1 and connects to V3, V5, V8, and V10, which is exactly the parity-check sum shown on the previous slide. In addition, the graph shows the connections from variable nodes to check nodes. For example, the matrix has a 1 in the 2nd and 5th rows for the first bit of the codeword, V1; this means V1 is checked by two checksums and is connected to C2 and C5 in the graph. To correct all errors, the goal is to make all the parity checks from the previous slide equal zero. A common decoding method is based on message passing between these two groups of nodes: each group sends its decision, or vote, on each connected node to the other group, which processes the votes and sends updated values back. For example, C2 and C5, which are connected to V1, send V1 their votes based on their checksums. This process continues for all nodes until all parity checks are satisfied or a maximum number of iterations is reached.
[Figure: Tanner graph with check nodes C1 to C6 and variable nodes V1 to V12, initialized with the received word from the channel.]
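Reading the Tanner-graph edges off an H matrix is mechanical: one edge per 1-entry. A sketch with a small hypothetical H (not the 6 × 12 matrix on the slide):

```python
def tanner_edges(H):
    """List (check node, variable node) edges, one per 1-entry of H, 1-indexed."""
    return [(f"C{i + 1}", f"V{j + 1}")
            for i, row in enumerate(H)
            for j, bit in enumerate(row) if bit]

# Hypothetical 3 x 4 H; row 1 connects check node C1 to V3 and V4:
H = [
    [0, 0, 1, 1],
    [1, 0, 0, 1],
    [0, 1, 1, 0],
]
print(tanner_edges(H))
```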

16 Message Passing: Variable node processing
For variable node processing, each variable node adds the incoming check-node messages (α) to the information received from the channel (λ) and passes the sum back to the check nodes. It must exclude the message from the check node it is about to send to. For example, V1 adds only the α from C5 to λ and passes the result to C2. The check-node and variable-node processing steps do not require very sophisticated operations. The major challenge is mapping these processing nodes into hardware so as to minimize the communication between them while providing the smallest circuit area, highest throughput, and best energy efficiency. α: message from check node to variable node. β: message from variable node to check node. λ: the original information received from the channel.
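The variable-node update above can be sketched as follows; computing the total once and subtracting the destination's own α implements the "exclude the target" rule. The message values are invented for illustration:

```python
def variable_node_update(lam, alphas):
    """Compute beta messages: lam plus the sum of alpha from every check node
    except the one the message is being sent to."""
    total = sum(alphas.values())
    return {c: lam + total - a for c, a in alphas.items()}

# V1 receives alpha from C2 and C5; the beta sent back to C2 uses only
# lam + alpha(C5), and vice versa:
lam = 1.0
alphas = {"C2": -0.5, "C5": 2.0}
print(variable_node_update(lam, alphas))   # prints {'C2': 3.0, 'C5': 0.5}
```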

17 Message Passing: Check node processing (MinSum)
After check-node processing, the next variable-node processing step begins a new iteration. Since there is not much time, we skip the details of the original decoding algorithm; a reduced-complexity decoding algorithm called MinSum consists of two iterative steps, check-node processing and variable-node processing. Note that the received values are not zeros and ones; they are real values, both negative and positive. Call the inputs to a check node β. Each check node computes, for each connected variable node, the product of the signs and the minimum magnitude over the other connected variable nodes, excluding the destination itself, and sends the result (α) back. For example, check node C1 finds the sign and minimum value over V5, V8, and V10 and passes the result to V3, then repeats this step for the other three variable nodes. To reduce the number of computations, instead of computing the minimum for each destination separately, we can find the first and second minimum among all four incoming magnitudes and derive each outgoing message from those.
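A direct sketch of the MinSum check-node update (sign product and minimum magnitude over all inputs except the destination). The β values below are invented for illustration:

```python
def check_node_update(betas):
    """MinSum check-node rule: to each variable node, send the product of the
    other inputs' signs times the minimum of the other inputs' magnitudes."""
    out = {}
    for v in betas:
        others = [b for u, b in betas.items() if u != v]
        sign = 1
        for b in others:
            if b < 0:
                sign = -sign
        out[v] = sign * min(abs(b) for b in others)
    return out

# Check node C1 connected to V3, V5, V8, V10:
betas = {"V3": -2.0, "V5": 1.5, "V8": -0.5, "V10": 3.0}
print(check_node_update(betas))
# prints {'V3': -0.5, 'V5': 0.5, 'V8': -1.5, 'V10': 0.5}
```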

18 Example Encoded information V= [1 0 1 0 1 0 1 0 1 1 1 1]
BPSK modulated = [ … ]. λ (received data from channel) = [ … ]. Estimated code = V̂ = [ … ].

19 Syndrome computation Step 1: syndrome check: does H·V̂^T = 0 for the estimated code V̂?
Syndrome = [ … ]. Sum of syndrome = 2; not zero, therefore an error is detected.
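The slide's sum-of-syndrome check can be reproduced with a hypothetical H and a single-bit channel error (the actual matrix and word on the slide are not recoverable here); a nonzero sum flags the error:

```python
def syndrome(H, v):
    """Mod-2 product H . v^T."""
    return [sum(h * x for h, x in zip(row, v)) % 2 for row in H]

# Hypothetical 3 x 6 H; [1, 1, 0, 0, 1, 1] is a valid codeword for it.
H = [
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 1, 0, 0, 1],
]
corrupted = [1, 0, 0, 0, 1, 1]   # second bit flipped by the channel

s = syndrome(H, corrupted)
print(s, sum(s))   # prints [1, 1, 0] 2 -> nonzero sum, error detected
```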

