Chapter 10: Error-Control Coding
Errors

An error occurs when a transmitted 1 is received as a 0, or a transmitted 0 as a 1. Virtually all digital transmission systems introduce errors, even when optimally designed. Sources of errors include:
- white noise (e.g., a hissing noise on the phone)
- impulse noise (e.g., a scratch on a CD/DVD)
- crosstalk (e.g., hearing another phone conversation)
- echo (e.g., hearing the talker's or listener's voice again)
- interference (e.g., unwanted signals due to frequency reuse in cellular systems)
- multipath and fading (e.g., due to reflected and refracted paths in mobile systems)
- thermal and shot noise (e.g., in the transmitting and receiving equipment)

In random errors, the bit errors occur independently of each other: when one bit has changed, its neighbouring bits usually remain correct (unchanged). In burst errors, two or more bits in a row have usually changed: periods of low-error-rate transmission are interspersed with periods in which clusters of errors occur.
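The two error patterns can be illustrated with a short simulation. This is a minimal sketch, not from the slides: a binary symmetric channel models random (independent) errors, while a contiguous run of flipped bits models a burst; the function names and parameters are illustrative.

```python
import random

def random_errors(bits, p, seed=1):
    """Flip each bit independently with probability p (binary symmetric channel)."""
    rng = random.Random(seed)
    return [b ^ (rng.random() < p) for b in bits]

def burst_errors(bits, start, length):
    """Flip a contiguous run of bits, modelling a burst (e.g., a scratch on a disc)."""
    return [b ^ (start <= i < start + length) for i, b in enumerate(bits)]
```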
Methods of Controlling Errors
The central concept in detecting and/or correcting errors is redundancy: the number of bits transmitted is intentionally over and above the number of information bits required. The transmitter sends the information bits along with some extra bits (also known as parity bits). The receiver uses the redundant bits to detect or correct corrupted bits, and then removes them.

In error detection, which is simpler than error correction, we are interested only in finding out whether errors have occurred. However, every error-detection technique will fail to detect some error patterns. Detecting an error in a received data unit (bit stream) is the prerequisite for requesting its retransmission. A digital communication system that retransmits a data unit as necessary until it is correctly received is called an automatic repeat request (ARQ) system.

In error correction, we need to find out which bits are in error, i.e., to determine their locations in the received bit stream. The techniques that introduce redundancy to allow correction of errors are collectively known as forward error correction (FEC) coding techniques. Error-correcting codes are thus more complex than error-detecting codes, and require more redundancy (parity) bits. Error correction generally yields a lower bit error rate, but at the expense of significantly higher transmission bandwidth and greater decoding complexity.
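As a minimal illustration of redundancy used for detection only (not from the slides), a single even-parity bit catches any odd number of bit errors but misses any even number; the function names are assumptions.

```python
def add_even_parity(bits):
    """Append one parity bit so the total number of 1s is even."""
    return bits + [sum(bits) % 2]

def parity_check_ok(received):
    """Detection only: an odd number of flipped bits is caught, an even number is not."""
    return sum(received) % 2 == 0
```

Flipping one bit of a codeword fails the check; flipping two bits passes it, an example of an undetectable error pattern.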
Detectable & Undetectable Error Patterns for 2-Dimensional Parity-Check Code: Examples
[Figure: five example code blocks for the 2-dimensional parity-check code — no errors; one error (detected); two errors; three errors; four errors (undetected). Arrows indicate failed check bits.]
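The 2-dimensional parity-check code can be sketched as follows; this is an illustrative implementation (function names assumed) with even parity on every row and column. A single error fails exactly one row check and one column check, locating it; four errors arranged in a rectangle fail no checks and go undetected.

```python
def twod_parity(rows):
    """Append a parity bit to each row, then a parity row over the columns (even parity)."""
    with_row = [r + [sum(r) % 2] for r in rows]
    parity_row = [sum(col) % 2 for col in zip(*with_row)]
    return with_row + [parity_row]

def failed_checks(block):
    """Return the indices of rows and columns whose even-parity check fails."""
    bad_rows = [i for i, r in enumerate(block) if sum(r) % 2]
    bad_cols = [j for j, c in enumerate(zip(*block)) if sum(c) % 2]
    return bad_rows, bad_cols
```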
Procedure to Determine Checksum
Checksum generator at the transmit end:
1. The data unit is divided into k sections, each of n bits.
2. All sections are added together using one's complement arithmetic to obtain the sum.
3. The sum is complemented, i.e., all bits are inverted, to form the checksum.
4. The checksum is appended to the data unit and the whole is transmitted.

Checksum checker at the receive end:
1. The received data unit (including the received checksum) is divided into k+1 sections, each of n bits.
2. All sections are added together using one's complement arithmetic to obtain the sum.
3. The sum is complemented, i.e., all bits are inverted, to form a new checksum.
4. If the value of the new checksum is 0, the message is accepted; otherwise, it is rejected.
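The procedure can be sketched in code. This is an illustrative implementation of the one's complement checksum with 8-bit sections (the section size and function names are assumptions, not from the slides); the end-around carry is the defining feature of one's complement addition.

```python
def ones_complement_sum(sections, n=8):
    """Add n-bit sections with end-around carry (one's complement arithmetic)."""
    mask = (1 << n) - 1
    total = 0
    for s in sections:
        total += s
        while total >> n:                      # wrap any carry back into the sum
            total = (total & mask) + (total >> n)
    return total

def make_checksum(sections, n=8):
    """Transmitter: complement the one's complement sum to form the checksum."""
    return ones_complement_sum(sections, n) ^ ((1 << n) - 1)

def verify(sections_with_checksum, n=8):
    """Receiver: sum everything (data + checksum) and complement; 0 means accept."""
    return (ones_complement_sum(sections_with_checksum, n) ^ ((1 << n) - 1)) == 0
```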
Checksum Generator and Checker: An Example
[Figure: worked checksum example — the checksum generator at the transmitter adds the data sections column by column, wrapping the carry from each column back into the sum, and complements the sum to form the checksum; the checksum checker at the receiver adds the received sections together with the received checksum in the same way and complements the result.]
CRC Generator and Checker: An Example
[Figure: worked CRC example — the CRC generator at the transmitter divides the information bits (1100, with appended zeros) by the generator bits and appends the remainder; the CRC checker at the receiver divides the received bits and accepts the message if the remainder is zero.]
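A CRC generator and checker can be sketched via modulo-2 long division. The generator polynomial below (x³ + x + 1, divisor bits 1011) is an assumption for illustration; the slide's actual divisor is not preserved in this transcript.

```python
def crc_remainder(bits, divisor):
    """Modulo-2 long division: remainder of bits divided by divisor (lists of 0/1)."""
    bits = bits[:]                       # work on a copy
    n = len(divisor)
    for i in range(len(bits) - n + 1):
        if bits[i]:                      # current leading bit is 1: subtract (XOR) divisor
            for j in range(n):
                bits[i + j] ^= divisor[j]
    return bits[-(n - 1):]

def crc_encode(message, divisor):
    """Append len(divisor)-1 zero bits, divide, and replace them with the remainder."""
    padded = message + [0] * (len(divisor) - 1)
    return message + crc_remainder(padded, divisor)

def crc_check(received, divisor):
    """Receiver: a zero remainder means no detected error."""
    return not any(crc_remainder(received, divisor))
```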
Block Diagram for an ARQ System
[Figure: block diagram of an ARQ system — source → encoder → modulator → forward channel → demodulator → decoder → sink, with a storage-and-controller block at the transmitter and a return channel carrying acknowledgements from the receiver back to the transmitter.]
[Figure: timing diagrams for three ARQ protocols — (a) stop-and-wait ARQ, (b) go-back-N ARQ, and (c) selective-repeat ARQ — showing numbered frames flowing from transmitter to receiver, ACK/NAK responses, and retransmission after a detected error.]
ARQ Throughput Performance
[Equations/graphs: throughput performance of stop-and-wait ARQ and selective-repeat ARQ.]
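The original throughput expressions are not preserved in this transcript. Common textbook forms, assuming independent frame errors with probability P, k information bits per n-bit frame, and (for stop-and-wait) a normalized propagation delay a = (propagation time)/(frame time), are η_SW = (k/n)(1 − P)/(1 + 2a) and η_SR = (k/n)(1 − P); these are standard approximations, not the slide's own formulas.

```python
def throughput_stop_and_wait(P, k, n, a):
    """Stop-and-wait: the channel idles for a round trip (2a frame times) per frame."""
    return (k / n) * (1 - P) / (1 + 2 * a)

def throughput_selective_repeat(P, k, n):
    """Selective repeat: only erroneous frames are resent, so there is no idle-time penalty."""
    return (k / n) * (1 - P)
```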
Block Codes
[Figure: (a) encoder for a systematic block code — a feedback shift register (delay elements, modulo-2 adders, and a gate) generates the parity bits, which are appended to the message bits to form the codeword; (b) the corresponding syndrome calculator operating on the received bits.]
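The slide's particular block code is not preserved here; as an illustration of parity generation and syndrome decoding, a systematic Hamming (7,4) code can be sketched as follows (the parity equations are one valid choice, an assumption for this sketch). Every single-bit error produces a distinct nonzero syndrome, which points to the bit to flip.

```python
# Systematic Hamming (7,4): codeword = [d1 d2 d3 d4 p1 p2 p3], all arithmetic mod 2.
def hamming74_encode(d):
    d1, d2, d3, d4 = d
    p1 = (d1 + d2 + d3) % 2
    p2 = (d2 + d3 + d4) % 2
    p3 = (d1 + d2 + d4) % 2
    return [d1, d2, d3, d4, p1, p2, p3]

def hamming74_syndrome(r):
    """Recompute each parity check from the received bits; (0,0,0) means no detected error."""
    d1, d2, d3, d4, p1, p2, p3 = r
    return ((d1 + d2 + d3 + p1) % 2,
            (d2 + d3 + d4 + p2) % 2,
            (d1 + d2 + d4 + p3) % 2)

# Each single-bit error position has a unique syndrome.
SYNDROMES = {(1, 0, 1): 0, (1, 1, 1): 1, (1, 1, 0): 2, (0, 1, 1): 3,
             (1, 0, 0): 4, (0, 1, 0): 5, (0, 0, 1): 6}

def hamming74_correct(r):
    """Correct at most one bit error and return the four information bits."""
    s = hamming74_syndrome(r)
    r = r[:]
    if s != (0, 0, 0):
        r[SYNDROMES[s]] ^= 1     # flip the single bit the syndrome points to
    return r[:4]
```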
Convolutional Codes
[Figure: general convolutional encoder — the information bits are shifted through a register of delay stages, and modulo-2 adders tapping the stages produce the encoded bits.]
Convolutional Encoder and Tree Diagram: An Example
[Figure: rate-1/2 convolutional encoder and its tree diagram — each input bit selects an upper (0) or lower (1) branch, and the branches are labelled with the encoded output bits 00, 11, 10, and 01.]
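The slide's exact encoder connections are not preserved in this transcript; the classic rate-1/2, constraint-length-3 encoder with generators 7 and 5 (octal) is a common textbook example and can be sketched as:

```python
# Rate-1/2, constraint-length-3 convolutional encoder with generator
# polynomials g1 = 111 (7 octal) and g2 = 101 (5 octal) -- an assumed,
# illustrative choice. Two output bits are produced per input bit.
def conv_encode(bits):
    s1 = s2 = 0                      # shift-register contents, initially zero
    out = []
    for b in bits:
        out.append(b ^ s1 ^ s2)      # taps of g1 = 1 1 1
        out.append(b ^ s2)           # taps of g2 = 1 0 1
        s1, s2 = b, s1               # shift the register
    return out
```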
Convolutional State Transition Diagram and Trellis Diagram: An Example
[Figure: state-transition diagram and trellis diagram for the example encoder — the four encoder states 00, 01, 10, and 11, with one branch type for input bit 0 and another for input bit 1, each branch labelled with its output branch word.]
Steps in the Viterbi Algorithm: An Example
[Figure: steps (a)–(e) of the Viterbi algorithm on the trellis — branch metrics are accumulated stage by stage, only the survivor path with the smallest path metric is kept at each state, and the overall winner is traced back to recover the decoded bits.]
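The Viterbi steps can be sketched for the illustrative rate-1/2 encoder with generators 7 and 5 (octal) — an assumption, since the slide's encoder is not preserved. Hard decisions are assumed, with the Hamming distance as the branch metric.

```python
# Hard-decision Viterbi decoder for a rate-1/2, constraint-length-3
# convolutional code with generators 7 and 5 (octal) -- illustrative choice.
def viterbi_decode(received):
    def step(state, b):
        """Next state and output pair for input bit b from state (s1, s2)."""
        s1, s2 = state
        return (b, s1), (b ^ s1 ^ s2, b ^ s2)

    states = [(0, 0), (0, 1), (1, 0), (1, 1)]
    INF = float("inf")
    metric = {s: (0 if s == (0, 0) else INF) for s in states}  # encoder starts in 00
    paths = {s: [] for s in states}

    for i in range(0, len(received), 2):
        pair = tuple(received[i:i + 2])
        new_metric = {s: INF for s in states}
        new_paths = {}
        for s in states:
            if metric[s] == INF:
                continue
            for b in (0, 1):
                nxt, out = step(s, b)
                # Branch metric: Hamming distance between expected and received pair.
                m = metric[s] + (out[0] != pair[0]) + (out[1] != pair[1])
                if m < new_metric[nxt]:          # keep only the survivor
                    new_metric[nxt] = m
                    new_paths[nxt] = paths[s] + [b]
        metric, paths = new_metric, new_paths

    best = min(states, key=lambda s: metric[s])  # smallest final path metric
    return paths[best]
```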
Block Diagram of a TCM Scheme
Trellis-coded modulation (TCM) combines an amplitude and/or phase modulation signaling set with a trellis coding scheme, such as a convolutional code. The constellation used with TCM contains more signal points than uncoded transmission would require. At a constant power level, these additional signal points decrease the minimum Euclidean distance within the constellation, which by itself would increase the error rate, while conserving bandwidth. However, this reduction in minimum Euclidean distance is more than compensated by the increase in distance between coded signal sequences provided by the channel coding, so that the error rate improves significantly without giving up the bandwidth conserved by using the larger set of signal points.
Partitioning of 8-PSK Constellation: A TCM Example
[Figure: set partitioning of the 8-PSK constellation — the eight signal points, labelled 000 through 111, are successively split into subsets with increasing minimum Euclidean distance.]
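The payoff of set partitioning can be checked numerically: each split of the 8-PSK constellation increases the minimum Euclidean distance within a subset, from 2·sin(π/8) ≈ 0.765 for the full set, to √2 for a 4-point (QPSK-like) subset, to 2 for an antipodal pair. A unit-radius constellation is assumed for this sketch.

```python
import math

def min_distance(points):
    """Smallest Euclidean distance between any two constellation points."""
    return min(math.dist(p, q) for i, p in enumerate(points)
               for q in points[i + 1:])

# Unit-radius 8-PSK constellation.
octal = [(math.cos(2 * math.pi * k / 8), math.sin(2 * math.pi * k / 8))
         for k in range(8)]

d0 = min_distance(octal)          # full 8-point set
d1 = min_distance(octal[::2])     # one 4-point subset after the first split
d2 = min_distance(octal[::4])     # one antipodal pair after the second split
```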
Interleaving
Block Interleaving: a) Interleaver; b) Deinterleaver
a) Interleaver: bits from the encoder are written into the array row by row (first row to last) and read out to the modulator column by column. b) Deinterleaver: bits from the demodulator are written in column by column and read out to the decoder row by row.
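The row/column procedure above can be sketched as follows (function names are illustrative); note how a burst of consecutive channel errors is dispersed by the deinterleaver so that the decoder sees them spread apart.

```python
def interleave(bits, rows, cols):
    """Write row by row, read column by column."""
    assert len(bits) == rows * cols
    return [bits[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(bits, rows, cols):
    """Write column by column, read row by row (the inverse operation)."""
    assert len(bits) == rows * cols
    return [bits[c * rows + r] for r in range(rows) for c in range(cols)]
```

A burst hitting three consecutive positions of the interleaved stream lands in three different rows, so after deinterleaving the affected positions are `cols` apart.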
Convolutional Interleaving: a) Interleaver; b) Deinterleaver
[Figure: (a) convolutional interleaver; (b) convolutional deinterleaver.]
Concatenated Coding

A concatenated code is one that uses two levels of coding, in series, to achieve the desired error performance. The additional decoding complexity of a concatenated code is justified for poor communication channels, such as fading channels, and for power-limited communication channels, such as satellite channels.

In a concatenated code, the two codes are as follows: i) an inner code, usually a low-rate binary convolutional code with soft-decision decoding, corrects most of the random errors; and ii) an outer code, usually a high-rate non-binary block code, such as a Reed-Solomon code, with hard-decision decoding, corrects burst errors. The minimum distance of the concatenated code is the product of the minimum distances of the inner code and the outer code. The primary reason to use a concatenated code is thus to achieve a low bit error rate with an overall complexity less than a single coding operation would require.

Interleaving is usually used in conjunction with a concatenated code to construct a code with very long codewords. The interleaver at the transmitter sits between the two encoders, and the deinterleaver at the receiver sits between the two decoders.

[Figure: concatenated coding chain — outer (block) encoder → interleaver → inner (convolutional) encoder → channel → inner (soft-decision) decoder → deinterleaver → outer (hard-decision) decoder.]
Turbo Coding

A turbo encoder employs two generally identical recursive systematic convolutional encoders of short constraint length in parallel, where one of the encoders is preceded by a large pseudo-random interleaver. In a recursive systematic convolutional encoder, the information bits appear directly as part of the encoded bits, and the parity-check bits are generated by a recursive feedback shift register. The output of the turbo encoder thus contains the information bits and two sets of parity-check bits.

The pseudo-random interleaver is large, on the order of tens of thousands of bits. It takes the information bits at its input and produces them at its output in a different temporal order. The interleaver can provide robust performance regardless of the channel statistics, because it decorrelates the channel errors seen by the two constituent decoders.

Optimal decoding of turbo codes is computationally infeasible, because the number of states in the overall code trellis is extremely large. However, the suboptimal iterative turbo decoding algorithm can provide excellent performance. The unique feature of turbo codes lies in iterative decoding, that is, the concept of allowing the two low-complexity decoders to exchange information iteratively. The turbo decoder forms a closed-loop feedback system: some information from one decoder is sent to the other in an iterative manner, where each decoder uses a soft-decoding algorithm, accepting soft input values and generating soft output values for the other. This closed-loop iteration repeats until no significant adjustment is required, that is, until no further improvement in performance is attainable. Once convergence is reached, the decoding process stops, and the output of the second decoder is passed through a deinterleaver and hard-limited to produce an estimate of the information bits.
Block Diagram for Turbo Coding: Encoder & Decoder
[Figure: turbo encoder — the information bits feed encoder 1 directly and encoder 2 through an interleaver; the systematic information bits and the two parity-check streams are multiplexed and modulated onto the channel. Turbo decoder — after demodulation and demultiplexing of the received systematic bits and the two received parity-check streams, two soft-input soft-output (MAP/SISO) decoding stages exchange extrinsic and intrinsic information through an interleaver and deinterleaver in an iterative loop; the final output is deinterleaved and hard-limited to give the decoded bits.]