Data Communication, Lecture 11
Channel Coding
Basic Digital Communications System

[Block diagram: analogue audio/video or digital data passes through an anti-alias filter and A/D conversion (Nyquist sampling, about 6 dB of SNR per bit), channel coding (FEC or ARQ; parity, block, or convolutional codes), a pulse shaping filter (ISI control), and modulation (ASK/FSK/PSK, binary or M'ary) onto the communications channel (loss, interference, noise, distortion). Reception reverses the chain: demodulation (envelope or coherent detection with carrier recovery), regeneration (matched filter, decision threshold, timing recovery), FEC/ARQ decoding, low-pass filtering, and D/A conversion (quantisation noise).]

Key relations:
- T_s = symbol pulse width
- B_min = 1 / (2 T_s), the minimum baseband bandwidth
- B = B_min (1 + r), where r is the filter roll-off factor
- log2 M = bits per symbol, where M is the number of symbol states
- C = 2 B log2 M (Nyquist signalling rate)
- C = B log2(1 + S/N) (Shannon capacity)
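The Nyquist and Shannon relations above can be checked numerically. The sketch below is only an illustration; the 3 kHz bandwidth, 4-level signalling, and 30 dB SNR are example figures, not values from the slides.

```python
import math

def nyquist_capacity(bandwidth_hz, levels):
    """Nyquist rate for a noiseless channel: C = 2 B log2(M)."""
    return 2 * bandwidth_hz * math.log2(levels)

def shannon_capacity(bandwidth_hz, snr_linear):
    """Shannon capacity of an AWGN channel: C = B log2(1 + S/N)."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# Example: 3 kHz channel, 4-level signalling, SNR = 1000 (30 dB)
print(nyquist_capacity(3000, 4))     # 12000.0 bit/s
print(shannon_capacity(3000, 1000))  # about 29900 bit/s
```

Note that the Shannon capacity exceeds the 4-level Nyquist rate here: more symbol states (or coding) would be needed to approach it.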
Taxonomy of Coding

- Cryptography (ciphering): secrecy/security; encryption, e.g. DES.
- Source coding (compression coding): redundancy removal, either destructive (JPEG, MPEG) or non-destructive (zip); makes bits equally probable.
- Line coding: for baseband communications; RX synchronization; spectral shaping to meet bandwidth requirements.
- Error control coding: strives to utilize channel capacity by adding extra bits.
  - Error detection coding: used in ARQ, as in TCP/IP; requires a feedback channel and retransmissions; quality is paid for by delay.
  - Error correction coding (= FEC): no feedback channel; quality is paid for by redundant bits.

FEC: Forward Error Correction. ARQ: Automatic Repeat Request. DES: Data Encryption Standard.
Source Coding
- The subject of coding for digital communications requires a very good knowledge of mathematics.
- Purpose of source coding: to transform the information from the source into a form best suited for transmission.
- Nowadays this usually means an algorithm that compresses the bit or symbol content, in addition to (waveform) A/D conversion.
Current Algorithms
- Image compression algorithms: MPEG (moving image), JPEG (static image).
- Voice and music: less well standardised; GSM codecs / MP3.
- Coder and decoder are complementary (a codec pair).
Channel Coding
Channel coding is applied to communication links to improve the reliability of the information being transferred. It adds additional bits to the data stream, which increases the amount of data to be sent, but enables errors to be detected and even corrected at the receiver.
Repetition Coding
- In repetition coding each bit is simply repeated several times.
- Can be used for error correction or detection.
- Assume a binomial error distribution for n bits with bit error probability a: P(i errors in n bits) = C(n, i) a^i (1 - a)^(n - i).
- Example: in 3-bit repetition coding the encoded word is formed by a simple rule: 0 -> 000, 1 -> 111.
- The code is decoded by majority voting, e.g. 101 -> 1, 010 -> 0.
- An error in decoding occurs if all three bits are inverted (the code word is swapped into the other code word) or if two bits are inverted (the damaged code word lands on the wrong side of the decoding decision region).
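The 3-bit repetition rule and majority-vote decoding above can be sketched directly; this is a minimal illustration, not production code.

```python
from collections import Counter

def repetition_encode(bits, n=3):
    """Repeat every bit n times: for n=3, 0 -> 000 and 1 -> 111."""
    return [b for b in bits for _ in range(n)]

def repetition_decode(coded, n=3):
    """Majority vote over each group of n received bits."""
    out = []
    for i in range(0, len(coded), n):
        group = coded[i:i + n]
        out.append(Counter(group).most_common(1)[0][0])
    return out

data = [1, 0, 1]
tx = repetition_encode(data)           # [1,1,1, 0,0,0, 1,1,1]
rx = tx[:]
rx[1] = 0                              # a single bit error in the first group
print(repetition_decode(rx) == data)   # True: majority voting corrects it
```

Two errors within the same group would still defeat the vote, matching the decision-region argument above.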
Parity-check Coding
- Note that repetition coding can greatly improve transmission reliability: for the 3-bit code the decoding error probability is P(e) = 3a^2(1 - a) + a^3, roughly 3a^2 for small a, compared with a for uncoded bits.
- However, due to the repetition the transmission rate is reduced. Here the code rate was 1/3 (that is, the ratio of the number of bits to be coded to the number of encoded bits).
- In parity-check encoding a check bit is formed that indicates the number of "1"s in the word to be coded: with even parity, the encoded word always contains an even number of "1"s.
- Example: encoding 2-bit words by even parity: 00 -> 000, 01 -> 011, 10 -> 101, 11 -> 110.
- For error detection the parity of the received word is checked: an odd number of "1"s reveals an error, while an even number of bit errors goes undetected.
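The even-parity rule above amounts to appending the XOR of the data bits; checking is the same XOR over the whole received word. A small sketch:

```python
def parity_encode(word):
    """Append an even-parity check bit: the XOR of all data bits."""
    check = 0
    for b in word:
        check ^= b
    return word + [check]

def parity_ok(coded):
    """Even parity holds iff the XOR over all bits (data + check) is 0."""
    acc = 0
    for b in coded:
        acc ^= b
    return acc == 0

tx = parity_encode([1, 0])   # [1, 0, 1]
print(parity_ok(tx))         # True
rx = tx[:]
rx[0] ^= 1                   # one bit error
print(parity_ok(rx))         # False: detected
rx[1] ^= 1                   # a second bit error
print(parity_ok(rx))         # True: an even number of errors goes undetected
```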
Parity-check Error Probability
An undetected error occurs when an even number of bits is inverted. For an n-bit encoded word with bit error probability a, the undetected-error probability is therefore the sum of C(n, i) a^i (1 - a)^(n - i) over even i >= 2, which for small a is approximately C(n, 2) a^2 = n(n - 1) a^2 / 2.
Helsinki University of Technology, Communications Laboratory, Timo O. Korhonen

Parity-check Error Probability (cont.)
- Hence we note that parity checking is a very efficient method of error detection, while the information rate is reduced only to 9/10 (for a (10,9) parity-check code).
- Most telecommunication channels are memoryless AWGN channels or fading multipath channels.
- In memoryless channels bit errors are considered to be independent.
- In fading multipath channels the transfer function changes as a function of time.
- Interleaving can be used to make bit errors more independent; it can be realized by block interleaving or convolutional interleaving.
- A problem of interleaving is the added delay; for convolutional interleaving the delay is about 50% smaller than for block interleaving, and the memory requirement is also about 50% smaller.
Block Interleaving
- In fading channels the received data can experience burst errors that destroy a large number of consecutive bits; this is harmful for channel coding.
- Interleaving distributes burst errors along the data stream.
- A problem of interleaving is the extra delay it introduces.
- [Figure: received power dips during a fade; the example shows the received interleaved data with a burst error, block deinterleaving, and the recovered data with the errors dispersed.]
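A block interleaver writes data into a matrix row by row and reads it out column by column; deinterleaving is the same operation with the dimensions swapped. The sketch below uses integers as stand-ins for data bits so the permutation is visible.

```python
def block_interleave(bits, rows, cols):
    """Write into a rows x cols matrix row by row, read out column by column."""
    assert len(bits) == rows * cols
    return [bits[r * cols + c] for c in range(cols) for r in range(rows)]

def block_deinterleave(bits, rows, cols):
    """Inverse permutation: interleave again with swapped dimensions."""
    return block_interleave(bits, cols, rows)

data = list(range(12))                 # stand-in for 12 data symbols
tx = block_interleave(data, 3, 4)      # [0,4,8, 1,5,9, 2,6,10, 3,7,11]
# A burst hitting 3 consecutive symbols of tx lands on data positions
# that are 4 apart after deinterleaving, so the errors are dispersed.
rx = block_deinterleave(tx, 3, 4)
print(rx == data)                      # True
```

The extra delay mentioned above corresponds to having to buffer a full rows x cols block before reading it out.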
Basic Approaches
- Error detection: involves recognising that part of the received information is in error and requesting a repeat transmission; ARQ (Automatic Repeat Request).
- Error detection and correction: possible with added complexity and without re-transmission; FEC (Forward Error Correction).
Types of ARQ
- Stop-and-Wait ARQ: the simplest; the transmitter waits after each message for acknowledgement of correct reception (known as ACK). If an error is detected the receiver sends a NAK. New messages must be stored in the transmitter buffer while this takes place.
- Go-Back-N ARQ: the transmitter continues until a NAK is received, which identifies the message that was in error. The transmitter backtracks and starts again from this point.
Types of ARQ (cont.)
- Selective ARQ: a more complex protocol (set of rules) with a buffer in the receiver as well. The receiver informs the transmitter of the specific message or packet that is in error, and only this is re-transmitted; the receiver then inserts the correction into the sequence.
- Although complex, this is widely used as it maximises the throughput of the transmission channel.
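The stop-and-wait scheme described above can be sketched as a toy simulation. Everything here is illustrative: `channel_ok` stands in for the receiver's error-detection check, and the retry limit is an arbitrary safeguard, not part of the protocol.

```python
import random

def stop_and_wait_send(messages, channel_ok, max_retries=10):
    """Toy stop-and-wait ARQ: retransmit each message until it is ACKed.
    channel_ok() returns True when a transmission arrives without error."""
    delivered = []
    for msg in messages:
        for attempt in range(max_retries):
            if channel_ok():           # receiver's error check passed: ACK
                delivered.append(msg)
                break
            # otherwise the receiver sends NAK: loop and retransmit
        else:
            raise RuntimeError("retry limit exceeded")
    return delivered

random.seed(1)
msgs = ["m0", "m1", "m2"]
# A channel that corrupts roughly 30% of transmissions
out = stop_and_wait_send(msgs, lambda: random.random() > 0.3)
print(out == msgs)   # True: every message eventually delivered in order
```

The "quality paid by delay" trade-off from the taxonomy slide is visible here: each corrupted transmission costs a full extra round trip.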
Forward Error Correction Coding (FEC)

Codes can be classified by:
- Code radix: binary {0,1}, ternary {-1,0,1}, octal {0,1,...,7}, ...
- Block codes: encoded bit blocks are formed from each data word separately.
- Convolutional codes: encoded bit blocks are a function of adjacent bit blocks.
- Systematic codes: data word and check bits appear separately in the encoded word (in non-systematic codes they do not).
- Linear codes (as opposed to non-linear codes):
  - the sum of any two non-zero code words yields another code word
  - the code contains the all-zero code word
  - additive inverse: for every code word there must be a code word that, when summed with it, yields the all-zero code word.
Digital Transmission

[Block diagram: information source -> source encoder -> channel encoder -> interleaving -> modulator -> channel (noise, interference) -> demodulator -> deinterleaving -> channel decoder -> source decoder -> information sink. The received signal may contain errors; the sink obtains a message estimate.]

- Information: analog (characterised by BW and dynamic range) or digital (bit rate); the aim is maximization of the information transferred.
- Source coding trades off transmitted power and bandpass/baseband signal BW.
- Channel coding: message protection and channel adaptation; convolutional or block coding.
- Interleaving: fights against burst errors.
- Modulation: M-PSK/FSK/ASK, ..., depends on channel BW and characteristics; in baseband systems the modulator/demodulator blocks are missing.
- Channel: wireline/wireless, constant/variable, linear/nonlinear.
Representing Codes by Vectors
- An (n,k) encoder takes k bits in and puts n bits out.
- The Hamming distance d(X,Y) is the number of bit positions in which the code words X and Y differ.
- Code strength is measured by the minimum Hamming distance: codes are more powerful when their minimum Hamming distance d_min (taken over all pairs of code words in the code family) is as large as possible.
- Code words can be mapped into an n-dimensional grid.
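Hamming distance and d_min are easy to compute by brute force for small codes; the sketch below also evaluates the detection and correction capabilities that the next slide derives from d_min.

```python
from itertools import combinations

def hamming_distance(x, y):
    """Number of bit positions in which two equal-length words differ."""
    assert len(x) == len(y)
    return sum(a != b for a, b in zip(x, y))

def minimum_distance(code):
    """d_min: smallest Hamming distance over all pairs of code words."""
    return min(hamming_distance(x, y) for x, y in combinations(code, 2))

# 3-bit repetition code: d_min = 3
rep3 = ["000", "111"]
d = minimum_distance(rep3)
print(d)             # 3
print(d - 1)         # l = d_min - 1 = 2 detectable errors
print((d - 1) // 2)  # t = floor((d_min - 1) / 2) = 1 correctable error
```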
Hamming Distance and Code Error Detection and Correction Capability
- Channel noise can produce non-valid code words that are detected at the receiver.
- The number of non-valid code words depends on d_min, and consequently the number of errors that can be detected at reception is l = d_min - 1. If l + 1 errors are produced (for instance by noise or interference), the received code word is transformed into another valid code word.
- The number of errors that can be corrected is t = floor((d_min - 1) / 2), where floor(.) denotes the integer part. If more than t bit errors are produced, maximum-likelihood detection cannot decide correctly in which decision region the received code word belongs.
Code Efficiency and the Largest Minimum Distance
- The largest possible minimum distance, d_min = n - k + 1, is achieved by repetition codes, where n and k are the number of bits in the encoded word and in the word to be encoded, respectively.
- Code rate is a measure of code efficiency, defined by R_c = k / n.
- Note that q = n - k bits are added for error detection/correction.
- We noticed that repetition codes have a very low efficiency because their rate is only 1/n. In contrast, parity-check codes have a much higher efficiency of (n - 1)/n and still have relatively good performance.
- An (n,k) encoder takes k bits in and puts n bits out.
Hard and Soft Decision Decoding
- Hard decision decoding: each received code word is compared to the allowed code words and the one with the minimum Hamming distance is selected.
- Soft decision decoding: decision reliability is estimated from the demodulator's analog voltage after the matched filter, using Euclidean distance.
- Finally, the code word with the smallest distance with respect to all the possible code words is selected as the received code word.
- [Figure: for transmitted code word 0 1 1 1 0, the decision voltages after the matched filter are +0.4, +0.2, +1.0, -0.2, +0.6 against the decision threshold. Hard decisions give 0 0 1 1 0, a Hamming distance of 1 to the transmitted code word, while soft-decision weighting gives a Euclidean distance of 0.76.]
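The two decoding styles can be contrasted on a tiny example. This sketch assumes antipodal signalling (0 -> -1 V, 1 -> +1 V) and a 3-bit repetition codebook, chosen here only for illustration; the voltages are made up to show a case where the decisions disagree.

```python
import math

def hard_decode(voltages, codebook, threshold=0.0):
    """Slice each voltage against the threshold, then pick the code word
    with the minimum Hamming distance to the sliced bits."""
    bits = [1 if v > threshold else 0 for v in voltages]
    return min(codebook, key=lambda c: sum(a != b for a, b in zip(bits, c)))

def soft_decode(voltages, codebook):
    """Pick the code word whose antipodal waveform (0 -> -1, 1 -> +1)
    has the minimum Euclidean distance to the raw voltages."""
    def dist(c):
        levels = [1.0 if b else -1.0 for b in c]
        return math.sqrt(sum((v - s) ** 2 for v, s in zip(voltages, levels)))
    return min(codebook, key=dist)

codebook = [(0, 0, 0), (1, 1, 1)]
rx = [0.1, 0.1, -0.9]                # two weakly positive, one strongly negative
print(hard_decode(rx, codebook))     # (1, 1, 1): the marginal samples win the vote
print(soft_decode(rx, codebook))     # (0, 0, 0): the confident -0.9 outweighs them
```

This is exactly the reliability information that gives soft decoding its roughly 2 dB advantage, as the next slide states.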
Hard and Soft Decoding (cont.)
- Hard decoding: calculates Hamming distances to all allowed code words and selects the code word with the smallest distance.
- Soft decoding: calculates Euclidean distances to all allowed code words and selects the code word with the smallest distance.
- Soft decoding yields about a 2 dB improvement when compared to hard decisions (requires a high SNR).
- Soft decoding is often realized by a Viterbi decoder. In fading channels the bit stream is interleaved to avoid the effect of burst errors.
- Computational complexity in decoding can be reduced by using convolutional codes.
Basics of Block Coding
- The terminology for block coding is that an input block of k bits gives rise to an output block of n bits; this is called an (n,k) code.
- The increase in block length means that the useful data rate (information transfer rate) is reduced by a factor k/n; this is known as the rate of the code.
- The factor 1 - k/n is known as the redundancy of the code.
Hamming Code Example
Hamming Codes
- Named after their discoverer, R. W. Hamming.
- Example: a rate-4/7 Hamming code. Each of the 16 possible four-bit input blocks is coded into a 7-bit output block.
- The 16 output blocks are chosen from the 2^7 = 128 possible seven-bit patterns to be as dissimilar as possible; each pair differs in at least 3 bits.
- If 1 error occurs, the receiver can correct it; if 2 errors occur, the receiver can recognise them; if 3 errors occur, they may go undetected.
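A (7,4) Hamming encoder and single-error corrector can be written with three parity equations. The particular parity assignments and syndrome table below are one common textbook convention, not necessarily the one used in the lost example slide.

```python
def hamming74_encode(d):
    """Systematic (7,4) Hamming code: 4 data bits followed by 3 parity bits."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [d1, d2, d3, d4, p1, p2, p3]

def hamming74_correct(r):
    """Recompute the parities; the 3-bit syndrome locates a single bit error."""
    d1, d2, d3, d4, p1, p2, p3 = r
    s1 = p1 ^ d1 ^ d2 ^ d4
    s2 = p2 ^ d1 ^ d3 ^ d4
    s3 = p3 ^ d2 ^ d3 ^ d4
    # Syndrome -> index of the flipped bit (matches the equations above)
    syndrome_to_pos = {(1, 1, 0): 0, (1, 0, 1): 1, (0, 1, 1): 2, (1, 1, 1): 3,
                       (1, 0, 0): 4, (0, 1, 0): 5, (0, 0, 1): 6}
    pos = syndrome_to_pos.get((s1, s2, s3))
    out = r[:]
    if pos is not None:
        out[pos] ^= 1
    return out[:4]            # return the corrected data bits

code = hamming74_encode([1, 0, 1, 1])
code[2] ^= 1                       # flip one bit in the channel
print(hamming74_correct(code))     # [1, 0, 1, 1]: single error corrected
```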
Block Code Families
- Hamming codes are a subset of the more general code family known as BCH (Bose-Chaudhuri-Hocquenghem) codes, discovered in 1959 and 1960.
- Whereas a Hamming code can detect up to 2 errors or correct 1, general BCH codes can correct any number of errors if the code is long enough; e.g. a (1023,11) code can correct 255 errors, and such codes have been used in deep-space probes.
Block Codes
- In (n,k) block codes each sequence of k information bits is mapped into a sequence of n (> k) channel inputs in a fixed way, regardless of the previous information bits (in contrast to convolutional codes).
- The code family should be formed such that both the code minimum distance and the code rate are large, giving high error correction/detection capability.
- A systematic block code: in the encoded word, the first k elements are the same as the message bits, and the remaining q = n - k bits are the check bits.
- A code vector of a systematic block code is therefore X = (m1, m2, ..., mk, c1, c2, ..., cq), or in partitioned representation X = (M | C), where M is the message part and C the check-bit part.
- An (n,k) encoder takes k bits in and puts n bits out.
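Systematic encoding with X = (M | C) amounts to multiplying the message by a generator matrix G = [I_k | P] over GF(2). The parity submatrix P below is chosen only for illustration and does not correspond to a specific published code.

```python
def encode_systematic(m, P):
    """Systematic block encoding X = (M | C) with check bits C = M * P (mod 2),
    i.e. generator matrix G = [I_k | P]."""
    k, q = len(P), len(P[0])
    assert len(m) == k
    checks = [sum(m[i] * P[i][j] for i in range(k)) % 2 for j in range(q)]
    return m + checks

# Illustrative k x q parity submatrix for a (6,3) code
P = [[1, 1, 0],
     [1, 0, 1],
     [0, 1, 1]]
print(encode_systematic([1, 0, 1], P))   # [1, 0, 1, 1, 0, 1]
```

The message bits appear unchanged at the front of the output, which is exactly the systematic property described above.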
Generated Hamming Codes
- Going through all the combinations of the input vector X yields all the possible output vectors.
- Note that for linear codes the minimum distance equals the minimum weight w, i.e. the smallest number of "1"s in any non-zero code word (its distance to the all-zero code word '00...0').
- For Hamming codes the minimum distance is 3.
Decoding Block Codes by Hard Decisions
- A brute-force method for error correction of a block code would compare the received word to all possible code words of the same length and choose the one with the minimum Hamming distance.
- In practice the applied codes can be very long, and the exhaustive comparison would require too much time and memory. For instance, a Hamming code has n = 2^q - 1 and k = n - q, so a code rate of at least 9/10 requires (2^q - 1 - q) / (2^q - 1) >= 9/10.
- This is fulfilled when the message length is at least k = 57, which results in n = 63 (q = 6, rate 57/63).
- There are then 2^57 different code words in this case, so decoding by direct comparison would be quite impractical!
Convolutional Coding
- Operates serially on the incoming bit stream; the output block of n code digits generated by the coder depends not only on the current block of k input digits but also, because there is memory in the coder, on the previous K input frames. The parameter K is known as the constraint length of the code.
- An optimum decoding algorithm, called Viterbi decoding, uses a similar minimum-distance search procedure.
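The memory described above can be made concrete with a small encoder. This sketch is a common textbook rate-1/2, constraint-length-3 code (generators 7 and 5 in octal), not a code specified in this lecture; the shift register is the coder memory.

```python
def conv_encode(bits):
    """Toy rate-1/2 convolutional encoder, constraint length K = 3.
    Each input bit produces two output bits that also depend on the
    two previous input bits held in the shift register."""
    state = [0, 0]                    # shift register (the coder's memory)
    out = []
    for b in bits:
        g1 = b ^ state[0] ^ state[1]  # generator 111 (octal 7)
        g2 = b ^ state[1]             # generator 101 (octal 5)
        out += [g1, g2]
        state = [b] + state[:-1]      # shift the new bit in
    return out

print(conv_encode([1, 0, 1, 1]))      # [1, 1, 1, 0, 0, 0, 0, 1]
```

Because every output pair depends on the register contents, the same input bit can produce different outputs at different times; this is exactly the dependence on previous input frames that distinguishes convolutional from block codes.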