struggle!
two practical classes of channel (error-correcting) codes:
- cyclic codes: a subclass of linear codes; linear-time encoding and error detection (but not correction)
- convolutional codes: built on a different principle from linear codes; soft-decision optimal decoding is available
Both classes are widely used today, but are expected to be replaced by "next-generation" codes.

what's wrong with linear codes?

channel codes evolution
(a family-tree figure: general channel (error-correcting) codes contain the linear codes (linear block codes) and the convolutional codes; the linear codes contain the cyclic codes, with BCH, Reed-Solomon, Hamming and Golay as examples; LDPC codes and turbo codes come later. Moving along the tree means more structure, more efficiency, more power. Cyclic and convolutional codes are the topic of this class and the next; LDPC and turbo codes are for the next class.)

cyclic codes
(a set diagram: cyclic codes are a subclass of the linear codes, which are a subclass of all codes)

preliminary (1)
Binary vectors can be written as binary polynomials: 11101 corresponds to x^4 + x^3 + x^2 + 1.
addition (= subtraction) ⇒ XOR:
(x^4 + x^3 + x^2 + 1) + (x^3 + x + 1) = x^4 + x^2 + x, i.e. 11101 + 01011 = 10110
multiplication ⇒ an (unnamed) carry-less operation:
(x^4 + x^3 + x^2 + 1) × (x^3 + x + 1) = (x^4 + x^3 + x^2 + 1) + (x^5 + x^4 + x^3 + x) + (x^7 + x^6 + x^5 + x^3) = x^7 + x^6 + x^3 + x^2 + x + 1, i.e. 11101 × 01011 = 11001111
multiplication by x^m = left-shift by m bits

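As a concrete illustration, here is a minimal Python sketch of these two operations (the names gf2_add and gf2_mul are mine; a polynomial is stored as an integer whose bit i is the coefficient of x^i):

```python
def gf2_add(a: int, b: int) -> int:
    """Addition (= subtraction) of binary polynomials is bitwise XOR."""
    return a ^ b

def gf2_mul(a: int, b: int) -> int:
    """Carry-less multiplication: shift-and-XOR instead of shift-and-add."""
    result, shift = 0, 0
    while b >> shift:
        if (b >> shift) & 1:          # coefficient of x^shift in b
            result ^= a << shift      # multiplying by x^shift = left shift
        shift += 1
    return result

# the slide's examples: 11101 = x^4+x^3+x^2+1, 01011 = x^3+x+1
assert gf2_add(0b11101, 0b01011) == 0b10110
assert gf2_mul(0b11101, 0b01011) == 0b11001111   # x^7+x^6+x^3+x^2+x+1
```
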
preliminary (2)
division ⇒ long division with XOR:
divide x^6 + x^4 (1010000) by x^4 + x^3 + x^2 + 1 (11101):
x^6 + x^4 = (x^2 + x + 1)(x^4 + x^3 + x^2 + 1) + (x + 1), i.e. quotient 111 and remainder 11
The division circuit is easily implemented with a shift register.

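Division fits the same integer encoding; a small sketch (gf2_divmod is an illustrative name, not something from the lecture):

```python
def gf2_divmod(dividend: int, divisor: int) -> tuple:
    """Long division of binary polynomials; returns (quotient, remainder)."""
    quotient, remainder = 0, dividend
    deg_divisor = divisor.bit_length() - 1
    while remainder and remainder.bit_length() - 1 >= deg_divisor:
        shift = (remainder.bit_length() - 1) - deg_divisor
        quotient |= 1 << shift            # record the quotient term x^shift
        remainder ^= divisor << shift     # subtract (XOR) the shifted divisor
    return quotient, remainder

# the slide's example: x^6 + x^4 divided by x^4 + x^3 + x^2 + 1
assert gf2_divmod(0b1010000, 0b11101) == (0b111, 0b11)   # quotient 111, remainder 11
```
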
division circuit (1)
divide p(x) = x^6 + x^4 (dividend 1010000) by q(x) = x^4 + x^3 + x^2 + 1 (divisor 11101):
1. store p(x) in the registers
2. if the MSB is 1, the AND gates are activated and the registers are XORed with q(x)
3. left-shift and go back to step 2
(the figure shows the shift register loaded with 1010000, the divisor taps 11101, and the positions where the quotient and the remainder appear)

division circuit (2)
(the figure traces the long division 1010000 ÷ 11101 on the shift register, step by step; at the end the circuit has produced the quotient 111 and the remainder 11, as in the worked division above)

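For illustration, the shift-register division can be simulated in a few lines. The sketch below uses one common realization (a register of length deg q(x), dividend bits fed in MSB-first); the slide draws the circuit slightly differently, but it computes the same quotient and remainder:

```python
def lfsr_divide(dividend_bits, divisor_bits):
    """Shift-register division; divisor must be monic (leading bit 1).
    Returns (quotient_bits, remainder_bits), both MSB-first."""
    m = len(divisor_bits) - 1              # degree of the divisor
    reg = [0] * m                          # the shift register
    quotient = []
    for bit in dividend_bits:              # feed the dividend MSB-first
        feedback = reg[0]                  # the bit about to leave the register
        quotient.append(feedback)
        reg = reg[1:] + [bit]              # left-shift, next dividend bit enters
        if feedback:                       # "AND gates activated": XOR in the divisor taps
            reg = [r ^ d for r, d in zip(reg, divisor_bits[1:])]
    return quotient[m:], reg               # the first m feedback bits are always 0

q, r = lfsr_divide([1,0,1,0,0,0,0], [1,1,1,0,1])
print(q, r)    # [1, 1, 1] and [0, 0, 1, 1]: quotient 111, remainder 11, as above
```
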
definition of cyclic codes
A cyclic code is defined by a generator polynomial G(x) that divides x^n + 1.
Example: G(x) = x^4 + x^3 + x^2 + 1 (11101) divides x^7 + 1 (10000001) exactly: the quotient is x^3 + x^2 + 1 (1101) and the remainder is 0.

construction of a cyclic code
Step 1: choose a degree-m polynomial G(x) that divides x^n + 1.
Step 2: C = {multiples of G(x) with degree < n}
Example with n = 7, m = 4, G(x) = x^4 + x^3 + x^2 + 1:
C = { 0000000, 0011101, 0111010, 0100111, 1110100, 1101001, 1001110, 1010011 }
(each codeword is f(x) × G(x) for one of the eight polynomials f(x) of degree at most 2, from 0 up to x^2 + x + 1)

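A short sketch that regenerates this list, reusing gf2_mul from the earlier example:

```python
n, G = 7, 0b11101                          # G(x) = x^4 + x^3 + x^2 + 1
k = n - (G.bit_length() - 1)               # 3 information bits

# every multiple f(x)G(x) with deg f(x) < k has degree < n
codewords = sorted(gf2_mul(f, G) for f in range(2 ** k))
print([format(c, f"0{n}b") for c in codewords])
# ['0000000', '0011101', '0100111', '0111010', '1001110', '1010011', '1101001', '1110100']
```
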
properties of cyclic codes (1)
We have defined a set of vectors, but is it a linear code?
Lemma: a cyclic code is a linear code.
Proof: show that c1 + c2 ∈ C for any c1, c2 ∈ C.
c1 ∈ C means c1 = f1(x)G(x), and c2 ∈ C means c2 = f2(x)G(x); hence c1 + c2 = (f1(x) + f2(x))G(x) ∈ C.

properties of cyclic codes (2)
Lemma: if (a_{n-1}, a_{n-2}, ..., a_0) ∈ C, then (a_{n-2}, ..., a_0, a_{n-1}) ∈ C.
Proof: let W(x) = a_{n-1}x^{n-1} + ... + a_0 and W'(x) = a_{n-2}x^{n-1} + ... + a_0·x + a_{n-1}.
W(x) is a multiple of G(x) because (a_{n-1}, a_{n-2}, ..., a_0) ∈ C, and
W'(x) = a_{n-2}x^{n-1} + ... + a_0·x + a_{n-1} = x·W(x) + a_{n-1}(x^n + 1).
x·W(x) is a multiple of G(x), and so is a_{n-1}(x^n + 1) by construction step 1; therefore W'(x) is a multiple of G(x) and (a_{n-2}, ..., a_0, a_{n-1}) ∈ C.
A cyclic code C is closed under cyclic shift.

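A quick sanity check of the lemma on the code just constructed (this continues the previous sketch, so codewords and n are assumed to be defined):

```python
def cyclic_shift(word: int, n: int) -> int:
    """Rotate an n-bit word left by one position."""
    msb = (word >> (n - 1)) & 1
    return ((word << 1) & ((1 << n) - 1)) | msb

code = set(codewords)
assert all(cyclic_shift(c, n) in code for c in code)   # closed under cyclic shift
```
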
three approaches for encoding
Three approaches to an encoding procedure:
- matrix approach: use a generator matrix... takes no advantage of the cyclic structure
- multiplication approach: codeword = (information bits) × G(x); the resulting code is not systematic (cf. the construction example above)
- division approach: slightly more complicated, but it makes the code systematic and is easily implemented with shift registers

3-step encoding by division
1. A(x) = polynomial of the information bits
2. B(x) = remainder of A(x)·x^m divided by G(x) (the quotient is not needed)
3. codeword W(x) = A(x)·x^m + B(x)
Example: encode 011 with n = 7, k = 3, m = 4, G(x) = x^4 + x^3 + x^2 + 1:
1. A(x) = x + 1
2. A(x)·x^4 = x^5 + x^4 = x·(x^4 + x^3 + x^2 + 1) + (x^3 + x), so B(x) = x^3 + x
3. A(x)·x^4 + B(x) = x^5 + x^4 + x^3 + x, i.e. the codeword is 0111010

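A sketch of this division-based encoder, reusing gf2_divmod from above (encode_cyclic is an illustrative name):

```python
def encode_cyclic(data: int, G: int) -> int:
    """Systematic cyclic encoding: codeword = A(x)*x^m + remainder."""
    m = G.bit_length() - 1               # number of parity bits
    shifted = data << m                  # A(x) * x^m
    _, B = gf2_divmod(shifted, G)        # only the remainder is needed
    return shifted | B                   # information bits followed by parity bits

# the slide's example: data 011 with G(x) = x^4+x^3+x^2+1 gives 0111010
assert encode_cyclic(0b011, 0b11101) == 0b0111010
```
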
did we really perform "encoding"?
Simple question: is A(x)·x^m + B(x) really a codeword, i.e. is it divisible by G(x)?
Yes. Note that B(x) is the remainder of A(x)·x^m, and in binary arithmetic A(x)·x^m + B(x) = A(x)·x^m - B(x).
"A(x)·x^m - B(x)" means "removing the remainder", so A(x)·x^m - B(x) is divisible by G(x) with no remainder left.

example
n = 7, k = 3, m = 4, G(x) = x^4 + x^3 + x^2 + 1

data | A(x)·x^4 | B(x) | codeword A(x)·x^4 + B(x)
000  | 0000000  | 0000 | 0000000
001  | 0010000  | 1101 | 0011101
010  | 0100000  | 0111 | 0100111
011  | 0110000  | 1010 | 0111010
100  | 1000000  | 1110 | 1001110
101  | 1010000  | 0011 | 1010011
110  | 1100000  | 1001 | 1101001
111  | 1110000  | 0100 | 1110100

This is a systematic code. The encoder is essentially a division circuit, so encoding costs O(n), compared with the O(n^2) of a matrix multiplication.

error "detection" with cyclic codes
Error detection is easy for cyclic codes: u ∈ C if and only if u (in polynomial representation) is divisible by G(x).
Divide the received word u by G(x): remainder = 0 means no error is detected; remainder ≠ 0 means an error is detected.
No parity-check matrix is needed, a division circuit suffices; moreover, encoding and error detection can share one division circuit, which reduces the cost of realization.
This is the Cyclic Redundancy Check (CRC), used in many communication systems.

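A minimal CRC-style check along these lines, again reusing gf2_divmod:

```python
def detect_error(received: int, G: int) -> bool:
    """Return True if the received word is NOT a multiple of G(x)."""
    _, remainder = gf2_divmod(received, G)
    return remainder != 0

assert detect_error(0b0111010, 0b11101) is False   # a codeword: no error detected
assert detect_error(0b0111011, 0b11101) is True    # one bit flipped: error detected
```
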
error "correction" of cyclic codes
- a general algorithm that works for all cyclic codes: the error-trapping decoder [Kasami 61]
- special algorithms for special cyclic codes: the Berlekamp-Massey algorithm for BCH (Bose-Chaudhuri-Hocquenghem) codes and Reed-Solomon codes
(portraits on the slide: Tadao Kasami 1930-2007, E. Berlekamp 1940-, J. L. Massey 1934-2013, I. Reed 1923-2012 and G. Solomon 1930-1996)

intermission, and on to convolutional codes

the channel and modulation
We have considered "digital channels", but at the physical layer almost all channels are continuous (analogue).
digital channel = modulator + continuous channel + demodulator
A naive demodulator translates the waveform into 0 or 1; by doing so we are losing something...

"more informative" demodulators
From the viewpoint of error correction, the waveform contains more information than the binary output of the demodulator.
A demodulator with multi-level output (e.g. "definitely 0", "maybe 0", "maybe 1", "definitely 1") can help error correction.
To make use of such a demodulator, the decoding algorithm must be able to handle multi-level inputs.

hard-decision vs. soft-decision
- hard-decision decoding: the input to the decoder is binary (0 or 1); the decoding algorithms discussed so far are of the hard-decision type
- soft-decision decoding: the input to the decoder can take three or more levels; the "check matrix and syndrome" approach does not work, and the use of polynomials is not obvious; more complicated, but it should have more power

soft-decision decoding as an optimization problem
Outputs of the demodulator: 0+ (definitely 0), 0- (maybe 0), 1- (maybe 1), 1+ (definitely 1).
Penalties (hard-decision decoding corresponds to penalty = Hamming distance):

received        0+  0-  1-  1+
penalty of "0"   0   1   2   3
penalty of "1"   3   2   1   0

Code C = {00000, 01011, 10101, 11110}. For a received vector r = 0- 0+ 1+ 0- 1-, find the codeword that minimizes the total penalty:
c0 = 00000: penalty 1 + 0 + 3 + 1 + 2 = 7
c2 = 10101: penalty 2 + 0 + 0 + 1 + 1 = 4
The smaller the penalty, the more likely the codeword.

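The penalty minimization can be spelled out directly; the penalty table and the received vector below are the slide's, the code itself is only a sketch:

```python
PENALTY = {            # received symbol -> (penalty of sending 0, penalty of sending 1)
    "0+": (0, 3), "0-": (1, 2), "1-": (2, 1), "1+": (3, 0),
}

def total_penalty(codeword, received):
    return sum(PENALTY[sym][int(bit)] for bit, sym in zip(codeword, received))

code = ["00000", "01011", "10101", "11110"]
r = ["0-", "0+", "1+", "0-", "1-"]
for c in code:
    print(c, total_penalty(c, r))
# 00000 scores 7 and 10101 scores 4, as on the slide; the decoder picks the minimum
```
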
algorithms for soft-decision decoding
We have just formalized the problem... how can we solve it?
- by exhaustive search? not practical for codes with many codewords
- by matrix operations? we would need to solve an integer program, which is NP-hard
- by approximation? yes, this is one practical approach
In any case, another route is to design a special code for which soft-decision decoding is not "too difficult": the convolutional code.

convolutional codes
The codes we have studied so far are block codes: a block of k data bits is encoded into a codeword of length n, and the encoding is done independently from block to block.
Convolutional codes encode in a bit-by-bit manner: previous inputs are stored in shift registers inside the encoder and affect future encoding (the input data feeds the registers, and combinational logic produces the encoder outputs).

encoding of a convolutional code
- at the beginning, the contents of the registers are all 0
- when a data bit is given, the encoder outputs several bits and the contents of the registers are shifted by one bit
- after encoding the data, feed 0's until all registers are filled with 0 again
Example encoder with registers r3, r2, r1: constraint length = 3 (= the number of registers), so each output is constrained by the three previous input bits.

encoding example (1)
To encode 1101 (each row shows the register contents r3 r2 r1, the input bit, and the two output bits):
000, input 1 → output 11
001, input 1 → output 10
011, input 0 → output 01
110, input 1 → output 10
then give additional 0's to push the remaining 1's out of the registers...

encoding example (2)
Flushing with 0's:
101, input 0 → output 00
010, input 0 → output 00
100, input 0 → output 01
The complete output is 11 10 01 10 00 00 01.

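A sketch of this encoder in Python. The slide does not spell out its combinational logic, so the taps below (first output bit = the input bit, second output bit = input ⊕ r1 ⊕ r3) are a reconstruction chosen so that the input 1101 reproduces the slide's output 11 10 01 10 00 00 01:

```python
def conv_encode(data):
    """Rate-1/2 convolutional encoder with three registers (constraint length 3)."""
    r1 = r2 = r3 = 0
    out = []
    for u in data + [0, 0, 0]:            # flush with 0's to return to the all-zero state
        out += [u, u ^ r1 ^ r3]           # two output bits per input bit (assumed taps)
        r3, r2, r1 = r2, r1, u            # shift the registers by one position
    return out

bits = conv_encode([1, 1, 0, 1])
print("".join(map(str, bits)))            # 11100110000001, i.e. 11 10 01 10 00 00 01
```
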
the encoder as a finite-state machine
An encoder with k registers has 2^k internal states; an input causes a state transition, accompanied by output bits.
Example with two registers: the internal state is (r2, r1) ∈ {(0,0), (0,1), (1,0), (1,1)}, and each transition in the state diagram is labelled "input / output" (for example 0/00, 1/11, 0/01, 1/10).
Constraint: the initial state and the final state are both the all-zero state.

encoding = state transition
(the figure traces an encoding as a walk through the state diagram, one transition per input bit, with edges for input 0 and input 1 labelled by their output bits)

at the receiver's end
The receiver knows the state diagram, the initial state and the final state, and the transmitted sequence, but the sequence is corrupted by errors.
To correct errors is to estimate the most likely sequence of state transitions.

trellis diagram
Expand the state diagram along the time axis: a trellis diagram has one copy of the state set per time step, edges for input 0 and input 1 labelled with their outputs, and it runs from the initial state at time 0 to the final state at the last time step.

trellis diagram and code
Possible encoded sequences = paths connecting the initial state to the final state.
The transmitted sequence = the path with the minimum penalty.
Error correction = a shortest-path problem.

Viterbi algorithm
Given a received sequence, the demodulator defines penalties for the symbols at each position; the penalties are assigned to the edges of the trellis diagram, and the path with the minimum penalty is found with a good algorithm.
The Viterbi algorithm is essentially Dijkstra's algorithm over a trellis diagram: a recursive breadth-first search. If a state can be reached from predecessor A with accumulated penalty p_A over an edge of penalty q_A, or from predecessor B with p_B over q_B, the minimum penalty of that state is min(p_A + q_A, p_B + q_B).
(portrait on the slide: Andrew Viterbi, 1935-)

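A soft-decision Viterbi sketch for the encoder reconstructed above (same assumed taps; the four-level penalty table is the one from the soft-decision slide). This is an illustration under those assumptions, not the lecture's exact decoder:

```python
PENALTY = {"0+": (0, 3), "0-": (1, 2), "1-": (2, 1), "1+": (3, 0)}

def step(state, u):
    """One encoder transition; state holds the registers (r3 r2 r1) as a 3-bit int."""
    r3, r1 = (state >> 2) & 1, state & 1
    out = (u, u ^ r1 ^ r3)                     # same taps as the encoder sketch
    return out, ((state << 1) | u) & 0b111     # shift the registers, r1 <- u

def viterbi(received):
    """received: demodulator symbols, two per trellis step; returns the data bits."""
    steps = len(received) // 2
    cost, back = {0: 0}, []                    # start in the all-zero state
    for t in range(steps):
        syms = received[2 * t: 2 * t + 2]
        new_cost, choice = {}, {}
        for s, c in cost.items():
            for u in (0, 1):
                out, ns = step(s, u)
                branch = sum(PENALTY[sym][bit] for sym, bit in zip(syms, out))
                if c + branch < new_cost.get(ns, float("inf")):
                    new_cost[ns], choice[ns] = c + branch, (s, u)
        cost = new_cost
        back.append(choice)
    state, bits = 0, []                        # trace back from the all-zero final state
    for step_choice in reversed(back):
        state, u = step_choice[state]
        bits.append(u)
    return list(reversed(bits))[: steps - 3]   # drop the three flush bits

# example: encode 1101 and pretend the demodulator was completely sure of every bit
symbols = ["1+" if b else "0+" for b in conv_encode([1, 1, 0, 1])]
print(viterbi(symbols))                        # [1, 1, 0, 1]
```
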
soft-decision decoding of convolutional codes
The complexity of the Viterbi algorithm is roughly the size of the trellis diagram.
- for convolutional codes (constraint length k): trellis size ≈ 2^k × data length... manageable, so we can extract 100% of the code's performance
- for block codes: trellis size ≈ 2^(data length)... too large, so it is difficult to extract the full performance
In short: block codes have high potential but are difficult to use to the full; convolutional codes have more moderate potential, but their full power is available.

summary
- cyclic codes: scalable and with good mathematical structure; some good codes have been discovered; error correction is not straightforward
- convolutional codes: soft-decision decoding is practically realizable; there is no good algorithm for constructing good codes, so they are designed by trial and error with a computer
Both are still widely used, but "better" codes have been studied recently...