two practical classes of channel (error-correcting) codes (p.1)
- cyclic codes (巡回符号): a subclass of linear codes; linear-time encoding and error detection (not correction)
- convolutional codes (畳み込み符号): a different principle from linear codes; optimal soft-decision decoding is available
both codes are widely used today, but are expected to be replaced by "next-generation" codes
what's wrong with linear codes? (p.2)
channel codes evolution (p.3)
[figure: a map of channel (error-correcting) codes: general codes, linear codes (linear block codes), convolutional codes, cyclic codes, LDPC codes, turbo codes, BCH, Reed-Solomon, Hamming and Golay codes; annotations: "more structure... more efficiency... more power...", "this class (& next class)", "next class"]
cyclic codes (p.4)
[figure: cyclic codes are a subclass of linear codes, which are a subclass of all codes]
preliminary (1)
Binary vectors can be written as binary polynomials (多項式), e.g. 11111 ↔ x^4 + x^3 + x^2 + x + 1
addition (= subtraction) ⇒ bitwise XOR:
  (x^4 + x^3 + x^2 + 1) + (x^3 + x + 1) = x^4 + x^2 + x
multiplication ⇒ long multiplication with XOR (no carries):
  (x^4 + x^3 + x^2 + 1) × (x^3 + x + 1) = x^7 + x^6 + x^3 + x^2 + x + 1
multiplication by x^m = left-shift by m bits
(a code sketch of these operations follows below)
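These operations are easy to model in software. Below is a minimal sketch (mine, not from the lecture) that represents a binary polynomial as a Python integer whose bit i is the coefficient of x^i; it reproduces the addition and multiplication examples above.

```python
# Binary polynomials as Python ints: bit i holds the coefficient of x^i.

def poly_add(a: int, b: int) -> int:
    """Addition (= subtraction) of binary polynomials is bitwise XOR."""
    return a ^ b

def poly_mul(a: int, b: int) -> int:
    """Carry-less long multiplication: XOR shifted copies of a."""
    result, shift = 0, 0
    while b:
        if b & 1:                  # coefficient of x^shift in b is 1
            result ^= a << shift   # multiplication by x^shift = left-shift by shift bits
        b >>= 1
        shift += 1
    return result

if __name__ == "__main__":
    p = 0b11101   # x^4 + x^3 + x^2 + 1
    q = 0b01011   # x^3 + x + 1
    print(bin(poly_add(p, q)))   # 0b10110    = x^4 + x^2 + x
    print(bin(poly_mul(p, q)))   # 0b11001111 = x^7 + x^6 + x^3 + x^2 + x + 1
```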
preliminary (2) (p.6)
division ⇒ long division with XOR (no borrows)
example: (x^6 + x^4) ÷ (x^4 + x^3 + x^2 + 1) = x^2 + x + 1 with remainder x + 1
  x^6 + x^4            → subtract x^2·(x^4 + x^3 + x^2 + 1) = x^6 + x^5 + x^4 + x^2, leaving x^5 + x^2
  x^5 + x^2            → subtract x·(x^4 + x^3 + x^2 + 1) = x^5 + x^4 + x^3 + x, leaving x^4 + x^3 + x^2 + x
  x^4 + x^3 + x^2 + x  → subtract 1·(x^4 + x^3 + x^2 + 1), leaving the remainder x + 1
the division circuit is easily implemented by a shift register
division circuit (1)
divide p(x) = x^6 + x^4 (the dividend) by q(x) = x^4 + x^3 + x^2 + 1 (the divisor):
1. store p(x) in the registers
2. if the MSB = 1, the AND gates are activated and the registers are XOR'ed with q(x)
3. left-shift, and repeat from step 2
[figure: shift-register division circuit; the quotient (商) bits are produced as the word shifts out, and the remainder (剰余) is left in the registers]
(a software model of the same procedure follows below)
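Here is a small software model of the same procedure (my own sketch, not the slide's circuit): repeatedly XOR the divisor into the dividend, aligned with the current most significant bit, and record one quotient bit per alignment. It reproduces the division example of the previous slide.

```python
# GF(2) polynomial long division, mirroring the shift-register circuit:
# whenever the leading bit of the running remainder reaches the divisor's
# degree, XOR in a shifted copy of the divisor and record a quotient bit.

def poly_divmod(dividend: int, divisor: int):
    """Return (quotient, remainder) of binary-polynomial division."""
    assert divisor != 0
    deg = divisor.bit_length() - 1
    quotient, remainder = 0, dividend
    while remainder and remainder.bit_length() - 1 >= deg:
        shift = (remainder.bit_length() - 1) - deg
        quotient |= 1 << shift          # one quotient bit
        remainder ^= divisor << shift   # cancel the current leading term
    return quotient, remainder

if __name__ == "__main__":
    p = 0b1010000   # p(x) = x^6 + x^4
    q = 0b0011101   # q(x) = x^4 + x^3 + x^2 + 1
    quo, rem = poly_divmod(p, q)
    print(bin(quo), bin(rem))   # 0b111 0b11: quotient x^2 + x + 1, remainder x + 1
```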
division circuit (2)
[figure: worked operation of the division circuit, step by step]
definition of cyclic codes
[figure: the definition of cyclic codes in terms of a generator polynomial G(x) that divides x^7 + 1]
construction of a cyclic code (p.10)
Step 1: choose a degree-m polynomial G(x) that divides x^n + 1.
Step 2: C = {multiples of G(x) with degree < n}
example with n = 7, m = 4: G(x) = x^4 + x^3 + x^2 + 1
C = { 0·G(x), 1·G(x), x·G(x), (x+1)·G(x), x^2·G(x), (x^2+1)·G(x), (x^2+x)·G(x), (x^2+x+1)·G(x) }
(a small enumeration sketch follows below)
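A short enumeration sketch (mine) of Step 2 for this example: the eight codewords are exactly the multiples f(x)G(x) with deg f(x) < 3.

```python
# Enumerate the n = 7 cyclic code generated by G(x) = x^4 + x^3 + x^2 + 1:
# every codeword is f(x) * G(x) for some f(x) of degree < 3.

def poly_mul(a: int, b: int) -> int:
    """Carry-less multiplication of binary polynomials."""
    result = 0
    while b:
        if b & 1:
            result ^= a
        a <<= 1
        b >>= 1
    return result

N, G = 7, 0b11101                      # G(x) = x^4 + x^3 + x^2 + 1, degree m = 4
K = N - (G.bit_length() - 1)           # k = n - m = 3 information bits
codewords = sorted(poly_mul(f, G) for f in range(2 ** K))
for c in codewords:
    print(format(c, f"0{N}b"))         # 0000000, 0011101, 0100111, 0111010, ...
```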
properties of cyclic codes (1) (p.11)
Anyway, we have defined a set of vectors, but is it a linear code?
Lemma: a cyclic code is a linear code.
proof: show that c1 + c2 ∈ C for any c1, c2 ∈ C:
  c1 ∈ C ⇒ c1 = f1(x)G(x)
  c2 ∈ C ⇒ c2 = f2(x)G(x)
  c1 + c2 = (f1(x) + f2(x))G(x) ∈ C
properties of cyclic codes (2) (p.12)
Lemma: if (a_{n-1}, a_{n-2}, ..., a_0) ∈ C, then (a_{n-2}, ..., a_0, a_{n-1}) ∈ C.
proof: let W(x) = a_{n-1}x^{n-1} + ... + a_0 and W'(x) = a_{n-2}x^{n-1} + ... + a_0·x + a_{n-1}.
W(x) is a multiple of G(x), because (a_{n-1}, a_{n-2}, ..., a_0) ∈ C.
  W'(x) = a_{n-2}x^{n-1} + ... + a_0·x + a_{n-1}
        = a_{n-1}x^n + a_{n-2}x^{n-1} + ... + a_0·x + a_{n-1} + a_{n-1}x^n
        = xW(x) + a_{n-1}(x^n + 1)
xW(x) is a multiple of G(x), and so is a_{n-1}(x^n + 1), because G(x) divides x^n + 1 (construction Step 1).
Hence W'(x) is a multiple of G(x), and (a_{n-2}, ..., a_0, a_{n-1}) ∈ C:
a cyclic code C is closed under cyclic shift (a numerical check follows below).
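For the running n = 7 example, the lemma can be checked numerically; the sketch below (mine) rotates every codeword by one position and confirms the result is still a codeword.

```python
# Check that the cyclic code generated by G(x) = x^4 + x^3 + x^2 + 1 (n = 7)
# is closed under the cyclic shift (a_{n-1}, ..., a_0) -> (a_{n-2}, ..., a_0, a_{n-1}).

def poly_mul(a: int, b: int) -> int:
    result = 0
    while b:
        if b & 1:
            result ^= a
        a <<= 1
        b >>= 1
    return result

def cyclic_shift(word: int, n: int) -> int:
    """Rotate an n-bit word left by one position."""
    msb = (word >> (n - 1)) & 1
    return ((word << 1) & ((1 << n) - 1)) | msb

N, G = 7, 0b11101
code = {poly_mul(f, G) for f in range(8)}    # the 8 multiples of G(x) with degree < 7
assert all(cyclic_shift(c, N) in code for c in code)
print("closed under cyclic shift")
```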
three approaches for encoding (p.13)
three approaches to an encoding procedure:
- matrix approach: use a generator matrix... takes no advantage of the cyclic structure
- multiplication approach: codeword = (information bits) × G(x); the code is not systematic (cf. p.10)
- division approach: slightly complicated (for me), but it makes the code systematic and is easily implemented by shift registers
3-step encoding by division
1. A(x) = the polynomial of the information bits
2. B(x) = the remainder of A(x)x^m divided by G(x)
3. codeword W(x) = A(x)x^m + B(x)
example: encode 011, with n = 7, k = 3, m = 4, G(x) = x^4 + x^3 + x^2 + 1
1. A(x) = x + 1
2. A(x)x^4 = x^5 + x^4 = x(x^4 + x^3 + x^2 + 1) + (x^3 + x), so B(x) = x^3 + x
3. W(x) = A(x)x^4 + B(x) = x^5 + x^4 + x^3 + x, i.e. the codeword is 0111010
[figure: divide the dividend A(x)x^4 by the divisor G(x); the quotient is not needed, the remainder gives B(x)]
(a code sketch of these three steps follows below)
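A sketch of the three steps in code (mine), using the slide's parameters n = 7, k = 3, m = 4, G(x) = x^4 + x^3 + x^2 + 1; encoding 011 reproduces the codeword 0111010 derived above.

```python
# Systematic encoding of a cyclic code: W(x) = A(x) * x^m + B(x), where B(x)
# is the remainder of A(x) * x^m divided by G(x).

def poly_mod(dividend: int, divisor: int) -> int:
    """Remainder of binary-polynomial division."""
    deg = divisor.bit_length() - 1
    while dividend and dividend.bit_length() - 1 >= deg:
        dividend ^= divisor << ((dividend.bit_length() - 1) - deg)
    return dividend

def encode(data: int, g: int, m: int) -> int:
    shifted = data << m        # step 2's dividend: A(x) * x^m
    b = poly_mod(shifted, g)   # B(x)
    return shifted | b         # information bits followed by the parity bits

if __name__ == "__main__":
    G, M, N = 0b11101, 4, 7
    print(format(encode(0b011, G, M), f"0{N}b"))   # 0111010
```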
did we really make an "encoding"? (p.15)
simple question: is A(x)x^m + B(x) really a codeword, i.e. is A(x)x^m + B(x) divisible by G(x)?
Yes. Note that...
- B(x) is the remainder of A(x)x^m divided by G(x)
- in the binary world, A(x)x^m + B(x) = A(x)x^m − B(x)
- "A(x)x^m − B(x)" means "removing the remainder": A(x)x^m − B(x) is divisible by G(x), with no remainder left over
example (n = 7, k = 3, m = 4, G(x) = x^4 + x^3 + x^2 + 1)
data | A(x)      | A(x)x^4        | B(x)        | codeword A(x)x^4 + B(x)
000  | 0         | 0              | 0           | 0000000
001  | 1         | x^4            | x^3+x^2+1   | 0011101
010  | x         | x^5            | x^2+x+1     | 0100111
011  | x+1       | x^5+x^4        | x^3+x       | 0111010
100  | x^2       | x^6            | x^3+x^2+x   | 1001110
101  | x^2+1     | x^6+x^4        | x+1         | 1010011
110  | x^2+x     | x^6+x^5        | x^3+1       | 1101001
111  | x^2+x+1   | x^6+x^5+x^4    | x^2         | 1110100
the code is systematic; encoder ≈ division circuit, with cost O(n), less than the O(n^2) of matrix operations
error "detection" of cyclic codes (p.17)
error "detection" is easy for cyclic codes:
u ∈ C ⇔ u (in polynomial representation) is divisible by G(x)
so divide the received word u by G(x):
  remainder = 0 ... no error detected
  remainder ≠ 0 ... error
no parity check matrix is needed; a division circuit suffices
encoding and error detection can share one division circuit, which reduces the cost of realization
Cyclic Redundancy Check (CRC)... used in many communication systems (a small sketch follows below)
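A minimal sketch of the detection rule: divide the received word by G(x) and flag an error on a non-zero remainder. The polynomial is the running example G(x) = x^4 + x^3 + x^2 + 1; real CRC systems use standardized generator polynomials, which this sketch makes no attempt to model.

```python
# Error detection for a cyclic code: u is accepted iff G(x) divides u(x).

def poly_mod(dividend: int, divisor: int) -> int:
    deg = divisor.bit_length() - 1
    while dividend and dividend.bit_length() - 1 >= deg:
        dividend ^= divisor << ((dividend.bit_length() - 1) - deg)
    return dividend

def error_detected(received: int, g: int) -> bool:
    """True iff the remainder is non-zero, i.e. the word is not a codeword."""
    return poly_mod(received, g) != 0

if __name__ == "__main__":
    G = 0b11101
    codeword = 0b0111010                            # the codeword for data 011
    print(error_detected(codeword, G))              # False ... no error
    print(error_detected(codeword ^ 0b0000100, G))  # True  ... single-bit error detected
```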
error "correction" of cyclic codes (p.18)
general algorithm for all cyclic codes:
- error-trapping decoder [Kasami 61]
special algorithms for special cyclic codes:
- Berlekamp-Massey algorithm for BCH (Bose-Chaudhuri-Hocquenghem) codes and Reed-Solomon codes
[photos: Tadao Kasami; E. Berlekamp, J. L. Massey, I. Reed (left) and G. Solomon]
intermission, and convolutional codes (p.19)
the channel and modulation (p.20)
We have considered "digital channels", but at the physical layer almost all channels are continuous (analogue).
digital channel = modulator (変調器) + continuous channel + demodulator (復調器)
a naive demodulator translates the waveform to 0 or 1: we are losing something.
"more informative" demodulator
From the viewpoint of error correction, the waveform contains more information than the binary output of the demodulator.
Demodulators with multi-level output (definitely 0 / maybe 0 / maybe 1 / definitely 1) can help error correction.
To make use of such a multi-level demodulator, the decoding algorithm must be able to handle multi-level inputs.
hard-decision vs. soft-decision (p.22)
hard-decision decoding:
- the input to the decoder is binary (0 or 1)
- the decoding algorithms discussed so far are of the hard-decision type
soft-decision decoding:
- the input to the decoder can have three or more levels
- the "check matrix and syndrome" approach does not work, and the use of polynomials is not obvious
- more complicated, but should have more power
soft-decision decoding as an optimization problem (p.23)
outputs of the demodulator: 0+ (definitely 0), 0- (maybe 0), 1- (maybe 1), 1+ (definitely 1)
code C = {00000, 01011, 10101, 11110}
for a received vector r, find the codeword which minimizes the penalty:
each received symbol assigns a "penalty of 0" and a "penalty of 1" to its position, and the penalty of a codeword is the sum over all positions; the smaller the penalty, the more likely the codeword
(with hard decisions, the penalty is simply the Hamming distance)
[figure: a table of per-symbol penalties and the penalties of r against codewords c0 and c2]
(an exhaustive-search sketch follows below)
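For a code this small the problem can be solved by brute force. In the sketch below, the code C is taken from the slide, while the numeric penalty table is my own illustrative assumption (the slide's values are only in its figure).

```python
# Soft-decision decoding by exhaustive search: pick the codeword whose total
# penalty against the received (multi-level) symbols is smallest.

C = ["00000", "01011", "10101", "11110"]

# PENALTY[symbol] = (penalty of deciding "0", penalty of deciding "1")
PENALTY = {
    "0+": (0, 3),   # definitely 0
    "0-": (1, 2),   # maybe 0
    "1-": (2, 1),   # maybe 1
    "1+": (3, 0),   # definitely 1
}

def penalty(received, codeword):
    """Sum of per-position penalties of this codeword."""
    return sum(PENALTY[sym][int(bit)] for sym, bit in zip(received, codeword))

def decode(received):
    """The codeword with the smallest penalty is the most likely one."""
    return min(C, key=lambda c: penalty(received, c))

if __name__ == "__main__":
    r = ["0+", "1-", "0-", "1+", "1-"]
    for c in C:
        print(c, penalty(r, c))    # penalties: 8, 3, 11, 8
    print("decoded:", decode(r))   # 01011
```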
algorithms for soft-decision decoding (p.24)
We have just formalized the problem... how can we solve it?
- by exhaustive search? ... not practical for codes with many codewords
- by matrix operations? ... we would need to solve an integer program, which is NP-hard
- by approximation? ... yes, this is one practical approach
anyway... design a special code for which soft-decision decoding is not "too difficult": the convolutional code (畳み込み符号)
convolutional codes (p.25)
the codes we have studied so far are block codes:
- a block of k data bits is encoded into a codeword of length n
- the encoding is done independently from block to block
convolutional codes:
- encoding is done in a bit-by-bit manner
- previous inputs are stored in shift registers in the encoder and affect future encoding
[figure: encoder = shift registers + combinatorial logic mapping the input data to the encoder outputs]
encoding of a convolutional code (p.26)
- at the beginning, the contents of the registers are all 0
- when a data bit is given, the encoder outputs several bits, and the contents of the registers are shifted by one bit
- after encoding, feed 0's until all registers are filled with 0 (see the sketch below)
[figure: example encoder with registers r3, r2, r1; constraint length = 3 (= number of registers): the output is constrained by the three previous input bits]
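The slide's combinatorial logic exists only in its figure, so the sketch below (mine) uses a classic rate-1/2 encoder with two registers and output taps 111 and 101 as a stand-in; both the taps and the two-register memory (matching the example on p.29 rather than the three-register encoder of this slide) are my assumptions.

```python
# Bit-by-bit convolutional encoding with shift registers.  Registers start
# at 0; after the data, 0's are fed in to flush the registers back to 0.

def conv_encode(data_bits, memory=2):
    registers = [0] * memory                  # registers[0] = most recent input
    out = []
    for bit in data_bits + [0] * memory:      # trailing 0's push out the register contents
        out.append(bit ^ registers[0] ^ registers[1])   # output 1: taps 1,1,1
        out.append(bit ^ registers[1])                  # output 2: taps 1,0,1
        registers = [bit] + registers[:-1]              # shift by one bit
    return out

if __name__ == "__main__":
    print(conv_encode([1, 0, 1, 1]))   # -> [1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1]
```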
encoding example (1)
[figure: step-by-step encoding of a data sequence; additional 0's are given to push out the 1's remaining in the registers...]
encoding example (2)
[figure: the remaining encoding steps and the resulting output sequence]
encoder as a finite-state machine (p.29)
An encoder with k registers has 2^k internal states. An input causes a state transition, accompanied by outputs.
[figure: example with two registers; internal state = (r2, r1) ∈ {(0,0), (0,1), (1,0), (1,1)}; each edge is labeled "input / output", e.g. 0/00 and 1/11 out of state (0,0)]
constraint: initial state = final state = (0,0)
(a transition-table sketch follows below)
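The same assumed two-register encoder (taps 111 and 101, see the sketch after p.26) written out as a finite-state machine: enumerate the 2^2 internal states and print every transition with its input/output label.

```python
# State-transition table of a rate-1/2, two-register convolutional encoder.

def step(state, bit):
    """One encoding step: state = (r2, r1); returns (two output bits, next state)."""
    r2, r1 = state
    out = (bit ^ r1 ^ r2, bit ^ r2)
    return out, (r1, bit)          # shift: r2 <- r1, r1 <- input bit

if __name__ == "__main__":
    for state in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        for bit in (0, 1):
            out, nxt = step(state, bit)
            print(f"{state} --{bit}/{out[0]}{out[1]}--> {nxt}")
```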
encoding = state transition
[figure: encoding traced as a walk on the state diagram, one transition per input bit (input 0 or input 1)]
at the receiver's end (p.31)
the receiver knows...
- the state diagram
- the initial state and the final state
- the transmitted sequence, but corrupted by errors
to correct errors = to estimate the most likely sequence of transitions
[figure: encoder → errors → receiver]
trellis diagram
expand the state diagram along the time axis ⇒ trellis diagram
[figure: trellis diagram; the states plotted against time, from the initial state to the final state, with one edge per possible input (0 or 1)]
trellis diagram and code (p.33)
- possible encoded sequences = paths connecting the initial state and the final state
- the transmitted sequence = the path with the minimum penalty
- error correction = a shortest-path problem
Viterbi algorithm (p.34)
given a received sequence...
- the demodulator defines penalties for the symbols at each position
- the penalties are assigned to the edges of the trellis diagram
- find the path with the minimum penalty using a good algorithm
Viterbi algorithm: Dijkstra's algorithm over a trellis diagram; a recursive breadth-first search (a sketch follows below)
[figure: Andrew Viterbi; a trellis state reachable from two predecessors with accumulated penalties p_A, p_B over edges with penalties q_A, q_B: the minimum penalty of this state is min(p_A + q_A, p_B + q_B)]
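A compact Viterbi decoder (my sketch) over the trellis of the same assumed two-register encoder (taps 111 and 101). It takes a pair (penalty of 0, penalty of 1) per transmitted bit, so hard-decision penalties (Hamming distance) and soft-decision penalties both fit; the example uses hard decisions on a received sequence with one flipped bit.

```python
# Viterbi decoding: dynamic programming over the trellis, keeping, for every
# state and time step, only the lowest-penalty path (the "survivor").

def step(state, bit):
    """Assumed encoder: state = (r2, r1), outputs (bit^r1^r2, bit^r2)."""
    r2, r1 = state
    return (bit ^ r1 ^ r2, bit ^ r2), (r1, bit)

def viterbi(penalties):
    """penalties: one (penalty-of-0, penalty-of-1) pair per transmitted bit."""
    n_stages = len(penalties) // 2            # two transmitted bits per input bit
    best = {(0, 0): (0, [])}                  # state -> (accumulated penalty, input bits)
    for t in range(n_stages):
        p0, p1 = penalties[2 * t], penalties[2 * t + 1]
        nxt = {}
        for state, (pen, bits) in best.items():
            for bit in (0, 1):
                out, new_state = step(state, bit)
                total = pen + p0[out[0]] + p1[out[1]]
                if new_state not in nxt or total < nxt[new_state][0]:
                    nxt[new_state] = (total, bits + [bit])
        best = nxt
    pen, bits = best[(0, 0)]                  # initial state = final state = (0, 0)
    return bits[:-2], pen                     # drop the two flushing 0's

if __name__ == "__main__":
    # encoding of 1011 (see the sketch after p.26) with one bit flipped:
    received = [1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1]
    hard_penalties = [(b, 1 - b) for b in received]   # penalty 0 or 1 per position
    print(viterbi(hard_penalties))                    # ([1, 0, 1, 1], 1)
```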
soft-decision decoding for convolutional codes (p.35)
the complexity of the Viterbi algorithm ≈ the size of the trellis diagram
- for convolutional codes (with constraint length k): trellis size ≈ 2^k × data length ... manageable; we can extract 100% of the code's performance
- for block codes: trellis size ≈ 2^(data length) ... too large; it is difficult to extract the full performance
[figure: block codes have high potential but are difficult to use at full power; convolutional codes have moderate potential, but their full power is available]
summary (p.36)
cyclic codes:
- scalable, and have a good mathematical structure
- some good codes have been discovered
- error correction is not straightforward
convolutional codes:
- soft-decision decoding is practically realizable
- no good algorithm for constructing good codes: they are designed in a trial-and-error manner with computers
both are still widely used, but "better" codes have been studied recently...