Published by Raymond Fields. Modified over 9 years ago.
1
Information Theory: Linear Block Codes
Jalal Al Roumy
2
Hamming distance
The intuitive concept of "closeness" of two words is formalized through the Hamming distance d(x, y) of words x, y. For two words (or vectors) x, y: d(x, y) = the number of positions in which x and y differ. Example: d(10101, 01100) = 3, d(first, fifth) = 3.
Properties of Hamming distance:
(1) d(x, y) = 0 iff x = y
(2) d(x, y) = d(y, x)
(3) d(x, z) ≤ d(x, y) + d(y, z) (triangle inequality)
An important parameter of a code C is its minimum distance, d(C) = min {d(x, y) | x, y ∈ C, x ≠ y}, because it gives the smallest number of errors needed to change one codeword into another.
Theorem (basic error-correcting theorem):
(1) A code C can detect up to s errors if d(C) ≥ s + 1.
(2) A code C can correct up to t errors if d(C) ≥ 2t + 1.
Note: for binary linear codes, d(C) = the smallest weight w(C) of a non-zero codeword.
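The two definitions above translate directly into code. A minimal sketch in Python (the helper names `hamming` and `min_distance` are my own, not from the slides):

```python
from itertools import combinations

def hamming(x, y):
    """d(x, y): number of positions in which x and y differ."""
    assert len(x) == len(y)
    return sum(a != b for a, b in zip(x, y))

def min_distance(code):
    """d(C): smallest distance between two distinct codewords."""
    return min(hamming(x, y) for x, y in combinations(code, 2))

print(hamming("10101", "01100"))                           # 3
print(hamming("first", "fifth"))                           # 3
print(min_distance(["00000", "01101", "10110", "11011"]))  # 3
```

By the theorem, the last code (with d = 3) can detect up to 2 errors and correct 1.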
3
Some notation
An (n,M,d)-code C is a code such that
n is the length of the codewords,
M is the number of codewords,
d is the minimum distance of C.
Example:
C1 = {00, 01, 10, 11} is a (2,4,1)-code.
C2 = {000, 011, 101, 110} is a (3,4,2)-code.
C3 = {00000, 01101, 10110, 11011} is a (5,4,3)-code.
Comment: a good (n,M,d)-code has small n and large M and d.
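The (n,M,d) parameters of the three example codes can be checked mechanically. A sketch in Python (the helper `params` is an assumed name, not from the slides):

```python
from itertools import combinations

def params(code):
    """Return (n, M, d) for a code given as a list of equal-length words."""
    d = min(sum(a != b for a, b in zip(x, y))
            for x, y in combinations(code, 2))
    return (len(code[0]), len(code), d)

print(params(["00", "01", "10", "11"]))              # (2, 4, 1)
print(params(["000", "011", "101", "110"]))          # (3, 4, 2)
print(params(["00000", "01101", "10110", "11011"]))  # (5, 4, 3)
```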
4
Code rate
For a q-ary (n,M,d)-code we define the code rate, or information rate, R, by
R = log_q(M) / n.
The code rate represents the ratio of the number of input data symbols to the number of transmitted code symbols. For a Hadamard code, for example, this is an important parameter for real implementations, because it shows what fraction of the bandwidth is being used to transmit actual data. Recall that log2(n) = ln(n)/ln(2).
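A minimal sketch of the rate formula in Python (the function name `code_rate` is my own; the formula R = log_q(M)/n is the one defined above):

```python
import math

def code_rate(n, M, q=2):
    """R = log_q(M) / n for a q-ary (n, M, d)-code."""
    return math.log(M, q) / n

print(code_rate(5, 4))   # C3, a (5,4,3)-code: log2(4)/5 = 0.4
print(code_rate(3, 4))   # C2, a (3,4,2)-code: log2(4)/3 = 2/3
```

Note that for a linear [n,k]-code, M = q^k, so this reduces to R = k/n, the form used on a later slide.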
5
Equivalence of codes
Definition: Two q-ary codes are called equivalent if one can be obtained from the other by a combination of operations of the following types:
(a) a permutation of the positions of the code;
(b) a permutation of the symbols appearing in a fixed position.
Let a code be displayed as an M × n matrix. Operation (a) then corresponds to a permutation of the columns, and operation (b) to a permutation of the symbols within a single column.
Distances between codewords are unchanged by operations (a) and (b). Consequently, equivalent codes have the same parameters (n,M,d) (and correct the same number of errors).
Examples of equivalent codes
Lemma: Any q-ary (n,M,d)-code over an alphabet {0,1,…,q−1} is equivalent to an (n,M,d)-code which contains the all-zero codeword 00…0.
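The distance-preserving claim can be checked directly: apply an operation of each type to C2 and compare the multiset of pairwise distances. A sketch in Python (the particular permutations chosen are my own examples):

```python
from itertools import combinations

def hamming(x, y):
    return sum(a != b for a, b in zip(x, y))

def distances(code):
    """Sorted multiset of pairwise distances."""
    return sorted(hamming(x, y) for x, y in combinations(code, 2))

C = ["000", "011", "101", "110"]
# (a) permute the positions: reverse the order of the three positions
Ca = ["".join(w[i] for i in (2, 1, 0)) for w in C]
# (b) permute symbols in a fixed position: flip the bit in position 1
flip = {"0": "1", "1": "0"}
Cb = [w[0] + flip[w[1]] + w[2] for w in C]

print(distances(C) == distances(Ca) == distances(Cb))  # True
```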
6
The main coding theory problem
A good (n,M,d)-code has small n, large M and large d. The main coding theory problem is to optimize one of the parameters n, M, d for given values of the other two.
Notation: A_q(n,d) is the largest M such that there is a q-ary (n,M,d)-code.
7
Introduction to linear codes
8
Linear block codes
Information is divided into blocks of length k.
r parity bits, or check bits, are added to each block (total length n = k + r).
Code rate R = k/n.
The decoder looks for the codeword closest to the received vector (code vector + error vector).
Tradeoffs between:
- efficiency
- reliability
- encoding/decoding complexity
9
Linear block codes
The parity-check matrix H is used to detect errors in the received word, using the fact that c · H^T = 0 (the null vector) for every codeword c.
Let x = c + e be the received word, where c is the correct codeword and e is the error vector. Compute the syndrome
S = x · H^T = (c + e) · H^T = c · H^T + e · H^T = e · H^T.
If S is 0 then the message is correct; otherwise there are errors in it, and from commonly known error patterns the correct message can be decoded.
Operation of the generator matrix and the parity-check matrix:
Message vector m → Generator matrix G → Code vector c
Code vector c → Parity-check matrix H^T → Null vector 0
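The derivation S = x·H^T = e·H^T can be demonstrated numerically. A sketch in Python, assuming a (7,4) code with H^T = [P ; I_3] for an example parity submatrix P (this particular matrix is my assumption, not from the slides):

```python
def syndrome(v, Ht):
    """S = v * H^T, arithmetic mod 2."""
    return [sum(v[i] * Ht[i][j] for i in range(len(Ht))) % 2
            for j in range(len(Ht[0]))]

# Assumed H^T for a (7,4) code: rows of P, then the 3x3 identity
Ht = [[1,1,0], [1,0,1], [0,1,1], [1,1,1],
      [1,0,0], [0,1,0], [0,0,1]]

c = [1, 0, 1, 1, 0, 1, 0]          # a codeword: its syndrome is the zero vector
e = [0, 0, 0, 0, 0, 1, 0]          # a single-bit error
x = [(a + b) % 2 for a, b in zip(c, e)]  # received word x = c + e

print(syndrome(c, Ht))                     # [0, 0, 0]
print(syndrome(x, Ht) == syndrome(e, Ht))  # True: S(x) = S(e)
```

The nonzero syndrome flags the error without the decoder ever knowing c.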
10
Linear block codes
The codeword c of a linear block code is c = m G, where m is the information (message) block of length k and G is the k × n generator matrix. G = [I_k | P], where I_k is the k × k identity matrix. The parity-check matrix is H = [P^T | I_{n−k}], where P^T is the transpose of the matrix P.
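Building G = [I_k | P] and H = [P^T | I_{n−k}] this way guarantees G·H^T = P + P = 0 over GF(2), which is why every codeword has zero syndrome. A sketch in Python (the P submatrix is an assumed (7,4) example, not from the slides):

```python
def mmm2(A, B):
    """Matrix product mod 2."""
    return [[sum(A[i][t] * B[t][j] for t in range(len(B))) % 2
             for j in range(len(B[0]))] for i in range(len(A))]

k, n = 4, 7
ident = lambda m: [[int(i == j) for j in range(m)] for i in range(m)]
P = [[1,1,0], [1,0,1], [0,1,1], [1,1,1]]   # assumed parity submatrix
G = [ident(k)[i] + P[i] for i in range(k)]  # G = [I_k | P], k x n
H = [[P[i][j] for i in range(k)] + ident(n - k)[j]
     for j in range(n - k)]                 # H = [P^T | I_{n-k}], (n-k) x n
Ht = [list(col) for col in zip(*H)]         # H^T, n x (n-k)

def encode(m):
    """c = m G (mod 2)."""
    return [sum(m[i] * G[i][j] for i in range(k)) % 2 for j in range(n)]

print(mmm2(G, Ht))           # 4x3 zero matrix: G H^T = 0 over GF(2)
print(encode([1, 0, 1, 1]))  # [1, 0, 1, 1, 0, 1, 0]
```

Because G is in standard form, the first k symbols of every codeword are the message itself; the last n−k symbols are the parity checks.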
11
Forming the generator matrix
The generator matrix is formed from the list of codewords by ignoring the all-zero vector and the linear combinations, e.g. by taking k linearly independent codewords as its rows.
12
Equivalent linear [n,k]-codes
Two k × n matrices generate equivalent linear codes over GF(q) if one matrix can be obtained from the other by a sequence of operations of the following types:
(R1) permutation of rows;
(R2) multiplication of a row by a non-zero scalar;
(R3) addition of a scalar multiple of one row to another;
(C1) permutation of columns;
(C2) multiplication of any column by a non-zero scalar.
The row operations (R) preserve the linear independence of the rows of the generator matrix and simply replace one basis by another basis of the same code. The column operations (C) convert the generator matrix to one for an equivalent code.
13
Transforming the generator matrix
Transforming to the form G = [I_k | P].
14
Encoding with the generator
Codewords are formed as c = u G, where u is the message vector and G the generator matrix.
15
Parity-check matrix
A parity-check matrix H for an [n,k]-code C is an (n−k) × n matrix such that x · H^T = 0 iff x ∈ C. A parity-check matrix for C is a generator matrix for the dual code C⊥. If G = [I_k | A] is the standard-form generator matrix for an [n,k]-code C, then a parity-check matrix for C is H = [−A^T | I_{n−k}]. A parity-check matrix of the form [B | I_{n−k}] is said to be in standard form.
16
Decoding using the Slepian standard array
An elegant nearest-neighbour decoding scheme was devised by Slepian in 1960. It rests on three facts:
every vector in V(n, q) is in some coset of C;
every coset contains exactly q^k vectors;
two cosets are either disjoint or coincide.
17
Syndrome decoding
Suppose C is a q-ary [n,k]-code with parity-check matrix H. For any vector y ∈ V(n, q), the row vector S(y) = y H^T is called the syndrome of y. Two vectors have the same syndrome iff they lie in the same coset of C.
18
Decoding procedure
The rules: given a received vector y,
(1) compute the syndrome S(y) = y H^T;
(2) find the coset leader e (a vector of minimum weight in its coset) with S(e) = S(y);
(3) decode y as x = y − e.
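Syndrome decoding can be sketched end to end: precompute a table mapping each syndrome to its coset leader, then look up and subtract. The sketch below assumes a single-error-correcting (7,4) code whose coset leaders are exactly the single-bit errors; the P submatrix is an assumed example, not from the slides:

```python
def mvm2(v, M):
    """Vector-matrix product mod 2 (v is 1 x n, M is n x m)."""
    return tuple(sum(v[i] * M[i][j] for i in range(len(M))) % 2
                 for j in range(len(M[0])))

P = [[1,1,0], [1,0,1], [0,1,1], [1,1,1]]   # assumed parity submatrix
Ht = P + [[1,0,0], [0,1,0], [0,0,1]]        # H^T = [P ; I_3], 7 x 3

# Syndrome -> coset leader table (leaders here are the single-bit errors)
leaders = {mvm2(e, Ht): e
           for e in ([0]*i + [1] + [0]*(6 - i) for i in range(7))}
leaders[(0, 0, 0)] = [0] * 7               # zero syndrome: no error

def decode(y):
    """Compute S(y), look up the coset leader, subtract it."""
    e = leaders[mvm2(y, Ht)]
    return [(a + b) % 2 for a, b in zip(y, e)]

c = [1, 0, 1, 1, 0, 1, 0]                  # a codeword of the assumed code
y = c[:]; y[2] ^= 1                        # flip one bit in transit
print(decode(y) == c)                      # True: the error is corrected
```

Over GF(2), subtraction equals addition, so step (3) is implemented as XOR with the coset leader.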
19
Example