Information and Coding Theory Linear Codes. Groups, fields and vector spaces - a brief survey. Codes defined as vector subspaces. Dual codes. Juris Viksna


Information and Coding Theory Linear Codes. Groups, fields and vector spaces - a brief survey. Codes defined as vector subspaces. Dual codes. Juris Viksna, 2016

Codes – how to define them? In most cases it is natural to use binary block codes that map input vectors of length k to output vectors of length n. Thus we can define a code as an injective mapping from a vector space V of dimension k to a vector space W of dimension n. Essentially this definition is used in the original Shannon theorem.

Codes – how to define them? Simpler to define and use are linear codes, which can be defined by multiplication with a matrix of size k × n (called a generator matrix). What should the elements of the vector spaces V and W be? In principle, in most cases it will be sufficient to have just 0s and 1s; however, to define a vector space we in principle need a field – an algebraic system with operations “+” and “·” defined and having similar properties to those of ordinary arithmetic (think of the real numbers). A field with just “0” and “1” may look very simple, but it turns out that to make real progress we will need more complicated fields, whose elements will themselves (most often) be regarded as binary vectors.

Groups - definition Consider a set G and a binary operator +.
Definition The pair (G,+) is a group if there is e ∈ G such that for all a,b,c ∈ G:
1) a+b ∈ G
2) (a+b)+c = a+(b+c)
3) a+e = a and e+a = a
4) there exists inv(a) such that a+inv(a) = e and inv(a)+a = e
If additionally a+b = b+a for all a,b ∈ G, the group is commutative (Abelian).
If the group operation is denoted by “+”, then e is usually denoted by 0 and inv(a) by −a. If the group operation is denoted by “·”, then e is usually denoted by 1 and inv(a) by a⁻¹ (and a·b is usually written ab). It is easy to show that e and inv(a) are unique.

Groups - definition
Definition The pair (G,+) is a group if there is e ∈ G such that for all a,b,c ∈ G:
1) a+b ∈ G
2) (a+b)+c = a+(b+c)
3) a+e = a and e+a = a
4) there exists inv(a) such that a+inv(a) = e and inv(a)+a = e
Examples: (Z,+), (Q,+), (R,+); (Q∖{0},·), (R∖{0},·) (but not (Z∖{0},·)); (Z2,+), (Z3,+), (Z4,+); (Z2∖{0},·), (Z3∖{0},·) (but not (Z4∖{0},·)).
A simple non-commutative group: ⟨x,y⟩, where x: abc → cab (a rotation) and y: abc → acb (an inversion).

Groups - definition
(H,+) is a subgroup of (G,+) if H ⊆ G and (H,+) is a group.
H < G - notation that H is a subgroup of G.
⟨x1,…,xk⟩ - the subgroup of G generated by x1,…,xk ∈ G.
o(G) - the number of elements of the group G.
For the first few lectures we just need to remember the group definition. The other facts will become important when discussing finite fields. We will consider only commutative and finite groups!

Lagrange theorem
Theorem If H < G then o(H) | o(G).
Proof For all a ∈ G consider the sets aH = {ah | h ∈ H} (these are called cosets). All elements of a coset aH are distinct, so |aH| = o(H). Each element g ∈ G belongs to some coset aH (for example g ∈ gH). Two cosets aH and bH are either identical or disjoint. Thus G is a union of disjoint cosets, each having o(H) elements, hence o(H) | o(G).
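This counting argument can be checked mechanically on a small example. A sketch (my own illustration, not from the slides), using the group Z12 under addition mod 12 and the subgroup H = {0, 3, 6, 9}:

```python
# Lagrange's theorem illustrated in (Z_12, +): the cosets of a subgroup H
# partition G, each coset has o(H) elements, hence o(H) | o(G).

def cosets(G, H, op):
    """Return the distinct cosets aH = {op(a, h) | h in H} for a in G."""
    return {frozenset(op(a, h) for h in H) for a in G}

G = set(range(12))                 # the group Z_12 under addition mod 12
add = lambda a, b: (a + b) % 12
H = {0, 3, 6, 9}                   # subgroup generated by 3

cs = cosets(G, H, add)
assert all(len(c) == len(H) for c in cs)    # |aH| = o(H) for every coset
assert sum(len(c) for c in cs) == len(G)    # cosets are disjoint and cover G
assert len(G) % len(H) == 0                 # Lagrange: o(H) divides o(G)
```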

Fields - definition Consider a set F and binary operators “+” and “·”.
Definition The triple (F,+,·) is a field if there are 0,1 ∈ F such that for all a,b,c,d ∈ F with d ≠ 0:
1) a+b ∈ F and a·b ∈ F
2) a+b = b+a and a·b = b·a
3) (a+b)+c = a+(b+c) and (a·b)·c = a·(b·c)
4) a+0 = a and a·1 = a
5) there exist −a, d⁻¹ ∈ F such that a+(−a) = 0 and d·d⁻¹ = 1
6) a·(b+c) = a·b+a·c
We can say that F is a field if both (F,+) and (F∖{0},·) are commutative groups and the operators “+” and “·” are linked by property (6).
Examples: (Q,+,·), (R,+,·), (Z2,+,·), (Z3,+,·) (but not (Z4,+,·)).
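The examples can be verified by brute force. The following sketch is my own (the helper `has_inverses` is hypothetical, not from the lecture): it confirms that every non-zero element of Zp has a multiplicative inverse for p = 2, 3, 5 but not in Z4, and that distributivity holds in Z5.

```python
# Z_p is a field for prime p, while Z_4 fails: 2 has no inverse mod 4.

def has_inverses(n):
    """True if every non-zero element of Z_n has a multiplicative inverse."""
    return all(any(a * b % n == 1 for b in range(1, n)) for a in range(1, n))

assert has_inverses(2) and has_inverses(3) and has_inverses(5)
assert not has_inverses(4)              # (Z_4 \ {0}, *) is not a group

# distributivity a*(b+c) = a*b + a*c holds in Z_5:
n = 5
assert all(a * ((b + c) % n) % n == (a * b + a * c) % n
           for a in range(n) for b in range(n) for c in range(n))
```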

Finite fields - some examples [Adapted from V.Pless]

Note that we could use alternative notation and consider the field elements as 2-dimensional binary vectors (and usual vector addition corresponds to operation “+” in the field!): 0 = (0 0), 1 = (0 1), and the two remaining elements ω = (1 0) and ω² = (1 1). Finite fields - some examples [Adapted from V.Pless]
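This vector view of GF(4) can be made concrete in a few lines. A sketch assuming the usual construction GF(4) = GF(2)[x]/(x² + x + 1), with field addition as bitwise XOR (names and encoding are my own choices):

```python
# GF(4) represented as 2-bit vectors (hi, lo), with x ~ (1, 0).

def gf4_add(a, b):
    return (a[0] ^ b[0], a[1] ^ b[1])        # vector addition = XOR

def gf4_mul(a, b):
    # multiply (a0*x + a1)(b0*x + b1) and reduce modulo x^2 + x + 1,
    # i.e. substitute x^2 = x + 1 (coefficients mod 2)
    x2 = a[0] & b[0]
    x1 = (a[0] & b[1]) ^ (a[1] & b[0])
    x0 = a[1] & b[1]
    return (x1 ^ x2, x0 ^ x2)

zero, one, w = (0, 0), (0, 1), (1, 0)
w2 = gf4_mul(w, w)
assert w2 == (1, 1)                  # x^2 = x + 1
assert gf4_mul(w, w2) == one         # x^3 = 1: non-zero elements form a group
assert gf4_add(one, one) == zero     # characteristic 2
```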

Vector spaces - definition What do we usually understand by vectors? In principle we can say that vectors are n-tuples of the form (x1,…,xn), with operations of vector addition and multiplication by a scalar defined componentwise:
(x1,…,xn)+(y1,…,yn) = (x1+y1,…,xn+yn)
a·(x1,…,xn) = (a·x1,…,a·xn)
The requirements actually are a bit stronger – the elements a and xi should come from some field F. We might be able to live with such a definition, but then we would tie a vector space to a unique, fixed basis, and often this would be technically very inconvenient.

Vector spaces - definition Let (V,+) be a commutative group with identity element 0, let F be a field with multiplicative identity 1, and let “·” be an operator mapping F × V to V.
Definition The 4-tuple (V,F,+,·) is a vector space if (V,+) is a commutative group with identity element 0 and for all u,v ∈ V and all a,b ∈ F:
1) a·(u+v) = a·u+a·v
2) (a+b)·v = a·v+b·v
3) a·(b·v) = (a·b)·v
4) 1·v = v
The requirement that (V,+) is a group is used here for brevity of definition – the structure of (V,+) will largely depend on F. Note that the operators “+” and “·” are “overloaded” – the same symbols are used for the field and the vector space operators.

Vector spaces - definition
Definition The 4-tuple (V,F,+,·) is a vector space if (V,+) is a commutative group with identity element 0 and for all u,v ∈ V and all a,b ∈ F:
1) a·(u+v) = a·u+a·v
2) (a+b)·v = a·v+b·v
3) a·(b·v) = (a·b)·v
4) 1·v = v
Usually we will represent vectors as n-tuples of the form (x1,…,xn); however such representations will not be unique and will depend on the particular basis of the vector space which we choose to use (but 0 will always be represented as the n-tuple of zeroes (0,…,0)).

Vector spaces - some terminology A subset S of vectors from a vector space V is a subspace of V if S is itself a vector space. We denote this by S ⊑ V.
A linear combination of v1,…,vk ∈ V (V being defined over a field F) is a vector of the form a1v1+…+akvk where ai ∈ F. For given v1,…,vk ∈ V the set of all linear combinations of these vectors is denoted by ⟨v1,…,vk⟩. It is easy to show that ⟨v1,…,vk⟩ ⊑ V. We also say that the vectors v1,…,vk span the subspace ⟨v1,…,vk⟩.
A set of vectors v1,…,vk is (linearly) independent if a1v1+…+akvk ≠ 0 for all a1,…,ak ∈ F such that ai ≠ 0 for at least one index i.
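The definitions of span and independence can be tested by brute force over GF(2), since there are only 2^k linear combinations of k vectors. A small sketch (my own helpers, not from the slides):

```python
# Brute-force span and independence tests over GF(2); vectors are bit tuples.
from itertools import product

def span(vectors):
    """All linear combinations a1*v1 + ... + ak*vk with ai in {0, 1}."""
    n = len(vectors[0])
    result = set()
    for coeffs in product([0, 1], repeat=len(vectors)):
        combo = tuple(sum(a * v[i] for a, v in zip(coeffs, vectors)) % 2
                      for i in range(n))
        result.add(combo)
    return result

def independent(vectors):
    # independent iff no non-trivial combination gives 0,
    # iff all 2^k combinations are distinct
    return len(span(vectors)) == 2 ** len(vectors)

v1, v2 = (0, 0, 1), (0, 1, 0)
assert span([v1, v2]) == {(0,0,0), (0,0,1), (0,1,0), (0,1,1)}
assert independent([v1, v2])
assert not independent([v1, v2, (0, 1, 1)])    # (0,1,1) = v1 + v2
```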

Independent vectors - some properties A set of vectors v1,…,vk is (linearly) independent if a1v1+…+akvk ≠ 0 for all a1,…,ak ∈ F such that ai ≠ 0 for at least one index i.
Assume that the vectors v1,v2,v3,…,vk are independent. Then the following are also independent:
any permutation of v1,v2,v3,…,vk (this doesn’t change the set :)
av1,v2,v3,…,vk, if a ≠ 0
v1+v2,v2,v3,…,vk
Moreover, these operations do not change the subspace ⟨v1,…,vk⟩ spanned by the initial vectors.

Vector spaces - some results Theorem 1 If v1,…,vk ∈ V span the entire vector space V and w1,…,wr ∈ V is an independent set of vectors, then r ≤ k.
Proof Suppose this is not the case and r ≥ k+1. Since ⟨v1,…,vk⟩ = V we have:
w1 = a11v1+…+a1kvk
w2 = a21v1+…+a2kvk
…
wk = ak1v1+…+akkvk
wk+1 = a(k+1)1v1+…+a(k+1)kvk
We can assume that a11 ≠ 0 (at least some scalar must be non-zero due to the independence of the wi's).

Vector spaces - some results Thus we have b1w1 = v1+c2v2+…+ckvk for some non-zero b1 (namely b1 = a11⁻¹, with the remaining coefficients rescaled accordingly). Subtracting the vector ai1·b1w1 from each wi with i > 1 gives us a new system of equations, with an independent set of k+1 vectors on the left side and without v1 on the right side. By repeating this process k+1 times, starting with the i-th equation in the i-th iteration, we end up with a set of k+1 independent vectors where the i-th vector is expressed as a sum of no more than k+1−i of the vj's. However, this means that the (k+1)-st vector will be 0, and this contradicts the independence of the set of k+1 vectors.

Vector spaces - basis Theorem 2 If two finite sets of independent vectors span a space V, then there are the same number of vectors in each set.
Proof Let k be the number of vectors in the first set and r the number of vectors in the second set. By Theorem 1 we have k ≤ r and r ≤ k.
We will be interested in vector spaces that are spanned by a finite number of vectors, so we assume this from now on. We say that a set of vectors v1,…,vk ∈ V is a basis of V if they are independent and span V.

Vector spaces - basis Theorem 3 Let V be a vector space over a field F. Then the following hold:
1) V has a basis.
2) Any two bases of V contain the same number of vectors.
3) If B is a basis of V then every vector in V has a unique representation as a linear combination of vectors in B.
Proof 1. Let v1 ∈ V and v1 ≠ 0. If V = ⟨v1⟩ we are finished. If not, there is a vector v2 ∈ V ∖ ⟨v1⟩ with v2 ≠ 0. If V = ⟨v1,v2⟩ we are finished. If not, continue until we obtain a basis for V. The process must terminate by our assumption that V is spanned by a finite number k of vectors and by the result of Theorem 1 that a basis cannot have more than k vectors.

Vector spaces - basis Theorem 3 Let V be a vector space over a field F. Then the following hold:
1) V has a basis.
2) Any two bases of V contain the same number of vectors.
3) If B is a basis of V then every vector in V has a unique representation as a linear combination of vectors in B.
Proof 2. This is a direct consequence of Theorem 2.
3. Since the vectors in B span V, any v ∈ V can be written as v = a1b1+…+akbk. If we also have v = c1b1+…+ckbk then a1b1+…+akbk = c1b1+…+ckbk, i.e. (a1−c1)b1+…+(ak−ck)bk = 0, and, since the bi's are independent, ai = ci for all i.

Vector spaces - dimension The dimension of a vector space V, denoted by dim V, is the number of vectors in any basis of V. By Theorem 3 the dimension of V is well defined.
Independent vectors – a few more properties Assume that the set of vectors v1,…,vk ∈ V is independent and that in some fixed basis b1,…,bk ∈ V we have the representations vi = ai1b1+…+aikbk. Then the following sets of vectors obtained from v1,…,vk are also independent:
for some j,l: swapping all aij's with ail's
for some j and non-zero c ∈ F: replacing all aij's with c·aij's
for some l: replacing all aij's with aij+ail's
Note however that these operations can change the subspace ⟨v1,…,vk⟩ spanned by the initial vectors.

Vector spaces and ranks of matrices If M is a matrix whose elements are contained in a field F, then the row rank of M, denoted by rr(M), is defined to be the dimension of the subspace spanned by the rows of M. The column rank of M, denoted by rc(M), is defined to be the dimension of the subspace spanned by the columns of M.
An n × n matrix A is called nonsingular if rr(A) = n. This means that the rows of A are linearly independent.

Vector spaces and ranks of matrices If M is a matrix, the following operations on its rows are called elementary row operations:
1) Permuting rows.
2) Multiplying a row by a nonzero scalar.
3) Adding a scalar multiple of one row to another row.
Similarly, we can define the following elementary column operations:
4) Permuting columns.
5) Multiplying a column by a nonzero scalar.
6) Adding a scalar multiple of one column to another column.
As we saw, none of these operations affects rr(M) or rc(M), although operations 1,2,3 could change the column space of M and operations 4,5,6 could change its row space.

Vector spaces - matrix row-echelon form A matrix M is said to be in row-echelon form if, after possibly permuting its columns, M has the block form
M = ( I  A )
    ( 0  0 )
where I is a k × k identity matrix, A is an arbitrary block, and the remaining rows are zero.
Lemma If M is any matrix, then by applying elementary operations 1,2,3,4 it can be transformed into a matrix M′ in row-echelon form.

Vector spaces - matrix row-echelon form Lemma If M is any matrix, then by applying elementary operations 1,2,3,4 it can be transformed into a matrix M′ in row-echelon form.
Proof (sketch) Proceed iteratively in steps 1…k as follows:
Step i: Perform a row permutation (if one exists) that places in the i-th position a row with a non-zero element in the i-th column; let this row be v. If there is no appropriate row permutation, first perform a column permutation that places a column with a non-zero element in the i-th position. Then add to each of the other rows a vector av, where the scalar a is chosen for each row so that after the addition the row has value 0 in the i-th position.

Vector spaces - matrix row-echelon form Theorem The row rank rr(M) of a matrix M equals its column rank rc(M).
Proof Transform M into M′ in row-echelon form by applying elementary operations 1,2,3,4. This changes neither rr(M) nor rc(M), and it is obvious that for the matrix M′ we have rr(M′) = rc(M′) = k, where k is the size of the identity matrix I.
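The proof can be replayed computationally over GF(2): row reduction leaves both ranks unchanged, so computing the "row-echelon rank" of M and of its transpose must give the same number. A sketch (my own code; the slides work over a general field F, here F = GF(2)):

```python
# Row reduction over GF(2) using only elementary operations 1 and 3;
# the same function applied to M and its transpose illustrates rr(M) = rc(M).

def rank_gf2(rows):
    rows = [list(r) for r in rows]          # work on a copy
    rank, col, ncols = 0, 0, len(rows[0])
    while rank < len(rows) and col < ncols:
        pivot = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if pivot is None:
            col += 1
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]       # operation 1
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[rank])]  # op. 3
        rank, col = rank + 1, col + 1
    return rank

M = [(1, 0, 1, 1),
     (0, 1, 1, 0),
     (1, 1, 1, 1)]
Mt = list(zip(*M))
assert rank_gf2(M) == rank_gf2(Mt) == 3     # row rank equals column rank
```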

Linear codes
[Block diagram: Message source → Encoder → Channel → Decoder → Receiver]
x = x1,…,xk message
c = c1,…,cn codeword
e = e1,…,en error from noise
y = c + e received vector
x′ estimate of message
Generally we will define linear codes as vector spaces – by taking C to be a k-dimensional subspace of some n-dimensional space V.

Codes and linear codes Let V be an n-dimensional vector space over a finite field F.
Definition A code is any subset C ⊆ V.
Definition A linear (n,k) code is any k-dimensional subspace C ⊑ V.
Whilst this definition is largely “standard”, it doesn’t single out any particular basis of V. However, the properties and parameters of a code will vary greatly with the selection of a particular basis. So in fact we assume that V is given with some fixed basis b1,…,bn ∈ V and that all elements of V and C are represented in this particular basis. At the same time we might be quite flexible in choosing a specific basis for C.

Codes and linear codes Let V be an n-dimensional vector space over a finite field F.
Definition A linear (n,k) code is any k-dimensional subspace C ⊑ V.
Example (choices of bases for V and a code C):
Basis of V (fixed): 001, 010, 100
Set of V elements: {000,001,010,011,100,101,110,111}
Set of C elements: {000,001,010,011}
2 alternative bases for the code C: {001, 010} and {001, 011}.
Essentially, we will be ready to consider alternative bases, but will stick to the “main one” for representation of V elements.

Hamming code [7,4] What to do if there are errors?
- we assume that the number of errors is as small as possible, i.e. we find the codeword c (and the corresponding x) that is closest to the received vector y (using Hamming distance)
- consider the three rows a, b, c of the parity-check matrix (e.g. a = 0001111, b = 0110011, c = 1010101 under the usual column ordering)
- if y is received, compute y·a, y·b and y·c (inner products); e.g., for the received y considered here we obtain y·a = 1, y·b = 0 and y·c = 0
- this represents a binary number (100, i.e. 4, in the example above) and we conclude that the error is in the 4th digit
Easy, but why does this method work?
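The decoding recipe above can be sketched in code. This assumes the common convention where column j of the check matrix is the binary expansion of j+1; the slide's concrete vectors a, b, c may be ordered differently:

```python
# Syndrome decoding for the binary [7,4] Hamming code: the syndrome, read as
# a binary number, names the position of a single-bit error (0 = no error).

H = [[0, 0, 0, 1, 1, 1, 1],   # "a": positions 4..7 (high syndrome bit)
     [0, 1, 1, 0, 0, 1, 1],   # "b": positions 2, 3, 6, 7
     [1, 0, 1, 0, 1, 0, 1]]   # "c": positions 1, 3, 5, 7 (low syndrome bit)

def correct(y):
    """Correct a single-bit error in the received 7-bit vector y."""
    s = [sum(h * b for h, b in zip(row, y)) % 2 for row in H]   # syndrome
    pos = s[0] * 4 + s[1] * 2 + s[2]     # read the syndrome as a binary number
    if pos:                              # non-zero syndrome: flip that bit
        y = list(y)
        y[pos - 1] ^= 1
    return y

c = [1, 0, 1, 0, 1, 0, 1]                # a codeword: all syndromes are 0
assert all(sum(h * b for h, b in zip(row, c)) % 2 == 0 for row in H)
y = list(c); y[3] ^= 1                   # introduce an error in the 4th digit
assert correct(y) == c                   # the decoder flips it back
```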

Vector spaces – dot (scalar) product Let V be a k-dimensional vector space over a field F. Let b1,…,bk ∈ V be some basis of V. For a pair of vectors u,v ∈ V such that u = a1b1+…+akbk and v = c1b1+…+ckbk, their dot (scalar) product is defined by:
u·v = a1·c1+…+ak·ck
Thus the operator “·” maps V × V to F.
Lemma For u,v,w ∈ V and all a,b ∈ F the following properties hold:
1) u·v = v·u.
2) (au+bv)·w = a(u·w)+b(v·w).
3) If u·v = 0 for all v in V, then u = 0.
Note. The Lemma above can also be used as a more abstract definition of an inner product. The question whether for an inner product defined in this abstract way there will be a basis that allows computing it as a scalar product by the formula above is somewhat tricky and may depend on the particular vector space V.

Vector spaces – dot (scalar) product Let V be a k-dimensional vector space over a field F. Let b1,…,bk ∈ V be some basis of V. For a pair of vectors u,v ∈ V such that u = a1b1+…+akbk and v = c1b1+…+ckbk, their dot (scalar) product is defined by:
u·v = a1·c1+…+ak·ck
Two vectors u and v are said to be orthogonal if u·v = 0. If C is a subspace of V then it is easy to see that the set of all vectors in V that are orthogonal to each vector in C is a subspace, which is called the space orthogonal to C and denoted by C⊥.
Theorem If C is a subspace of V, then dim C + dim C⊥ = dim V.
For us an important and not too obvious result!
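For a small binary example, C⊥ and the dimension identity can be checked exhaustively (my own illustration; note that the chosen C here happens to satisfy C = C⊥, i.e. it is self-dual):

```python
# Brute-force check of dim C + dim C_perp = dim V in V = GF(2)^4.
from itertools import product
import math

V = list(product([0, 1], repeat=4))

def dot(u, v):
    return sum(a * b for a, b in zip(u, v)) % 2

# C = span{1100, 0011} = {0000, 1100, 0011, 1111}
C = {(0,0,0,0), (1,1,0,0), (0,0,1,1), (1,1,1,1)}
C_perp = {v for v in V if all(dot(v, c) == 0 for c in C)}

dim = lambda S: int(math.log2(len(S)))   # |S| = 2^dim for a GF(2) subspace
assert dim(C) + dim(C_perp) == 4         # dim C + dim C_perp = dim V
assert C_perp == C                       # this particular C is self-dual
```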

Vector spaces and linear transformations Definition Let V be a vector space over a field F. A function f: V → V is called a linear transformation if for all u,v ∈ V and all a ∈ F the following hold:
1) af(u) = f(au).
2) f(u)+f(v) = f(u+v).
The kernel of f is defined as ker f = {v ∈ V | f(v) = 0}. The range of f is defined as range f = {f(v) | v ∈ V}.
It is a quite obvious consequence of the linearity of f that vector sums and scalar products do not lead outside ker f or range f. Thus ker f ⊑ V and range f ⊑ V.

Vector spaces and linear transformations
[Diagram: f maps V onto range f; ker f is the set of v with f(v) = 0.]
Theorem (rank-nullity theorem) dim (ker f) + dim (range f) = dim V.
Proof? Choose some basis u1,…,uk of ker f and some basis w1,…,wn of range f. Then try to show that u1,…,uk,v1,…,vn is a basis for V, where wi = f(vi) (the vectors vi may not be uniquely defined, but it is sufficient to choose arbitrary pre-images of the wi's).

Rank-nullity theorem [Adapted from R.Milson] Theorem dim (ker f) + dim (range f) = dim V.

Rank-nullity theorem [Adapted from R.Milson]

Dimensions of orthogonal vector spaces CCC 0 Proof? We could try to reduce this to “similarly looking” equality dim V = dim (ker f) + dim (range f). However how we can define a linear transformation from dot product? Theorem If C is a subspace of V, then dim C + dim C  = dim V.

Dimensions of orthogonal vector spaces
Theorem If C is a subspace of V, then dim C + dim C⊥ = dim V.
Proof However, how can we define a linear transformation from the dot product? Let u1,…,uk be some basis of C. We define the transformation f as follows: for all v ∈ V:
f(v) = (v·u1)u1 + … + (v·uk)uk
Note that v·ui ∈ F, thus f(v) ∈ C. Therefore we have:
ker f = C⊥ (this follows directly from the definition of C⊥)
range f = C (this follows from the definition of f)
Thus from the rank-nullity theorem: dim C + dim C⊥ = dim V.


Codes and linear codes Let V be an n-dimensional vector space over a finite field F together with some fixed basis b1,…,bn ∈ V.
Definition A linear (n,k) code is any k-dimensional subspace C ⊑ V.
Definition The weight wt(v) of a vector v ∈ V is the number of nonzero components of v in its representation as a linear combination v = a1b1+…+anbn.
Definition The distance d(v,w) between vectors v,w ∈ V is the number of components in which their representations in the given basis differ.
Definition The minimum weight of a code C ⊑ V is defined as min{wt(v) | v ∈ C, v ≠ 0}.
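The definitions of wt and d translate directly into code once vectors are written in coordinates (a straightforward sketch, not from the slides):

```python
# Weight and Hamming distance of coordinate vectors.

def wt(v):
    """Number of nonzero components of v."""
    return sum(1 for x in v if x != 0)

def d(v, w):
    """Number of positions in which v and w differ."""
    return sum(1 for x, y in zip(v, w) if x != y)

assert wt((1, 0, 1, 1)) == 3
assert d((1, 0, 1, 1), (1, 1, 1, 0)) == 2

# for vectors over GF(2), d(v, w) = wt(v - w), where v - w is the XOR:
v, w = (1, 0, 1, 1), (1, 1, 1, 0)
assert d(v, w) == wt(tuple((x + y) % 2 for x, y in zip(v, w)))
```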

Codes and linear codes Theorem A linear (n,k,d) code can correct any number of errors not exceeding t = ⌊(d−1)/2⌋.
Proof The distance between any two codewords is at least d. So, if the number of errors is smaller than d/2, then the closest codeword to the received vector will be the transmitted one.
However, a far less obvious problem: how do we find which codeword is the closest to the received vector?

Coding theory - the main problem A good (n,k,d) code has small n, large k and large d. The main coding theory problem is to optimize one of the parameters n, k, d for given values of the other two.

Generator matrices Definition Consider an (n,k) code C ⊑ V. G is a generator matrix of the code C if C = {vG | v ∈ F^k} and all rows of G are independent.
It is easy to see that a generator matrix exists for any linear code – take any matrix G whose rows are vectors v1,…,vk (represented as n-tuples in the initially agreed basis of V) that form a basis of C. By definition G will be a matrix of size k × n. Obviously there can be many different generator matrices for a given code. For example, these are two alternative generator matrices for the same (4,3) code:
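Encoding with a generator matrix is just the map v ↦ vG. A sketch over GF(2) with a made-up 2 × 4 generator matrix (not one of the slide's examples):

```python
# Encoding over GF(2): the code is {vG | v in F^k}; independent rows
# guarantee 2^k distinct codewords.
from itertools import product

G = [(1, 0, 0, 1),     # a 2 x 4 generator matrix (k = 2, n = 4)
     (0, 1, 1, 1)]

def encode(v, G):
    """Compute vG over GF(2)."""
    n = len(G[0])
    return tuple(sum(v[i] * G[i][j] for i in range(len(G))) % 2
                 for j in range(n))

code = {encode(v, G) for v in product([0, 1], repeat=2)}
assert code == {(0,0,0,0), (1,0,0,1), (0,1,1,1), (1,1,1,0)}
assert len(code) == 2 ** 2      # rows are independent: 2^k codewords
```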

Equivalence of codes Definition Codes C1, C2 ⊑ V are equivalent if a generator matrix G2 of C2 can be obtained from a generator matrix G1 of C1 by a sequence of the following operations:
1) permutation of rows
2) multiplication of a row by a non-zero scalar
3) addition of one row to another
4) permutation of columns
5) multiplication of a column by a non-zero scalar
Note that operations 1-3 actually don’t change the code C1. Applying operations 4 and 5, C1 could be changed to a different subspace of V; however, the weight distribution of the code vectors remains the same. In particular, if C1 is an (n,k,d) code, so is C2. In the binary case the vectors of C1 and C2 would differ only by a permutation of positions.

Generator matrices Definition A generator matrix G of an (n,k) code C ⊑ V is said to be in standard form if G = (I,A), where I is the k × k identity matrix.
Theorem For every code C ⊑ V there is an equivalent code C′ that has a generator matrix in standard form.
Proof We have already shown that each matrix can be transformed into row-echelon form by applying the same operations that define equivalent codes. Since for an (n,k) code the generator matrix must have rank k, in this case I will be the k × k identity matrix (i.e. we can’t have any all-zero rows).

Dual codes Definition Consider a code C ⊑ V. The dual or orthogonal code of C is defined as C⊥ = {v ∈ V | ∀w ∈ C: v·w = 0}.
It is easy to check that C⊥ ⊑ V, i.e. C⊥ is a code. Note that actually this is just a re-statement of the definition of orthogonal vector spaces we have already seen.
Remember the theorem we proved shortly ago: If C is a subspace of V, then dim C + dim C⊥ = dim V.
Thus, if C is an (n,k) code then C⊥ is an (n, n−k) code and vice versa. There are codes that are self-dual, i.e. C = C⊥.

Dual codes - some examples For the (n,1) repetition code C with the generator matrix G = (1 1 … 1), the dual code C⊥ is an (n, n−1) code with a generator matrix G⊥; in the binary case one can take, for instance, the (n−1) × n matrix whose i-th row has 1's exactly in positions i and i+1.

Dual codes - some examples [Adapted from V.Pless]

Dual codes – parity checking matrices Definition Let C ⊑ V be a code and let C⊥ be its dual code. A generator matrix H of C⊥ is called a parity checking matrix of C.
Theorem If the k × n generator matrix of a code C ⊑ V is in standard form G = (I,A), then the (n−k) × n matrix H = (−Aᵀ,I) is a parity checking matrix of C.
Proof It is easy to check that any row of G is orthogonal to any row of H (each dot product is a sum of only two non-zero scalars with opposite signs). Since dim C + dim C⊥ = dim V, i.e. k + dim C⊥ = n, and H has n−k independent rows, we conclude that H is a generator matrix of C⊥.
Note that in binary vector spaces H = (−Aᵀ,I) = (Aᵀ,I).
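The construction H = (−Aᵀ, I) and the orthogonality check can be sketched over GF(2), where −Aᵀ = Aᵀ; the particular A below is my own example:

```python
# Build H = (A^T | I) from G = (I | A) over GF(2) and verify that every
# row of G is orthogonal to every row of H (i.e. G H^T = 0).

G = [(1, 0, 0, 1, 1),      # G = (I | A): k = 3, n = 5
     (0, 1, 0, 1, 0),
     (0, 0, 1, 0, 1)]
k, n = 3, 5
A = [row[k:] for row in G]                      # k x (n-k) block A
At = [list(col) for col in zip(*A)]             # (n-k) x k transpose
H = [At[i] + [1 if j == i else 0 for j in range(n - k)]
     for i in range(n - k)]                     # H = (A^T | I)

# every row of G is orthogonal to every row of H:
for g in G:
    for h in H:
        assert sum(x * y for x, y in zip(g, h)) % 2 == 0
```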

Dual codes – parity checking matrices Theorem If the k × n generator matrix of a code C ⊑ V is in standard form G = (I,A), then the (n−k) × n matrix H = (−Aᵀ,I) is a parity checking matrix of C.
So, up to the equivalence of codes, we have an easy way to obtain a parity check matrix H from a generator matrix G in standard form and vice versa. Example of generator and parity check matrices in standard form: