Information and Coding Theory Linear Block Codes. Basic definitions and some examples. Juris Viksna, 2015

Transmission over noisy channel

Noisy channel In practice channels are always noisy (although sometimes this can be ignored). There are several types of noisy channels one can consider. We will restrict our attention to binary symmetric channels.

Shannon Channel Coding Theorem

Codes – how to define them? In most cases it would be natural to use binary block codes that map input vectors of length k to output vectors of length n (for example, mapping vectors of length k = 4 to vectors of length n = 7). Thus we can define a code as an injective mapping from a vector space V of dimension k to a vector space W of dimension n. Essentially this definition is used in the original Shannon's theorem.

Codes – how to define them? We can define a code as an injective mapping from a "vector space" V of dimension k to a "vector space" W of dimension n. Arbitrary mappings between vector spaces are hard either to define explicitly or to use (encode or decode) in practice – there are almost (2^n)^(2^k) = 2^(n·2^k) of them, already around 2^112 for k = 4 and n = 7. Simpler to define and use are linear codes, which can be defined by multiplication with a matrix of size k × n (called a generator matrix). Shannon's results hold also for linear codes.
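To make the matrix view concrete, here is a minimal sketch (in Python with numpy; the matrix G is illustrative, not taken from these slides) of encoding by multiplication with a k × n generator matrix over the binary field:

```python
import numpy as np

# Illustrative 4 x 7 binary generator matrix in standard form (I, A);
# any k x n matrix with independent rows over GF(2) would do.
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def encode(message, G):
    """Codeword = message * G, with all arithmetic taken mod 2."""
    return np.mod(np.array(message) @ G, 2)

print(encode([1, 0, 1, 1], G))  # one of the 2^4 = 16 codewords
```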

Codes – how to define them? Simpler to define and use are linear codes, which can be defined by multiplication with a matrix of size k × n (called a generator matrix). What should be the elements of vector spaces V and W? In principle in most cases it will be sufficient to have just 0-s and 1-s; however, to define a vector space we in principle need a field – an algebraic system with operations "+" and "·" defined and having properties similar to those of ordinary arithmetic (think of real numbers). A field with just "0" and "1" may look very simple, but it turns out that to make real progress we will need more complicated fields, just that the elements of these fields will themselves (most often) be regarded as binary vectors.

What are good codes? Linear codes can be defined by their generator matrix of size k × n. Shannon's theorem tells us that for a transmission channel with bit error probability p and an arbitrarily small bit error probability p_b we wish to achieve, there exist codes with rates R = k/n that allow us to achieve p_b, as long as R < C(p). In general, however, the error rate could be different for different codewords, p_b being an "average" value. We however will consider codes that are guaranteed to correct up to t errors for any codeword.

What are good codes? We however will consider codes that are guaranteed to correct up to t errors for any codeword – this is equivalent to the minimum distance between codewords being d, with t = ⌊(d−1)/2⌋ (for example, a code with d = 7 corrects up to t = 3 errors). Such codes will then be characterized by 3 parameters and will be referred to as (n,k,d) codes. For a given k we are thus interested:
- to minimize n
- to maximize d
In most cases, for fixed values of n and k, larger values of d will give us lower bit error probability p_b, although the computation of p_b is not that straightforward and depends on the particular code. Note that one can completely "spoil" the d value of a good code with low p_b by including in it a vector of weight 1.

Vector spaces - definition What do we usually understand by vectors? In principle we can say that vectors are n-tuples of the form (x_1, x_2, …, x_n), and operations of vector addition and multiplication by a scalar are defined and have the following properties:
(x_1, x_2, …, x_n) + (y_1, y_2, …, y_n) = (x_1+y_1, x_2+y_2, …, x_n+y_n)
a · (x_1, x_2, …, x_n) = (a·x_1, a·x_2, …, a·x_n)
The requirements actually are a bit stronger – the elements a and x_i should come from some field F. We might be able to live with such a definition, but then we would link a vector space to a unique and fixed basis, and often this will be technically very inconvenient.

Vector spaces - definition Definition A 4-tuple (V,F,+,·) is a vector space if (V,+) is a commutative group with identity element 0 and for all u,v ∈ V and all a,b ∈ F:
1) a·(u+v) = a·u + a·v
2) (a+b)·v = a·v + b·v
3) a·(b·v) = (a·b)·v
4) 1·v = v
Usually we will represent vectors as n-tuples of the form (x_1, x_2, …, x_n); however, such representations will not be unique and will depend on the particular basis of the vector space we choose to use (but 0 will always be represented as the n-tuple of zeroes (0,0,…,0)).

Groups - definition Consider a set G and a binary operator +. Definition A pair (G,+) is a group if there is e ∈ G such that for all a,b,c ∈ G:
1) a+b ∈ G
2) (a+b)+c = a+(b+c)
3) a+e = a and e+a = a
4) there exists inv(a) such that a+inv(a) = e and inv(a)+a = e
5) if additionally a+b = b+a, the group is commutative (Abelian)
If the group operation is denoted by "+", then e is usually denoted by 0 and inv(a) by −a. If the group operation is denoted by "·", then e is usually denoted by 1 and inv(a) by a^(−1) (and a·b is usually written as ab). It is easy to show that e and inv(a) are unique.

Vector spaces – dot (scalar) product Let V be a k-dimensional vector space over a field F. Let b_1, …, b_k ∈ V be some basis of V. For a pair of vectors u,v ∈ V, such that u = a_1b_1 + … + a_kb_k and v = c_1b_1 + … + c_kb_k, their dot (scalar) product is defined by:
u·v = a_1·c_1 + … + a_k·c_k
Thus the operator "·" maps V × V to F. Lemma For u,v,w ∈ V and all a,b ∈ F the following properties hold:
1) u·v = v·u
2) (au+bv)·w = a(u·w) + b(v·w)
3) if u·v = 0 for all v ∈ V, then u = 0

Vector spaces – dot (scalar) product Let V be a k-dimensional vector space over a field F. Let b_1, …, b_k ∈ V be some basis of V. For a pair of vectors u,v ∈ V, such that u = a_1b_1 + … + a_kb_k and v = c_1b_1 + … + c_kb_k, their dot (scalar) product is defined by:
u·v = a_1·c_1 + … + a_k·c_k
Two vectors u and v are said to be orthogonal if u·v = 0. If C is a subspace of V, then it is easy to see that the set of all vectors in V that are orthogonal to each vector in C is a subspace, which is called the space orthogonal to C and denoted by C⊥.

Linear block codes [Block diagram: message source → encoder → channel → decoder → receiver, with x = x_1,…,x_k the message, c = c_1,…,c_n the codeword, e = e_1,…,e_n the error from noise, y = c + e the received vector, and x' the estimate of the message.] Generally we will define linear codes as vector spaces – by taking C to be a k-dimensional subspace of some n-dimensional space V.

Linear block codes Let V be an n-dimensional vector space over a finite field F. Definition A code is any subset C ⊆ V. Definition A linear (n,k) code is any k-dimensional subspace C ⊆ V.

Linear block codes Let V be an n-dimensional vector space over a finite field F. Definition A linear (n,k) code is any k-dimensional subspace C ⊆ V. Example (choices of bases for V and code C):
Basis of V (fixed): 001, 010, 100
Set of V elements: {000, 001, 010, 011, 100, 101, 110, 111}
Set of C elements: {000, 001, 010, 011}
Two alternative bases for code C: {001, 010} and {001, 011}.
Essentially, we will be ready to consider alternative bases, but will stick to the "main one" for the representation of V elements.

Linear block codes Definition A linear (n,k) code is any k-dimensional subspace C ⊆ V. Definition The weight wt(v) of a vector v ∈ V is the number of non-zero components of v in its representation as a linear combination v = a_1b_1 + … + a_nb_n. Definition The distance d(v,w) between vectors v,w ∈ V is the number of components in which these vectors differ. Definition The minimum weight of a code C ⊆ V is defined as min over v ∈ C, v ≠ 0 of wt(v); for a linear code this equals the minimum distance between codewords. A linear (n,k) code with minimum weight d is often referred to as an (n,k,d) code.
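A small sketch of these two definitions in Python (assuming vectors are given as 0/1 n-tuples in the fixed basis):

```python
import numpy as np

def wt(v):
    """Weight: the number of non-zero components of v."""
    return int(np.count_nonzero(v))

def dist(v, w):
    """Distance: the number of components in which v and w differ."""
    return int(np.sum(np.array(v) != np.array(w)))

print(wt([1, 0, 1, 1]))                  # 3
print(dist([1, 0, 1, 1], [1, 1, 1, 0]))  # 2; note dist(v, w) = wt(v - w)
```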

Linear block codes Theorem A linear (n,k,d) code can correct any number of errors not exceeding t = ⌊(d−1)/2⌋. Proof The distance between any two codewords is at least d, so if the number of errors is smaller than d/2, then the closest codeword to the received vector will be the transmitted one. However, a far less obvious problem is how to find which codeword is closest to the received vector.
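The naive answer is to enumerate all 2^k codewords; a sketch of such brute-force nearest-codeword decoding (a hypothetical helper, feasible only for small k) makes it clear why efficient decoding is the interesting problem:

```python
import itertools
import numpy as np

def decode_nearest(y, G):
    """Return the codeword closest in Hamming distance to received y."""
    k = G.shape[0]
    y = np.array(y)
    best, best_dist = None, None
    for bits in itertools.product([0, 1], repeat=k):   # all 2^k messages
        c = np.mod(np.array(bits) @ G, 2)              # its codeword
        d = int(np.sum(c != y))                        # Hamming distance
        if best is None or d < best_dist:
            best, best_dist = c, d
    return best
```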

Linear codes - the main problem A good (n,k,d) code has small n, large k and large d. The main coding theory problem is to optimize one of the parameters n, k, d for given values of the other two.

Generator matrices Definition Consider an (n,k) code C ⊆ V. G is a generator matrix of code C if C = {vG | v ∈ F^k} and all rows of G are independent. It is easy to see that a generator matrix exists for any linear code – take any matrix G whose rows are vectors v_1, …, v_k (represented as n-tuples in the initially agreed basis of V) that form a basis of C. By definition G will be a matrix of size k × n. Obviously there can be many different generator matrices for a given code. For example, these are two alternative generator matrices for the same (4,3) code:
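The matrices themselves are not reproduced in the transcript; as an illustration (assuming, hypothetically, the (4,3) code consisting of all even-weight vectors of length 4), one possible pair of generator matrices for the same code is:

G_1 =
1 0 0 1
0 1 0 1
0 0 1 1

G_2 =
1 0 0 1
1 1 0 0
1 1 1 1

Here the rows of G_2 are sums of rows of G_1 (row 1, rows 1+2, and rows 1+2+3), so both matrices generate the same code.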

Equivalence of codes Definition Codes C_1, C_2 ⊆ V are equivalent if a generator matrix G_2 of C_2 can be obtained from a generator matrix G_1 of C_1 by a sequence of the following operations:
1) permutation of rows
2) multiplication of a row by a non-zero scalar
3) addition of one row to another
4) permutation of columns
5) multiplication of a column by a non-zero scalar (not needed for binary codes)
Note that operations 1-3 actually do not change the code C_1. Applying operations 4 and 5, C_1 could be changed to a different subspace of V; however, the weight distribution of the code vectors remains the same. In particular, if C_1 is an (n,k,d) code, so is C_2. In the binary case the vectors of C_1 and C_2 would differ only by a permutation of positions.

Generator matrices Definition A generator matrix G of an (n,k) code C ⊆ V is said to be in standard form if G = (I,A), where I is the k × k identity matrix. Theorem For every code C ⊆ V there is an equivalent code C′ that has a generator matrix in standard form.

Hamming code [7,4] Parity bits of H(7,4):
- No errors: all p_i-s correspond to the d_i-s.
- Error in d_1, …, d_3: a pair of wrong p_i-s.
- Error in d_4: all pairs of p_i-s are wrong.
- Error in p_i: this will differ from an error in any of the d_i-s.
So:
- we can correct any single error;
- since this is unambiguous, we should also be able to detect any 2 errors.

Hamming code [7,4] [The slide shows example vectors a, b, c and the parity-check matrix H, which are not reproduced in the transcript.] Why does it work? We can check that without errors yH = 000, and that with 1 error yH gives the index of the damaged bit... General case: there always exists a matrix for checking orthogonality, yH = 0. Finding the damaged bits, however, isn't that simple in general.
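A sketch of this single-error correction in Python, assuming (since the slide's own H is not shown in the transcript) the common ordering in which row i of H is the number i written in binary, so that the syndrome yH directly names the damaged bit:

```python
import numpy as np

# 7 x 3 parity-check matrix: row i is i (1..7) in binary, MSB first.
H = np.array([[(i >> 2) & 1, (i >> 1) & 1, i & 1] for i in range(1, 8)])

def correct_single_error(y):
    y = np.array(y)
    syndrome = np.mod(y @ H, 2)                  # yH = 000 means no error
    index = int("".join(map(str, syndrome)), 2)  # index of the damaged bit
    if index:
        y[index - 1] ^= 1                        # flip it back
    return y
```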

Hamming codes For simplicity we will consider codes over binary fields, although the definition (and design idea) easily extends to codes over arbitrary finite fields. Definition For a given positive integer r, a Hamming code Ham(r) is a code whose parity-check matrix contains as its rows all possible non-zero r-dimensional binary vectors. There are 2^r − 1 such vectors; thus the parity-check matrix has size (2^r − 1) × r, and respectively Ham(r) is an (n = 2^r − 1, n − r) code.

Hamming codes Definition For a given positive integer r, a Hamming code Ham(r) is a code whose parity-check matrix contains as its rows all possible non-zero r-dimensional binary vectors. Example of Hamming code Ham(4): although not required by the definition, note that in this particular case the rows (the columns, if the matrix is written transposed) can be regarded as the consecutive integers 1, 2, 3, …, 15 written in binary form.
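Following the definition above, the parity-check matrix of Ham(r) can be built directly; a minimal sketch:

```python
import numpy as np

def hamming_parity_check(r):
    """All 2^r - 1 non-zero r-bit vectors as rows: a (2^r - 1) x r matrix."""
    return np.array([[(i >> (r - 1 - j)) & 1 for j in range(r)]
                     for i in range(1, 2 ** r)])

H4 = hamming_parity_check(4)
print(H4.shape)   # (15, 4): Ham(4) is a (15, 11) code
print(H4[:3])     # first rows: 1, 2, 3 written in binary
```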

Dual codes Definition Consider a code C ⊆ V. The dual or orthogonal code of C is defined as C⊥ = {v ∈ V | ∀w ∈ C: v·w = 0}. It is easy to check that C⊥ ⊆ V is a subspace, i.e. C⊥ is a code. Note that actually this is just a re-statement of the definition of orthogonal vector spaces we have already seen. There are codes that are self-dual, i.e. C = C⊥.

Dual codes - some examples For the (n,1) repetition code C with the generator matrix G = (1 1 … 1), the dual code C⊥ is the (n, n−1) code of all even-weight words (in the binary case), with a generator matrix G⊥ of size (n−1) × n such as the following (the matrix on the original slide is not reproduced; this is one valid choice):
1 1 0 … 0
1 0 1 … 0
…
1 0 0 … 1

Dual codes - some examples [Adapted from V.Pless]

Dual codes – parity checking matrices Definition Let C ⊆ V be a code and let C⊥ be its dual code. A generator matrix H of C⊥ is called a parity checking matrix of C. Theorem If the k × n generator matrix of a code C ⊆ V is in standard form G = (I,A), then the (n−k) × n matrix H = (−A^T, I) is a parity checking matrix of C. Proof It is easy to check that any row of G is orthogonal to any row of H (each dot product is a sum of only two non-zero scalars with opposite signs). Since dim C + dim C⊥ = dim V, i.e. k + dim C⊥ = n, we have to conclude that H is a generator matrix of C⊥. Note that in binary vector spaces H = (−A^T, I) = (A^T, I).

Dual codes – parity checking matrices Theorem If the k × n generator matrix of a code C ⊆ V is in standard form G = (I,A), then the (n−k) × n matrix H = (−A^T, I) is a parity checking matrix of C. So, up to equivalence of codes, we have an easy way to obtain a parity check matrix H from a generator matrix G in standard form, and vice versa. Example of generator and parity check matrices in standard form:
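The slide's own example matrices are not in the transcript; as a sketch of the recipe in Python (G below is illustrative, assumed binary and in standard form), we build H = (A^T, I) and verify that every row of G is orthogonal to every row of H:

```python
import numpy as np

G = np.array([[1, 0, 0, 0, 1, 1, 0],     # G = (I, A), k = 4, n = 7
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
k, n = G.shape
A = G[:, k:]                                    # the k x (n-k) block A
H = np.hstack([A.T, np.eye(n - k, dtype=int)])  # H = (A^T, I), binary case
print(np.mod(G @ H.T, 2))                       # zero matrix: G ⊥ rows of H
```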