Word: Let F be a field. Then an expression of the form (a_1, a_2, …, a_n), where a_i ∈ F for all i, is called a word of length n over the field F. We denote the set of all words of length n over F by F^(n).

Weight: Let a = (a_1, a_2, …, a_n) ∈ F^(n). Then the number Σ_{a_i ≠ 0} 1 is called the weight of the word a and is denoted by wt(a). That is, the weight of a word is the number of nonzero entries of that word.

Distance: Let a = (a_1, a_2, …, a_n) and b = (b_1, b_2, …, b_n) be two words of length n over the field F. We define the distance between a and b, written d(a,b), by d(a,b) = Σ_{i=1}^{n} d(a_i, b_i), where d(a_i, b_i) = 0 if a_i = b_i and d(a_i, b_i) = 1 if a_i ≠ b_i.
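The two definitions above translate directly into code. A minimal sketch, assuming words are represented as Python lists of field elements (the names `wt` and `dist` mirror the text's notation):

```python
def wt(a):
    """Weight of a word: the number of nonzero entries."""
    return sum(1 for x in a if x != 0)

def dist(a, b):
    """Distance between two words of equal length: the number of
    positions at which they differ."""
    assert len(a) == len(b)
    return sum(1 for x, y in zip(a, b) if x != y)

# Examples over GF(2):
print(wt([1, 0, 1, 1]))             # 3
print(dist([1, 0, 1], [1, 1, 0]))   # 2
```

Note that d(a, b) = wt(a - b); this fact is used later when the minimum distance of a group code is reduced to the minimum weight of a nonzero code word.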

Code: A block (m,n) code over a field F consists of an encoding function E : F^(m) → F^(n) (with m < n) and a decoding function D : F^(n) → F^(m) such that D∘E is the identity function, or as close to the identity function as possible.

Elements of F^(m) are called message words, and the elements of the image of E in F^(n) are called code words. The collection of all the code words in F^(n) is denoted by C. Thus C is a subset of the vector space F^(n). If C is a subgroup of F^(n), then we say that C is a group code.

Message Word --E--> Code Word --Channel--> Received Word --D--> Code Word --> Message Word

Procedure for constructing a Hamming code:
a) Let r be a positive integer greater than 2 (r > 2). Then the message words in the code C are of length m = 2^r - r - 1 and the code words are of length n = 2^r - 1.
b) Let (b_1, b_2, …, b_n) be the code word corresponding to the message word (a_1, a_2, …, a_m), in which b_{2^0}, b_{2^1}, …, b_{2^(r-1)} are check digits

and the remaining positions are filled by the digits of the message word in the order in which they occur in the message word.
c) Let M be the n × r matrix whose i-th row is the binary representation of the number i. Consider the matrix equation bM = 0. Let M_1, M_2, …, M_r be the r columns of M; then the matrix equation bM = 0 gives r linear equations bM_1 = 0, bM_2 = 0, …, bM_r = 0.
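The matrix M of step (c) can be written down mechanically. A sketch, assuming rows are indexed 1..n and each row holds the r-bit binary representation of its index, most significant bit first:

```python
def build_M(r):
    """n x r binary matrix (n = 2^r - 1) whose i-th row (1-indexed)
    is the r-bit binary representation of i."""
    n = 2 ** r - 1
    return [[(i >> (r - 1 - k)) & 1 for k in range(r)] for i in range(1, n + 1)]

# For r = 3 the rows are 001, 010, 011, 100, 101, 110, 111:
for row in build_M(3):
    print(row)
```

Observe that the row of index 2^i has a single 1, lying in column r - i (counting columns from 1 on the left); this is exactly the fact used in the argument that follows.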

First we prove that each of these r equations contains exactly one check digit. Suppose b_{2^i} occurs in the equation obtained by multiplying b with the k-th column of the matrix M. Then the (2^i)-th entry of the k-th column is 1. This entry lies in the (2^i)-th row of M, and the (2^i)-th row of M is the binary representation of the number 2^i. Now the binary representation of 2^i has a 1 at the (i+1)-th place from the right and 0 elsewhere, i.e., a 1 at the (r - i)-th place from the left. Therefore k = r - i.

Now consider the equation bM_k = 0, and suppose it contains two check symbols b_{2^i} and b_{2^j}. Then, as before, k = r - i and k = r - j, i.e., i = j. Thus there is at most one check symbol present in the equation bM_k = 0. On the other hand, we have shown that each check symbol b_{2^i} occurs in the linear equation bM_k = 0 for k = r - i. The number of check symbols and the number of

linear equations are the same, both equal to r, and no equation has more than one check symbol, so there is exactly one check symbol in each of the r linear equations. Solving these r linear equations, we get the unique value of each check symbol. Hence E(a_1, a_2, …, a_m) = (b_1, b_2, …, b_n) is the encoding function for the code C.
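The whole procedure can be collected into one encoder. A sketch over GF(2) (the text allows a general field F; binary arithmetic mod 2 is assumed here), with 1-indexed positions as in the text:

```python
def hamming_encode(msg, r):
    """Encode a message of length m = 2^r - r - 1 into a code word of
    length n = 2^r - 1, with check digits at positions 2^0, ..., 2^(r-1)."""
    n = 2 ** r - 1
    assert len(msg) == n - r
    check_pos = {2 ** j for j in range(r)}
    b = [0] * (n + 1)                   # 1-indexed; b[0] is unused
    digits = iter(msg)
    for i in range(1, n + 1):           # message digits keep their order
        if i not in check_pos:
            b[i] = next(digits)
    # i-th row of M is the r-bit binary representation of i (row 0 is a dummy)
    M = [[(i >> (r - 1 - k)) & 1 for k in range(r)] for i in range(n + 1)]
    # column k contains exactly one check position, 2^(r-1-k); solve bM_k = 0
    for k in range(r):
        p = 2 ** (r - 1 - k)
        b[p] = sum(b[i] * M[i][k] for i in range(1, n + 1)) % 2
    return b[1:]

# r = 3 gives the (4,7) code; the message 1000 encodes to a word of weight 3:
print(hamming_encode([1, 0, 0, 0], 3))   # [1, 1, 1, 0, 0, 0, 0]
```

Each check position 2^(r-1-k) has a single 1 in its row of M, lying in column k, so solving the r equations in any order assigns each check digit exactly once.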

Theorem: The Hamming code is a group code.
Proof: Let C be an (m,n) Hamming code over the field F. We know that m = 2^r - r - 1 and n = 2^r - 1 for some positive integer r ≥ 2. The encoding function E of C is given by E : F^(m) → F^(n), E(a_1, a_2, …, a_m) = (b_1, b_2, …, b_n), where b_{2^0}, b_{2^1}, …, b_{2^(r-1)} are check digits and the remaining positions are filled by the digits of the message word

in the order in which they appear in the message word. The check digits are given by the matrix equation bM = 0, where M is the n × r matrix whose i-th row is the binary representation of the number i. Write E(a) = E(a_1, a_2, …, a_m) = (b_1, b_2, …, b_n), and suppose E(a') = E(a_1', a_2', …, a_m') = (b_1', b_2', …, b_n'). Now a + a' = (a_1 + a_1', a_2 + a_2', …, a_m + a_m') and b + b' = (b_1 + b_1', b_2 + b_2', …, b_n + b_n'). Also bM = 0 and b'M = 0.

Therefore (b + b')M = bM + b'M = 0 + 0 = 0. Also, in b + b', the entries b_{2^0} + b'_{2^0}, b_{2^1} + b'_{2^1}, …, b_{2^(r-1)} + b'_{2^(r-1)} are check digits, and the remaining entries are a_1 + a_1', a_2 + a_2', …, a_m + a_m' in their original order. Thus b + b' corresponds to the message word a + a'. Hence E(a + a') = b + b' = E(a) + E(a'). Thus E is a homomorphism, and so C is a group code.
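The homomorphism property can be verified exhaustively for the (4,7) code (r = 3). A self-contained sketch; `encode7` is a hypothetical helper that hard-codes the three check equations bM_1 = bM_2 = bM_3 = 0 from the text, with check digits at positions 1, 2, 4:

```python
from itertools import product

def encode7(a):
    """(4,7) Hamming encoder over GF(2); check digits at positions 1, 2, 4."""
    b3, b5, b6, b7 = a
    b4 = (b5 + b6 + b7) % 2   # bM_1 = 0 involves rows 4, 5, 6, 7 of M
    b2 = (b3 + b6 + b7) % 2   # bM_2 = 0 involves rows 2, 3, 6, 7
    b1 = (b3 + b5 + b7) % 2   # bM_3 = 0 involves rows 1, 3, 5, 7
    return [b1, b2, b3, b4, b5, b6, b7]

# Check E(a + a') = E(a) + E(a') over all 256 pairs of message words:
for a, a2 in product(product([0, 1], repeat=4), repeat=2):
    s = [(x + y) % 2 for x, y in zip(a, a2)]
    assert encode7(s) == [(x + y) % 2 for x, y in zip(encode7(a), encode7(a2))]
print("E is additive on all pairs, so C is a group code")
```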

Theorem: The minimum distance of a Hamming code is 3.
Proof: Let C be an (m,n) Hamming code over the field F. We know that m = 2^r - r - 1 and n = 2^r - 1 for some positive integer r ≥ 2. The encoding function E of C is given by E : F^(m) → F^(n), E(a_1, a_2, …, a_m) = (b_1, b_2, …, b_n), where b_{2^0}, b_{2^1}, …, b_{2^(r-1)} are check digits and the remaining positions are filled by the digits of the message word

in the order in which they appear in the message word. The check digits are given by the matrix equation bM = 0, where M is the n × r matrix whose i-th row is the binary representation of the number i. Again let M_1, M_2, …, M_r be the r columns of M; then bM = 0 is equivalent to the r linear equations bM_1 = 0, bM_2 = 0, …, bM_r = 0. Also, we know that each linear equation contains one and only one check symbol.

We have also shown that C is a group code, so the minimum distance of C equals the minimum weight of a nonzero code word (since d(b, b') = wt(b - b') and b - b' is again a code word). Hence, to prove that the minimum distance of C is ≥ 3, it is sufficient to prove that each nonzero code word has weight ≥ 3. Now, by the definition of C, all the digits of the message word appear in the code word, so a message word of weight ≥ 3 automatically yields a code word of weight ≥ 3; we must therefore prove that the weight of a code word corresponding to a message word of weight 1 or 2 is always at least 3.

Case (1): Let a = (a_1, a_2, …, a_m) be a message word of weight 1, and let b = (b_1, b_2, …, b_n) be the corresponding code word. Let the nonzero entry of a occur at the i-th position of b, i.e., b_i ≠ 0. Since b_i is a message digit, i is not a power of 2. Therefore the binary representation of i contains at least two nonzero entries, i.e., the i-th row of M contains at least two nonzero entries. Suppose the s-th and t-th entries of the i-th row are nonzero.

So the (i,s)-th and (i,t)-th entries of M are 1. Consider the equations bM_t = 0 and bM_s = 0. Let b_{2^k} be the check symbol present in the equation bM_t = 0 and b_{2^l} be the check symbol present in the equation bM_s = 0; since s ≠ t, these two check symbols are distinct. Since all the entries of the message word except b_i are zero, bM_t = 0 takes the form b_{2^k} + b_i = 0, i.e., b_{2^k} = -b_i ≠ 0. Similarly, b_{2^l} is also nonzero.

Hence the code word b corresponding to a message word a of weight 1 contains at least two nonzero check digits, so wt(b) ≥ 3.
Case (2): Suppose the weight of the message word a = (a_1, a_2, …, a_m) is 2. Then the corresponding code word b = (b_1, b_2, …, b_n) contains two nonzero entries coming from the message word a.

Let these entries be b_i and b_j; we may suppose that i < j. Since b_i and b_j are entries coming from a, neither i nor j is a power of 2. Also, i ≠ j, so the binary representations of i and j must differ in at least one place. Let the binary representations of i and j differ at the s-th place (from the left). By interchanging i and j if necessary, we may suppose that the (i,s)-th entry of M is 1 and the (j,s)-th entry of

M is zero. Consider the linear equation bM_s = 0. This linear equation contains a unique check digit, say b_{2^k}. Since every digit of the message word except b_i and b_j is zero, and b_j does not occur in this equation, bM_s = 0 is equivalent to b_{2^k} + b_i = 0, i.e., b_{2^k} = -b_i ≠ 0. Thus the code word b corresponding to the message word a contains at least one nonzero check symbol.

Since b already contains the two nonzero message symbols b_i and b_j, we get wt(b) ≥ 3. Thus in all cases wt(b) ≥ 3. Now, to show that the minimum distance of the Hamming code is exactly 3, we must exhibit a code word of weight 3. Let b = (b_1, b_2, …, b_n) be the code word corresponding to the message word a = (a_1, a_2, …, a_m) with a_1 = 1 and a_i = 0 for 2 ≤ i ≤ m. Since 3 is the first position that is not a power of 2, a_1 occupies position 3, so b_3 = 1.

Consider the equation bM_s = 0. The check symbol present in this equation takes the value 1 if and only if the third entry of the s-th column of M is 1. So the number of nonzero check symbols in b equals the number of nonzero entries in the third row of M. The third row of M is the binary representation of the number 3, which contains exactly two 1's. Thus b contains exactly two nonzero check symbols and one nonzero message symbol, so wt(b) = 3.
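The weight argument can be confirmed by enumerating all sixteen code words of the (4,7) code. A minimal check, assuming the same hypothetical `encode7` layout as before (check digits at positions 1, 2, 4):

```python
from itertools import product

def encode7(a):
    """(4,7) Hamming encoder over GF(2); check digits at positions 1, 2, 4."""
    b3, b5, b6, b7 = a
    b4 = (b5 + b6 + b7) % 2   # equation bM_1 = 0
    b2 = (b3 + b6 + b7) % 2   # equation bM_2 = 0
    b1 = (b3 + b5 + b7) % 2   # equation bM_3 = 0
    return (b1, b2, b3, b4, b5, b6, b7)

codewords = {encode7(a) for a in product([0, 1], repeat=4)}
weights = [sum(c) for c in codewords if any(c)]
print(min(weights))               # 3
print(encode7((1, 0, 0, 0)))      # (1, 1, 1, 0, 0, 0, 0): a weight-3 code word
```

Since C is a group code, the minimum distance equals the minimum nonzero weight, so the enumeration confirms that the minimum distance is 3.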

Do any two:
- Construct the Hamming (4,7) code.
- Prove that the Hamming code is a group code.
- Prove that the minimum distance of the Hamming code is 3.