The parity bits of a linear block code are linear combinations of the message bits. Therefore, the encoder can be represented by a linear system described by matrices.


Basic Definitions
Linearity: if m1 maps to c1 and m2 maps to c2, then m1 + m2 maps to c1 + c2, where m is a k-bit information sequence, c is an n-bit codeword, and + is bit-by-bit mod-2 addition without carry.
Linear code: the sum of any two codewords is a codeword.
Observation: the all-zero sequence is a codeword in every linear block code.
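These definitions can be checked mechanically. A minimal sketch in Python, using a toy (3,2) even-parity code chosen purely for illustration (the particular code is an assumption, not from the slides):

```python
# Toy linear code for illustration: the (3,2) even-parity code.
code = {(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)}

# Linearity: the bit-by-bit mod-2 sum (XOR) of any two codewords
# must itself be a codeword.
for a in code:
    for b in code:
        s = tuple(x ^ y for x, y in zip(a, b))
        assert s in code, f"{a} + {b} = {s} is not a codeword"

# Observation from the slide: c + c = 0, so the all-zero word is
# always a codeword of a linear code.
assert (0, 0, 0) in code
print("closure verified")
```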

Basic Definitions (cont'd)
Def: The weight of a codeword ci, denoted by w(ci), is the number of nonzero elements in the codeword.
Def: The minimum weight of a code, wmin, is the smallest weight of the nonzero codewords in the code.
Theorem: In any linear code, dmin = wmin.
Systematic codes: any linear block code can be put in systematic form, with n-k check bits and k information bits.

Linear Encoder
Encoding is the linear transformation c = m G = m0 g0 + m1 g1 + ... + mk-1 gk-1. The code C is a k-dimensional subspace, and G is called a generator matrix of the code. Here G is a k x n matrix of rank k with elements from GF(2), and gi is the i-th row vector of G. The rows of G are linearly independent, since G is assumed to have rank k.
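A sketch of the transformation c = m G over GF(2) in Python; the generator matrix below is the systematic one of the (7,4) Hamming code used throughout this lecture, with parity columns read off from its encoding equations:

```python
import numpy as np

# Systematic generator matrix of the (7,4) Hamming code (G = [I4 | P]),
# parity columns taken from the lecture's encoding equations.
G = np.array([[1, 0, 0, 0, 1, 0, 1],
              [0, 1, 0, 0, 1, 1, 1],
              [0, 0, 1, 0, 1, 1, 0],
              [0, 0, 0, 1, 0, 1, 1]])

def encode(m, G):
    """Encode a k-bit message m into the n-bit codeword c = m.G (mod 2)."""
    return np.mod(np.asarray(m) @ G, 2)

c = encode([1, 0, 1, 1], G)
print(c)  # the first k bits reproduce the message (systematic form)
```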

Example: (7, 4) Hamming code over GF(2)
The encoding equations for this code are:
c0 = m0
c1 = m1
c2 = m2
c3 = m3
c4 = m0 + m1 + m2
c5 = m1 + m2 + m3
c6 = m0 + m1 + m3
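Writing the seven equations out directly makes it easy to enumerate all 2^4 codewords and confirm the code's minimum weight; a small sketch:

```python
from itertools import product

# The seven encoding equations of the (7,4) code, written out directly.
def encode74(m0, m1, m2, m3):
    return (m0, m1, m2, m3,
            (m0 + m1 + m2) % 2,
            (m1 + m2 + m3) % 2,
            (m0 + m1 + m3) % 2)

# Enumerate all 2^4 messages; for a linear code the smallest weight of
# a nonzero codeword equals the minimum distance.
codewords = [encode74(*m) for m in product((0, 1), repeat=4)]
w_min = min(sum(c) for c in codewords if any(c))
print(w_min)  # 3, so the code corrects any single error
```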

Linear Systematic Block Code
An (n, k) linear systematic code is completely specified by a k x n generator matrix of the form G = [P | Ik], where Ik is the k x k identity matrix and P is the k x (n-k) parity block.

Linear Block Codes
The number of codewords is 2^k, since there are 2^k distinct messages. The set of vectors {gi} is linearly independent, since we must have a set of unique codewords; linear independence means that no vector gi can be expressed as a linear combination of the other vectors. These vectors are called basis vectors of the vector space C, and the dimension of this vector space is the number of basis vectors, which is k. Since each gi is in C, the rows of G are all legal codewords.

Hamming Weight
The minimum Hamming distance of a linear block code is equal to the minimum Hamming weight of its nonzero code vectors. Since each gi is in C, we must have wH(gi) >= dmin; this is a necessary condition but not a sufficient one. Therefore, if the Hamming weight of one of the rows of G is less than dmin, then either dmin or G is not correct.

Generator Matrix
All 2^k codewords can be generated from a set of k linearly independent codewords. The simplest choice of this set is the k codewords corresponding to the information sequences that have a single nonzero element. Illustration: applying the encoding equations, the generating set for the (7,4) code is:
1000 ===> 1000101
0100 ===> 0100111
0010 ===> 0010110
0001 ===> 0001011

Generator Matrix (cont'd)
Every codeword is a linear combination of these 4 codewords; that is, c = m G, where G is the matrix whose rows are the four codewords above. The storage requirement is reduced from 2^k (n+k) bits (a lookup table of all message-codeword pairs) to k(n-k) bits (the parity portion of G).

Parity-Check Matrix
For G = [P | Ik], define the matrix H = [In-k | PT]; the size of H is (n-k) x n. It follows that G HT = 0. Since c = m G, then c HT = m G HT = 0.
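The identity G HT = 0 can be verified numerically; a sketch using the parity block P of the (7,4) code (rows read off from the encoding equations), with G and H arranged per this slide's [P | Ik] / [In-k | PT] convention:

```python
import numpy as np

# Parity block P of the (7,4) code; G and H built per the slide's layout.
P = np.array([[1, 0, 1],
              [1, 1, 1],
              [1, 1, 0],
              [0, 1, 1]])
k, r = P.shape
G = np.hstack([P, np.eye(k, dtype=int)])    # G = [P | Ik],    k x n
H = np.hstack([np.eye(r, dtype=int), P.T])  # H = [In-k | PT], (n-k) x n

# G.HT = P + P = 0 (mod 2), so every codeword c = mG satisfies c.HT = 0.
print(np.mod(G @ H.T, 2).any())  # False
m = np.array([1, 1, 0, 1])
c = np.mod(m @ G, 2)
print(np.mod(c @ H.T, 2))  # [0 0 0]
```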

Encoding Using the H Matrix

Encoding Circuit

The Encoding Problem (Revisited)
Linearity makes the encoding problem much easier, yet: how do we construct the G (or H) matrix of a code with a prescribed minimum distance dmin? The general answer to this question will be attempted later. For the time being, we state the answer for one class of codes: the Hamming codes.

Hamming Codes
Hamming codes constitute a class of single-error-correcting codes defined as:
n = 2^r - 1, k = n - r, r > 2
The minimum distance of the code is dmin = 3, and Hamming codes are perfect codes.
Construction rule: the H matrix of a Hamming code of order r has as its columns all nonzero r-bit patterns. Size of H: r x (2^r - 1) = (n-k) x n.
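The construction rule can be sketched directly: take the binary representations of 1 through 2^r - 1 as columns (the column order here is an arbitrary choice; any ordering of the nonzero patterns gives an equivalent code):

```python
import numpy as np

# H matrix of the Hamming code of order r: its columns are the binary
# representations of 1 .. 2^r - 1, i.e. all nonzero r-bit patterns.
def hamming_H(r):
    cols = [[(i >> b) & 1 for b in range(r)] for i in range(1, 2 ** r)]
    return np.array(cols).T  # shape r x (2^r - 1) = (n-k) x n

H = hamming_H(3)  # r = 3 gives the (7,4) code
print(H.shape)    # (3, 7)
```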

Decoding
Let c be transmitted and r be received, where r = c + e. The error pattern e = e1 e2 ... en, where ei = 1 if an error occurred in the i-th position and ei = 0 otherwise. The weight of e determines the number of errors. If the error pattern can be determined, decoding is achieved by c = r + e.

Decoding (cont'd)
Consider the (7,4) code.
(1) If the received word r differs from the transmitted codeword c only in the fourth location, then e = 0001000 (an error in the fourth location).
(2) Given a received word r, several (codeword, error pattern) pairs can explain it; among these candidate scenarios, the one whose error pattern has the smallest weight is the most probable.

Standard Array (table of correctable error patterns)

Standard Array (cont'd)
1. List the 2^k codewords in a row, starting with the all-zero codeword c0.
2. Select an error pattern e1 and place it below c0. This error pattern must be a correctable one; therefore it should be selected such that (i) it has the smallest possible weight (the most probable error) and (ii) it has not appeared before in the array.
3. Repeat step 2 until all possible error patterns have been accounted for.
There will always be 2^n / 2^k = 2^(n-k) rows in the array. Each row is called a coset, and the leading error pattern is the coset leader.
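The steps above can be sketched for the (7,4) Hamming code: repeatedly take the lowest-weight vector not yet in the array as the next coset leader, and fill in its coset (the leader plus every codeword):

```python
import numpy as np
from itertools import product

# Systematic generator of the (7,4) Hamming code, from the earlier slides.
G = np.array([[1, 0, 0, 0, 1, 0, 1],
              [0, 1, 0, 0, 1, 1, 1],
              [0, 0, 1, 0, 1, 1, 0],
              [0, 0, 0, 1, 0, 1, 1]])
codewords = [tuple(np.mod(np.array(m) @ G, 2)) for m in product((0, 1), repeat=4)]

# Build the array: lowest-weight unused vector -> next coset leader.
seen, leaders = set(), []
for v in sorted(product((0, 1), repeat=7), key=sum):
    if v in seen:
        continue
    leaders.append(v)
    seen.update(tuple((a + b) % 2 for a, b in zip(v, c)) for c in codewords)

print(len(leaders))               # 2^(n-k) = 8 cosets
print([sum(l) for l in leaders])  # leader weights: [0, 1, 1, 1, 1, 1, 1, 1]
```

Because the code is perfect, the leaders come out as exactly the all-zero pattern plus the seven single-error patterns.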

Standard Array Decoding
For an (n, k) linear code, standard array decoding is able to correct exactly 2^(n-k) error patterns, including the all-zero error pattern.
Illustration 1: the (7,4) Hamming code.
Number of correctable error patterns = 2^3 = 8; number of single-error patterns = 7. Therefore, all single-error patterns, and only single-error patterns, can be corrected. (Recall the Hamming bound, and the fact that Hamming codes are perfect.)

Standard Array Decoding (cont'd)
Illustration 2: the (6,3) code defined by its H matrix.

Standard Array Decoding (cont'd)
This code can correct all single errors and one double-error pattern.

The Syndrome
Standard array decoding requires huge storage memory and long searching times. Define the syndrome s = v HT = (c + e) HT = e HT. The syndrome depends only on the error pattern, not on the transmitted codeword; therefore, each coset in the array is associated with a unique syndrome.
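A quick numerical check that s = e HT is independent of the transmitted codeword; G and H below follow the message-bits-first arrangement of the earlier (7,4) encoding example:

```python
import numpy as np

# (7,4) code with message bits first: G = [I4 | P], H = [PT | I3].
G = np.array([[1, 0, 0, 0, 1, 0, 1],
              [0, 1, 0, 0, 1, 1, 1],
              [0, 0, 1, 0, 1, 1, 0],
              [0, 0, 0, 1, 0, 1, 1]])
H = np.array([[1, 1, 1, 0, 1, 0, 0],
              [0, 1, 1, 1, 0, 1, 0],
              [1, 1, 0, 1, 0, 0, 1]])

e = np.array([0, 0, 0, 0, 0, 1, 0])  # one fixed error pattern
syndromes = set()
for m in ([0, 0, 0, 0], [1, 0, 1, 1], [1, 1, 1, 1]):
    c = np.mod(np.array(m) @ G, 2)            # any transmitted codeword
    v = np.mod(c + e, 2)                      # received vector
    syndromes.add(tuple(np.mod(v @ H.T, 2)))  # s = v.HT = e.HT
print(len(syndromes))  # 1: the syndrome does not depend on the codeword
```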

The Syndrome (cont'd)

Syndrome Decoding
Decoding procedure:
1. For the received vector v, compute the syndrome s = v HT.
2. Using the syndrome table, identify the error pattern e.
3. Add e to v to recover the transmitted codeword c.
Example: a received v with syndrome s = 001 maps, through the table, to its coset leader e, and c = v + e.
Syndrome decoding reduces the storage memory from n 2^n bits to 2^(n-k) (2n-k) bits, and it reduces the searching time considerably.
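The three-step procedure can be sketched end-to-end for the (7,4) code, with the syndrome table precomputed from the correctable error patterns (the zero pattern plus the seven single errors):

```python
import numpy as np

# (7,4) code, message bits first; H columns are all nonzero 3-bit patterns.
G = np.array([[1, 0, 0, 0, 1, 0, 1],
              [0, 1, 0, 0, 1, 1, 1],
              [0, 0, 1, 0, 1, 1, 0],
              [0, 0, 0, 1, 0, 1, 1]])
H = np.array([[1, 1, 1, 0, 1, 0, 0],
              [0, 1, 1, 1, 0, 1, 0],
              [1, 1, 0, 1, 0, 0, 1]])

# Step 0: table mapping each syndrome to its correctable error pattern.
patterns = [np.zeros(7, dtype=int)] + [np.eye(7, dtype=int)[i] for i in range(7)]
table = {tuple(np.mod(e @ H.T, 2)): e for e in patterns}

def decode(v):
    s = tuple(np.mod(v @ H.T, 2))  # 1. compute the syndrome
    e = table[s]                   # 2. look up the error pattern
    return np.mod(v + e, 2)        # 3. add e to v to recover c

c = np.mod(np.array([1, 0, 1, 1]) @ G, 2)
v = np.mod(c + np.eye(7, dtype=int)[2], 2)  # flip bit 2 in transit
print((decode(v) == c).all())               # True
```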

Decoding of Hamming Codes
Consider a single-error pattern e(i), where i is a number determining the position of the error. Then s = e(i) HT = HiT, the transpose of the i-th column of H.

Decoding of Hamming Codes (cont'd)
That is, the (transpose of the) i-th column of H is the syndrome corresponding to a single error in the i-th position.
Decoding rule:
1. Compute the syndrome s = v HT.
2. Locate the error, i.e., find the i for which sT = Hi.
3. Invert the i-th bit of v.

Hardware Implementation
Let v = v0 v1 v2 v3 v4 v5 v6 and s = s0 s1 s2. From the H matrix:
s0 = v0 + v3 + v5 + v6
s1 = v1 + v3 + v4 + v5
s2 = v2 + v4 + v5 + v6
From the table of syndromes and their corresponding correctable error patterns, a truth table can be constructed. A combinational logic circuit with s0, s1, s2 as inputs and e0, e1, e2, e3, e4, e5, e6 as outputs can then be designed.
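The three syndrome equations translate directly into XOR logic; a software sketch standing in for the XOR-gate tree of the hardware circuit:

```python
# Syndrome equations from the slide, written as XOR logic.
def syndrome(v):
    v0, v1, v2, v3, v4, v5, v6 = v
    s0 = v0 ^ v3 ^ v5 ^ v6
    s1 = v1 ^ v3 ^ v4 ^ v5
    s2 = v2 ^ v4 ^ v5 ^ v6
    return (s0, s1, s2)

# A valid codeword gives (0, 0, 0); a single flipped bit vi gives the
# nonzero column of H for position i, which the correction logic decodes.
print(syndrome([0, 0, 0, 0, 0, 0, 0]))  # (0, 0, 0)
print(syndrome([0, 0, 0, 0, 0, 1, 0]))  # (1, 1, 1): error in v5
```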

Decoding Circuit for the (7,4) Hamming Code
(The circuit operates on the received vector, denoted v rather than r.)