Block codes

A block code encodes each message individually into a codeword of fixed length n. Input and output symbols belong to an alphabet Q of cardinality q. The set of Q-ary n-tuples is denoted by Q^n. The Hamming distance is d(x,y) = |{1 ≤ i ≤ n | x_i ≠ y_i}|: the more x and y differ, the less likely it is that one will be received when the other was transmitted.

Hamming distance – the minimum number of substitutions required to change one string into the other.

Since d(x,x) = 0, d(x,y) = d(y,x), and the triangle inequality d(x,z) ≤ d(x,y) + d(y,z) holds for any x, y and z in Q^n, the Hamming distance is a distance function (a metric) on Q^n. To maximize error protection, codewords must have sufficient mutual distance.
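
For illustration, a minimal Python sketch of this distance function (plain Python, no coding-theory library assumed):

def hamming_distance(x, y):
    """Number of coordinates in which the words x and y differ."""
    if len(x) != len(y):
        raise ValueError("words must have the same length n")
    return sum(a != b for a, b in zip(x, y))

# Example over the binary alphabet Q = {0, 1}, n = 7
print(hamming_distance("1011010", "1001011"))   # 2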

A block code C of length n is any subset of Q^n; the elements of C are called codewords, and C is called trivial when |C| = 1. The minimum distance d of a non-trivial code C is given by d = min{d(x,y) | x ∈ C, y ∈ C, x ≠ y}. The error-correcting capability e is defined by e = ⌊(d−1)/2⌋. Codes can also be used for error detection.
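
A small sketch of these definitions; the 4-word binary code C below is an invented example, not one from the slides:

from itertools import combinations

def hamming_distance(x, y):
    return sum(a != b for a, b in zip(x, y))

def minimum_distance(C):
    """d = min{ d(x,y) | x, y in C, x != y } for a non-trivial code C."""
    return min(hamming_distance(x, y) for x, y in combinations(C, 2))

C = ["00000", "01011", "10101", "11110"]   # example binary block code, n = 5
d = minimum_distance(C)
e = (d - 1) // 2                           # error-correcting capability, floor((d-1)/2)
print(d, e)                                # 3 1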

Let t be an integer with t ≤ e. By the definition of d, no word in Q^n can be at distance at most t from one codeword while at the same time being at distance at most d−t−1 from some other codeword. A code C with minimum distance d can also be used to correct errors and erasures simultaneously: fix e and f such that 2e + f < d.

A received word r with at most f erasures cannot have distance ≤ e from two different codewords c_1 and c_2 on the remaining n − f coordinates, since this would imply d(c_1,c_2) ≤ 2e + f < d. It follows that C is e-error, f-erasure correcting.

Hamming bound (sphere-packing bound): a q-ary code C of length n with error-correcting capability e satisfies |C| · Σ_(i=0..e) (n choose i)(q−1)^i ≤ q^n.
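
A quick numeric check of the sphere-packing count; the parameters q = 2, n = 7, e = 1 are only an example (they happen to meet the bound with equality, as the binary [7,4] Hamming code does):

from math import comb

def hamming_bound(q, n, e):
    """Largest M allowed by the sphere-packing (Hamming) bound:
    M * sum_{i=0}^{e} C(n,i)(q-1)^i <= q^n."""
    volume = sum(comb(n, i) * (q - 1) ** i for i in range(e + 1))
    return q ** n // volume

print(hamming_bound(2, 7, 1))   # 16: the [7,4] binary Hamming code is perfect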

The rate of a q-ary code C of length n is defined by R = (log_q |C|) / n; for a code with q^k codewords this is R = k/n.

Singleton bound: if C is a q-ary (n,M,d) code, where M is the cardinality of C, then M ≤ q^(n−d+1). Codes with parameters (n, q^(n−d+1), d), i.e. codes meeting the Singleton bound with equality, are called maximum-distance-separable (MDS) codes.

Gilbert-Varshamov bound: there exists a q-ary code of length n and minimum distance d with at least q^n / Σ_(i=0..d−1) (n choose i)(q−1)^i codewords.
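
A small numeric sketch of the rate and of the two bounds above, with illustrative parameters only:

from math import comb, log

def rate(q, n, M):
    """R = log_q(M) / n."""
    return log(M, q) / n

def singleton_bound(q, n, d):
    """M <= q^(n-d+1); MDS codes meet this with equality."""
    return q ** (n - d + 1)

def gilbert_varshamov_bound(q, n, d):
    """There exists a q-ary (n, M, d) code with
    M >= q^n / sum_{i=0}^{d-1} C(n,i)(q-1)^i."""
    denom = sum(comb(n, i) * (q - 1) ** i for i in range(d))
    return q ** n / denom

q, n, d = 2, 7, 3
print(singleton_bound(q, n, d))          # 32
print(gilbert_varshamov_bound(q, n, d))  # ~4.41, so a code with >= 5 codewords exists
print(rate(q, n, 16))                    # rate of a code with M = 16 codewords, ~0.571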

2.2 Linear codes. Most of the important codes are special types of so-called linear codes. Linear codes are important because they have a very concise description, very nice properties, very easy encoding and, in principle, quite easy decoding.

Linear codes are special sets of words of length n over an alphabet Q = {0,…,q−1}, where q is a power of a prime. From now on, the set of words Q^n will be considered as the vector space V(n,q) of vectors of length n with elements from {0,…,q−1}. For prime q the arithmetic operations are taken modulo q, and the set {0,…,q−1} with the operations + and · modulo q is the Galois field GF(q); for prime powers q = p^m with m > 1, GF(q) is constructed differently and is not the integers modulo q.
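
A minimal sketch of the component-wise arithmetic in V(n,q) for a prime q (q = 5 here is an arbitrary choice):

q = 5                      # must be prime for "mod q" arithmetic to give the field GF(q)

def vec_add(x, y, q):
    """Component-wise addition in V(n, q)."""
    return [(a + b) % q for a, b in zip(x, y)]

def scalar_mul(c, x, q):
    """Scalar multiplication by c in GF(q)."""
    return [(c * a) % q for a in x]

x = [1, 4, 0, 2]
y = [3, 3, 2, 4]
print(vec_add(x, y, q))      # [4, 2, 2, 1]
print(scalar_mul(3, x, q))   # [3, 2, 0, 1]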

The error pattern caused by the channel is the vector e with r = c + e, where c is the transmitted codeword and r the received word. The Hamming weight w(x) of a vector x is the number of non-zero coordinates in x, so w(x) = d(x,0).

With Q^n having the structure of the vector space V_n(q), the most important general class of codes can be defined. A linear code C of length n is any linear subspace of V_n(q). Linear codes are also called "group codes". If C has dimension k and minimum distance d, one says that C is an [n,k,d] code.

For linear codes, even without further structure, the minimum distance is easy to characterize: the minimum distance of a linear code C is equal to the minimum non-zero weight among its codewords.
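
This property is easy to check on a small case; the generator matrix below defines an assumed [5,2] binary code used only for illustration:

from itertools import product, combinations

# Assumed example: generator matrix of a small binary [5,2] linear code
G = [[1, 0, 1, 1, 0],
     [0, 1, 0, 1, 1]]

n, k = 5, 2
codewords = []
for m in product([0, 1], repeat=k):                       # all 2^k messages
    c = [sum(m[i] * G[i][j] for i in range(k)) % 2 for j in range(n)]
    codewords.append(tuple(c))

weights = [sum(c) for c in codewords if any(c)]           # non-zero weights
dists = [sum(a != b for a, b in zip(x, y))
         for x, y in combinations(codewords, 2)]
print(min(weights) == min(dists))                         # True for any linear code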

Generator matrix. An (n,k) linear code is a k-dimensional subspace of the vector space of all binary n-tuples, so it is possible to find k linearly independent codewords g_0, g_1, …, g_(k−1) that span this space. Any codeword can then be written as a linear combination of these basis vectors: c = m_0 g_0 + m_1 g_1 + ··· + m_(k−1) g_(k−1).

We can arrange these k linearly independent codewords as the rows of a k × n generator matrix G, so that encoding becomes a matrix multiplication: c = mG, where m = (m_0, m_1, …, m_(k−1)) is the message.
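
Encoding is then a single vector-matrix product over GF(2). As a worked example, the matrix below is one systematic generator G = [P | I] (see the next slide) of a binary [7,4] Hamming code; other equivalent choices of P exist:

# Systematic generator matrix G = [P | I] of a binary [7,4] Hamming code (example choice)
P = [[1, 1, 0],
     [0, 1, 1],
     [1, 1, 1],
     [1, 0, 1]]
I4 = [[1 if i == j else 0 for j in range(4)] for i in range(4)]
G = [P[i] + I4[i] for i in range(4)]                # 4 x 7 matrix [P | I]

def encode(m, G):
    """c = mG over GF(2): each code bit is a parity of message bits."""
    n = len(G[0])
    return [sum(m[i] * G[i][j] for i in range(len(m))) % 2 for j in range(n)]

m = [1, 0, 1, 1]                                    # message of k = 4 bits
c = encode(m, G)
print(c)                                            # [1, 0, 0, 1, 0, 1, 1]: 3 parity bits, then the message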

Definition: if every codeword contains the corresponding information sequence unchanged, the code is called systematic. It is convenient to group the information bits either at the end or at the beginning of the codeword. In this case the generator matrix can be split into two sub-matrices, G = [P | I], where I is the k × k identity matrix and P is the parity sub-matrix.

Parity check matrix. For any k × n matrix G with k linearly independent rows, there exists an (n−k) × n matrix H with n−k linearly independent rows such that any vector in the row space of G is orthogonal to the rows of H, and vice versa. So we can say: an n-tuple c is a codeword generated by G if and only if cH^T = 0. The matrix H is called the parity check matrix. The rows of H span a code called the dual code of the code generated by G, and GH^T = 0.

For the systematic code G = [P_(k×(n−k)) | I_(k×k)], the parity check matrix is simply H = [I_((n−k)×(n−k)) | P^T_((n−k)×k)]. An n-tuple r is a codeword if rH^T = 0. The product S = rH^T is a 1×(n−k) vector called the syndrome of r. If the syndrome is not zero, a transmission error is declared.
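
Continuing the same assumed [7,4] example, H = [I | P^T] can be built from P and used to compute syndromes:

# Same example P as in the encoding sketch above
P = [[1, 1, 0],
     [0, 1, 1],
     [1, 1, 1],
     [1, 0, 1]]
n, k = 7, 4
I3 = [[1 if i == j else 0 for j in range(n - k)] for i in range(n - k)]
PT = [[P[i][j] for i in range(k)] for j in range(n - k)]     # transpose of P
H = [I3[j] + PT[j] for j in range(n - k)]                    # (n-k) x n matrix [I | P^T]

def syndrome(r, H):
    """S = r H^T over GF(2); S = 0 exactly when r is a codeword."""
    return [sum(r[j] * H[i][j] for j in range(len(r))) % 2 for i in range(len(H))]

c = [1, 0, 0, 1, 0, 1, 1]        # codeword from the encoding sketch
print(syndrome(c, H))            # [0, 0, 0] -> no error detected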

If r = c + e is the received sequence, then S = rH^T = (c + e)H^T = cH^T + eH^T = eH^T, so the syndrome depends only on the error pattern.
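
Since S = eH^T, a single-bit error produces a syndrome equal to the corresponding column of H, which gives the simplest form of syndrome decoding. A self-contained sketch with the same assumed [7,4] matrices as above:

# H = [I | P^T] for the same assumed [7,4] Hamming example
H = [[1, 0, 0, 1, 0, 1, 1],
     [0, 1, 0, 1, 1, 1, 0],
     [0, 0, 1, 0, 1, 1, 1]]

def syndrome(r, H):
    return [sum(r[j] * H[i][j] for j in range(len(r))) % 2 for i in range(len(H))]

c = [1, 0, 0, 1, 0, 1, 1]         # a codeword
r = c[:]
r[4] ^= 1                         # channel flips bit 4: r = c + e

S = syndrome(r, H)
print(S)                          # [0, 1, 1] -> equals column 4 of H

# For a single error, the syndrome equals the column of H at the error position
for pos in range(len(r)):
    if [H[i][pos] for i in range(len(H))] == S:
        r[pos] ^= 1               # flip the erroneous bit back
        break
print(r == c)                     # True: the error has been corrected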