DIGITAL COMMUNICATION Error Correction A.J. Han Vinck.

DIGITAL COMMUNICATION Error - Correction A.J. Han Vinck

Position of Error Control Coding
[Block diagram: uncoded link: k input bits → signal generator → channel → detector → k output bits. Coded link: ECC coding first expands the k input bits to n bits before the signal generator, and the detector becomes a detector/decoder. Combining the coding with the signal generator gives coded modulation.]

Encoding: replace a message of k information bits by a unique n-bit word, called a code word. The collection of 2^k code words is called a CODE.

Error control code with rate k/n
[Block diagram: the encoder maps each k-bit message to a code word of length n; the code book contains all 2^k code words. The channel delivers the received word of length n to the decoder, which outputs a message estimate.]

A pictorial view: the 2^k code words form a small subset of all 2^n binary vectors.

Decoder: compare the received word with all possible code words, and decode to the code word with the minimum number of differences („most likely“).

Example: comparing the received word with all code words, the best guess is the code word with only 1 difference.
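The minimum-difference rule can be sketched in a few lines of Python; the code book below is an assumed illustrative example (the slide's own bit patterns did not survive transcription):

```python
# Brute-force minimum-distance decoder: compare the received word with
# every code word and return the closest one ("most likely").
# The code book is an assumed example, not the slide's.

def hamming_distance(x, y):
    """Number of positions where the binary strings x and y differ."""
    return sum(a != b for a, b in zip(x, y))

def decode(received, codebook):
    """Code word with the minimum number of differences to `received`."""
    return min(codebook, key=lambda c: hamming_distance(received, c))

codebook = ["00000", "01101", "10011", "11110"]
received = "01001"                 # "01101" with one bit flipped
best = decode(received, codebook)  # differs from `received` in 1 place
```

Note that the decoder compares against all 2^k code words, which is exactly the complexity problem raised on the next slide.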

We have some problems
Mapping from information to code words:
– generation of code words (mutually far apart)
– storing of the code book (2^k code words of length n)
Decoding:
– comparing a received word with all possible code words

Definitions
The Hamming distance between x and y, d_H := d(x, y), is the # of positions where x_i ≠ y_i.
The minimum distance of a code C is d_min = min { d(x, y) | x ∈ C, y ∈ C, x ≠ y }.
The Hamming weight of a vector x, w(x) := d(x, 0), is the # of positions where x_i ≠ 0.

Example
Hamming distance: d(1001, 0111) = 3
Minimum distance of {101, 011, 110}: 2
Hamming weight: w( ) = 4
Hamming was a famous scientist at Bell Labs and the inventor of the Hamming code.
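The three definitions can be checked directly; a minimal sketch using the example values from the slide:

```python
from itertools import combinations

def d_H(x, y):
    """Hamming distance: # of positions where x and y differ."""
    return sum(a != b for a, b in zip(x, y))

def weight(x):
    """Hamming weight: # of nonzero positions, i.e. d(x, 0)."""
    return d_H(x, "0" * len(x))

def d_min(code):
    """Minimum distance: smallest d_H over all pairs of distinct words."""
    return min(d_H(x, y) for x, y in combinations(code, 2))
```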

Performance
A code with minimum distance d_min is capable of correcting t errors if d_min ≥ 2t + 1.
Proof: if at most t errors occur, then since d_min ≥ 2t + 1 every incorrect code word has at least t + 1 differences with the received word, while the transmitted code word has at most t.

[Picture: two code words A and B at 2t + 1 differences; a word at ≤ t differences from A must be at > t differences from B.]

LINEAR CODES
Binary codes are called linear iff the component-wise modulo-2 sum of any two code words is again a code word. Consequently, the all-zero word is a code word.

LINEAR CODE GENERATOR
The code words are linear combinations of the rows of a binary generator matrix G with dimensions k × n; G must have rank k!
Example: consider k = 3, n = 6 with the generator matrix G of the slide; then (1,0,1)G = (0,0,1,0,1,1).
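Encoding as "linear combinations of rows" can be sketched as follows. The slide's own 3 × 6 matrix did not survive transcription, so the systematic G = [I_3 | P] below is an assumed example:

```python
# Encoding with a generator matrix over GF(2): the code word is the
# modulo-2 sum of the rows of G selected by the info bits.
# G is an assumed example, not the slide's own matrix.
G = [
    [1, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 1],
    [0, 0, 1, 1, 0, 1],
]

def encode(x, G):
    """c = x G over GF(2): XOR together the rows of G where x_i = 1."""
    c = [0] * len(G[0])
    for xi, row in zip(x, G):
        if xi:
            c = [a ^ b for a, b in zip(c, row)]
    return c

c = encode([1, 0, 1], G)   # rows 0 and 2 of G, summed modulo 2
```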

Systematic codes
In general, let the matrix G be written as G = [ I_k P ], here with k = 3, n = 6.
The code generated is linear and systematic, has minimum distance 3, and the efficiency of the code is 3/6.

Example (optimum): single parity check code
d_min = 2, k = n − 1, G = [ I_{n−1} P ] with P a single all-ones column.
All code words have even weight!
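A minimal sketch of the single parity check encoder (the parity bit is the mod-2 sum of the k = n − 1 info bits):

```python
# Single parity check encoder: append one bit equal to the mod-2 sum
# of the info bits, so every code word has even Hamming weight.
def spc_encode(info_bits):
    return info_bits + [sum(info_bits) % 2]

c = spc_encode([1, 0, 1, 1])   # parity bit 1 gives total weight 4
```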

Example (optimum): repetition code
d_min = n, k = 1, G = [ 1 1 ⋯ 1 ]

Equivalent codes
Any linear code generator can be brought into "systematic form" G_sys = [ I_k P ] by elementary row and column operations applied to the k × n non-systematic form.
Note: the elementary operations have an inverse.
Homework: give an example for k = 4 and n = 7.

Bounds on minimum distance (Hamming)
Linear codes have a systematic equivalent G, so the minimum Hamming weight is ≤ n − k + 1 (Singleton bound).
Hamming bound: (# code words) × (# correctable error patterns) ≤ 2^n.
Homework: show that Hamming codes satisfy the Hamming bound with equality!
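The Hamming bound can be evaluated numerically; for the (7,4) Hamming code with t = 1 it holds with equality, which is the claim of the homework:

```python
from math import comb

# Sphere-packing (Hamming) bound: 2^k * sum_{i<=t} C(n, i) <= 2^n.
# Checked here for the (7,4) Hamming code with t = 1, where it holds
# with equality (a "perfect" code).
def hamming_bound_lhs(n, k, t):
    correctable_patterns = sum(comb(n, i) for i in range(t + 1))
    return 2**k * correctable_patterns

lhs = hamming_bound_lhs(7, 4, 1)   # 16 * (1 + 7) = 128
```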

Bounds on minimum distance (Gilbert)
Start: select a code word from the 2^n possible words.
1. Remove all words at distance < d_min from the selected code word.
2. Select one of the remaining words as the next code word.
3. Go to 1 unless no possibilities are left.
Homework: show that log M / n ≥ 1 − h(2p) for d_min − 1 = 2t ≈ 2pn, p < ¼.

[Plot: rate R = log_2 M / n versus p ≈ t/n, showing the Hamming bound 1 − h(p), the Gilbert bound 1 − h(2p), and the Singleton bound.]

Property
The set of distances from all code words to the all-zero code word is the same as to any other code word.
Proof: d(x, y) = d(x ⊕ x, y ⊕ x) = d(0, z); by linearity z = y ⊕ x is also a code word.

Thus! The determination of the minimum distance of a linear code is equivalent to the determination of the minimum Hamming weight of its nonzero code words. The complexity of this operation is proportional to the # of code words.

Example
Consider the code words
– 00000
– 01101
– 10011
– 11110
Homework: determine the minimum distance.
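The weight/distance equivalence from the previous slide can be checked on these four code words (they are closed under component-wise XOR, so the code is indeed linear):

```python
from itertools import combinations

code = ["00000", "01101", "10011", "11110"]

def dist(x, y):
    return sum(a != b for a, b in zip(x, y))

# Minimum distance over all pairs of distinct code words...
pairwise_min = min(dist(x, y) for x, y in combinations(code, 2))
# ...equals the minimum Hamming weight of the nonzero code words,
# by the linearity argument of the previous slide.
min_weight = min(dist(x, "00000") for x in code if x != "00000")
```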

Linear code generator (polynomial form)
I(X) represents the k-bit info vector (i_0, i_1, ..., i_{k−1}); g(X) is a binary polynomial of degree n − k.
THEN: the code vector C of length n can be described by C(X) = I(X) g(X), all operations modulo 2.

Example: k = 4, n = 7 and g(X) = 1 + X + X^3.
For the information vector (1,0,1,0):
C(X) = (1 + X^2)(1 + X + X^3) = 1 + X + X^2 + X^5 ↔ (1,1,1,0,0,1,0).
The encoding procedure in (k × n) matrix form: c = I · G, where the rows of G hold the coefficients of g(X), X g(X), ..., X^{k−1} g(X).
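The polynomial encoding C(X) = I(X) g(X) over GF(2) can be sketched as plain coefficient-list multiplication; it reproduces the worked example above:

```python
# Coefficient-list multiplication over GF(2): index i holds the
# coefficient of X^i, and additions are XORs.
def poly_mul_gf2(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] ^= ai & bj
    return out

g = [1, 1, 0, 1]            # g(X) = 1 + X + X^3
info = [1, 0, 1, 0]         # I(X) = 1 + X^2, info vector (1,0,1,0)
c = poly_mul_gf2(info, g)   # C(X) = 1 + X + X^2 + X^5
```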

Implementation with a shift register
[Diagram: a shift register implementing multiplication by g(X) = 1 + X + X^3; the info bits i_{k−1}, ..., i_2, i_1, i_0 are shifted in.]
Homework: give a description of the shift control to obtain the result.

Some remarks
Generators for different k and n are constructed using mathematics and listed in many textbooks. What remains is the decoding!

Hamming codes
Minimum distance 3.
Construction: G = [ I_k P ], where the rows of P are all m-tuples of Hamming weight > 1, with m = n − k.
Check that the minimum distance is 3! Give the efficiency of the code.

Example: k = 4, n = 7
[The 4 × 7 generator matrix G is shown on the slide.]

Syndrome decoding
Let G = [ I_k P ]; then construct H^T = [ P over I_{n−k} ], i.e. P stacked on top of the identity.
For all code words c = xG: cH^T = xGH^T = 0.
Hence, for a received noisy vector (c ⊕ n): (c ⊕ n)H^T = cH^T ⊕ nH^T = nH^T =: S.
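A sketch of these syndrome relations, using an assumed 3 × 3 matrix P (the slide's own matrices did not survive transcription):

```python
# H^T for a systematic G = [I_k | P] stacks P on top of I_{n-k}; then
# c H^T = 0 for every code word, and S = (c XOR n) H^T = n H^T.
# P is an assumed example, not the slide's own matrix.
P = [[1, 1, 0],
     [0, 1, 1],
     [1, 0, 1]]
k, r = len(P), len(P[0])

def eye(m):
    return [[int(i == j) for j in range(m)] for i in range(m)]

G = [i_row + p_row for i_row, p_row in zip(eye(k), P)]
HT = P + eye(r)                    # n rows, n-k columns

def mul_gf2(v, M):
    """Row vector times matrix over GF(2)."""
    return [sum(vi * row[j] for vi, row in zip(v, M)) % 2
            for j in range(len(M[0]))]

c = mul_gf2([1, 1, 0], G)          # a code word: c H^T = 0
noise = [0, 0, 0, 1, 0, 0]         # single error in position 3
received = [a ^ b for a, b in zip(c, noise)]
S = mul_gf2(received, HT)          # syndrome depends only on `noise`
```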

Example
[Slide: a concrete G and H^T, a message x, its code word c with cH^T = 0, an error vector n, the received word c ⊕ n, and its syndrome S = (c ⊕ n)H^T.]
Obvious fast decoder: precalculate at the receiver the syndromes of all correctable errors.

In system form
[Block diagram: from the received word c ⊕ n, calculate the syndrome S = (c ⊕ n)H^T; look it up among the precalculated syndromes to obtain the error estimate n*; output (c ⊕ n) ⊕ n*. When n = n*, n ⊕ n* = 0 and the code word is recovered.]
Homework: choose parameters that can be implemented.

Reed-Solomon Codes (CD, DVD)
Structure: k information symbols and n − k check symbols, each symbol m bits wide.
Properties: minimum distance = n − k + 1 (symbols); length n = 2^m − 1.

General remarks
The general problem is the decoding. RS codes can be decoded using Euclid's algorithm or the Berlekamp-Massey algorithm.

Why error correction?
Systems with errors can be made almost error free: CD and DVD would not work without RS codes.

Why error correction?
In ARQ systems, system collapse can be postponed!
[Plot: throughput versus channel error probability (0 to 1); the uncoded system starts at 100% and collapses early, the coded system starts at k/n % and holds out longer.]

For Additive White Gaussian Noise Channels
Error probability p ~ e^(−Es/N0), where Es is the energy per transmitted symbol and N0 the one-sided noise power spectral density.
For an uncoded system: p ~ e^(−Eb/N0).
For a coded system with minimum distance d: nEs = kEb and thus p_c ~ e^(−d·(k/n)·Eb/N0).
CONCLUSION: make the coding gain d·k/n > 1.
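The coding-gain arithmetic, using the (7,4) Hamming code (d = 3) as an assumed example:

```python
from math import exp

# Coding gain sketch: the coded error exponent is d*(k/n)*Eb/No versus
# Eb/No uncoded, so coding helps whenever d*k/n > 1.
# The (7,4) Hamming code with d = 3 is an assumed example.
d, k, n = 3, 4, 7
coding_gain = d * k / n             # 12/7, greater than 1

EbNo = 4.0                          # arbitrary illustrative value
p_uncoded = exp(-EbNo)              # p ~ e^(-Eb/No)
p_coded = exp(-coding_gain * EbNo)  # p_c ~ e^(-d*(k/n)*Eb/No)
```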