CHANNEL CODING: REED-SOLOMON CODES

General Description: Reed-Solomon (R-S) codes are nonbinary cyclic codes with symbols made up of m-bit sequences, where m is any positive integer having a value greater than 2. The most conventional R-S (n, k) code has

(n, k) = (2^m - 1, 2^m - 1 - 2t)

where k is the number of data symbols being encoded and n is the total number of code symbols in the encoded block. (Note that the natural block length of these codes is n = 2^m - 1; they can be extended to n = 2^m or n = 2^m + 1.) Here t is the symbol-error-correcting capability of the code, and n - k = 2t is the number of parity (redundant) symbols that must be used to correct t errors, where t is obtained from

t = floor((dmin - 1)/2) = floor((n - k)/2)

Reed-Solomon codes achieve the largest possible dmin of any linear code; for R-S codes the minimum distance is dmin = n - k + 1. The encoding algorithm expands a block of k symbols to n symbols by appending n - k redundant symbols.
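As a quick illustration of these relations, here is a minimal Python sketch (not from the slides; the function name rs_parameters is ours) that computes n, k and dmin from the symbol size m and the correcting capability t:

```python
# Minimal sketch: basic Reed-Solomon parameters over GF(2^m).
def rs_parameters(m: int, t: int):
    """Return (n, k, dmin) for a conventional R-S code with m-bit symbols."""
    n = 2 ** m - 1            # block length in symbols
    k = n - 2 * t             # data symbols; n - k = 2t parity symbols
    dmin = n - k + 1          # R-S codes meet the Singleton bound
    return n, k, dmin

# Example: m = 8, t = 16 gives the widely used (255, 223) code with dmin = 33.
print(rs_parameters(8, 16))   # -> (255, 223, 33)
```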

Example: R-S codes perform well against burst noise, as the following example shows. Consider an (n, k) = (255, 247) R-S code in which each symbol is made up of m = 8 bits (such symbols are typically referred to as bytes). Since n - k = 8, this code can correct any t = 4 symbol errors in a block of 255 symbols. Imagine a noise burst lasting 25 bit durations and disturbing one block of data during transmission, as illustrated in the figure. A burst of 25 contiguous bit errors must disturb exactly 4 symbols, because three 8-bit symbols can hold only 24 bits. The R-S decoder will correct these 4 symbol errors: when the decoder corrects a symbol, it replaces the incorrect symbol with the correct one, whether the error was caused by a single corrupted bit or by all 8 bits being corrupted.
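A small Python check of the burst argument (an illustration, not from the slides; symbols_touched is our own helper): for every possible alignment of the burst within a symbol, a 25-bit burst lands on exactly 4 of the 8-bit symbols.

```python
import math

# For each possible starting bit offset inside a symbol, count how many
# m-bit symbols a contiguous burst of burst_bits errors can touch.
def symbols_touched(burst_bits: int, m: int) -> set:
    return {math.ceil((offset + burst_bits) / m) for offset in range(m)}

# A 25-bit burst always corrupts exactly 4 bytes, which the (255, 247)
# R-S code (t = 4) can still correct.
print(symbols_touched(25, 8))   # -> {4}
```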

Applications in modern digital communication systems: R-S codes are used in many digital appliances such as CDs, and in space and satellite communication; for example, the (255, 223) R-S code is the NASA standard code for satellite and deep-space communications.

R-S performance as a function of symbol size: Error-correcting codes become more efficient (error performance improves) as the code block size increases. Assume the code rate is held constant at 7/8 while the block size increases from n = 32 symbols (m = 5 bits per symbol) to n = 256 symbols (m = 8 bits per symbol); the block size thus grows from 160 bits to 2048 bits. The performance is as shown in the figure.

R-S performance as a function of redundancy: As the redundancy n - k of an R-S code increases (i.e., as the code rate decreases), its implementation grows in complexity and the required bandwidth increases. The benefit of increasing redundancy is improved bit-error performance, as can be seen in the figure below, where the code length n is held constant at 64 while the number of data symbols decreases from k = 60 to k = 4 (the redundancy increases from 4 symbols to 60 symbols).

FINITE FIELD CONCEPT: In order to understand the encoding and decoding principles of nonbinary codes such as Reed-Solomon (R-S) codes, it is necessary to understand finite fields, known as Galois fields (GF). For any prime number p there exists a finite field, denoted GF(p), containing p elements. GF(p^m) is called an extension field of GF(p), and GF(p) is a subfield of GF(p^m). Symbols from the extension field GF(2^m) are used in the construction of R-S codes; the binary field GF(2) is a subfield of the extension field GF(2^m). Besides the numbers 0 and 1, there are additional unique elements in the extension field, which are represented with a new symbol α. Each nonzero element of the field is a power of α and can be written as a polynomial a_i(X):

α^i = a_i(X) = a_i,0 + a_i,1 X + … + a_i,m-1 X^(m-1),   for i = 0, 1, 2, …, 2^m - 2

A good reason for using this notation is that it eases the handling of encoding and decoding, since each codeword consists of n symbols (n·m bits).

Example: (7, 3) R-S codewords over GF(2^3) (a double-symbol-error-correcting R-S code). For m = 3, the field GF(2^3) has 8 elements (zero plus 7 nonzero elements), so GF(2^3) = {0, α^0, α^1, α^2, …, α^6}, with α^i = a_i(X) = a_i,0 + a_i,1 X + a_i,2 X^2. To determine the coefficients of each α^i we need the concept of a primitive polynomial. The class of primitive polynomials is of interest because such polynomials define the finite fields GF(2^m), which in turn are needed to define R-S codes. The primitive polynomial is read directly from the table below for the given value of m.

Continuing the (7, 3) R-S example: for m = 3 the corresponding primitive polynomial (from the previous table) is f(X) = 1 + X + X^3. The extension-field elements α^i can be represented by the contents of a binary linear feedback shift register (LFSR) formed from this primitive polynomial, by starting the circuit in any nonzero state (say 1 0 0) and performing a right shift at each clock time, as shown below:
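The LFSR diagram itself is not reproduced in this transcript. The following minimal Python sketch (our own illustration) produces the same sequence of field elements algebraically: it repeatedly multiplies by α and reduces with α^3 = α + 1, which follows from f(X) = 1 + X + X^3. Elements are stored as 3-bit integers whose bit i is the coefficient of X^i, so the printed triples match the (a0 a1 a2) notation used later for the codeword symbols.

```python
# Generate the 7 nonzero elements of GF(2^3) defined by f(X) = 1 + X + X^3.
def gf8_elements():
    elems = []
    state = 0b001                     # alpha^0 = 1, register contents (1 0 0)
    for _ in range(7):
        elems.append(state)
        state <<= 1                   # multiply by alpha
        if state & 0b1000:            # reduce using alpha^3 = alpha + 1
            state = (state ^ 0b1000) ^ 0b011
    return elems

for i, e in enumerate(gf8_elements()):
    triple = ''.join(str((e >> b) & 1) for b in range(3))
    print(f"alpha^{i} = {triple}")
# alpha^0 = 100, alpha^1 = 010, alpha^2 = 001, alpha^3 = 110,
# alpha^4 = 011, alpha^5 = 111, alpha^6 = 101
```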

Summary: In any R-S coding we must first know n and k; from them we can calculate m. We then get the primitive polynomial from the table of primitive polynomials and implement the LFSR to obtain the values of all 2^m - 1 nonzero field elements. Multiplication and addition in GF(8) (i.e., m = 3, since GF(8) = GF(2^3)) can then be carried out with the following tables:
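The addition and multiplication tables themselves are not reproduced here, but the arithmetic they encode is compact: addition of two field elements is a bitwise XOR of their 3-bit representations, and multiplication adds exponents modulo 2^m - 1 = 7. A minimal sketch (our own helper names gf_add and gf_mul, with the exponent table generated by the LFSR above):

```python
EXP = [1, 2, 4, 3, 6, 7, 5]                 # EXP[i] = alpha^i as a 3-bit integer
LOG = {v: i for i, v in enumerate(EXP)}     # inverse (discrete log) table

def gf_add(a: int, b: int) -> int:
    return a ^ b                            # polynomial addition over GF(2)

def gf_mul(a: int, b: int) -> int:
    if a == 0 or b == 0:
        return 0
    return EXP[(LOG[a] + LOG[b]) % 7]       # add exponents modulo 7

# alpha^4 * alpha^5 = alpha^9 = alpha^2, and alpha^3 + alpha^5 = alpha^2:
print(gf_mul(EXP[4], EXP[5]) == EXP[2], gf_add(EXP[3], EXP[5]) == EXP[2])
```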

R-S ENCODING: As for all block codes there is a generator polynomial, which for R-S codes has the form

g(X) = g_0 + g_1 X + g_2 X^2 + … + g_(2t-1) X^(2t-1) + X^(2t)

Note that the generator polynomial is of degree 2t (that is, n - k, as for cyclic codes). We designate the roots of g(X) as α, α^2, …, α^(2t). Example: consider the R-S (7, 3) code: n = 7, k = 3, t = (n - k)/2 = 2, so its generator polynomial is of degree 2t = 4 and has the 4 roots α, α^2, α^3, α^4:

g(X) = (X - α)(X - α^2)(X - α^3)(X - α^4)

Carrying out the multiplications and additions with the tables above gives

g(X) = α^3 + α^1 X + α^0 X^2 + α^3 X^3 + X^4

Encoding in systematic form works as for cyclic codes: we shift the message polynomial m(X) into the k rightmost positions of the codeword polynomial and then append a parity polynomial p(X) in the leftmost n - k positions.
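A short Python sketch of this product (our own helper generator_poly, reusing the GF(8) tables above; polynomials are lists of 3-bit coefficients, lowest degree first). Note that in GF(2^m) subtraction equals addition, so (X - α^i) is implemented as (X + α^i).

```python
EXP = [1, 2, 4, 3, 6, 7, 5]
LOG = {v: i for i, v in enumerate(EXP)}
def gf_mul(a, b): return 0 if 0 in (a, b) else EXP[(LOG[a] + LOG[b]) % 7]

def generator_poly(two_t: int):
    """Return g(X) = (X + alpha)(X + alpha^2)...(X + alpha^two_t)."""
    g = [1]
    for i in range(1, two_t + 1):
        shifted = [0] + g                                # X * g(X)
        scaled = [gf_mul(EXP[i], c) for c in g] + [0]    # alpha^i * g(X)
        g = [a ^ b for a, b in zip(shifted, scaled)]     # (X + alpha^i) * g(X)
    return g

# Expect alpha^3 + alpha^1 X + alpha^0 X^2 + alpha^3 X^3 + X^4:
print(generator_poly(4))   # -> [3, 2, 1, 3, 1]
```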

The steps of R-S encoding in systematic form are therefore:
1. Multiply m(X) by X^(n-k).
2. Divide the result by g(X) and let the remainder be the parity polynomial p(X).
3. Form the codeword U(X) = p(X) + m(X) X^(n-k).
Continuing the (7, 3) R-S example, we encode the three-symbol message m(X) = α^1 + α^3 X + α^5 X^2 using g(X) = α^3 + α^1 X + α^0 X^2 + α^3 X^3 + X^4. We first multiply (upshift) the message polynomial by X^(n-k) = X^4, yielding α^1 X^4 + α^3 X^5 + α^5 X^6. We next divide this upshifted message polynomial by the generator polynomial g(X) to get the parity polynomial p(X) = α^0 + α^2 X + α^4 X^2 + α^6 X^3. The codeword polynomial can then be written as
U(X) = p(X) + m(X) X^(n-k) = α^0 + α^2 X + α^4 X^2 + α^6 X^3 + α^1 X^4 + α^3 X^5 + α^5 X^6
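These three steps can be checked with a small Python sketch (our own helper rs_encode_systematic; the GF(8) tables are as above, and coefficients are listed lowest degree first):

```python
EXP = [1, 2, 4, 3, 6, 7, 5]
LOG = {v: i for i, v in enumerate(EXP)}
def gf_mul(a, b): return 0 if 0 in (a, b) else EXP[(LOG[a] + LOG[b]) % 7]

def rs_encode_systematic(msg, g):
    """Systematic R-S encoding: parity = remainder of (msg * X^(n-k)) / g(X)."""
    parity_len = len(g) - 1
    rem = [0] * parity_len + list(msg)          # m(X) * X^(n-k)
    for i in range(len(msg) - 1, -1, -1):       # long division by the monic g(X)
        lead = rem[i + parity_len]
        if lead:
            for j, gj in enumerate(g):
                rem[i + j] ^= gf_mul(lead, gj)
    return rem[:parity_len] + list(msg)         # U(X) = p(X) + X^(n-k) m(X)

g = [3, 2, 1, 3, 1]                             # alpha^3, alpha^1, alpha^0, alpha^3, 1
msg = [2, 3, 7]                                 # alpha^1, alpha^3, alpha^5
print(rs_encode_systematic(msg, g))
# -> [1, 4, 6, 5, 2, 3, 7] = alpha^0, alpha^2, alpha^4, alpha^6, alpha^1, alpha^3, alpha^5
```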

Systematic Encoding with an (n - k)-Stage Shift Register, for the (7, 3) R-S example. NOTES:
1. The number of stages in the shift register is n - k = 4.
2. Each stage of the shift register holds a 3-bit symbol (each coefficient α^i is 3 bits).
3. Switch 1 is closed during the first k clock cycles to allow shifting the message symbols into the (n - k)-stage shift register.
4. Switch 2 is in the down position during the first k = 3 clock cycles to allow simultaneous transfer of the message symbols directly to an output register (not shown in the figure).
5. After transfer of the k-th message symbol to the output register, switch 1 is opened and switch 2 is moved to the up position.
6. The remaining n - k = 4 clock cycles clear the parity symbols from the shift register by moving them to the output register.
7. The total number of clock cycles is n, and the contents of the output register form the codeword polynomial U(X) = p(X) + X^(n-k) m(X).

Continuing the example: the operational steps (register contents) of this encoding circuit during the first k = 3 shifts are tabulated in the figure below; after the remaining n - k shifts the output register holds the complete codeword
U(X) = α^0 + α^2 X + α^4 X^2 + α^6 X^3 + α^1 X^4 + α^3 X^5 + α^5 X^6
     = (100) + (001)X + (011)X^2 + (101)X^3 + (010)X^4 + (110)X^5 + (111)X^6
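Since the register-content table is only available as an image in the original slides, here is a small Python simulation of the same circuit (our own function shift_register_encode; it follows the standard systematic encoder structure described in the notes above). Message symbols are fed in highest-order first; after k clocks the register holds the parity symbols.

```python
EXP = [1, 2, 4, 3, 6, 7, 5]
LOG = {v: i for i, v in enumerate(EXP)}
def gf_mul(a, b): return 0 if 0 in (a, b) else EXP[(LOG[a] + LOG[b]) % 7]

def shift_register_encode(msg_high_first, g):
    """g = [g0, ..., g_{2t-1}, 1]; returns the parity symbols p0..p_{2t-1}."""
    reg = [0] * (len(g) - 1)                       # the n - k register stages
    for sym in msg_high_first:
        feedback = sym ^ reg[-1]                   # input symbol + rightmost stage
        reg = ([gf_mul(feedback, g[0])] +
               [r ^ gf_mul(feedback, gi) for r, gi in zip(reg[:-1], g[1:-1])])
        print(reg)                                 # register contents after each shift
    return reg                                     # final contents = parity symbols

g = [3, 2, 1, 3, 1]
parity = shift_register_encode([7, 3, 2], g)       # alpha^5, alpha^3, alpha^1 fed in
print(parity)   # -> [1, 4, 6, 5] = alpha^0, alpha^2, alpha^4, alpha^6
```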

R-S DECODING: Example for the (7, 3) R-S code. Assume that during transmission this codeword becomes corrupted so that 2 symbols are received in error. The error pattern can be described in polynomial form as
e(X) = α^2 X^3 + α^5 X^4 = (000) + (000)X + (000)X^2 + (001)X^3 + (111)X^4 + (000)X^5 + (000)X^6
The received corrupted-codeword polynomial r(X) is then represented by the sum of the transmitted-codeword polynomial and the error-pattern polynomial:
r(X) = U(X) + e(X) = α^0 + α^2 X + α^4 X^2 + α^0 X^3 + α^6 X^4 + α^3 X^5 + α^5 X^6
NOTE: in this example we assumed the error values and error locations simply to illustrate the procedure. In an actual R-S decoder we must determine the error locations and the error values (the correct symbol values at those locations) from the received word alone.

SYNDROME COMPUTATION: The syndrome is the result of a parity check performed on r(X) (the received codeword) to determine whether r(X) is a valid member of the codeword set. If r(X) is a member, the syndrome S has value 0; any nonzero value of S indicates the presence of errors. The syndrome S is made up of n - k symbols {S_i}, i = 1, …, n - k, whose values can be computed from the received polynomial. Since U(X) = m(X) g(X), the roots of g(X) must also be roots of U(X). Since r(X) = U(X) + e(X), evaluating r(X) at the roots of g(X) gives zero only when r(X) is a valid codeword (i.e., when e(X) = 0); any nonzero result indicates that an error is present. The syndrome symbols are therefore evaluated as
S_i = r(X)|_(X = α^i) = r(α^i),   i = 1, 2, …, n - k
Continuing the (7, 3) R-S example (n - k = 4):
S1 = r(α)   = α^0 + α^3 + α^6 + α^3 + α^10 + α^8 + α^11 = α^0 + α^3 + α^6 + α^3 + α^3 + α^1 + α^4 = α^3
S2 = r(α^2) = α^0 + α^4 + α^8 + α^6 + α^14 + α^13 + α^17 = α^0 + α^4 + α^1 + α^6 + α^0 + α^6 + α^3 = α^5
S3 = r(α^3) = α^0 + α^5 + α^10 + α^9 + α^18 + α^18 + α^23 = α^0 + α^5 + α^3 + α^2 + α^4 + α^4 + α^2 = α^6
S4 = r(α^4) = α^0 + α^6 + α^12 + α^12 + α^22 + α^23 + α^29 = α^0 + α^6 + α^5 + α^5 + α^1 + α^2 + α^1 = 0
The results confirm that the received codeword contains errors (which we inserted), since S ≠ 0.
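The same four syndromes can be computed programmatically (a sketch with our own poly_eval helper; the received word is the one given above, lowest-degree coefficient first):

```python
EXP = [1, 2, 4, 3, 6, 7, 5]
LOG = {v: i for i, v in enumerate(EXP)}
def gf_mul(a, b): return 0 if 0 in (a, b) else EXP[(LOG[a] + LOG[b]) % 7]

def poly_eval(poly, x):
    """Evaluate a GF(8) polynomial (lowest degree first) at x, by Horner's rule."""
    acc = 0
    for c in reversed(poly):
        acc = gf_mul(acc, x) ^ c
    return acc

# r(X) = alpha^0 + alpha^2 X + alpha^4 X^2 + alpha^0 X^3 + alpha^6 X^4 + alpha^3 X^5 + alpha^5 X^6
r = [1, 4, 6, 1, 5, 3, 7]
syndromes = [poly_eval(r, EXP[i]) for i in range(1, 5)]
print(syndromes)   # -> [3, 7, 5, 0] = alpha^3, alpha^5, alpha^6, 0
```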

Error Detection and Correction. Error detection: suppose there are v errors in the codeword, at locations X^j1, X^j2, …, X^jv. Then the error polynomial can be written as
e(X) = e_j1 X^j1 + e_j2 X^j2 + … + e_jv X^jv
To correct the corrupted codeword, each error value e_jl and its location X^jl, where l = 1, 2, …, v, must be determined. We define an error-locator number as β_l = α^jl. Next, we obtain the n - k = 2t syndrome symbols by substituting α^i into the received polynomial for i = 1, 2, …, 2t.

Example: (7, 3) R-S codewords. An error-locator polynomial can be defined as
σ(X) = (1 + β_1 X)(1 + β_2 X) … (1 + β_v X) = 1 + σ_1 X + σ_2 X^2 + … + σ_v X^v
The roots of σ(X) are 1/β_1, 1/β_2, …, 1/β_v. Using autoregressive modeling techniques, the coefficients are obtained from the syndromes; for the two-error case of our example this amounts to solving
S1 σ_2 + S2 σ_1 = S3
S2 σ_2 + S3 σ_1 = S4
which gives σ_1 = α^6 and σ_2 = α^0.
NOTE: σ_1 and σ_2 are not the roots of the locator polynomial; they are its coefficients.
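A sketch of this step (our own code; the relation above is the standard two-error autoregressive model, since the slide's own matrix equation is only available as an image). It solves the 2x2 system over GF(8) by Cramer's rule:

```python
EXP = [1, 2, 4, 3, 6, 7, 5]
LOG = {v: i for i, v in enumerate(EXP)}
def gf_mul(a, b): return 0 if 0 in (a, b) else EXP[(LOG[a] + LOG[b]) % 7]
def gf_inv(a): return EXP[(7 - LOG[a]) % 7]      # multiplicative inverse

S1, S2, S3, S4 = 3, 7, 5, 0                      # alpha^3, alpha^5, alpha^6, 0
det = gf_mul(S1, S3) ^ gf_mul(S2, S2)            # determinant of [[S1, S2], [S2, S3]]
sigma2 = gf_mul(gf_mul(S3, S3) ^ gf_mul(S2, S4), gf_inv(det))
sigma1 = gf_mul(gf_mul(S1, S4) ^ gf_mul(S2, S3), gf_inv(det))
print(sigma1, sigma2)                            # -> 5 1, i.e. alpha^6 and alpha^0
```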

With these coefficients the error-locator polynomial becomes
σ(X) = α^0 + α^6 X + α^0 X^2
We determine its roots by exhaustive testing of σ(X) with each of the field elements, as shown below. Any element X that yields σ(X) = 0 is a root and allows us to locate an error. The roots turn out to be α^3 and α^4; since the roots are the inverses of the error-locator numbers, 1/β_1 = α^4 gives β_1 = α^3, and 1/β_2 = α^3 gives β_2 = α^4. Since there are two symbol errors here, the error polynomial is of the form
e(X) = e_j1 X^j1 + e_j2 X^j2
Note: β_1 = α^j1 = α^3 and β_2 = α^j2 = α^4, so j_1 = 3 and j_2 = 4, i.e., the errors are in the symbols multiplying X^3 and X^4.
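The exhaustive test itself is easy to reproduce (our own sketch, with sigma taken from the step above):

```python
EXP = [1, 2, 4, 3, 6, 7, 5]
LOG = {v: i for i, v in enumerate(EXP)}
def gf_mul(a, b): return 0 if 0 in (a, b) else EXP[(LOG[a] + LOG[b]) % 7]

sigma = [1, 5, 1]            # sigma(X) = alpha^0 + alpha^6 X + alpha^0 X^2
roots = []
for i in range(7):           # try X = alpha^0 .. alpha^6
    x = EXP[i]
    val = sigma[0] ^ gf_mul(sigma[1], x) ^ gf_mul(sigma[2], gf_mul(x, x))
    if val == 0:
        roots.append(i)

# The roots are alpha^3 and alpha^4; their inverses alpha^4 and alpha^3 are the
# error-locator numbers, so the errors sit in the X^3 and X^4 symbols.
print(roots)                 # -> [3, 4]
```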

ERROR VALUES: To determine the error values e_1 and e_2 associated with locations β_1 = α^3 and β_2 = α^4, any two of the four syndrome equations can be used; let us use S1 and S2:
e_1 β_1 + e_2 β_2 = S1
e_1 β_1^2 + e_2 β_2^2 = S2
Writing these two equations in matrix form and solving over GF(8) yields the error values e_1 = α^2 and e_2 = α^5.
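A sketch of the solution by Cramer's rule over GF(8) (our own code; the system above follows from S_i = e_1 β_1^i + e_2 β_2^i):

```python
EXP = [1, 2, 4, 3, 6, 7, 5]
LOG = {v: i for i, v in enumerate(EXP)}
def gf_mul(a, b): return 0 if 0 in (a, b) else EXP[(LOG[a] + LOG[b]) % 7]
def gf_inv(a): return EXP[(7 - LOG[a]) % 7]

S1, S2 = 3, 7                      # alpha^3, alpha^5
b1, b2 = EXP[3], EXP[4]            # error-locator numbers alpha^3 and alpha^4
det = gf_mul(b1, gf_mul(b2, b2)) ^ gf_mul(b2, gf_mul(b1, b1))
e1 = gf_mul(gf_mul(S1, gf_mul(b2, b2)) ^ gf_mul(S2, b2), gf_inv(det))
e2 = gf_mul(gf_mul(S2, b1) ^ gf_mul(S1, gf_mul(b1, b1)), gf_inv(det))
print(e1, e2)                      # -> 4 7, i.e. alpha^2 and alpha^5
```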

CORRECTING THE RECEIVED CODE WORD: The estimated error polynomial is formed as
ê(X) = α^2 X^3 + α^5 X^4
Let Û(X) be the estimated codeword polynomial:
Û(X) = r(X) + ê(X) = U(X) + e(X) + ê(X)
We saw that
r(X) = (100) + (001)X + (011)X^2 + (100)X^3 + (101)X^4 + (110)X^5 + (111)X^6
ê(X) = (000) + (000)X + (000)X^2 + (001)X^3 + (111)X^4 + (000)X^5 + (000)X^6
Û(X) = (100) + (001)X + (011)X^2 + (101)X^3 + (010)X^4 + (110)X^5 + (111)X^6
     = α^0 + α^2 X + α^4 X^2 + α^6 X^3 + α^1 X^4 + α^3 X^5 + α^5 X^6
Since the message symbols constitute the rightmost k = 3 symbols, the decoded message is (010) (110) (111) = α^1 + α^3 X + α^5 X^2, which is exactly the message that was encoded.
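The final step is just a symbol-wise XOR; a small sketch ties it together (symbols listed lowest degree first, with the same integer values as in the earlier sketches):

```python
r     = [1, 4, 6, 1, 5, 3, 7]     # received word
e_hat = [0, 0, 0, 4, 7, 0, 0]     # estimated errors: alpha^2 X^3 + alpha^5 X^4
U_hat = [ri ^ ei for ri, ei in zip(r, e_hat)]

print(U_hat)        # -> [1, 4, 6, 5, 2, 3, 7]  (the transmitted codeword)
print(U_hat[-3:])   # -> [2, 3, 7] = alpha^1, alpha^3, alpha^5, the decoded message
```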