15-853 Page 1: Algorithms in the Real World. Error Correcting Codes I – Overview – Hamming Codes – Linear Codes.

15-853 Page 2: General Model

message (m) → coder → codeword (c) → noisy channel → codeword' (c') → decoder → message or error

Error types introduced by the noisy channel:
– changed fields in the codeword (e.g. a flipped bit)
– missing fields in the codeword (e.g. a lost byte), called erasures
How the decoder deals with errors: error detection vs. error correction.

15-853 Page 3: Applications

Storage: CDs, DVDs, hard drives. Wireless: cell phones, wireless links. Satellite and space: TV, Mars rover, … Digital television: DVD, MPEG-2 layover. High-speed modems: ADSL, DSL, …
Reed-Solomon codes are by far the most used in practice, covering pretty much all the examples mentioned above. The algorithms for decoding them are quite sophisticated.

15-853 Page 4: Hierarchy of Codes

linear ⊇ cyclic ⊇ BCH ⊇ {Hamming, Reed-Solomon}
These are all "block" codes.

15-853 Page 5: Block Codes

Each message and codeword is of fixed size.
Σ = codeword alphabet; k = |m|, n = |c|, q = |Σ|
C ⊆ Σ^n (the set of codewords)
Δ(x, y) = number of positions s.t. x_i ≠ y_i
d = min{Δ(x, y) : x, y ∈ C, x ≠ y}
s = max{Δ(c, c′)} that the code can correct
Code described as: (n, k, d)_q

message (m) → coder → codeword (c) → noisy channel → codeword' (c') → decoder → message or error
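The definitions above are easy to check mechanically. A minimal sketch (not from the slides): computing Δ and the minimum distance d of a small block code; the 3-codeword binary code C below is a hypothetical example chosen just for illustration.

```python
from itertools import combinations

def hamming_distance(x, y):
    """Delta(x, y): number of positions where x and y differ."""
    assert len(x) == len(y)
    return sum(1 for xi, yi in zip(x, y) if xi != yi)

def min_distance(C):
    """d = min Delta(x, y) over all distinct codeword pairs in C."""
    return min(hamming_distance(x, y) for x, y in combinations(C, 2))

C = ["000000", "111000", "001110"]   # toy binary code: q = 2, n = 6
d = min_distance(C)
print(f"n = {len(C[0])}, d = {d}")   # -> n = 6, d = 3
```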

15-853 Page 6: Example of a (6,3,3)_2 Systematic Code

Definition: a systematic code is one in which the message appears in the codeword.
Binary codes: today we will mostly be considering Σ = {0,1}, and will sometimes use (n, k, d) as shorthand for (n, k, d)_2.
In the binary case, Δ(x, y) is often called the Hamming distance.
(table of message → codeword pairs omitted)

15-853 Page 7: Error Detection with a Parity Bit

A (k+1, k, 2)_2 systematic code.
Encoding: m_1 m_2 … m_k → m_1 m_2 … m_k p_{k+1}, where p_{k+1} = m_1 ⊕ m_2 ⊕ … ⊕ m_k.
d = 2, since the parity is always even (it takes two bit changes to go from one codeword to another).
Detects a one-bit error, since such an error gives odd parity.
Cannot be used to correct a 1-bit error, since any odd-parity word is at equal distance Δ from k+1 valid codewords.
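The parity-bit code above can be sketched in a few lines (an illustrative sketch, not the slides' notation): append the XOR of the message bits, then detect, but not correct, a single flipped bit.

```python
def parity_encode(m):
    """m_1 ... m_k -> m_1 ... m_k p, with p = m_1 XOR ... XOR m_k."""
    p = 0
    for bit in m:
        p ^= bit
    return m + [p]

def parity_check(c):
    """True if the overall parity is even (no detectable error)."""
    p = 0
    for bit in c:
        p ^= bit
    return p == 0

c = parity_encode([1, 0, 1, 1])      # -> [1, 0, 1, 1, 1]
assert parity_check(c)
c[2] ^= 1                            # flip one bit: parity goes odd
assert not parity_check(c)           # the error is detected
```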

15-853 Page 8: Error Correcting One-Bit Messages

How many bits do we need to correct a one-bit error on a one-bit message? We need 3 bits: an (n=3, k=1, d=3) code.
Encode: m_1 → m_1 m_1 m_1
Decode: the majority function (e.g. 101 → 1)
d = 3, since 000 and 111 differ in 3 positions.
In general: we need d ≥ 3 to correct one error. Why?
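A sketch of the (3, 1, 3) repetition code just described: encode a bit as three copies and decode by the majority function, which corrects any single-bit error.

```python
def rep3_encode(m):
    """Encode one bit as three copies: m -> m m m."""
    return [m, m, m]

def rep3_decode(c):
    """Majority function, e.g. [1, 0, 1] -> 1."""
    return 1 if sum(c) >= 2 else 0

# any single error in the codeword 111 is corrected back to 1
for noisy in ([1, 1, 1], [0, 1, 1], [1, 0, 1], [1, 1, 0]):
    assert rep3_decode(noisy) == 1
```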

15-853 Page 9: Error Correcting Multibit Messages

We will first discuss Hamming codes, which detect and correct 1-bit errors.
The codes are of the form (2^r − 1, 2^r − 1 − r, 3) for any r > 1, e.g. (3,1,3), (7,4,3), (15,11,3), (31,26,3), …, which correspond to 2, 3, 4, 5, … "parity bits" (i.e. n − k).
The high-level idea is to "localize" the error. Any specific ideas?

15-853 Page 10: Hamming Codes: Encoding

Bit layout (positions 15 … 0): m_15 m_14 m_13 m_12 m_11 m_10 m_9 p_8 m_7 m_6 m_5 p_4 m_3 p_2 p_1 p_0

Localizing the error to the top or bottom half (1xxx or 0xxx):
p_8 = m_15 ⊕ m_14 ⊕ m_13 ⊕ m_12 ⊕ m_11 ⊕ m_10 ⊕ m_9
Localizing the error to x1xx or x0xx:
p_4 = m_15 ⊕ m_14 ⊕ m_13 ⊕ m_12 ⊕ m_7 ⊕ m_6 ⊕ m_5
Localizing the error to xx1x or xx0x:
p_2 = m_15 ⊕ m_14 ⊕ m_11 ⊕ m_10 ⊕ m_7 ⊕ m_6 ⊕ m_3
Localizing the error to xxx1 or xxx0:
p_1 = m_15 ⊕ m_13 ⊕ m_11 ⊕ m_9 ⊕ m_7 ⊕ m_5 ⊕ m_3

15-853 Page 11: Hamming Codes: Decoding

We don't need p_0, so we have a (15, 11, ?) code. After transmission, we generate:
b_8 = p_8 ⊕ m_15 ⊕ m_14 ⊕ m_13 ⊕ m_12 ⊕ m_11 ⊕ m_10 ⊕ m_9
b_4 = p_4 ⊕ m_15 ⊕ m_14 ⊕ m_13 ⊕ m_12 ⊕ m_7 ⊕ m_6 ⊕ m_5
b_2 = p_2 ⊕ m_15 ⊕ m_14 ⊕ m_11 ⊕ m_10 ⊕ m_7 ⊕ m_6 ⊕ m_3
b_1 = p_1 ⊕ m_15 ⊕ m_13 ⊕ m_11 ⊕ m_9 ⊕ m_7 ⊕ m_5 ⊕ m_3
With no errors, these will all be zero.
With one error, b_8 b_4 b_2 b_1 gives us the error location: e.g. 0100 would tell us that p_4 is wrong, and 1100 would tell us that m_12 is wrong.
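The encoding and decoding above can be sketched compactly using the classic positional construction (a sketch consistent with, but more compact than, the slides' bit-by-bit equations): bit i of the codeword sits at position i (1..15), parity bits sit at the power-of-two positions 1, 2, 4, 8, and the syndrome b_8 b_4 b_2 b_1 works out to be the XOR of the positions holding a 1.

```python
def hamming15_encode(msg):
    """msg: 11 bits -> codeword as a dict {position: bit}, positions 1..15."""
    assert len(msg) == 11
    data_positions = [p for p in range(1, 16) if p not in (1, 2, 4, 8)]
    c = dict(zip(data_positions, msg))
    for parity in (1, 2, 4, 8):
        # each parity bit covers the data positions containing that bit
        c[parity] = sum(c[p] for p in data_positions if p & parity) % 2
    return c

def hamming15_decode(c):
    """Returns (corrected codeword, error position or 0 for no error)."""
    syndrome = 0
    for p, bit in c.items():
        if bit:
            syndrome ^= p          # XOR of the positions of the 1-bits
    if syndrome:                   # nonzero syndrome = error location
        c = dict(c)
        c[syndrome] ^= 1
    return c, syndrome

msg = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0]
c = hamming15_encode(msg)
corrupted = dict(c)
corrupted[12] ^= 1                 # flip the bit at position 12
fixed, where = hamming15_decode(corrupted)
assert where == 12 and fixed == c  # the single error is found and fixed
```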

15-853 Page 12: Hamming Codes

Can be generalized to any power of 2:
– n = 2^r − 1 (15 in the example)
– n − k = r (4 in the example)
– d = 3 (discussed later)
– gives a (2^r − 1, 2^r − 1 − r, 3) code
Extended Hamming code:
– add back the parity bit at the end
– gives a (2^r, 2^r − 1 − r, 4) code
– can correct one error and detect 2.

15-853 Page 13: Lower Bound on Parity Bits

Consider codewords as vertices on a hypercube. The distance between nodes on the hypercube is the Hamming distance Δ; the minimum distance is d.
Example (the n = 3 cube): 001 is equidistant from 000, 011, and 101.
For s-bit error detection we need d ≥ s + 1.
For s-bit error correction we need d ≥ 2s + 1.
In the figure: d = 2 (min distance), n = 3 (dimensionality), 2^n = 8 (number of nodes).

15-853 Page 14: Lower Bound on Parity Bits

How many nodes in the hypercube do we need so that d = 3? Each of the 2^k codewords eliminates its n neighbors plus itself, i.e. n + 1 nodes, so we need 2^n ≥ 2^k (n + 1), i.e. n ≥ k + ⌈log_2(n + 1)⌉.
For the previous Hamming code: 15 ≥ 11 + ⌈log_2(15 + 1)⌉ = 15.
Hamming codes are called perfect codes since they match the lower bound exactly.

15-853 Page 15: Lower Bound on Parity Bits

What about fixing 2 errors (i.e. d = 5)? Each of the 2^k codewords eliminates itself, its neighbors, and its neighbors' neighbors, giving:
2^n ≥ 2^k (1 + n + n(n − 1)/2)
Generally, to correct s errors:
2^n ≥ 2^k Σ_{i=0}^{s} C(n, i)
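The sphere-packing bound above is easy to evaluate numerically. A minimal sketch: check which (n, k) binary codes correcting s errors are not ruled out, and which meet the bound with equality (the perfect codes mentioned on the previous slide).

```python
from math import comb

def hamming_bound_ok(n, k, s):
    """True if an (n, k) binary code correcting s errors is not ruled out:
    2^n >= 2^k * sum of C(n, i) for i = 0..s."""
    return 2**n >= 2**k * sum(comb(n, i) for i in range(s + 1))

def is_perfect(n, k, s):
    """Perfect codes meet the sphere-packing bound with equality."""
    return 2**n == 2**k * sum(comb(n, i) for i in range(s + 1))

assert is_perfect(15, 11, 1)            # the (15, 11, 3) Hamming code
assert is_perfect(7, 4, 1)              # the (7, 4, 3) Hamming code
assert not hamming_bound_ok(15, 11, 2)  # (15, 11) cannot correct 2 errors
```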

15-853 Page 16: Lower Bounds

The lower bounds assume random placement of bit errors. In practice errors are likely to be less than random, e.g. evenly spaced or clustered (xxxxxx … xxxxxx).
Can we do better if we assume regular errors? We will come back to this later when we talk about Reed-Solomon codes. In fact, this is the main reason why Reed-Solomon codes are used much more in practice than Hamming codes.

15-853 Page 17: Linear Codes

If Σ is a field, then Σ^n is a vector space.
Definition: C is a linear code if it is a linear subspace of Σ^n of dimension k.
This means there is a set of k basis vectors v_i ∈ Σ^n (1 ≤ i ≤ k) that span the subspace, i.e. every codeword can be written as:
c = a_1 v_1 + … + a_k v_k,  a_i ∈ Σ

15-853 Page 18: Linear Codes

Basis vectors for the (7,4,3)_2 Hamming code: v_1, v_2, v_3, v_4 (values omitted).
Why is d = 3? For all binary linear codes, the minimum distance is equal to the least weight of a non-zero codeword: Δ(c_1, c_2) is the weight of c_1 + c_2, and c_1 + c_2 is itself a codeword, so we can always get from c_1, c_2 to a codeword c_0 = c_1 + c_2 of the same weight.
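The minimum-distance-equals-minimum-weight claim above can be checked by brute force on a small code. A sketch, assuming one standard generator matrix for the (7, 4, 3) Hamming code (the transcript's basis vectors were lost, so the specific G below is an assumption):

```python
from itertools import combinations, product

G = [                               # rows are basis vectors v1..v4 (assumed)
    [1, 0, 0, 0, 0, 1, 1],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 1, 1, 0],
    [0, 0, 0, 1, 1, 1, 1],
]

def span(G):
    """All 2^k codewords: every GF(2) combination of the basis vectors."""
    n = len(G[0])
    codewords = []
    for coeffs in product([0, 1], repeat=len(G)):
        c = [0] * n
        for a, row in zip(coeffs, G):
            if a:
                c = [(ci + ri) % 2 for ci, ri in zip(c, row)]
        codewords.append(tuple(c))
    return codewords

C = span(G)
min_dist = min(sum(x != y for x, y in zip(u, v))
               for u, v in combinations(C, 2))
min_weight = min(sum(c) for c in C if any(c))
assert min_dist == min_weight == 3  # the two quantities coincide
```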

15-853 Page 19: Generator and Parity Check Matrices

Generator matrix: a k × n matrix G such that C = {xG | x ∈ Σ^k}. It is made by stacking the basis vectors.
Parity check matrix: an (n − k) × n matrix H such that C = {y ∈ Σ^n | Hy^T = 0}, i.e. the codewords are the nullspace of H.
These always exist for linear codes, and HG^T = 0, since:
0 = Hy^T = H(xG)^T = H(G^T x^T) = (HG^T) x^T, which holds for all x only if HG^T = 0.

15-853 Page 20: Advantages of Linear Codes

Encoding is efficient (a vector-matrix multiply).
Error detection is efficient (a vector-matrix multiply).
The syndrome (Hy^T) has error information.
This gives a q^{n−k}-sized table for decoding, which is useful if n − k is small.

15-853 Page 21: Example and "Standard Form"

For the Hamming (7,4,3) code: (generator matrix omitted). By swapping columns 4 and 5 it takes the form [I_k, A]. A code with a generator matrix in this form is systematic, and G is in "standard form".

15-853 Page 22: Relationship of G and H

If G is in standard form [I_k, A], then H = [A^T, I_{n−k}].
Proof: HG^T = A^T I_k + I_{n−k} A^T = A^T + A^T = 0 (over GF(2)).
Example for the (7,4,3) Hamming code: (matrices omitted).
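The construction above can be sketched directly: build G = [I_k | A] and H = [A^T | I_{n−k}] and verify HG^T = 0 over GF(2). The matrix A below is one standard choice for the (7, 4, 3) Hamming code (an assumption, since the slide's matrices were lost).

```python
A = [                               # assumed A for the (7, 4, 3) code
    [0, 1, 1],
    [1, 0, 1],
    [1, 1, 0],
    [1, 1, 1],
]
k, r = len(A), len(A[0])            # k = 4 message bits, r = n - k = 3

def identity(m):
    return [[1 if i == j else 0 for j in range(m)] for i in range(m)]

G = [identity(k)[i] + A[i] for i in range(k)]         # G = [I_k | A]
At = [[A[i][j] for i in range(k)] for j in range(r)]  # A^T
H = [At[j] + identity(r)[j] for j in range(r)]        # H = [A^T | I_{n-k}]

# H G^T = 0 over GF(2): every row of G lies in the nullspace of H
for g_row in G:
    for h_row in H:
        assert sum(gi * hi for gi, hi in zip(g_row, h_row)) % 2 == 0
```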

15-853 Page 23: The d of Linear Codes

Theorem: a linear code has distance d if every set of d − 1 columns of H is linearly independent, but some set of d columns is linearly dependent.
Proof summary: if d columns are linearly dependent, then two codewords that differ exactly in the d bits corresponding to those columns make the same contribution to the syndrome.
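The column condition in the theorem can be checked exhaustively for a small code. A sketch: over GF(2), a set of columns is linearly dependent exactly when some non-empty subset of them XORs to zero. H below is one standard parity-check matrix for the (7, 4, 3) Hamming code (an assumption).

```python
from itertools import combinations

H = [                               # assumed H = [A^T | I_3]
    [0, 1, 1, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [1, 1, 0, 1, 0, 0, 1],
]
cols = list(zip(*H))                # the 7 columns of H

def dependent(subset):
    """True if some non-empty sub-subset of these GF(2) columns sums to 0."""
    rows = len(H)
    for size in range(1, len(subset) + 1):
        for sub in combinations(subset, size):
            if all(sum(col[i] for col in sub) % 2 == 0 for i in range(rows)):
                return True
    return False

# d = 3: every 2 columns are independent, but some 3 columns are dependent
assert not any(dependent(s) for s in combinations(cols, 2))
assert any(dependent(s) for s in combinations(cols, 3))
```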

15-853 Page 24: Dual Codes

For every code with G = [I_k, A] and H = [A^T, I_{n−k}], we have a dual code with G = [I_{n−k}, A^T] and H = [A, I_k].
The duals of the Hamming codes are the binary simplex codes: (2^r − 1, r, 2^{r−1}).
The duals of the extended Hamming codes are the first-order Reed-Muller codes.
Note that these codes are highly redundant and can fix many errors.

15-853 Page 25: How to Find the Error Locations

Hy^T is called the syndrome (no error if it is 0).
In general we can find the error locations by creating a table that maps each syndrome to a set of error locations.
Theorem: assuming the number of errors s satisfies 2s ≤ d − 1, every syndrome value corresponds to a unique set of error locations.
Proof: exercise.
The table has q^{n−k} entries, each of size at most n (i.e. keep a bit vector of locations).
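The syndrome table above can be sketched for single-bit errors in the (7, 4, 3) Hamming code: the syndrome of an error in position i is just column i of H, so the table maps each of the q^{n−k} = 8 syndrome values to its error location. H is one standard parity-check matrix (an assumption).

```python
H = [                               # assumed parity-check matrix
    [0, 1, 1, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [1, 1, 0, 1, 0, 0, 1],
]
n = len(H[0])

def syndrome(y):
    """H y^T over GF(2), as a tuple of n - k bits."""
    return tuple(sum(h * b for h, b in zip(row, y)) % 2 for row in H)

# build the table: syndrome of a single-bit error at i = column i of H
table = {(0, 0, 0): None}           # zero syndrome: no error
for i in range(n):
    e = [0] * n
    e[i] = 1
    table[syndrome(e)] = i

assert len(table) == 8              # all q^(n-k) syndrome values covered
# decode a received word with bit 5 flipped relative to the zero codeword
y = [0, 0, 0, 0, 0, 1, 0]
assert table[syndrome(y)] == 5      # the table points at the error
```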