The Math Behind the Compact Disc: Linear Algebra and Error-Correcting Codes. William J. Martin, Mathematical Sciences, WPI. Wednesday, December 3, 2008, Fairfield University.

Presentation transcript:

The Math Behind the Compact Disc: Linear Algebra and Error-Correcting Codes. William J. Martin, Mathematical Sciences, WPI. Wednesday, December 3, 2008, Fairfield University.

How the device works: The compact disc is a complex system incorporating interesting ideas from engineering, physics, computer science, and mathematics. We will focus only on the mathematics of the error-correction strategy. For more information on the CD, see Kelin Kuhn's book "Laser Engineering".

(Figure borrowed from K. J. Kuhn's book "Laser Engineering".)

The Pits: Each pit is 0.5 microns wide and 0.83 to 3.56 microns long. Tracks are separated by 1.6 microns of "land". The wavelength of green light is about 0.5 micron. About 40 tracks fit under one strand of human hair.

Modelling a Communications Channel. Linear algebra model: r = m + e (vector addition), where m is the transmitted message, e is the error introduced by the channel, and r is the received vector.

Channel with Error Correction

Turn it into an algebra problem! A number system that the computer can understand: F = {0, 1}, with ordinary multiplication and with addition defined so that 1 + 1 = 0 (arithmetic mod 2). Now music is turned into binary vectors!
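
The following is a minimal Python sketch (not from the talk) of this setup: vectors over F = {0, 1} with coordinate-wise addition mod 2, and the channel model r = m + e. The particular message and error vectors are made up for illustration.

    # Channel model over F = {0, 1}: received = message + error (mod 2).
    def add_mod2(x, y):
        """Add two binary vectors coordinate-wise, with 1 + 1 = 0."""
        return [(a + b) % 2 for a, b in zip(x, y)]

    m = [1, 0, 1, 1, 0, 1, 0]   # transmitted message vector
    e = [0, 0, 1, 0, 0, 0, 0]   # error vector: the channel flips one bit
    r = add_mod2(m, e)          # received vector
    print(r)                    # [1, 0, 0, 1, 0, 1, 0]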

A bit (or a nibble?) of graph theory: The n-cube is a type of Hamming graph. Its vertices are all binary n-tuples, and two n-tuples are adjacent if they differ in only one coordinate. Nice 'eigenvalues'!

Binary Vector Spaces: The vectors are all possible binary n-tuples; there are 2^n of them.

Hamming Distance: The distance between two binary n-tuples x and y is the number of coordinates in which they differ. This is a metric: dist(x, y) ≥ 0, with dist(x, y) = 0 iff x = y; dist(x, y) = dist(y, x); and the triangle inequality dist(x, z) ≤ dist(x, y) + dist(y, z).
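
A direct implementation of this definition (a sketch, not from the slides); the two example vectors are my own and differ in exactly three coordinates:

    def hamming_distance(x, y):
        """Number of coordinates in which the n-tuples x and y differ."""
        assert len(x) == len(y)
        return sum(1 for a, b in zip(x, y) if a != b)

    print(hamming_distance([1, 0, 1, 1, 0], [1, 1, 0, 1, 1]))   # 3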

Theorem: Let C (the "code") be a subset of F^n with minimum distance between any two codewords equal to d. Then there exists an algorithm which corrects up to t errors per transmitted codeword if and only if d ≥ 2t + 1.

Proof: If x and y are distinct codewords, then the balls of radius t around them are disjoint: a vector within distance t of both would force dist(x, y) ≤ 2t < d by the triangle inequality, a contradiction. So if the received vector is within distance t of x, it must be at distance > t from every other codeword, and decoding is unique.
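
The decoding rule implicit in this proof is nearest-codeword (minimum-distance) decoding. Here is a hedged sketch; the toy code, a length-3 repetition code with minimum distance d = 3, is my own choice:

    def hamming_distance(x, y):
        return sum(a != b for a, b in zip(x, y))

    def nearest_codeword(received, code):
        """Pick the codeword closest to the received vector.
        Correct whenever at most t = (d - 1) // 2 errors occurred."""
        return min(code, key=lambda c: hamming_distance(c, received))

    C = [(0, 0, 0), (1, 1, 1)]              # repetition code, d = 3, t = 1
    print(nearest_codeword((1, 0, 1), C))   # (1, 1, 1): one error corrected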

A Useful Extension of the Theorem: The above (computationally infeasible) decoding algorithm also correctly recovers from any t symbol errors and any s symbol erasures provided d > 2t + s. The slide's example shows a transmitted word received with t = 2 errors and s = 3 erasures (the erased symbols marked "?").
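
A quick check of the condition d > 2t + s (my own illustration, using d = 5, the minimum distance of the CD's inner code discussed later): it allows, for example, (t, s) = (2, 0), (1, 2), or (0, 4).

    d = 5   # minimum distance of the CD's inner Reed-Solomon code
    for t in range(3):
        for s in range(5):
            if d > 2 * t + s:
                print(f"d={d} handles t={t} errors and s={s} erasures")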

Small Example: Let C (the "code") denote the rowspace of the small binary generator matrix shown on the slide. Then C consists of eight codewords and has minimum distance 3, so C allows correction of any single-bit error in any transmitted codeword.
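
The slide's matrix and its list of codewords did not survive in this transcript, so the 3 x 6 generator matrix below is my own stand-in with the same parameters: its rowspace has 8 codewords and minimum distance 3.

    from itertools import product, combinations

    G = [(1, 0, 0, 1, 1, 0),
         (0, 1, 0, 1, 0, 1),
         (0, 0, 1, 0, 1, 1)]   # hypothetical generator matrix

    def encode(message, G):
        """Codeword = mod-2 sum of the rows of G selected by the message bits."""
        return tuple(sum(m * g[j] for m, g in zip(message, G)) % 2
                     for j in range(len(G[0])))

    codewords = [encode(msg, G) for msg in product([0, 1], repeat=len(G))]
    dmin = min(sum(a != b for a, b in zip(x, y))
               for x, y in combinations(codewords, 2))
    print(len(codewords), dmin)   # 8 3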

The binary Hamming code. Codewords: Quadratic Residues! In Z_7 we have 1² = 1, 6² = 1, 2² = 4, 5² = 4, 3² = 2, 4² = 2.
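
So the quadratic residues modulo 7 are {1, 2, 4}; for a suitable cyclic labelling of the coordinates, the weight-3 codewords of the length-7 binary Hamming code are exactly the cyclic shifts of the incidence vector of this set. The squares themselves are easy to check:

    # Squares in Z_7: the quadratic residues are {1, 2, 4}.
    print(sorted({(x * x) % 7 for x in range(1, 7)}))   # [1, 2, 4]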

The Fano projective plane: Vector space F³, where F = {0, 1}. "Poynts": the 1-dimensional subspaces. "Lynes": the 2-dimensional subspaces.

All codewords: C = nullsp(H), where H is the parity-check matrix shown on the slide.
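
The matrix H itself was lost in this transcript. A standard choice for the binary Hamming code of length 7 (an assumption here, not necessarily the slide's exact matrix) takes the columns of H to be the binary representations of 1 through 7; the nullspace then has 16 codewords.

    from itertools import product

    # Columns of H are the binary representations of 1..7 (assumed form).
    H = [[(j >> i) & 1 for j in range(1, 8)] for i in range(2, -1, -1)]

    def syndrome(v, H):
        """Hv (mod 2): the zero vector exactly when v is a codeword."""
        return tuple(sum(h * x for h, x in zip(row, v)) % 2 for row in H)

    codewords = [v for v in product([0, 1], repeat=7)
                 if syndrome(v, H) == (0, 0, 0)]
    print(len(codewords))   # 16: C = nullsp(H) is the [7,4] Hamming code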

Codes from polynomials: Let's replace F = {0, 1} with F = {0, 1, …, 6} (with modular arithmetic, i.e. mod 7). Now consider the vector space F[z] of all polynomials in z with coefficients in F. For any subset N of F, we have a linear transformation L: F[z] → F^N via f(z) ↦ [f(0), f(1), f(2), f(3), f(4), f(5)]. (Here, we use N = {0, 1, 2, 3, 4, 5}.) This is a Reed-Solomon code.

Polynomials to Codewords. Example: Let the message be [1, 2, 2] (working mod 7). The polynomial is f(z) = z² + 2z + 2, and the codeword is [f(0), f(1), f(2), f(3), f(4), f(5)] = [2, 5, 3, 3, 5, 2].
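
This example is easy to verify by direct evaluation mod 7 (a sketch; the helper below takes coefficients in low-to-high order, so the message [1, 2, 2] becomes the coefficient list [2, 2, 1]):

    def rs_encode(coeffs, points, p=7):
        """Evaluate c0 + c1*z + c2*z^2 + ... (mod p) at each point."""
        return [sum(c * pow(z, i, p) for i, c in enumerate(coeffs)) % p
                for z in points]

    # f(z) = z^2 + 2z + 2, evaluated at 0, 1, ..., 5:
    print(rs_encode([2, 2, 1], range(6)))   # [2, 5, 3, 3, 5, 2]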

Reed-Solomon Codes. FACT: Two polynomials of degree less than k having k points of intersection must be equal. SO: A Reed-Solomon code of length n < q and dimension k has minimum distance n - k + 1. (Two distinct polynomials of degree less than k can agree on at most k - 1 of the n evaluation points, so two distinct codewords differ in at least n - k + 1 coordinates.)
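
For the toy code above (n = 6, k = 3, q = 7) the bound n - k + 1 = 4 can be confirmed by brute force (my own check):

    from itertools import product

    p, n, k = 7, 6, 3   # the small Reed-Solomon code used above

    def evaluate(coeffs, z):
        return sum(c * pow(z, i, p) for i, c in enumerate(coeffs)) % p

    # For a linear code, minimum distance = minimum weight of a nonzero codeword.
    dmin = min(sum(evaluate(c, z) != 0 for z in range(n))
               for c in product(range(p), repeat=k) if any(c))
    print(dmin)   # 4 = n - k + 1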

Compact Disc Parameters: SONY/Philips design (1980). Music is sampled 44,100 times per second. Each sample consists of 32 bits, representing left and right channel signal magnitudes 0 to 65535 (Pulse Code Modulation, PCM). So the chip must process 44,100 x 32 = 1,411,200 raw data bits per second. But it gets much worse!

Cross-Interleaved RS Codes: The inner code is a 28-dimensional subspace of a 32-dimensional vector space over a finite field of size 256. The outer code is a 24-dimensional subspace of a 28-dimensional vector space. Six 32-bit samples make up a 192-bit frame, which is encoded as a 224-bit codeword. (Eventually, codewords have length 588 bits!)

Encoding: The Numbers. The codewords from the first code are interleaved into a virtually infinite array of 28 rows of symbols over GF(256). We pull out 8 binary columns (one symbol per row) to obtain a 28 x 8 = 224-bit frame, which is then encoded using another Reed-Solomon code to obtain a codeword of length 256 bits.

Interleaving to disperse errors: Codewords of the first code are stacked like bricks into 28 rows of vectors over GF(256). Extract columns and re-encode using the second Reed-Solomon code.
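
A toy sketch of the idea (the dimensions are shrunk far below the real 28-row CIRC array, and the real scheme also staggers rows with delay lines, which is not modelled here):

    # Stack codewords of the first code as rows, then encode the columns
    # with the second code. A burst that wipes out one column touches each
    # row codeword in only one symbol, which that row can correct or erase.
    rows = [
        [11, 12, 13, 14, 15],
        [21, 22, 23, 24, 25],
        [31, 32, 33, 34, 35],
        [41, 42, 43, 44, 45],
    ]
    columns = [list(col) for col in zip(*rows)]   # inputs to the second encoder
    print(columns[0])   # [11, 21, 31, 41]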

Splitting Odd and Even Bits

Back to the Pits: Each pit is 0.5 microns wide and 0.83 to 3.56 microns long. Tracks are separated by 1.6 microns of "land". Not all 0/1 sequences can be recorded.

EFM: Eight-to-Fourteen Modulation. The disc can only store sequences in which each consecutive pair of ones is separated by at least 2 and at most 10 zeros. This is achieved by a mapping F^8 → F^14 given by a lookup table.
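
A small helper (my own) that checks whether a 0/1 sequence satisfies this run-length constraint:

    def efm_legal(bits):
        """True iff every gap between consecutive ones holds 2 to 10 zeros."""
        ones = [i for i, b in enumerate(bits) if b == 1]
        return all(2 <= j - i - 1 <= 10 for i, j in zip(ones, ones[1:]))

    print(efm_legal([1, 0, 0, 1, 0, 0, 0, 0, 1]))   # True
    print(efm_legal([1, 1, 0, 0, 1]))               # False: adjacent ones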

Further Processing: Three more 'merge bits' are added to each of these 14-bit patterns. So 256 + 8 = 264 = 33 x 8 bits, carrying six samples (192 information bits), get encoded as 588 channel bits on the disc. This represents 1/7,350 of a second of music.

What actually goes on the disc? We must do this 7,350 times per second, so the CD player reads 588 x 7,350 = 4,321,800 channel bits for each second of music produced. To get 74 minutes of music, we must store 74 x 60 x 4,321,800 = 19,188,792,000 bits of data on the compact disc!
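
The arithmetic is quick to verify (a simple check):

    frames_per_second = 44_100 * 32 // 192     # 7,350 frames of six samples each
    channel_bits_per_second = frames_per_second * 588
    print(channel_bits_per_second)             # 4,321,800
    print(74 * 60 * channel_bits_per_second)   # 19,188,792,000 bits for 74 minutes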

When in doubt, erase: The inner code has minimum distance 5 (over GF(256)). Rather than correct two-symbol errors, the CD just erases the entire received vector.

So… how good is it? The two Reed-Solomon codes team up to correct 'burst' errors of up to 4,000 consecutive data bits (a 2.5 mm scratch on the disc). If the signal at time t cannot be recovered, interpolate. With smart data distribution, this allows for recovery from burst errors of up to 12,000 data bits (7.5 mm of track length on the disc). If all else fails, mute, giving a brief interval of silence.

Other Applications: Space communications (Mariner, Voyager, etc.). DVD, CD-R, CD-ROM. Cell phones, internet packets. Memory: chips, hard drives, USB sticks. RAID disk arrays. Quantum computing.

The Last Slide: Thank You All!