1/11/05 1 Capacity of Noiseless and Noisy Two-Dimensional Channels Paul H. Siegel Electrical and Computer Engineering Center for Magnetic Recording Research University of California, San Diego

LANL Workshop 1/11/05 2 Outline: Shannon Capacity; Discrete Noiseless Channels (one-dimensional, two-dimensional); Summary

LANL Workshop 1/11/05 3 Claude E. Shannon

LANL Workshop 1/11/05 4 The Inscription

LANL Workshop 1/11/05 5 The Formula on the “Paper” Capacity of a discrete channel with noise [Shannon, 1948]: C = max (H(x) − H_y(x)). For a noiseless channel, H_y(x) = 0, so C = max H(x). Gaylord, MI: C = W log (P+N)/N. Bell Labs: no formula on paper (“H = – p log p – q log q” on plaque)

LANL Workshop 1/11/05 6 Discrete Noiseless Channels (Constrained Systems) A constrained system S is the set of sequences generated by walks on a labeled, directed graph G. Example: telegraph channel constraints [Shannon, 1948], with symbols DOT, DASH, LETTER SPACE, WORD SPACE.

LANL Workshop 1/11/05 7 Magnetic Recording Constraints Runlength constraints (“finite-type”: determined by a finite list F of forbidden words), e.g., forbidden word F={11}, forbidden words F={101, 010}. Spectral null constraints (“almost-finite-type”), e.g., Biphase, Even.

LANL Workshop 1/11/05 8 (d,k) runlength-limited constraints For 0 ≤ d ≤ k ≤ ∞, a (d,k) runlength-limited sequence is a binary string in which each run of 0s between successive 1s has length at least d, and no run of 0s has length exceeding k. The forbidden list F={11} corresponds to the (1,∞) constraint.

LANL Workshop 1/11/05 9 Practical Constrained Codes Finite-state encoder (from binary data into S); sliding-block decoder (inverse mapping from S to data). Encoder logic: m data bits in, n code bits out, rate m:n (finite number of states). Decoder logic: n bits in, m bits out. We want: high rate R=m/n, low complexity.

LANL Workshop 1/11/05 10 Codes and Capacity How high can the code rate be? Shannon defined the capacity of the constrained system S as C = lim_{n→∞} (1/n) log₂ N(S,n), where N(S,n) is the number of sequences in S of length n. Theorem [Shannon, 1948]: If there exists a decodable code at rate R=m/n from binary data to S, then R ≤ C. Theorem [Shannon, 1948]: For any rate R=m/n < C there exists a block code from binary data to S with rate km:kn, for some integer k ≥ 1.
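
As a quick numerical illustration of this definition (a sketch added here, not part of the original slides), the snippet below counts N(S,n) for the constraint with forbidden word F={11} and shows (1/n) log₂ N(S,n) approaching the capacity:

```python
from math import log2

def count_sequences(n):
    """Count binary strings of length n avoiding the forbidden word 11."""
    end0, end1 = 1, 1  # length-1 strings ending in 0 / ending in 1
    for _ in range(n - 1):
        # a 0 may follow anything; a 1 may only follow a 0
        end0, end1 = end0 + end1, end0
    return end0 + end1

for n in (10, 20, 40, 80):
    N = count_sequences(n)
    print(f"n = {n:2d}:  (1/n) log2 N(S,n) = {log2(N) / n:.6f}")
# the values approach C = log2((1 + 5**0.5) / 2) ≈ 0.6942 from above
```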

LANL Workshop 1/11/05 11 Computing Capacity: Adjacency Matrices Let A = A(G) be the adjacency matrix of the graph G representing S. The entries of the power Aⁿ count the paths in G of length n.

LANL Workshop 1/11/05 12 Computing Capacity (cont.) Shannon showed that, for suitable representing graphs G, C = log₂ λ(A), where λ(A) = max{|μ| : μ an eigenvalue of A}, i.e., the spectral radius of the matrix A. Assigning “transition probabilities” to the edges of G, the constrained system S becomes a Markov source x, with entropy H(x). Shannon proved that C = max H(x), and expressed the maximizing probabilities in terms of the spectral radius and corresponding eigenvector of A.

LANL Workshop 1/11/05 13 Maxentropic Measure Let λ denote the largest real eigenvalue of A, with corresponding right eigenvector v = (v₁, …, v_m). Then the maxentropic (capacity-achieving) transition probabilities are given by q_{ij} = A_{ij} v_j / (λ v_i). The stationary state distribution is expressed in terms of the corresponding left and right eigenvectors.
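
A minimal numerical sketch of these two slides, assuming the standard two-state graph of the (1,∞) constraint (states “last bit 0” and “last bit 1”):

```python
import numpy as np

# Adjacency matrix of the (1,∞) graph: from state 0 write 0 or 1;
# from state 1 a 0 is forced.
A = np.array([[1.0, 1.0],
              [1.0, 0.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = int(np.argmax(eigvals.real))     # Perron root: largest, real, positive
lam = eigvals[k].real
v = np.abs(eigvecs[:, k].real)       # corresponding positive right eigenvector

print("capacity C =", np.log2(lam))  # ≈ 0.694242 (log2 of the golden ratio)

# maxentropic transition probabilities q_ij = A_ij v_j / (lam v_i)
Q = A * v[np.newaxis, :] / (lam * v[:, np.newaxis])
print("maxentropic transition matrix (rows sum to 1):")
print(Q)
```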

LANL Workshop 1/11/05 14 Computing Capacity (cont.) Example: for the (1,∞) constraint, λ = (1+√5)/2, the golden ratio, so C = log₂ λ ≈ 0.6942. More generally, C(d,k) = log₂ λ(d,k), where λ(d,k) is the largest real root of the polynomial x^{k+2} − x^{k+1} − x^{k−d+1} + 1 for k < ∞, and of x^{d+1} − x^d − 1 for k = ∞.
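
For the k = ∞ case the root-finding is easy to do numerically; this sketch (using numpy’s general-purpose polynomial root finder, my choice rather than the slides’) tabulates C(d,∞) for small d:

```python
import numpy as np

def capacity_d_inf(d):
    """C(d,inf) = log2 of the largest real root of x^(d+1) - x^d - 1."""
    coeffs = [1.0, -1.0] + [0.0] * (d - 1) + [-1.0]
    roots = np.roots(coeffs)
    lam = max(r.real for r in roots if abs(r.imag) < 1e-9)
    return np.log2(lam)

for d in range(1, 5):
    print(f"C({d},inf) = {capacity_d_inf(d):.4f}")
# d = 1 recovers log2 of the golden ratio, ≈ 0.6942
```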

LANL Workshop 1/11/05 15 Constrained Coding Theorems Stronger coding theorems were motivated by the problem of constrained code design for magnetic recording. Theorem [Adler-Coppersmith-Hassner, 1983]: Let S be a finite-type constrained system. If m/n ≤ C, then there exists a rate m:n sliding-block decodable, finite-state encoder. (Proof is constructive: the state-splitting algorithm.) Theorem [Karabed-Marcus, 1988]: Ditto if S is almost-finite-type. (Proof not so constructive…)

LANL Workshop 1/11/05 16 Two-Dimensional Constrained Systems Band-recording and page-oriented recording technologies require 2-dimensional constraints, for example: Two-Dimensional Optical Storage (TwoDOS) – Philips; Holographic Storage – InPhase Technologies; Patterned Magnetic Media – Hitachi, Toshiba, …; Thermo-Mechanical Probe Array – IBM.

LANL Workshop 1/11/05 17 TwoDOS Courtesy of Wim Coene, Philips Research

LANL Workshop 1/11/05 18 Constraints on the Integer Lattice Z² (1,∞) constraint in the x and y directions: the Hard-Square Model, whose valid configurations are the independent sets of the grid graph.

LANL Workshop 1/11/05 19 (d,k) Constraints on the Integer Lattice Z² For 2-dimensional (d,k) constraints, imposed along both rows and columns, the capacity is given by C(d,k) = lim_{m,n→∞} (1/(mn)) log₂ N(m,n), where N(m,n) is the number of valid m × n arrays. The only nontrivial (d,k) pairs for which C(d,k) is known precisely are those with zero capacity, namely the pairs with k = d+1, d ≥ 1 [Kato-Zeger, 1999].

LANL Workshop 1/11/05 20 (d,k) Constraints on Z² – Capacity Bounds Transfer matrix methods provide numerical bounds on C(d,k) [Calkin-Wilf, 1998], [Nagy-Zeger, 2000]. Variable-rate “bit-stuffing” encoders yield the best known lower bounds on C(d,∞) for d > 1 [Halevy, et al., 2004]. [Table of lower bounds for each d not recovered from the original slide.]
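
The transfer-matrix idea can be illustrated with a short computation for the hard-square (1,∞) case; this sketch prints the raw strip estimates log₂ λ(T_m)/m (Calkin-Wilf combine such eigenvalues into rigorous upper and lower bounds, which this does not reproduce):

```python
import numpy as np

def transfer_matrix(m):
    """T[r, s] = 1 iff rows r, s of width m are valid hard-square neighbors."""
    rows = [r for r in range(1 << m) if r & (r >> 1) == 0]  # no horizontal 11
    T = np.zeros((len(rows), len(rows)))
    for i, r in enumerate(rows):
        for j, s in enumerate(rows):
            if r & s == 0:                                   # no vertical 11
                T[i, j] = 1.0
    return T

for m in (4, 8, 12):
    lam = max(np.linalg.eigvalsh(transfer_matrix(m)))        # T is symmetric
    print(f"m = {m:2d}:  log2(lam)/m = {np.log2(lam) / m:.6f}")
# estimates approach the hard-square capacity C ≈ 0.587891 as m grows
```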

LANL Workshop 1/11/05 21 2-D Bit-Stuffing RLL Encoder The source encoder converts the binary data to an i.i.d. bit stream (biased bits) with Pr(1) = p, at a rate penalty: each biased bit carries only h(p) < 1 data bits. The bit-stuffing encoder inserts redundant bits which can be identified uniquely by the decoder. The encoder rate R(p) is a lower bound on the capacity. (For d=1, we can determine R(p) precisely.)

LANL Workshop 1/11/05 22 2-D Bit-Stuffing (1,∞) RLL Encoder Biased sequence: with the optimal bias Pr(1) = p, the rate R(p) is within 1% of the capacity. [Numerical values of p and R(p) not recovered from the original slide.]
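
A Monte Carlo sketch of the bit-stuffing encoder’s rate (my simulation setup, not the slides’ exact analysis): biased bits are written in raster order, any cell whose left or upper neighbor is 1 becomes a stuffed 0 that consumes no data, and each consumed biased bit carries h(p) data bits:

```python
import random
from math import log2

def h(p):
    """Binary entropy function."""
    return -p * log2(p) - (1 - p) * log2(1 - p)

def bit_stuff_rate(p, n=400, seed=1):
    """Empirical rate of 2-D (1,inf) bit stuffing on an n x n array."""
    rng = random.Random(seed)
    grid = [[0] * n for _ in range(n)]
    consumed = 0
    for i in range(n):
        for j in range(n):
            if (i > 0 and grid[i - 1][j]) or (j > 0 and grid[i][j - 1]):
                grid[i][j] = 0          # stuffed 0: decoder can identify it
            else:
                grid[i][j] = 1 if rng.random() < p else 0
                consumed += 1           # this cell used one biased bit
    return h(p) * consumed / (n * n)

for p in (0.25, 0.30, 0.35, 0.40):
    print(f"p = {p:.2f}:  R(p) ≈ {bit_stuff_rate(p):.4f}")
# the best p gives a rate close to (but below) the capacity ≈ 0.5879
```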

LANL Workshop 1/11/05 23 Enhanced Bit-Stuffing Encoder Use 2 source encoders, with parameters p₀ and p₁. With the optimal biases Pr(1) = p₀ and Pr(1) = p₁, the rate R(p₀, p₁) is within 0.1% of the capacity.

LANL Workshop 1/11/05 24 Non-Isolated Bit (n.i.b.) Constraint on Z² The non-isolated bit constraint is defined by the forbidden set of patterns in which a bit differs from all four of its neighbors (an isolated 0 or an isolated 1). Analysis of the coding ratio of a bit-stuffing encoder yields lower and upper bounds on C_sq^nib. [Numerical bounds not recovered from the original slide.]

LANL Workshop 1/11/05 25 Constraints on the Hexagonal Lattice A₂ (1,∞) constraints: the Hard-Hexagon Model.

LANL Workshop 1/11/05 26 Hard Hexagon Capacity The capacity of the hard hexagon model is known precisely! [Baxter, 1980]* The hard-hexagon entropy constant κ has a closed algebraic form, so C_hex = log₂ κ ≈ 0.4807.
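
As a numerical sanity check of Baxter’s value (not his derivation), the transfer-matrix estimate from the earlier hard-square sketch extends to the hard-hexagon model, here modeled as the square lattice with one diagonal adjacency added:

```python
import numpy as np

def hex_transfer_matrix(m):
    """Transfer matrix for width-m strips of the hard-hexagon model."""
    rows = [r for r in range(1 << m) if r & (r >> 1) == 0]  # no horizontal 11
    T = np.zeros((len(rows), len(rows)))
    for i, r in enumerate(rows):
        for j, s in enumerate(rows):
            # forbid vertically adjacent 1s and diagonally adjacent 1s
            if r & s == 0 and r & (s >> 1) == 0:
                T[i, j] = 1.0
    return T

for m in (4, 8, 12):
    lam = max(abs(np.linalg.eigvals(hex_transfer_matrix(m))))  # Perron root
    print(f"m = {m:2d}:  log2(lam)/m = {np.log2(lam) / m:.6f}")
# estimates approach Baxter's value log2(kappa) ≈ 0.4807 as m grows
```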

LANL Workshop 1/11/05 27 Hard Hexagon Capacity Alternatively, the hard hexagon entropy constant satisfies a degree-24 polynomial with (big!) integer coefficients. Baxter does offer this disclaimer regarding his derivation, however: *“It is not mathematically rigorous, in that certain analyticity properties of κ are assumed, and the results of Chapter 13 (which depend on assuming that various large-lattice limits can be interchanged) are used. However, I believe that these assumptions, and therefore ( )-( ), are in fact correct.”

LANL Workshop 1/11/05 28 (d,k) Constraints on A₂ – Capacity Bounds The zero-capacity region is partially known [Kukorelly-Zeger, 2001]. Variable-to-fixed length “bit-stuffing” encoders yield the best known lower bounds on C_hex(d,∞) for d > 1 [Halevy, et al., 2004]. [Table of lower bounds for each d not recovered from the original slide.]

LANL Workshop 1/11/05 29 Practical 2-D Constrained Codes There is no comprehensive algorithmic theory for constructing encoders and decoders for 2-D constrained systems. Very efficient bit-stuffing encoders have been defined and analyzed for several 2-D constraints, but they are not suitable for practical applications [Roth et al., 2001], [Halevy et al., 2004], [Nagy-Zeger, 2004]. Optimal block codes with m × n rectangular code arrays have been designed for small values of m and n, and some finite-state encoders have been designed, but there is no generally applicable method [Demirkan-Wolf, 2004].

LANL Workshop 1/11/05 30 Concluding Remarks The lack of convenient graph-based representations of 2-D constraints prevents the straightforward extension of 1-D techniques for analysis and code design. There are strong connections to statistical physics that may open up new approaches to understanding 2-D constrained systems (and, perhaps, vice-versa).