Coding and Algorithms for Memories Lecture 8

236601 - Coding and Algorithms for Memories Lecture 8

Flash Memory Cell: a cell stores charge at one of several discrete levels (e.g., 1, 2, 3). Errors can be introduced, and we can only add water (charge) to a cell, never remove it.

The Leakage Problem: charge leaks over time, so a cell can fall below its threshold and be read at the wrong level (error!). This motivates getting rid of the absolute thresholds.

The Overshooting Problem: if programming iterations overshoot the target level, the charge cannot be lowered; recovering requires erasing the whole block.

Possible Solution – Iterative Programming: approach the target level with many small charge increments. Drawbacks: slow, and the cells still suffer from leakage.

Relative vs. Absolute Values: storing information in the relative order of cell levels gives fewer errors, more retention, and more cells. [Jiang, Mateescu, Schwartz, Bruck, “Rank Modulation for Flash Memories”, 2008]

The New Paradigm – Rank Modulation: absolute values → relative values; single cell → multiple cells; physical cell → logical cell.

Rank Modulation: an ordered set of n cells (assume discrete levels); the relative levels define a permutation. Basic operation: push-to-the-top, raising a chosen cell above all others. Overshoot is not a concern, writing is much faster, and reliability (data retention) is increased.
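
The following is a small Python sketch (my own illustration, not from the slides) of the rank-modulation view: reading off the permutation induced by the relative charge levels, and simulating push-to-the-top by adding charge; the sample charge levels are hypothetical.

def read_permutation(levels):
    """Rank the cells from highest to lowest charge (ties are not modeled)."""
    return [c + 1 for c in sorted(range(len(levels)), key=lambda c: -levels[c])]

def push_to_top(levels, cell):
    """Raise the given cell (1-indexed) above all other cells by adding charge."""
    levels = list(levels)
    levels[cell - 1] = max(levels) + 1
    return levels

levels = [1.0, 3.2, 2.1, 0.4]                     # hypothetical analog charge levels
print(read_permutation(levels))                   # [2, 3, 1, 4]
print(read_permutation(push_to_top(levels, 4)))   # [4, 2, 3, 1]: cell 4 now ranks first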

New Number Representation System (factoradic): a_{n-1}…a_3a_2a_1 = a_{n-1}·(n-1)! + … + a_3·3! + a_2·2! + a_1·1!, with 0 ≤ a_i ≤ i; each value corresponds to a permutation in lexicographic order [Lehmer 1906, Laisant 1888]. Example for n = 3: decimal 0, 1, 2, 3, 4, 5 ↔ factoradic 00, 01, 10, 11, 20, 21 ↔ permutations 012, 021, 102, 120, 201, 210.
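
A minimal Python sketch (my own, not from the slides) of the factoradic system: it converts a decimal value to its digits (a_{n-1},…,a_1) with 0 ≤ a_i ≤ i and then to the corresponding permutation in lexicographic order, reproducing the n = 3 table above.

def to_factoradic(value, n):
    """Digits (a_{n-1}, ..., a_1) of value, where 0 <= a_i <= i."""
    digits = []
    for i in range(1, n):
        value, a_i = divmod(value, i + 1)   # a_i = value mod (i + 1)
        digits.append(a_i)
    return digits[::-1]

def factoradic_to_permutation(digits, symbols):
    """Pick the d-th smallest remaining symbol for each digit d (lexicographic order)."""
    symbols = sorted(symbols)
    return [symbols.pop(d) for d in digits + [0]]

for value in range(6):                      # reproduces the n = 3 table above
    digits = to_factoradic(value, 3)
    print(value, digits, factoradic_to_permutation(digits, [0, 1, 2]))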

Gray Codes for Rank Modulation – The problem: is it possible to transition between all permutations? Find a cycle through all n! states using push-to-the-top transitions. [Figure: the push-to-the-top transition graph for n = 3.]
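
Before the recursive construction on the next slides, here is a brute-force sanity check (my own sketch, not the lecture's construction): a DFS that searches for a cycle through all n! permutations using only push-to-the-top transitions, feasible for small n such as n = 3.

from math import factorial

def push_to_top(perm, i):
    """Move the element at position i to the front."""
    return (perm[i],) + perm[:i] + perm[i + 1:]

def find_push_cycle(n):
    start = tuple(range(1, n + 1))
    total = factorial(n)

    def dfs(path, seen):
        if len(path) == total:
            # a cyclic Gray code must also return to the start with one more push
            if any(push_to_top(path[-1], i) == start for i in range(1, n)):
                return path
            return None
        for i in range(1, n):               # pushing position 0 changes nothing
            nxt = push_to_top(path[-1], i)
            if nxt not in seen:
                found = dfs(path + [nxt], seen | {nxt})
                if found:
                    return found
        return None

    return dfs([start], {start})

for perm in find_push_cycle(3):             # 3! = 6 permutations, one push apart
    print(perm)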

Gray Codes for Arbitrary n – Recursive construction: keep the bottom cell fixed while the other cells go through the ~(n-1)! transitions of the code for n-1 cells. [Figure: the recursive construction illustrated for n = 4.]

Gray Codes for Arbitrary n (cont.) [Figure: the recursive construction for n = 4, continued.]

Discrete model. [Figure: cell levels in the discrete model and their induced permutations, n = 4.]

Kendall’s Tau Distance: For a permutation σ, an adjacent transposition is the local exchange of two adjacent elements. For σ, π ∊ S_n, dτ(σ,π), the Kendall’s tau distance between σ and π, is the number of adjacent transpositions needed to change σ into π. Example: σ = 2413 and π = 2314, dτ(σ,π) = 3: 2413 → 2143 → 2134 → 2314. It is also called the bubble-sort distance. Lemma: Kendall’s tau distance induces a metric on S_n; the Kendall’s tau distance is the number of pairs that do not agree in their order.
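
A short Python helper (my own sketch) that computes the Kendall’s tau distance by counting pairs whose relative order differs, and verifies the example above.

from itertools import combinations

def kendall_tau(sigma, pi):
    """Number of pairs whose relative order differs in sigma and pi."""
    pos_s = {v: i for i, v in enumerate(sigma)}
    pos_p = {v: i for i, v in enumerate(pi)}
    return sum(1 for a, b in combinations(sigma, 2)
               if (pos_s[a] < pos_s[b]) != (pos_p[a] < pos_p[b]))

print(kendall_tau((2, 4, 1, 3), (2, 3, 1, 4)))   # 3, matching the example above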

Kendall’s Tau Distance: Lemma: Kendall’s tau distance induces a metric on S_n; the Kendall’s tau distance is the number of pairs that do not agree in their order. For a permutation σ, Wτ(σ) = {(i,j) | i < j, σ⁻¹(i) > σ⁻¹(j)}. Lemma: dτ(σ,π) = |Wτ(σ) ∆ Wτ(π)| = |Wτ(σ)\Wτ(π)| + |Wτ(π)\Wτ(σ)|, and dτ(σ,id) = |Wτ(σ)|. The maximum Kendall’s tau distance is n(n-1)/2. The inversion vector of σ is x_σ = (x_σ(2),…,x_σ(n)), where x_σ(i) is the number of elements that are to the right of i and less than i; dτ(σ,id) = |Wτ(σ)| = Σ_{2≤i≤n} x_σ(i).
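
A small sketch (my own) of the inversion vector and the identity dτ(σ,id) = Σ_i x_σ(i), checked on σ = 2413.

from itertools import combinations

def inversion_vector(sigma):
    """x(i) = number of elements to the right of i that are smaller than i."""
    pos = {v: j for j, v in enumerate(sigma)}
    return {i: sum(1 for v in sigma if v < i and pos[v] > pos[i])
            for i in range(2, len(sigma) + 1)}

def inversions(sigma):
    """Kendall's tau distance of sigma to the identity permutation."""
    return sum(1 for a, b in combinations(range(len(sigma)), 2) if sigma[a] > sigma[b])

sigma = (2, 4, 1, 3)
x = inversion_vector(sigma)
print(x, sum(x.values()), inversions(sigma))     # the two counts agree (here: 3)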

Kendall’s Tau Distance: The Kendall’s tau ball: B_r(σ) = {π | dτ(σ,π) ≤ r}; the Kendall’s tau sphere: S_r(σ) = {π | dτ(σ,π) = r}. Their sizes do not depend on the center σ; |B_1(σ)| = n and |S_1(σ)| = n-1.
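
A brute-force check (my own sketch) of the ball and sphere sizes |B_1(σ)| = n and |S_1(σ)| = n-1 over every center σ in S_n for a small n.

from itertools import combinations, permutations

def kendall_tau(sigma, pi):
    pos_s = {v: i for i, v in enumerate(sigma)}
    pos_p = {v: i for i, v in enumerate(pi)}
    return sum(1 for a, b in combinations(sigma, 2)
               if (pos_s[a] < pos_s[b]) != (pos_p[a] < pos_p[b]))

n = 4
for sigma in permutations(range(1, n + 1)):
    ball = [pi for pi in permutations(range(1, n + 1)) if kendall_tau(sigma, pi) <= 1]
    sphere = [pi for pi in ball if kendall_tau(sigma, pi) == 1]
    assert len(ball) == n and len(sphere) == n - 1
print("|B_1| = n and |S_1| = n - 1 verified for n =", n)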

How to Construct ECCs for the Kendall’s Tau Distance? Goal: Construct codes with some prescribed minimum Kendall’s tau distance d

ECCs for Kendall’s Tau Distance – Goal: construct codes correcting a single error. Assume k or k+1 is prime. Encode a permutation in S_k into a permutation in S_{k+2}; this gives a code over S_{k+2} with k! codewords.
s = (s_1,…,s_k) ∊ S_k is the information permutation; set the locations of k+1 ∊ Z_{k+1} and k+2 ∊ Z_{k+2} to be
loc(k+1) = Σ_{i=1}^{k} (2i-1)·s_i (mod m), loc(k+2) = Σ_{i=1}^{k} (2i-1)²·s_i (mod m),
where m = k if k is prime and m = k+1 if k+1 is prime.
Example: k = 7, s = (7613245):
loc(8) = 1·7 + 3·6 + 5·1 + 7·3 + 9·2 + 11·4 + 13·5 = 3 (mod 7)
loc(9) = 1²·7 + 3²·6 + 5²·1 + 7²·3 + 9²·2 + 11²·4 + 13²·5 = 2 (mod 7)
E(s) = (769183245)
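
A Python sketch of the encoder. The loc formulas follow the slide; the way a loc value is turned into an insertion position is my own interpretation (0-indexed insertion, k+1 first, then k+2), chosen because it reproduces the k = 7 example above.

def encode(s, m):
    """Map s in S_k to a permutation in S_{k+2} by inserting k+1 and then k+2."""
    k = len(s)
    loc1 = sum((2 * i - 1) * s[i - 1] for i in range(1, k + 1)) % m        # loc(k+1)
    loc2 = sum((2 * i - 1) ** 2 * s[i - 1] for i in range(1, k + 1)) % m   # loc(k+2)
    out = list(s)
    out.insert(loc1, k + 1)     # interpretation: loc = 0-indexed insertion position
    out.insert(loc2, k + 2)
    return loc1, loc2, out

# The k = 7 example above: loc(8) = 3, loc(9) = 2, E(s) = 769183245.
print(encode((7, 6, 1, 3, 2, 4, 5), m=7))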

ECCs for Kendall’s Tau Distance – A code over S_{k+2} with k! codewords: s = (s_1,…,s_k) ∊ S_k is the information permutation; set the locations of k+1 ∊ Z_{k+1} and k+2 ∊ Z_{k+2} to be
loc(k+1) = Σ_{i=1}^{k} (2i-1)·s_i (mod m), loc(k+2) = Σ_{i=1}^{k} (2i-1)²·s_i (mod m),
where m = k if k is prime and m = k+1 if k+1 is prime.
Example: k = 3, m = 3:
123 => 15423 ; loc(4) = 1·1 + 3·2 + 5·3 = 1 (mod 3), loc(5) = 1·1 + 9·2 + 25·3 = 1 (mod 3)
132 => 13542 ; loc(4) = 1·1 + 3·3 + 5·2 = 2 (mod 3), loc(5) = 1·1 + 9·3 + 25·2 = 2 (mod 3)
213 => 21543 ; loc(4) = 1·2 + 3·1 + 5·3 = 2 (mod 3), loc(5) = 1·2 + 9·1 + 25·3 = 2 (mod 3)
231 => 52431 ; loc(4) = 1·2 + 3·3 + 5·1 = 1 (mod 3), loc(5) = 1·2 + 9·3 + 25·1 = 0 (mod 3)
312 => 34512 ; loc(4) = 1·3 + 3·1 + 5·2 = 1 (mod 3), loc(5) = 1·3 + 9·1 + 25·2 = 2 (mod 3)
321 => 35241 ; loc(4) = 1·3 + 3·2 + 5·1 = 2 (mod 3), loc(5) = 1·3 + 9·2 + 25·1 = 1 (mod 3)
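
A quick check (my own sketch) that the six codewords listed above are pairwise at Kendall’s tau distance at least 3, which is what the theorem on the next slide claims.

from itertools import combinations

def kendall_tau(sigma, pi):
    pos_s = {v: i for i, v in enumerate(sigma)}
    pos_p = {v: i for i, v in enumerate(pi)}
    return sum(1 for a, b in combinations(sigma, 2)
               if (pos_s[a] < pos_s[b]) != (pos_p[a] < pos_p[b]))

codewords = [(1, 5, 4, 2, 3), (1, 3, 5, 4, 2), (2, 1, 5, 4, 3),
             (5, 2, 4, 3, 1), (3, 4, 5, 1, 2), (3, 5, 2, 4, 1)]
d_min = min(kendall_tau(u, v) for u, v in combinations(codewords, 2))
print("minimum pairwise Kendall's tau distance:", d_min)    # expected: at least 3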

ECCs for Kendall’s Tau Distance – A code over S_{k+2} with k! codewords: s = (s_1,…,s_k) ∊ S_k is the information permutation; set the locations of k+1 ∊ Z_{k+1} and k+2 ∊ Z_{k+2} to be loc(k+1) = Σ_{i=1}^{k} (2i-1)·s_i (mod m); loc(k+2) = Σ_{i=1}^{k} (2i-1)²·s_i (mod m).
Theorem: This code can correct a single Kendall’s tau error.
Proof: It is enough to show that the Kendall’s tau distance between every two codewords is at least 3. Let s = (s_1,…,s_k) ∊ S_k, u = E(s), and t = (t_1,…,t_k) ∊ S_k, v = E(t).
If dτ(s,t) ≥ 3 then dτ(u,v) ≥ 3 ✔
If dτ(s,t) = 1, write t = (s_1,…,s_{i+1},s_i,…,s_k) and let δ = s_{i+1} - s_i. Then
loc_s(k+1) - loc_t(k+1) = (2i-1)s_i + (2i+1)s_{i+1} - (2i-1)s_{i+1} - (2i+1)s_i = 2s_{i+1} - 2s_i = 2δ (mod k),
thus k+1 is not positioned in the same location ✔
loc_s(k+2) - loc_t(k+2) = (2i-1)²s_i + (2i+1)²s_{i+1} - (2i-1)²s_{i+1} - (2i+1)²s_i = 8i·s_{i+1} - 8i·s_i = 8i·δ (mod k),
thus k+2 is not positioned in the same location either ✔

ECCs for Kendall’s Tau Distance – Proof (cont.): s = (s_1,…,s_k) ∊ S_k, u = E(s); t = (t_1,…,t_k) ∊ S_k, v = E(t).
If dτ(s,t) = 1, write t = (s_1,…,s_{i+1},s_i,…,s_k) and let δ = s_{i+1} - s_i:
loc_s(k+1) - loc_t(k+1) = (2i-1)s_i + (2i+1)s_{i+1} - (2i-1)s_{i+1} - (2i+1)s_i = 2s_{i+1} - 2s_i = 2δ (mod k), thus k+1 is not positioned in the same location ✔
loc_s(k+2) - loc_t(k+2) = (2i-1)²s_i + (2i+1)²s_{i+1} - (2i-1)²s_{i+1} - (2i+1)²s_i = 8i·s_{i+1} - 8i·s_i = 8i·δ (mod k), thus k+2 is not positioned in the same location either ✔
If dτ(s,t) = 2 via two disjoint adjacent transpositions, write t = (s_1,…,s_{i+1},s_i,…,s_{j+1},s_j,…,s_k) and let δ_1 = s_{i+1} - s_i, δ_2 = s_{j+1} - s_j:
loc_s(k+1) - loc_t(k+1) = 2(δ_1 + δ_2) (mod k)
loc_s(k+2) - loc_t(k+2) = 8i·δ_1 + 8j·δ_2 (mod k)
If δ_1 + δ_2 is NOT a multiple of k then k+1 appears in different locations ✔
Otherwise, δ_2 = -δ_1 (mod k) and loc_s(k+2) - loc_t(k+2) = 8(i-j)·δ_1 (mod k), so k+2 appears in different locations ✔
If dτ(s,t) = 2 via two overlapping adjacent transpositions, write t = (s_1,…,s_{i+2},s_i,s_{i+1},…,s_k):
loc_s(k+1) - loc_t(k+1) = (2i-1)s_i + (2i+1)s_{i+1} + (2i+3)s_{i+2} - (2i-1)s_{i+2} - (2i+1)s_i - (2i+3)s_{i+1} = 4s_{i+2} - 2s_{i+1} - 2s_i (mod k)
loc_s(k+2) - loc_t(k+2) = 8·((2i+1)s_{i+2} - (i+1)s_{i+1} - i·s_i) (mod k)
If the first difference is nonzero mod k then k+1 appears in different locations ✔; otherwise 2s_{i+2} ≡ s_{i+1} + s_i (mod k), and substituting gives loc_s(k+2) - loc_t(k+2) = 8(s_{i+2} - s_{i+1}) ≢ 0 (mod k), so k+2 appears in different locations ✔ (The symmetric overlapping case, where an element is moved two positions to the right, follows by exchanging the roles of s and t.)
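
A brute-force verification (my own sketch) of the distance-1 case of the proof for a small prime k: after any single adjacent transposition of the information permutation, both loc(k+1) and loc(k+2) change.

from itertools import permutations

def locs(s, m):
    """loc(k+1) and loc(k+2) of the information permutation s."""
    k = len(s)
    return (sum((2 * i - 1) * s[i - 1] for i in range(1, k + 1)) % m,
            sum((2 * i - 1) ** 2 * s[i - 1] for i in range(1, k + 1)) % m)

k = 5                                        # k prime, so m = k
for s in permutations(range(1, k + 1)):
    for i in range(k - 1):
        t = list(s)
        t[i], t[i + 1] = t[i + 1], t[i]      # one adjacent transposition
        assert locs(s, k)[0] != locs(t, k)[0]
        assert locs(s, k)[1] != locs(t, k)[1]
print("loc(k+1) and loc(k+2) change after every adjacent transposition, k =", k)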