Information theory Multi-user information theory Part 7: A special matrix application A.J. Han Vinck Essen, 2002.

Content:
- a special rank-k, k × n matrix
- its applications: the switching channel, the broadcast channel, coding for memories with defects
- existence proof

Switching channel: two senders with inputs X1 ∈ {0,1} and X2 ∈ {0,1}; one output Y ∈ {∗, 0, 1}, where ∗ denotes an erasure.

Definition of a uniform rank-k matrix: a binary k × n matrix U is uniform of rank k if
- U has rank k, and
- when deleting up to (n−k) columns, the rank of the remaining matrix stays k.
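The defining property can be checked by brute force on a toy example. The 2 × 3 matrix and the GF(2) rank routine below are illustrative sketches, not taken from the slides:

```python
from itertools import combinations

def gf2_rank(rows):
    # rank over GF(2); each row is an int bitmask, bit i = column i
    basis = []
    for row in rows:
        for b in basis:
            row = min(row, row ^ b)   # reduce by the xor-basis built so far
        if row:
            basis.append(row)
    return len(basis)

def good_patterns(U, n, k):
    # count the patterns of (n-k) deleted columns that leave rank k
    count = 0
    for deleted in combinations(range(n), n - k):
        mask = ~sum(1 << i for i in deleted)
        if gf2_rank([row & mask for row in U]) == k:
            count += 1
    return count

# U = [1 0 1; 0 1 1]: every single-column deletion leaves rank 2
U = [0b101, 0b011]
print(good_patterns(U, 3, 2))
```

Here every one of the three deletion patterns is good, so this small U is uniform of rank 2.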

Application (1): sender 2 transmits the word X2 · U; sender 1 erases positions:
Y = X2 · U with the positions selected by X1 erased (∗).
Sum rate: ?

Continuation: sum rate? X2 can be retrieved from the remaining (unerased) part if its rank is k, i.e. an inverse exists: X2 transmits k bits. X1 specifies ≈ 2^(nh((n−k)/n)) = 2^(nh(1−k/n)) erasure sequences, i.e. X1 transmits nh(1−k/n) = nh(k/n) bits (since h(p) = h(1−p)).
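A minimal decoding sketch, reusing the small 2 × 3 matrix assumed above for illustration: X2 is recovered from the unerased positions whenever they retain rank k (brute force here; a real decoder would invert the surviving submatrix).

```python
from itertools import product

U = [0b101, 0b011]   # assumed 2x3 uniform rank-2 matrix, rows as bitmasks
n, k = 3, 2

def encode(x2):
    # codeword X2 · U over GF(2), as a bit tuple
    w = 0
    for bit, row in zip(x2, U):
        if bit:
            w ^= row
    return tuple((w >> i) & 1 for i in range(n))

def decode(y):
    # y: received tuple with None at erased positions;
    # X2 is unique iff the unerased columns have rank k
    cands = [x2 for x2 in product((0, 1), repeat=k)
             if all(y[i] is None or encode(x2)[i] == y[i] for i in range(n))]
    return cands[0] if len(cands) == 1 else None

y = list(encode((1, 0)))
y[1] = None                  # X1 erases position 1
print(decode(tuple(y)))      # X2 = (1, 0) recovered from the remaining part
```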

Problem left: find the matrix U that maximizes the number of sequences X1 leaving a remaining matrix of rank k.
Sum rate: k/n + h(k/n) per transmission.

Exercise: give the matrix U and the efficiency for k = 1, k = 2, and k = n−1.

Existence (1). Ingredients: specify (n−k) erased columns. Property: the remaining part of U has rank k.

Existence (2). Let Y = # different patterns of (n−k) erased columns and X = # of rank-k matrices surviving a specific pattern. The total number of binary k × n matrices is 2^(kn), so averaging the X · Y good (matrix, pattern) pairs over all matrices, at least one matrix must be good for more than X · Y / 2^(kn) patterns.

Existence (3).
1. # different patterns of (n−k) column erasures: Y = C(n, k) ≈ 2^(nh(k/n))
2. # invertible k × k binary matrices: F = (2^k − 1)(2^k − 2) ··· (2^k − 2^(k−1))
3. A specified pattern allows X = 2^((n−k)k) · F matrices U: the k surviving columns form an invertible matrix, the erased columns are free
4. 2^((n−k)k) · F ≥ c_F · 2^(nk), where c_F ≈ 0.28
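The constants in steps 2 and 4 are easy to verify numerically; this quick sketch is not part of the original slides:

```python
from math import prod

def F(k):
    # number of invertible k x k binary matrices:
    # choose each row outside the span of the previous ones
    return prod(2**k - 2**i for i in range(k))

# F(k) / 2^(k*k) decreases toward c_F = prod_{i>=1} (1 - 2^-i)
for k in (1, 2, 3, 8):
    print(k, F(k) / 2**(k * k))

c_F = prod(1 - 2.0**-i for i in range(1, 60))
print(round(c_F, 4))   # the slides round this constant to 0.28
```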

Existence (4). Average # of allowed patterns per matrix = X · Y / 2^(kn) ≥ c_F · Y. Conclusion: there exists at least one k × n matrix for which at least c_F · Y ≈ 0.28 · 2^(nh(k/n)) different patterns of up to (n−k) column erasures leave a matrix of rank k.

Extension. Ingredients: specify any k′ ≤ k columns. Property: the specified columns form a matrix of rank k′. Wish: this for every k′ ≤ k, for optimum performance!

Application (2): the broadcast channel, one input with two outputs Y and Z. Step 1: encode the information for Y. Y has a maximum of k zeros; its ones are mapped to 1/2, e.g. Y = (1, 0, 1, 0, 1, 1) gives C(Y) = (1/2, 0, 1/2, 0, 1/2, 1/2).

Application (2), continued: Y has k′ zeros. The extra information for Z is X = (X1, X2, …, X(n−k)), carried by C(X). Choose C(X,Y) such that C(X) ⊕ C(X,Y) has zeros exactly where C(Y) has them; the word C(X) ⊕ C(X,Y) determines Z. Property: Z has the same zeros as C(Y).

Application (2), decoding: Z → Y (the zero positions) → C(X) ⊕ C(X,Y) → C(X,Y) (determined by its first k bits) → C(X) = (C(X) ⊕ C(X,Y)) ⊕ C(X,Y).

Continuation: why does it work? C(X,Y) = m · U for some k-bit vector m.
- The first k bits of C(X,Y) uniquely determine C(X,Y) (the first k columns of U have rank k).
- Any pattern of k′ bits can be constructed such that C(X) ⊕ C(X,Y) has zeros where Y has them (the specified k′ columns have rank k′).
- C(X) = (0, …, 0, X1, X2, …, X(n−k)) has no influence on the first k bits.

Transmitted information: (n−k) bits with C(X) and nh(k′/n) = nh((n−k′)/n) bits with Y. Hence, efficiency per transmission: (n−k)/n + h((n−k′)/n).
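As a sanity check, the efficiency formula is straightforward to evaluate; the parameter values below are arbitrary illustrations:

```python
from math import log2

def h(p):
    # binary entropy function, with h(p) = h(1 - p)
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

def efficiency(n, k, k_prime):
    # (n-k)/n bits per transmission via C(X) plus h((n-k')/n) bits via Y
    return (n - k) / n + h((n - k_prime) / n)

print(efficiency(10, 5, 5))   # k' = k = n/2: 0.5 + h(0.5) = 1.5
```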

Memory with defects: Y specifies a vector with k′ ≤ k defects (∗ = writable position), e.g. Y = (∗∗0∗∗0∗1∗∗1∗∗∗∗1∗). The information is C(X) = (0, …, 0, X1, X2, …, X(n−k)).
Store: C(X) ⊕ C(X,Y), with C(X,Y) chosen so that the stored word matches the defect values in Y.
Read: C(X) ⊕ C(X,Y) is read error-free; add C(X,Y) (determined by its first k bits) to get C(X).
Efficiency: (n−k)/n = 1 − k/n !
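The store/read cycle can be sketched end to end. The tiny matrix U and the brute-force search over C(X,Y) = m · U are illustrative assumptions; a real scheme would use large n and algebraic decoding:

```python
from itertools import product

# assumed 2x3 matrix U (rows as bitmasks, bit i = column i): every single
# column and every pair of columns has full rank, as the scheme requires
U = [0b101, 0b011]
n, k = 3, 2

def mul(m):
    # m·U over GF(2), returned as a bit tuple
    w = 0
    for bit, row in zip(m, U):
        if bit:
            w ^= row
    return tuple((w >> i) & 1 for i in range(n))

def store(x, defects):
    # x: the n-k information bits, placed in the last n-k positions of C(X);
    # defects: {position: stuck-at value}, at most k of them
    cx = (0,) * k + tuple(x)
    for m in product((0, 1), repeat=k):            # search C(X,Y) = m·U
        word = tuple(a ^ b for a, b in zip(cx, mul(m)))
        if all(word[i] == v for i, v in defects.items()):
            return word                             # matches every defect
    return None

def read(word):
    # C(X) is zero on the first k bits, so word[:k] = C(X,Y)[:k], which
    # determines C(X,Y) uniquely; adding it back recovers X
    for m in product((0, 1), repeat=k):
        cxy = mul(m)
        if cxy[:k] == word[:k]:
            return tuple(a ^ b for a, b in zip(word, cxy))[k:]

word = store((1,), {1: 1})    # one defect: position 1 stuck at 1
print(word, read(word))       # stored word matches the defect; X recovered
```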