Presentation transcript:

Slide 1 (title): Qiwen Wang, Sidharth Jaggi, Shuo-Yen Robert Li. Institute of Network Coding (INC), The Chinese University of Hong Kong. IEEE Information Theory Workshop 2011, Paraty, Brazil, October 19, 2011.

Slide 2 (outline): Motivation, Model, Main Results, Discussion, Conclusion.

Slide 3 (motivation): the noise level varies: the bit-flip probability is only known to lie in an interval (p - ε, p + ε).

Slide 4 (motivation, continued): in addition, errors propagate through the network, since intermediate nodes mix and forward what they receive.

Slide 5 (motivation, continued): the coding kernels are unknown a priori (the figure labels the local coding coefficients on the links, e.g. [f_{1,3}, f_{1,5}], [f_{2,4}, f_{2,7}], [f_{3,6}, f_{4,6}], [f_{6,8}, f_{6,9}]); as before, the noise level varies and errors propagate through mix-and-forward.

Slide 6 (model): Alice communicates with Bob over a network whose mincut from Alice to Bob is C.

Slide 7 (model): Alice transmits a batch of C packets, each consisting of n symbols, over the network (mincut = C).

Slide 8 (model): in bits, Alice's transmitted batch is a Cm × n binary matrix X, and Bob's received batch is a Cm × n binary matrix Y.

Slide 9 (packet format): one packet consists of n symbols S_1, ..., S_n over GF(2^m). Writing each symbol S_i as its m bits S_{i1}, ..., S_{im} (one column per symbol) turns the packet into an m × n binary matrix, i.e. mn bits.
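
To make the packet format concrete, here is a minimal Python sketch (not from the slides) that maps between the two views, assuming each GF(2^m) symbol is given as an integer in [0, 2^m) whose bit i is row i of the column:

```python
# Sketch: represent a packet of n symbols over GF(2^m) as an m-by-n binary matrix.
# Assumption: each symbol is an integer in [0, 2^m), bit i of the integer being row i.

def packet_to_matrix(symbols, m):
    """Return an m x n list-of-lists of bits, one column per symbol."""
    return [[(s >> row) & 1 for s in symbols] for row in range(m)]

def matrix_to_packet(matrix):
    """Inverse map: collapse each column of bits back into an integer symbol."""
    m = len(matrix)
    n = len(matrix[0])
    return [sum(matrix[row][col] << row for row in range(m)) for col in range(n)]

if __name__ == "__main__":
    m, symbols = 4, [0b1011, 0b0110, 0b0001]   # n = 3 symbols over GF(2^4)
    M = packet_to_matrix(symbols, m)
    assert matrix_to_packet(M) == symbols
    print(M)   # 4 rows, 3 columns, i.e. mn = 12 bits
```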

Slide 10 (symbols as matrices): a symbol T over GF(2^m) can equivalently be represented as an m × m binary matrix. Multiplying T and S over GF(2^m) then amounts to multiplying this m × m binary matrix with the m-bit column representing S, with arithmetic over the binary field.
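
A small sketch of this equivalence, under assumptions of my own: GF(2^4) in a polynomial basis with irreducible polynomial x^4 + x + 1 (the slide does not specify the field representation). The function mult_matrix builds the m × m binary matrix of "multiply by T", and the assertion checks that its matrix-vector product over GF(2) reproduces TS over GF(2^m):

```python
# Sketch: a GF(2^m) symbol T acts on bit-columns as an m x m binary matrix.
# Assumptions (not in the slides): polynomial-basis representation of GF(2^4)
# with irreducible polynomial x^4 + x + 1 (0b10011).

M, IRRED = 4, 0b10011

def gf_mul(a, b):
    """Carry-less multiplication of a and b, reduced modulo IRRED."""
    acc = 0
    while b:
        if b & 1:
            acc ^= a
        b >>= 1
        a <<= 1
        if (a >> M) & 1:
            a ^= IRRED
    return acc

def mult_matrix(t):
    """Columns are t * x^j for j = 0..m-1, written as bit-columns (LSB = row 0)."""
    cols = [gf_mul(t, 1 << j) for j in range(M)]
    return [[(c >> row) & 1 for c in cols] for row in range(M)]

def matvec_gf2(A, v_bits):
    """Binary matrix times binary column vector, arithmetic over GF(2)."""
    return [sum(A[r][c] & v_bits[c] for c in range(len(v_bits))) % 2 for r in range(len(A))]

if __name__ == "__main__":
    T, S = 0b0111, 0b1010
    s_bits = [(S >> row) & 1 for row in range(M)]
    prod_bits = matvec_gf2(mult_matrix(T), s_bits)
    prod = sum(bit << row for row, bit in enumerate(prod_bits))
    assert prod == gf_mul(T, S)   # matrix-vector product over GF(2) equals TS over GF(2^m)
    print(bin(prod))
```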

Slide 11 (noiseless network): in the noiseless case Y = T X, where T is the Cm × Cm binary network transfer matrix and X, Y are the Cm × n transmitted and received batches; all arithmetic is over GF(2).

Slide 12 (error model): bit-flip errors occur on the links (the figure shows two example error patterns on Link A and Link B). The errors can be arbitrarily distributed, subject only to an upper bound of a fraction p of the bits, and the worst possible damage can happen to the received packets.

Slides 13-15 (error matrix Z): collect all bit flips into a worst-case bit-flip error matrix Z of size Em × n, where E is the number of edges in the network. Z contains no more than pEmn ones, arbitrarily distributed; its rows are grouped by edge, e.g. the first block of m rows holds the error bits on the first edge (edge 1).

Slide 16 (channel with errors): the received batch is Y = T X + T̂ Z over GF(2), where X is the Cm × n transmitted batch, T is the Cm × Cm transfer matrix, T̂ is the Cm × Em matrix mapping the edge error bits to the receiver, Z is the Em × n error matrix, and Y is Cm × n.
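
A sketch of this channel law with randomly generated matrices standing in for the network-induced T and T̂ (my stand-ins, not the construction from the talk); the adversary's Z is a random pattern that respects the pEmn budget:

```python
# Sketch of the worst-case bit-flip channel Y = T X + T_hat Z (all arithmetic over GF(2)).
# Assumption: T and T_hat are drawn at random here, standing in for the transfer and
# impulse-response matrices that a real network would induce.
import numpy as np

rng = np.random.default_rng(0)
C, E, m, n, p = 2, 3, 4, 8, 0.05        # mincut, edges, bits per symbol, packet length, flip fraction

T     = rng.integers(0, 2, size=(C * m, C * m))   # Cm x Cm network transfer matrix
T_hat = rng.integers(0, 2, size=(C * m, E * m))   # Cm x Em matrix mapping edge errors to the receiver
X     = rng.integers(0, 2, size=(C * m, n))       # transmitted batch (Cm x n bits)

# Adversarial pattern Z: at most p*E*m*n ones, placed arbitrarily (here: randomly).
budget = int(p * E * m * n)
Z = np.zeros((E * m, n), dtype=int)
flips = rng.choice(E * m * n, size=budget, replace=False)
Z[np.unravel_index(flips, Z.shape)] = 1

Y = (T @ X + T_hat @ Z) % 2                       # received Cm x n batch
print("bit flips injected:", budget, "| Y shape:", Y.shape)
```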

Slide 18 (distance): for a transmitted X(i) and a received Y(i), d_i is the minimum number of columns of T̂ that need to be added (over GF(2)) to TX(i) to obtain Y(i). Claim: this d is a distance metric.
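
A brute-force sketch of this distance for toy parameters (my own choice of sizes): it searches for the smallest number of bit flips Z, equivalently columns of T̂ added to columns of TX, that turn TX into Y. The search is exponential and is only meant to illustrate the definition:

```python
# Brute-force sketch of the slide's distance: the minimum number of bit flips Z
# (equivalently, columns of T_hat added to columns of T X) needed to turn T X into Y.
# Exponential search, only meant for toy parameters.
from itertools import combinations
import numpy as np

def distance(TX, T_hat, Y, max_flips=None):
    Em, n = T_hat.shape[1], TX.shape[1]
    positions = [(e, t) for e in range(Em) for t in range(n)]   # (which error bit, which column)
    if max_flips is None:
        max_flips = len(positions)
    for r in range(max_flips + 1):
        for flips in combinations(positions, r):
            Z = np.zeros((Em, n), dtype=int)
            for e, t in flips:
                Z[e, t] = 1
            if np.array_equal((TX + T_hat @ Z) % 2, Y):
                return r
    return None   # Y not reachable within max_flips flips

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    Cm, Em, n = 2, 4, 3
    T_hat = rng.integers(0, 2, size=(Cm, Em))
    TX = rng.integers(0, 2, size=(Cm, n))
    Z_true = np.zeros((Em, n), dtype=int)
    Z_true[1, 0] = Z_true[3, 2] = 1
    Y = (TX + T_hat @ Z_true) % 2
    print("d(TX, Y) =", distance(TX, T_hat, Y))    # at most 2; could be smaller
```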

Slide 19 (Theorem 1, Hamming-type upper bound): for all p less than C/(2Em), an upper bound on the achievable rate of any code over the worst-case binary-error network is C - E·H(p) packets per network use, where H is the binary entropy function (equivalently, the codebook can contain at most roughly 2^{Cmn - EmnH(p)} codewords; see the proof sketch on the next slide).

Slide 20 (proof sketch of Theorem 1): the total number of Cm × n binary matrices (the volume of the whole space) is 2^{Cmn}. To lower-bound the volume of the balls, consider those Z in which every column contains exactly pEm ones; distinct such Z result in distinct T̂Z, so the number of distinct T̂Z is at least (Em choose pEm)^n ≈ 2^{EmnH(p)}. The size of any codebook is therefore at most 2^{Cmn} / 2^{EmnH(p)}. Asymptotically in n, this yields the Hamming-type upper bound of C - E·H(p) packets per network use.
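
A quick numerical check of the counting above, for sample parameters of my choosing: it compares the exact ball volume (Em choose pEm)^n with its 2^{EmnH(p)} approximation and prints the resulting bound on log2 of the codebook size:

```python
# Numerical check of the sphere-packing count, for illustrative parameters of my choosing.
from math import comb, log2

def H(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

C, E, m, n = 4, 10, 8, 1000
p = 0.05
flips_per_column = round(p * E * m)                       # pEm ones in each column of Z, as in the sketch

total_bits  = C * m * n                                   # log2(# of Cm x n binary matrices)
ball_exact  = n * log2(comb(E * m, flips_per_column))     # log2 of (Em choose pEm)^n
ball_approx = E * m * n * H(flips_per_column / (E * m))   # log2 of 2^(Emn H(p))

print(f"log2 ball volume: exact {ball_exact:.1f}, entropy approximation {ball_approx:.1f}")
print(f"upper bound on log2 |codebook|: {total_bits - ball_exact:.1f} out of {total_bits} bits")
```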

Slides 21-24 (coherent vs. non-coherent): coherent NC: the receiver knows the internal coding coefficients, and hence knows T and T̂; however, random linear network coding coefficients are usually chosen on the fly. Non-coherent NC: the coding coefficients, and hence T and T̂, are unknown in advance; this is the more realistic setting.

Slide 25 (Theorems 2 and 3): Theorem 2: coherent GV-type network codes achieve a rate of at least C - E·H(2p) packets per network use. Theorem 3: non-coherent GV-type network codes achieve, asymptotically in n, at least the same rate (see the proof sketch on slide 27).
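
For a feel of the gap between the two bounds, the snippet below evaluates the rate expressions as reconstructed in this transcript (C - E·H(p) and C - E·H(2p), packets per network use); the closed forms and the parameter values are my reading of the proof sketches, not figures quoted from the talk:

```python
# Quick comparison of the two bounds as reconstructed in this transcript
# (Hamming-type: C - E*H(p); GV-type: C - E*H(2p), packets per network use).
# The closed forms are my reading of the proof sketches, not copied from the slides.
from math import log2

def H(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

C, E = 4, 10
for p in (0.001, 0.005, 0.01, 0.02):
    gv, hamming = C - E * H(2 * p), C - E * H(p)
    print(f"p={p:<6} GV-type >= {gv:5.2f}   Hamming-type <= {hamming:5.2f}")
```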

Slide 26 (proof sketch of Theorem 2): here we need an upper bound on the volume of the balls, rather than the lower bound used for Theorem 1 (sphere packing vs. sphere covering; the figure shows balls of radius 2pEmn around TX(1), TX(2), TX(3)). The number of different Y reachable from a codeword, or equivalently of different T̂Z, is bounded above by the number of different Z with at most 2pEmn ones, namely the sum of (Emn choose i) over i ≤ 2pEmn, which is at most roughly 2^{EmnH(2p)}. Greedily choosing codewords whose balls do not overlap gives a codebook of size at least 2^{Cmn} / 2^{EmnH(2p)}. Asymptotically in n, the rate of coherent GV-type network codes is thus at least C - E·H(2p) packets per network use.
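
A toy sketch of this greedy, GV-style codeword selection: it enumerates all candidate batches X and keeps one whenever TX lies outside the radius-2·(flip budget) balls of the codewords already chosen. The tiny parameters, the identity transfer matrix and the brute-force ball enumeration are illustrative choices of mine:

```python
# Toy sketch of a greedy GV-style codebook construction in the spirit of the proof sketch.
# The parameters, the identity transfer matrix and the brute-force ball enumeration are
# illustrative choices, not the construction's actual parameters.
from itertools import combinations, product
import numpy as np

Cm, Em, n, flip_budget = 2, 4, 2, 1          # toy sizes; decoding radius is 2*flip_budget
rng = np.random.default_rng(2)
T     = np.eye(Cm, dtype=int)                # transfer matrix (identity for simplicity)
T_hat = rng.integers(0, 2, size=(Cm, Em))    # matrix mapping edge error bits to the receiver

def ball(W, radius):
    """All matrices reachable from W by adding at most `radius` columns of T_hat."""
    positions = [(e, t) for e in range(Em) for t in range(n)]
    out = set()
    for r in range(radius + 1):
        for flips in combinations(positions, r):
            Z = np.zeros((Em, n), dtype=int)
            for e, t in flips:
                Z[e, t] = 1
            out.add(((W + T_hat @ Z) % 2).tobytes())
    return out

codebook, covered = [], set()
for bits in product([0, 1], repeat=Cm * n):            # all candidate batches X
    X = np.array(bits, dtype=int).reshape(Cm, n)
    TX = (T @ X) % 2
    if TX.tobytes() not in covered:                     # TX far from all chosen codewords
        codebook.append(X)
        covered |= ball(TX, 2 * flip_budget)            # carve out its radius-2*flip_budget ball

print("codebook size:", len(codebook), "out of", 2 ** (Cm * n), "candidates")
```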

Slide 27 (proof sketch of Theorem 3): the crucial difference from the proof of Theorem 2 is the process of choosing codewords. Consider all possible values of T̂, of which there are at most 2^{CmEm} (and hence all possible T, since T comprises a specific subset of C columns of T̂). The number of potential codewords that can be chosen for the codebook is then at least 2^{Cmn} / (2^{CmEm} · 2^{EmnH(2p)}) = 2^{Cmn - CmEm - EmnH(2p)}. Since the term CmEm does not grow with n, asymptotically in n this yields the same rate as coherent network coding in Theorem 2.

Slide 28 (range of p): Claim: for all sufficiently small p, both the Hamming-type and the GV-type bounds hold. Proof: Theorem 1 (the Hamming-type upper bound) requires p < C/(2Em), while the GV-type bounds of Theorems 2 and 3 are meaningful only when their rate expression is non-negative; when p is small, both conditions are satisfied.

Slide 29 (conclusion): worst-case bit-flip error model; Hamming-type upper bound (Theorem 1); coherent and non-coherent GV-type lower bounds (Theorems 2 and 3); GV-type codes are end-to-end in nature, with complexity polynomial in the block length.
