Network Coding Theory: Consolidation and Extensions Raymond Yeung Joint work with Bob Li, Ning Cai and Zhen Zhang.

Outline
- Single-Source Network Coding: global and local descriptions of a network code; linear multicast, broadcast, and dispersion; static codes
- Multi-Source Network Coding: fundamental limits of linear codes
Based on an upcoming paper to appear in Foundations and Trends in Communications and Information Theory (Editor: Sergio Verdu).

Single-Source Network Coding The network is acyclic. The message x, an ω-dimensional row vector in F^ω, is generated at the source node. A symbol in F can be sent on each channel.

Global Description The symbol sent on channel e is a function of the message, called the global encoding mapping for channel e. For any node v, the global encoding mappings have to satisfy the local constraints, i.e., the local encoding mapping for every node v is well defined.

A Globally Linear Network Code A code is globally linear if all the global encoding mappings are linear (and all the local constraints are satisfied). A globally linear code is the most general linear code that can possibly be defined. The global encoding mapping for channel e is characterized by a column vector f_e, s.t. the symbol sent on e is x · f_e. It can be proved that if a code is globally linear, then it is also locally linear, i.e., all local encoding mappings are linear.

Global Description vs Local Description Since the local encoding mapping at a node v is linear, it follows that for any e ∈ Out(v), f_e is a linear combination of the f_e', e' ∈ In(v). ⇒ Global description (Li-Yeung-Cai). These linear combinations form the local encoding kernel. ⇒ Local description (Koetter-Médard).
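As an illustrative sketch of the local-to-global relation f_e = Σ_{d ∈ In(v)} k_{d,e} · f_d, the following code propagates global encoding kernels through the butterfly network over GF(2). The channel names and local kernels here are assumptions chosen for the example, not taken from the slides.

```python
# Illustrative sketch: deriving global encoding kernels from local encoding
# kernels on the butterfly network over GF(2). Channel names and local
# kernels are assumed for this example.
import numpy as np

# Global kernels of the two source channels: with message x = (x1, x2),
# channel "s->t1" carries x · f = x1 and "s->t2" carries x2.
f = {"s->t1": np.array([1, 0]), "s->t2": np.array([0, 1])}

# Local description: each downstream channel e lists the pairs (d, k_{d,e})
# over its input channels d at the same node.
local = {
    "t1->u": [("s->t1", 1)],
    "t1->w": [("s->t1", 1)],
    "t2->u": [("s->t2", 1)],
    "t2->z": [("s->t2", 1)],
    "u->v":  [("t1->u", 1), ("t2->u", 1)],   # coding node: XOR of inputs
    "v->w":  [("u->v", 1)],
    "v->z":  [("u->v", 1)],
}

# The network is acyclic and the dict is listed upstream-to-downstream,
# so every f[d] is already known when f[e] is computed.
for e, kernel in local.items():
    f[e] = sum(k * f[d] for d, k in kernel) % 2

print(f["u->v"])   # [1 1]: the bottleneck channel carries x1 + x2
```

This makes the equivalence concrete: the table `local` is the local description (Koetter-Médard), and the resulting dictionary `f` of vectors is the global description (Li-Yeung-Cai).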

Global Description = Local Description The global description and the local description are two sides of the same coin: they are equivalent. Both can describe the most general form of a (block) linear network code!

Generic Network Code Definition (LYC) A linear network code is said to be generic if: for every set of channels {e_1, e_2, …, e_n}, where n ≤ ω and e_j ∈ Out(v_j), the vectors f_{e_1}, f_{e_2}, …, f_{e_n} are linearly independent provided that ⟨{f_d : d ∈ In(v_j)}⟩ ⊄ ⟨{f_{e_k} : k ≠ j}⟩ for 1 ≤ j ≤ n. The idea: whenever a collection of vectors can possibly be linearly independent, they are.

Special Cases of a Generic Network Code Generic network code ⇒ linear dispersion ⇒ linear broadcast ⇒ linear multicast. Each notion is strictly weaker than the previous one!

Linear Multicast For each node v, if maxflow(v) ≥ ω, then the message x can be recovered.
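A minimal sketch of what recovery means: a sink with maxflow(v) ≥ ω can pick ω incoming channels whose global kernels f_e form an invertible matrix F over the base field, and then solve x · F = y for the message. The ω = 2 example below over GF(2), with an assumed kernel matrix, is not from the slides.

```python
# Sketch (assumed example): message recovery at a sink of a linear multicast
# over GF(2). The columns of F are the global kernels of the two channels
# the sink listens to; the sink solves x·F = y by Gaussian elimination.
import numpy as np

def gf2_solve(F, y):
    """Solve x·F = y over GF(2) by elimination on the augmented [F^T | y^T]."""
    A = np.concatenate([F.T % 2, (y % 2).reshape(-1, 1)], axis=1)
    n = A.shape[0]
    for col in range(n):
        piv = next(r for r in range(col, n) if A[r, col])  # F assumed invertible
        A[[col, piv]] = A[[piv, col]]
        for r in range(n):
            if r != col and A[r, col]:
                A[r] ^= A[col]        # GF(2) row operation = XOR
    return A[:, -1]

x = np.array([1, 0])                  # the message, omega = 2
F = np.array([[1, 0],
              [1, 1]])                # columns are the two global kernels f_e
y = (x @ F) % 2                       # symbols the sink receives
print(gf2_solve(F, y))                # recovers the message: [1 0]
```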

Linear Broadcast For every node v: if maxflow(v) ≥ ω, the message x can be recovered; if maxflow(v) < ω, maxflow(v) dimensions of the message x can be recovered. Linear broadcast ⇒ linear multicast.

Linear Dispersion For every collection of nodes P: if maxflow(P) ≥ ω, the message x can be recovered; if maxflow(P) < ω, maxflow(P) dimensions of the message x can be recovered. Linear dispersion ⇒ linear broadcast ⇒ linear multicast. (A generic network code implies all.) For a linear dispersion, a newcomer who wants to receive the message x can do so by accessing a collection of nodes P such that maxflow(P) ≥ ω, where each individual node u in P may have maxflow(u) < ω.

Code Constructions A generic network code exists for all sufficiently large base fields F and can be constructed by the LYC algorithm. A linear dispersion, a linear broadcast, and a linear multicast can potentially be constructed with decreasing complexity, since they satisfy a set of properties of decreasing strength. In particular, a polynomial-time algorithm for constructing a linear multicast has been reported independently by Sanders et al. and Jaggi et al.

Static Codes The static linear multicast was introduced by Koetter and Médard, and finds applications in robust network multicast. Static versions of the linear broadcast and the linear dispersion can be defined accordingly. The LYC algorithm can be modified to construct a static generic network code. This means that the static versions of a linear dispersion, a linear broadcast, and a linear multicast can all be constructed.

Multi-Source Network Coding A network is given. Independent information sources of rates ω = (ω_1, ω_2, …, ω_S) are generated at possibly different nodes, and each source is to be multicast to a specific set of nodes. The set of all achievable rate tuples is called the achievable information rate region R. If all the sources are multicast to the same set of nodes, the problem reduces to a single-source network coding problem; otherwise it does not.

A multi-source network coding problem cannot be decomposed into single-source network coding problems even when all the information sources are generated at the same node (Yeung 95). Special multi-source network coding problems have been shown to be decomposable (Roche, Hau, Yeung, Zhang 95-99).

An Example of Indecomposability (with Wireless Application) Independent sources need to be coded jointly. [Figure: a network in which the independent source bits b_1 and b_2 are combined into b_1 + b_2 on a shared channel.]

Characterization of the Information Rate Region R Inner and outer bounds on R for acyclic networks can be expressed in terms of the region of all entropy functions of random variables (Yeung 97, Yeung-Zhang 99, Song et al. 03). A computable outer bound on R, called R_LP, has also been obtained. Only existence proofs by random coding are available ⇒ no code construction.

The region Γ* Let Γ* be the set of all entropy functions of a collection of random variables labeled by the information sources and the channels.

Outer Bound R_out If an information rate tuple ω is achievable, then there exists h ∈ closure(Γ*) which satisfies a set of constraints, denoted by C, which specifies:
1. the independence of the information sources
2. the rate tuple
3. the local constraints of the code
4. the channel capacity constraints
5. the multicast requirements
C is a collection of hyperplanes in Euclidean space.

Linear Codes for Multiple Sources The global description of a linear network code can be generalized to multiple sources. Each channel is characterized by a column vector of an appropriate dimension. The existence of a linear code is nothing but the existence of a collection of vectors satisfying the set of constraints C.

The Region Υ* Let Υ* be the set of all rank functions for a collection of λ-dimensional column vectors labeled by the information sources and the channels over some finite field F, where λ ≥ 1.

Linear Codes vs Nonlinear Codes Linear codes ⇒ R_linear: an information rate tuple ω is linearly achievable iff there exists h ∈ closure(Υ*) which satisfies the set of constraints C. Note: R_linear includes all rate tuples that are inferior to some rate tuple achievable by mixing linear codes. Nonlinear codes ⇒ outer bound R_out: if an information rate tuple ω is achievable, then there exists h ∈ closure(Γ*) which satisfies the set of constraints C.

Similarity between Rank and Entropy The rank function satisfies:
1. 0 ≤ rank(A)
2. rank(A) ≤ rank(B) if A ⊂ B
3. rank(A) + rank(B) ≥ rank(A ∪ B) + rank(A ∩ B)
4. rank(A) ≤ |A|
The entropy function in general satisfies:
1. 0 ≤ H(A)
2. H(A) ≤ H(B) if A ⊂ B
3. H(A) + H(B) ≥ H(A ∪ B) + H(A ∩ B)
Properties 1-3 are called the polymatroidal axioms.
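The rank properties above can be checked mechanically. The sketch below (the ground set of vectors and its dimension are arbitrary choices for illustration, not from the slides) verifies all four axioms for every pair of subsets of a small random set of GF(2) vectors:

```python
# Sketch: numerically verifying the polymatroidal axioms for the rank
# function on subsets of a set of GF(2) column vectors. The ground set
# is an arbitrary random choice.
import itertools
import random

def gf2_rank(vecs):
    """Rank over GF(2) of a collection of 0/1 tuples, via Gaussian elimination
    on bitmask representatives with one basis element per leading bit."""
    pivots = {}                          # leading-bit position -> basis element
    for v in vecs:
        x = int("".join(map(str, v)), 2)
        while x:
            lead = x.bit_length() - 1
            if lead not in pivots:
                pivots[lead] = x
                break
            x ^= pivots[lead]
    return len(pivots)

random.seed(0)
ground = [tuple(random.randint(0, 1) for _ in range(4)) for _ in range(5)]
subsets = [frozenset(c) for r in range(len(ground) + 1)
           for c in itertools.combinations(ground, r)]

for A in subsets:
    for B in subsets:
        rA, rB = gf2_rank(A), gf2_rank(B)
        assert 0 <= rA and rA <= len(A)                      # axioms 1 and 4
        if A <= B:
            assert rA <= rB                                  # axiom 2 (monotone)
        assert rA + rB >= gf2_rank(A | B) + gf2_rank(A & B)  # axiom 3 (submodular)

print("polymatroidal axioms verified on all subset pairs")
```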

The Bridge from Rank to Entropy Theorem 1: Let F be a finite field, Y be a λ-dimensional random row vector distributed uniformly on F^λ, and A be a λ × l matrix. Let Z = Y·A. Then H(Z) = rank(A) log |F|. Using this theorem, it can be shown that Υ* ⊂ Γ*.
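Theorem 1 can be checked numerically by brute force. In this sketch F = GF(2), so H(Z) should equal rank(A) bits; the matrix A is an arbitrary example (its GF(2) rank is 2, since row 1 XOR row 3 equals row 2), not one from the slides.

```python
# Sketch: checking H(Z) = rank(A) * log|F| for F = GF(2), where Y is uniform
# on GF(2)^k and Z = Y·A. The matrix A is an arbitrary example of rank 2.
import itertools
import math
from collections import Counter

k, l = 3, 4
A = [[1, 0, 1, 1],
     [0, 1, 1, 0],
     [1, 1, 0, 1]]

# Enumerate all 2^k equally likely values of Y and tally the induced Z.
counts = Counter()
for y in itertools.product([0, 1], repeat=k):
    z = tuple(sum(y[i] * A[i][j] for i in range(k)) % 2 for j in range(l))
    counts[z] += 1

n = 2 ** k
H = -sum(c / n * math.log2(c / n) for c in counts.values())
print(H)   # 2.0 bits = rank(A) * log2|GF(2)|
```

The mechanism is visible in `counts`: the image of the map has 2^rank(A) points, each hit by a fiber of the same size, so Z is uniform on a set of size 2^rank(A).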

A Gap between Υ* and Γ* In addition to the polymatroidal axioms, the rank function also satisfies the Ingleton inequality: r(A_13) + r(A_14) + r(A_23) + r(A_24) + r(A_34) ≥ r(A_3) + r(A_4) + r(A_12) + r(A_134) + r(A_234), where A_ij denotes A_i ∪ A_j, etc. The Ingleton inequality is satisfied by algebraic structures as general as Abelian groups. The corresponding inequality is not satisfied by the entropy function (Zhang-Yeung 99), so there is a gap between Υ* and Γ*. This gap between Υ* and Γ* suggests that nonlinear codes may actually perform better for some multi-source problems.
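Since the Ingleton inequality is a theorem for ranks of vector collections, it can be spot-checked on random instances. The sketch below (dimensions, set sizes, and the number of trials are arbitrary choices) evaluates both sides of the inequality, in the indexing used above with r(A_ij) = rank of A_i ∪ A_j, for random GF(2) vector sets:

```python
# Sketch: spot-checking the Ingleton inequality for ranks of random sets of
# GF(2) vectors. Instance sizes are arbitrary illustrative choices.
import random

def gf2_rank(vecs):
    """Rank over GF(2) of a collection of 0/1 tuples."""
    pivots = {}                          # leading-bit position -> basis element
    for v in vecs:
        x = int("".join(map(str, v)), 2)
        while x:
            lead = x.bit_length() - 1
            if lead not in pivots:
                pivots[lead] = x
                break
            x ^= pivots[lead]
    return len(pivots)

def r(*groups):
    """Rank of the union of the given groups of vectors, r(A_i ∪ A_j ∪ ...)."""
    return gf2_rank([v for g in groups for v in g])

random.seed(0)
for _ in range(100):
    A1, A2, A3, A4 = [[tuple(random.randint(0, 1) for _ in range(5))
                       for _ in range(3)] for _ in range(4)]
    lhs = r(A1, A3) + r(A1, A4) + r(A2, A3) + r(A2, A4) + r(A3, A4)
    rhs = r(A3) + r(A4) + r(A1, A2) + r(A1, A3, A4) + r(A2, A3, A4)
    assert lhs >= rhs                    # Ingleton holds for linear ranks

print("Ingleton inequality held in all 100 random trials")
```

No random search over vector sets will ever violate the inequality; by contrast, Zhang-Yeung exhibit entropy functions that do, which is exactly the gap between Υ* and Γ*.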

Vector Linear Codes Vector linear codes (Riis; Lehman and Lehman; Médard, Effros, Ho, Karger, Koetter). A vector linear code can be regarded as a linear code over a network obtained by expanding all the capacities by an integer factor. It has been shown that some multi-source problems do not have linear solutions but have vector linear solutions. Question 1: Are these vector linear solutions better than all mixtures of linear solutions? Question 2: Do these vector linear solutions violate the Ingleton inequality? (If so, the answer to Q1 is yes.)

Codes Beyond Fields Dougherty, Freiling and Zeger have recently shown that there exists a multi-source problem that has no linear solution even in the more general algebraic context of modules, which includes all finite rings and Abelian groups. Question 1: Is the nonlinear solution given by DFZ better than all mixtures of linear solutions? Question 2: Does the nonlinear solution given by DFZ violate the Ingleton inequality? (If so, the answer to Q1 is yes.)

Ingleton Inequality Classification
Codes that abide by the Ingleton inequality: linear codes, module codes.
Codes that do not necessarily abide by the Ingleton inequality: vector linear codes (they abide by the Ingleton inequality in an extended space).
Codes that do not abide by the Ingleton inequality: non-Abelian group codes, which are asymptotically as good as all nonlinear codes (Chan, submitted to ISIT 2005).

Thank You