Some new(ish) results in network information theory
Muriel Médard, RLE, EECS, MIT
Newton Institute Workshop, January 2010

Collaborators MIT: David Karger, Anna Lee Technical University of Munich: Ralf Koetter † (previously UIUC) California Institute of Technology: Michelle Effros, Tracey Ho (previously MIT, UIUC, Lucent) Stanford: Andrea Goldsmith, Ivana Maric University of South Australia: Desmond Lun (previously MIT, UIUC)

Overview New results in network information theory have generally centered around min-cut max-flow theorems because of multicast –Network coding Algebraic model Random codes Correlated sources Erasures Errors –High SNR networks What happens when we do not have multicast –Problem of non-multicast network coding –Equivalence

[KM01, 02, 03] Starting without noise

The meaning of connection

The basic problem

Linear codes

Linear network system

Transfer matrix

Linear network system

An algebraic flow theorem

Multicast

Multicast theorem We recover the min-cut max-flow theorem of [ACLY00]
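The min-cut max-flow statement can be checked numerically. Below is a sketch (node labels are my own, not the slide's): an Edmonds-Karp max-flow computation on the classic butterfly topology with unit-capacity edges. Each receiver has min cut 2, which [ACLY00] shows network coding achieves to both receivers simultaneously.

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp: repeatedly augment along shortest residual paths."""
    flow = 0
    # residual capacities: copy forward edges, add zero-capacity reverse edges
    res = {u: dict(vs) for u, vs in cap.items()}
    for u, vs in cap.items():
        for v in vs:
            res.setdefault(v, {}).setdefault(u, 0)
    while True:
        # BFS for an augmenting path in the residual graph
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        # recover the path, find the bottleneck, update residuals
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(res[u][v] for u, v in path)
        for u, v in path:
            res[u][v] -= aug
            res[v][u] += aug
        flow += aug

# Butterfly network, all edges capacity 1 (hypothetical node labels)
cap = {
    's': {'a': 1, 'b': 1},
    'a': {'m': 1, 't1': 1},
    'b': {'m': 1, 't2': 1},
    'm': {'n': 1},
    'n': {'t1': 1, 't2': 1},
    't1': {}, 't2': {},
}
print(max_flow(cap, 's', 't1'))  # min cut to each receiver is 2
print(max_flow(cap, 's', 't2'))
```

Routing alone cannot deliver rate 2 to both sinks at once (the middle edge is shared), but XORing at node `m` does, which is what the multicast theorem captures.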

One source, disjoint connections plus multicasts

Avoiding roots The effect of the network is that of a transfer matrix from sources to receivers To recover symbols at the receivers, we require sufficient degrees of freedom – an invertible matrix in the coefficients of all nodes We just need to avoid roots in each of the submatrices corresponding to the individual receivers The realization of the determinant of the matrix will be non-zero with high probability if the coefficients are chosen independently and randomly over a large enough field - similar to Schwartz-Zippel bound [HKMKE03, HMSEK03] Random and distributed choice of coefficients, followed by matrix inversion at the receiver Is independence crucial?
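A quick Monte Carlo illustration of why large fields help (a sketch of the phenomenon, not the construction in [HKMKE03]): estimate the fraction of uniformly random n-by-n matrices over the prime field F_p that are invertible. The Schwartz-Zippel-style argument says this fraction approaches 1 as the field grows.

```python
import random

def rank_mod_p(rows, p):
    """Row rank of a matrix over the prime field F_p (Gaussian elimination)."""
    rows = [r[:] for r in rows]
    rank, ncols = 0, len(rows[0])
    for col in range(ncols):
        # find a pivot with a nonzero entry in this column
        piv = next((i for i in range(rank, len(rows)) if rows[i][col] % p), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        inv = pow(rows[rank][col], -1, p)  # modular inverse (Python 3.8+)
        rows[rank] = [(x * inv) % p for x in rows[rank]]
        for i in range(len(rows)):
            if i != rank and rows[i][col] % p:
                f = rows[i][col]
                rows[i] = [(a - f * b) % p for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank

def frac_invertible(n, p, trials=2000, seed=0):
    """Empirical probability that a uniform n x n matrix over F_p is invertible."""
    rng = random.Random(seed)
    ok = sum(
        rank_mod_p([[rng.randrange(p) for _ in range(n)] for _ in range(n)], p) == n
        for _ in range(trials)
    )
    return ok / trials

# Larger fields make a random transfer matrix invertible w.h.p.
print(frac_invertible(4, 2))    # small field: noticeably below 1
print(frac_invertible(4, 257))  # large field: very close to 1
```

Over F_2 the exact probability for n = 4 is about 0.31, while over F_257 it exceeds 0.99, which is the intuition behind choosing coefficients randomly over a large enough field.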

Connection to Slepian-Wolf Also a question of min-cut in terms of entropies for some senders and a receiver Here again, random approaches optimal Different approaches: random linear codes, binning, etc…
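To make the min-cut-in-entropies statement concrete, here is a small sketch computing the Slepian-Wolf rate region corner points for a hypothetical correlated binary pair (the joint pmf below is illustrative, not from the talk): R1 >= H(X|Y), R2 >= H(Y|X), R1 + R2 >= H(X,Y).

```python
from math import log2

def H(probs):
    """Entropy in bits of a probability list."""
    return -sum(p * log2(p) for p in probs if p > 0)

# Hypothetical doubly symmetric binary source: X and Y agree w.p. 0.9
pXY = {(0, 0): 0.45, (1, 1): 0.45, (0, 1): 0.05, (1, 0): 0.05}
pX = [sum(v for (x, _), v in pXY.items() if x == b) for b in (0, 1)]
pY = [sum(v for (_, y), v in pXY.items() if y == b) for b in (0, 1)]

HXY = H(list(pXY.values()))
HX_given_Y = HXY - H(pY)   # corner point for R1
HY_given_X = HXY - H(pX)   # corner point for R2
print(f"Slepian-Wolf region: R1 >= {HX_given_Y:.3f}, "
      f"R2 >= {HY_given_X:.3f}, R1+R2 >= {HXY:.3f}")
```

For this source the sum rate H(X,Y) is about 1.47 bits, well below the 2 bits of separate lossless coding, and random binning (or random linear codes) achieves it without the encoders communicating.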

Extending to a whole network Use distributed random linear codes Redundancy is removed or added in different parts of the network depending on available capacity No knowledge of source entropies at interior network nodes For the special case of a single source and depth one, the network coding error exponents reduce to known error exponents for Slepian-Wolf coding [Csi82]

Proof outline Sets of joint types Sequences of each type

Proof outline Cardinality bounds Probability of source vectors of certain types Extra step to bound probability of being mapped to a zero at any node Why is this useful?

Joint source network coding Example from [Lee et al 07] with R = 3: sum of rates on virtual links > R for the joint solution (cost 9), = R for the separate solution (cost 10.5). (Figure legend: physical links, links for receiver 1, links for receiver 2.)

What next? Random coding approaches work well for arbitrary networks with multicast Decoding is easy if we have independent or linearly dependent sources, may be difficult if we have arbitrary correlation –need minimum entropy decoding –can add some structure –can add some error [CME08] What happens when the channels are not perfect?

Erasure reliability – single flow End-to-end erasure coding: Capacity is packets per unit time. As two separate channels: Capacity is packets per unit time. - Can use block erasure coding on each channel. But delay is a problem. Network coding: minimum cut is capacity - For erasures, correlated or not, we can in the multicast case deal with average flows uniquely [LME04], [LMK05], [DGPHE04] - Nodes store received packets in memory - Random linear combinations of memory contents sent out - Delay expressions generalize Jackson networks to the innovative packets - Can be used in a rateless fashion
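As a concrete instance of the comparison (an assumed two-link line network with erasure probabilities e1 and e2; the slide's own figures are not reproduced here): with end-to-end coding a packet must survive both links, while re-coding at the relay faces only the bottleneck link.

```python
# Two-hop line network with packet erasure probabilities e1, e2 (illustrative)
e1, e2 = 0.2, 0.3

# End-to-end erasure coding: a packet must survive both links
end_to_end = (1 - e1) * (1 - e2)

# Treating the hops as two separate channels with re-coding at the relay:
# each hop is coded independently, so the bottleneck link sets the rate
per_hop = min(1 - e1, 1 - e2)

print(f"end-to-end: {end_to_end:.2f} packets/unit time")   # 0.56
print(f"re-coding at the relay: {per_hop:.2f}")            # 0.70
```

The gap between the two is exactly what intermediate-node (network) coding recovers, and random linear combinations at the relay achieve it without block-by-block decoding delays.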

Scheme for erasure reliability We have k message packets w1, w2, ..., wk (fixed-length vectors over F_q) at the source. (Uniformly-)random linear combinations of w1, w2, ..., wk injected into the source’s memory according to a process of rate R_0. At every node, (uniformly-)random linear combinations of memory contents sent out; –received packets stored into memory. –in every packet, store a length-k vector over F_q representing the transformation it is of w1, w2, ..., wk (the global encoding vector), sent in the packet overhead; no need for side information about delays.
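The scheme above can be sketched end to end: mix k message packets with random coefficients over a prime field, carry the global encoding vector in each packet header, and decode by Gaussian elimination once k linearly independent packets arrive. Field size, packet length, and seed below are illustrative.

```python
import random

def rlnc_demo(k=4, plen=8, p=257, seed=1):
    """Sketch of random linear network coding over the prime field F_p."""
    rng = random.Random(seed)
    msgs = [[rng.randrange(p) for _ in range(plen)] for _ in range(k)]

    def combine():
        # one coded packet: random global encoding vector + mixed payload
        coeffs = [rng.randrange(p) for _ in range(k)]
        payload = [sum(c * m[j] for c, m in zip(coeffs, msgs)) % p
                   for j in range(plen)]
        return coeffs, payload

    received = []
    while True:
        received.append(combine())
        # augmented matrix [encoding vectors | payloads]
        A = [c[:] + pay[:] for c, pay in received]
        # Gauss-Jordan elimination over F_p
        rank = 0
        for col in range(k):
            piv = next((i for i in range(rank, len(A)) if A[i][col] % p), None)
            if piv is None:
                continue
            A[rank], A[piv] = A[piv], A[rank]
            inv = pow(A[rank][col], -1, p)
            A[rank] = [(x * inv) % p for x in A[rank]]
            for i in range(len(A)):
                if i != rank and A[i][col] % p:
                    f = A[i][col]
                    A[i] = [(a - f * b) % p for a, b in zip(A[i], A[rank])]
            rank += 1
        if rank == k:
            # full rank: the payload columns now hold the original packets
            decoded = [row[k:] for row in A[:k]]
            return decoded == msgs, len(received)

ok, n = rlnc_demo()
print(ok, n)  # decodes correctly, usually after k or just over k packets
```

Because the coefficients travel in the packet overhead, interior nodes can re-mix freely and the receiver needs no side information about topology or delays, matching the rateless operation described on the slide.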

Outline of proof Keep track of the propagation of innovative packets - packets whose auxiliary encoding vectors (transformation with respect to the n packets injected into the source’s memory) are linearly independent across particular cuts. Can show that, if R_0 is less than capacity and the input process is Poisson, then the propagation of innovative packets through any node forms a stable M/M/1 queueing system in steady state. –We obtain delay expressions using, in effect, a generalization of Jackson networks for innovative packets We may also obtain error exponents over the network Erasures can be handled readily and we have a fairly good characterization of the network-wide behavior
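A minimal numeric sketch of the resulting M/M/1 behavior at a single node (the rates are illustrative, not from the talk): innovative packets arrive at rate R_0 and are served at the link capacity mu, so the standard M/M/1 formulas give the per-node queue size and delay.

```python
# M/M/1 sketch: innovative packets arrive at rate R0 (Poisson) and are
# served at the link's capacity mu; the queue is stable whenever R0 < mu.
R0, mu = 0.7, 1.0            # illustrative rates (packets per unit time)
rho = R0 / mu                # utilization
N = rho / (1 - rho)          # mean number of innovative packets in the system
T = 1 / (mu - R0)            # mean delay per node (Little's law: N = R0 * T)
print(f"utilization {rho:.2f}, mean queue {N:.2f}, mean delay {T:.2f}")
```

Chaining such nodes is where the Jackson-network-style generalization enters: in a Jackson network each queue behaves as an independent M/M/1 system, so per-node delays of this form can be composed along the flow.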

The case with errors There is separation between network coding and channel coding when the links are point-to-point [SYC06], [Bor02] Both papers address the multicast case, in which the tightness of the min-cut max-flow bounds can be exploited and random approaches work The deleterious effects of the channel are not crucial for capacity results, but we may not have a handle on error exponents and other detailed aspects of behavior Is the point-to-point assumption crucial?

Model as additive channels over a finite field What happens when we do not have point-to-point links? Let us consider the case where we have interference as the dominating factor (high SNR) Recently considered in the high-gain regime [ADT08, 09] In the high-SNR regime (different from high gain in networks because of noise amplification), analog network coding is near optimal [MGM10] In all of these cases, min-cut max-flow holds and random arguments also hold (Figure labels: model as error-free links; multiple access region)
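A toy illustration of the finite-field additive model (my own example, not the slide's): when two transmissions "collide" as a field sum, a receiver with side information about one of them loses nothing, since it can subtract its contribution off.

```python
import random

rng = random.Random(0)
n = 8
x1 = [rng.randrange(2) for _ in range(n)]   # packet from transmitter 1
x2 = [rng.randrange(2) for _ in range(n)]   # packet from transmitter 2

# High-SNR model: the receiver observes the field sum (XOR over F_2)
y = [a ^ b for a, b in zip(x1, x2)]

# A receiver that already knows x1 (e.g. its own transmission) recovers x2
x2_hat = [c ^ a for c, a in zip(y, x1)]
print(x2_hat == x2)  # True
```

This is the sense in which, once noise is abstracted away, interference becomes algebra rather than loss, and min-cut max-flow arguments carry over.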

Beyond multicast Solutions are no longer easy, even in the scalar linear case Linearity may not suffice [DFZ04], examples from matroids It no longer suffices to avoid roots Many problems regarding solvability remain open – achievability approaches will not work

Linear general case

The non-multicast case Even for linear systems, we do not know how to perform network coding over error-free networks Random schemes will not work What can we say about systems with errors for point-to-point links? The best we can hope for is for an equivalence between error-free systems and systems with errors –Random schemes handle errors locally –The global problem of non-multicast network coding is inherently combinatorial and difficult

Beyond multicast All channels have capacity 1 bit/unit time. Are these two networks essentially the same? Intuitively, since the “noise” is uncorrelated with any other random variable, it cannot help.

Decoding is restrictive The characteristic of a noisy link vs. a capacitated bit-pipe: a noisy channel allows for a larger set of strategies than a bit-pipe

Equivalence Dual to Shannon theory: by emulating noisy channels as noiseless channels with the same link capacity, we can apply existing tools for noiseless channels (e.g. network coding) to obtain new results for networks with noisy links. This provides a new method for finding network capacity. (Figure: a noisy channel Y = X + N with capacity C = I(X;Y), and its noiseless counterpart, a bit-pipe of throughput C.)

Network equivalence We consider a factorization of networks into hyperedges We do not provide solutions to networks, since we are dealing with inherently combinatorial and intractable problems We instead consider mimicking the behavior of noisy links over error-free links We can generate equivalence classes among networks, subsuming all possible coding schemes [KEM09] –R_noiseless ⊆ R_noisy is easy, since the maximum rate on the noiseless channels equals the capacity of the noisy links: can transmit at the same rates on both. –R_noisy ⊆ R_noiseless is hard, since we must show the capacity region is not increased by transmitting over links at rates above the noisy link capacity. We prove this using the theory of types Metrics other than capacity may not be the same for both networks (e.g. error exponents), particularly because of feedback For links other than point-to-point, we can generate bounds These may be distortion-based rather than capacity-based, unlike the small examples we have customarily considered
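A small sketch of the quantity being matched in the equivalence: the Shannon capacity of the noisy link that the noiseless bit-pipe emulates, computed here for a binary symmetric channel with an illustrative crossover probability.

```python
from math import log2

def Hb(p):
    """Binary entropy function in bits."""
    if p in (0, 1):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

# Binary symmetric channel with crossover probability p (illustrative value):
# the equivalence replaces this noisy link by a noiseless bit-pipe of rate C
p = 0.11
C = 1 - Hb(p)
print(f"BSC({p}) capacity: {C:.3f} bits/use")  # about 0.5
```

The easy direction of the equivalence says the bit-pipe of rate C can do whatever the BSC can; the hard direction, proved via the theory of types, says nothing beyond rate C can be extracted from the noisy link in any network context.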

Conclusions Random approaches seem to stand us in good stead for multicast systems under a variety of settings (general networks, correlated sources, erasures, errors, interference…) For more general network connections, the difficulty arises even in the absence of channel complications - we may have equivalences between networks Can we combine the two to figure out how to break up networks in a reasonable way to allow useful modeling?