A Reductionist view of Network Information Theory
Michael Langberg, SUNY Buffalo



Network Information Theory
- The field of network communication is a very rich and intriguing field of study.
- There has been great progress over the last decades on several communication scenarios, yet several problems remain open.
- Studies may at times share analytical techniques; however, to some extent, each new problem engenders its own new theory.
- Goal: a unifying theory that may explain the commonalities and differences between problems and solutions.
[Figure: a network with sources s1,...,s4 and terminals t1,...,t4]

Towards a unifying theory
- Individual studies focusing on specific problems have been extremely productive.
- A different perspective: a "conditional" study of network communication problems.
- Focus on connections: compare different communication problems through the lens of reductions.
- We can connect problems without explicitly knowing either of their solutions.
[Figure: two networks N1 and N2, each with sources s1,...,s4 and terminals t1,...,t4]

Overview
- Reductions.
- Preliminaries: network coding.
- Simplifying the NC model.
- Is NC hard?
- Reliable and secure communication.
- Can NC help solve other problems as well?

Reductions
- Definition.
- Example 1.
- Example 2.
- Example 3.

- Reductions can show that a problem is easy.
- Reductions can show that a problem is hard.
- Reductions allow propagation of proof techniques.
- The study of reductions raises new questions.
- The study of reductive arguments identifies central problems.
- Reductions provide a framework for generating a taxonomy.
- Reductions have the potential to unify and steer future studies.
This talk: reductive studies.
- Index Coding / Network Coding.
- Index Coding / Interference Alignment.
- Multiple Unicast vs. Multiple Multicast NC.
- Network Equivalence.
- Secure Communication vs. MU NC.
- Reliable Communication vs. MU NC.
- 2-Unicast vs. K-Unicast NC.
- Index Coding / Distributed Storage.
- ...

Noiseless networks: network coding
- Directed network N.
- Source vertices S.
- Terminal vertices T.
- Set of requirements: transfer information from S_i to T_j.
- Objective: design an information flow that satisfies the requirements.
[Figure: a network with sources S1, S2 and terminals T1, T2, T3]

Communication
Communication at rate R = (R_1,...,R_k) is achievable over instance (N, {(s_i,t_i)}_i) with block length n if there exist random variables {S_i}, {X_e} such that:
- Rate: each source S_i is an independent, uniform random variable with H(S_i) = R_i n; that is, each S_i transmits one of 2^{R_i n} messages.
- Edge capacity: for each edge e of capacity c_e, X_e is a random variable in [2^{c_e n}].
- Functionality: for each edge e we have a function f_e from the incoming random variables X_{e_1}, ..., X_{e_in(e)} to X_e (i.e., X_e = f_e(X_{e_1}, ..., X_{e_in(e)})).
- Decoding: for each terminal T_i we define a decoding function yielding S_i.
- Communication is successful with probability 1-ε over {S_i}_i.
R = (R_1,...,R_k) is "(ε,n)-feasible" if communication is achievable.
R = (R_1,...,R_k) is feasible if for all ε > 0 there exists n such that R is (ε,n)-feasible.
Capacity: the closure of all feasible R.
[Figure: a network with sources S1,...,S4, terminals T1,...,T4, and edge messages X1, X2, X3, X_e = f_e(...)]
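The definitions above can be illustrated on the classic butterfly network, where a single bottleneck edge forwards the XOR of the two source bits and each terminal decodes using its direct side link. A minimal sketch in Python; the function name and bit-level framing are illustrative, not from the slides:

```python
# Illustrative sketch of network coding on the butterfly network: two
# unit-rate sources share one unit-capacity bottleneck edge by sending
# the XOR of their bits.

def butterfly(s1_bit, s2_bit):
    # Encoding at the middle node: the bottleneck edge carries the XOR.
    bottleneck = s1_bit ^ s2_bit
    # Terminal t1 receives s2 directly plus the bottleneck symbol.
    t1_decodes_s1 = bottleneck ^ s2_bit
    # Terminal t2 receives s1 directly plus the bottleneck symbol.
    t2_decodes_s2 = bottleneck ^ s1_bit
    return t1_decodes_s1, t2_decodes_s2

# Both terminals recover their demanded source bit for every input,
# so the rate pair (1, 1) is achievable.
for a in (0, 1):
    for b in (0, 1):
        assert butterfly(a, b) == (a, b)
```

Without coding, the bottleneck could carry only one of the two bits per use; the XOR is what makes the rate pair (1, 1) feasible.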

Examples
- Example 1.
- Example 2.

Index Coding [Birk, Bar-Yossef et al.]
IC is a special case of NC.
- A set S of sources and a set T of terminals.
- Each terminal has some subset of the sources (as side information) and wants some subset of the sources.
- The broadcast link has capacity c_B; all other links have unlimited capacity.
- Objective: satisfy all terminals using broadcast rate c_B.
[Figure: sources s1,...,s4 and terminals t1,...,t4 connected through a broadcast link of capacity c_B]
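A minimal index coding sketch, assuming the simple special case where terminal t_i wants s_i and has all the other sources as side information: broadcasting the XOR of all source bits satisfies every terminal with broadcast rate 1 (all names here are illustrative):

```python
from functools import reduce
import operator

def broadcast(sources):
    # The single broadcast symbol: XOR of all source bits.
    return reduce(operator.xor, sources)

def decode(b, side_info):
    # A terminal XORs the broadcast symbol with its side information
    # to cancel everything except the source it wants.
    return reduce(operator.xor, side_info, b)

msgs = [1, 0, 1, 1]
b = broadcast(msgs)
for i in range(len(msgs)):
    side = [m for j, m in enumerate(msgs) if j != i]
    assert decode(b, side) == msgs[i]
```

Without coding, the broadcast link would need rate 4 to satisfy all four terminals; the side information reduces this to a single symbol.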

M-Multicast to M-Unicast
[Dougherty Zeger] [Wong Langberg Effros] [Kamath Tse Wang]

Third step: reduce to Multiple Unicast.
- Network Coding [Dougherty Zeger].
- Linear Index Coding [Maleki Cadambe Jafar].
- General (noisy) networks including IC [Wong Langberg Effros].
- Zero-error MU Index Coding [Langberg Effros].
[Diagram: NC reduces to Index Coding, which reduces to Multiple Unicast Index Coding]

Simplifying topology
- Step 1: Present a reduction from NC to IC.
- Step 2: Equivalence for linear and general encoding/decoding [El Rouayheb Sprintson Georghiades], [Effros El Rouayheb Langberg].
Theorem: For any NC instance and rate R one can construct an IC instance and rate R' such that for any n: the NC is (R,n)-feasible iff the IC is (R',n)-feasible.
[Figure: an NC instance with sources s1, s2 and terminals t1, t2 mapped to an IC instance with sources s1,...,s6 and terminals t1,...,t6]

The reduction
The Index Coding instance:
- Sources corresponding to the NC sources and the NC edges.
- Terminals corresponding to the NC terminals, the NC edges, and one special terminal.
- For each edge e: terminal t_e in the IC wants IC source X_e and has as side information all IC sources incoming to e in the NC.
The IC encodes the topology of the NC in its terminals!
[Figure: NC network with edge messages X1, X2, X3, X_e mapped to the IC instance]

The reduction in more detail
- Sources: |S|+|E| sources, one for each source of the NC and one for each edge of the NC: {S_i'} and {S_e'}.
- Terminals: |T|+|E|+1 terminals:
  - One terminal t_i' for each NC terminal t_i: wants S_i' and has {S_e'} for e in In(t_i).
  - One terminal t_e' for each edge e: wants S_e' and has {S_a'} for each edge a in In(e).
  - One special terminal t_all: wants {S_e'} and has {S_i'}.

The reduction in more detail
- Sources: |S|+|E| sources, one for each source of the NC and one for each edge of the NC: {S_i'} and {S_e'}.
- Terminals: |T|+|E|+1 terminals:
  - One for each NC terminal t_i: wants S_i' and has {S_e'} for e in In(t_i).
  - One for each edge e: wants S_e' and has {S_a'} for edges a in In(e).
  - One special terminal t_all: wants {S_e'} and has {S_i'}.
- Bottleneck edge of capacity c_B = Σ c_e.
- Given rate vector R = (R_1,...,R_k) we construct rate vector R' = ({R_i'}; {R_e'}) with R_i' = R_i and R_e' = c_e.

Terminal | Has                     | Wants
t_i'     | {S_e'} for e in In(t_i) | S_i'
t_e'     | {S_a'} for a in In(e)   | S_e'
t_all    | {S_i'}                  | {S_e'}

Reduction  | NC          | IC
Sources    | S_1,...,S_k | {S_i'}, {S_e'}
Terminals  | t_1,...,t_k | {t_i'}, {t_e'}, t_all
Capacities | c_e         | c_B = Σ c_e
Rate       | R_1,...,R_k | {R_i'}, {R_e'} with R_i' = R_i, R_e' = c_e

Theorem: For any NC instance and rate R one can construct an IC instance and rate R' such that for any n: the NC is (R,n)-feasible iff the IC is (R',n)-feasible.
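The bookkeeping of this construction can be sketched as follows. This is a hypothetical illustration of the instance mapping only (it builds the IC sources, terminals, broadcast capacity, and rate vector, not the feasibility proof); all identifiers are invented for the example:

```python
# Illustrative sketch of the NC-to-IC instance mapping: |S|+|E| IC sources
# and |T|+|E|+1 IC terminals. `in_edges` maps each NC terminal or edge to
# the list of IC sources it has as side information.

def nc_to_ic(sources, edges, cap, terminals, in_edges, rate):
    # IC sources: one per NC source ({S_i'}) and one per NC edge ({S_e'}).
    ic_sources = [('src', s) for s in sources] + [('edge', e) for e in edges]
    ic_terminals = []
    # One IC terminal t_i' per NC terminal t_i: wants S_i', has side info.
    for t, wanted in terminals.items():
        ic_terminals.append({'wants': [('src', wanted)], 'has': in_edges[t]})
    # One IC terminal t_e' per NC edge e: wants S_e', has {S_a'}, a in In(e).
    for e in edges:
        ic_terminals.append({'wants': [('edge', e)], 'has': in_edges[e]})
    # Special terminal t_all: wants all edge sources, has all NC sources.
    ic_terminals.append({'wants': [('edge', e) for e in edges],
                         'has': [('src', s) for s in sources]})
    # Broadcast capacity c_B = sum of c_e, and rate vector R'.
    c_B = sum(cap[e] for e in edges)
    ic_rate = {('src', s): rate[s] for s in sources}
    ic_rate.update({('edge', e): cap[e] for e in edges})
    return ic_sources, ic_terminals, c_B, ic_rate

# Toy instance: two sources, three edges, two terminals.
srcs, edges = ['s1', 's2'], ['e1', 'e2', 'e3']
cap = {'e1': 1, 'e2': 1, 'e3': 1}
terms = {'t1': 's1', 't2': 's2'}
side = {'t1': [('edge', 'e3')], 't2': [('edge', 'e3')],
        'e1': [('src', 's1')], 'e2': [('src', 's2')],
        'e3': [('edge', 'e1'), ('edge', 'e2')]}
S, T, cB, R = nc_to_ic(srcs, edges, cap, terms, side, {'s1': 1, 's2': 1})
assert len(S) == len(srcs) + len(edges)        # |S| + |E| sources
assert len(T) == len(terms) + len(edges) + 1   # |T| + |E| + 1 terminals
assert cB == 3                                 # c_B = sum of c_e
```

The point of the sketch is that the IC terminal set literally encodes the NC topology: each edge of the NC becomes one "wants/has" constraint in the IC.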

Theorem
Theorem: NC is (R,n)-feasible iff IC is (R',n)-feasible.
[Figure: the NC instance (sources s1, s2; terminals t1, t2) and the corresponding IC instance (sources s1,...,s6; terminals t1,...,t6), together with the reduction table]

Scalar Linear Coding
Q: Given an instance G with requirements R = [r_ij], can one determine whether the instance has scalar linear capacity 1?
- Think of the first capacity definition.
- Each source holds a single character to be transmitted.
A: "No" [Lehman Lehman]. It is NP-hard to determine scalar linear feasibility (C = 1). We are not even asking to find a network code!
Proof technique by reduction: show that solving the problem at hand efficiently would enable the efficient solution of a "hard" problem.
- Instance of the hard problem → Network Coding instance.
- Solution to the NC problem → solution to the hard problem.

Scalar Linear Coding
Q: Given an instance G with requirements R = [r_ij], can one determine whether the instance is linearly feasible when each source holds a single character to be transmitted? (Think of the first capacity definition and require capacity = 1.)
A: "No" [Lehman Lehman]. It is NP-hard to determine feasibility. We are not even asking to find a network code!
- The reduction is from the 3-SAT problem.
- 3-SAT: given a 3-CNF formula, determine whether it is satisfiable.
- 3-SAT is a classical NP-complete problem.
Proof technique by reduction: show that solving the problem at hand would solve a problem considered to be hard.
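For reference, 3-SAT itself can be decided by brute force in exponential time. A hypothetical sketch (the clause encoding is invented for the example) illustrating the problem being reduced from:

```python
from itertools import product

# Illustrative brute-force 3-SAT check (exponential time, for intuition
# only). A clause is a tuple of signed variable indices, e.g. (1, -2, 3)
# means (x1 OR NOT x2 OR x3).
def satisfiable(clauses, n_vars):
    for assignment in product([False, True], repeat=n_vars):
        # A clause is satisfied if at least one literal is true.
        if all(any(assignment[abs(lit) - 1] == (lit > 0) for lit in c)
               for c in clauses):
            return True
    return False

assert satisfiable([(1, -2, 3), (-1, 2, -3)], 3)
assert not satisfiable([(1, 1, 1), (-1, -1, -1)], 1)
```

The NP-hardness statement above says that an efficient scalar-linear feasibility test would replace this exponential search with a polynomial one, which is believed impossible.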

Scalar Linear Coding [Lehman Lehman]
Given a 3-SAT instance φ, construct a network coding instance (G, R) such that:
- 2 sources are associated with each variable, corresponding to TRUE and FALSE.
- A single terminal is associated with each clause.
- With each clause, a subgraph and terminal requirements are associated.
The reduction works: φ is satisfiable iff (G, R) is feasible.
Proof technique by reduction:
- Instance of the hard problem → Network Coding instance.
- Solution to the NC problem → solution to the hard problem.

Scalar Linear Coding [Lehman Lehman]
- Two sources for each variable.
- The sink needs M_j, M_k, M_l.
- Each pair of sources s_j needs to "pick" a TRUTH value and send it to r_j.
- The sink gets the "chosen" information of r_j, r_k, r_l.
- The sink gets arbitrary source information from u_i and v_i.
- From u_i and v_i the sink can get two out of the three; it needs to get at least one from the r's.
- SAT implies feasible (easy); feasible implies SAT (needs proof), for scalar linear mixing.
Conclusion: the NC instance is feasible iff the formula is satisfiable, so it is NP-hard to determine whether an instance is feasible (scalar linear).

What about approximately finding capacity?
- Up to now: finding a scalar-linear NC that achieves capacity is NP-hard.
- Question: is it easy to find a scalar-linear NC that enables communication at 50% of capacity?
- NO! It is "hard" to find a scalar-linear NC that enables communication within any constant factor of capacity.
- Main idea: use Index Coding and the connection to clique cover [LS].
- The previous two constructions do not extend when trying to find an NC that approximately meets capacity.

Secure NC

This work
Error correction in network coding. Objective: coding against a jammer controlling links.
Look at a simple open problem:
- Single source, single terminal.
- Acyclic networks.
- All edges have unit capacities.
- The adversary controls a single link.
- Some edges cannot be jammed.
- What is the communication rate?
Up to now: well understood!

Related example
A similar setting was studied for wiretap adversaries [Huang Ho Langberg Kliewer; Chan Grant].
- Well understood: multicast; uniform links; a single source generating randomness.
- Not well understood: multiple nodes generate randomness.
Consider a simple setting: single source/terminal; acyclic; uniform edge capacities; 1 wiretapped edge; any node can generate randomness.
Determining the secure capacity is as hard as determining the MU network coding capacity.

Results
Study: acyclic networks, single source, single terminal, an adversary controlling a single link, unit-capacity edges, and some edges that cannot be jammed.
Show: computing the capacity is as hard as computing the capacity of Multiple Unicast Network Coding.
Proof: by reduction.

What next?
Computing the error-correcting capacity is as hard as computing the capacity of MU Network Coding.
- Present proof ideas for zero-error communication.
- Subtleties for standard communication (asymptotic error, asymptotic rate).

Zero error case
Computing the capacity is as hard as computing the capacity of Multiple Unicast Network Coding.
- Input: MU NC problem N.
- Q: is the rate tuple (1,1,...,1) achievable with zero error?
- Reduction: construct a new network N'.
- The adversary can jam any single link except links leaving s and entering t.
- Thm: (1,1,...,1) is achievable on N iff rate k is achievable on N'.

Zero error case
Computing the capacity is as hard as computing the capacity of Multiple Unicast Network Coding.
- The adversary can jam any single link except links leaving s and entering t.
- Thm: (1,1,...,1) is achievable on N iff rate k is achievable on N'.
- Assume (1,1,...,1) on N. The source sends information on the links a_i; one error may occur.
- B_i decodes based on majority, so a single error will not corrupt the data.
- Hence rate k is possible on N'.
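The majority-decoding step can be sketched as follows, assuming each information symbol reaches B_i over three parallel links of which the adversary may corrupt at most one (an illustrative fragment, not the full construction):

```python
from collections import Counter

def majority_decode(received):
    # B_i takes a majority vote over the copies it receives; with three
    # copies, a single jammed link cannot change the outcome.
    return Counter(received).most_common(1)[0][0]

sent = 7
copies = [sent, sent, sent]
copies[1] = 99                      # adversary corrupts one of the links
assert majority_decode(copies) == sent
```

This is the easy direction of the theorem: repetition plus majority voting neutralizes the single-link jammer, so the full rate k survives.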

Zero error case
Computing the capacity is as hard as computing the capacity of Multiple Unicast Network Coding.
- Assume rate k is achievable on N'; we want to show (1,1,...,1) on N.
- Operating at full rate (cut set): there is a 1-1 correspondence between the message M, a_1...a_k, and b_1...b_k.
- Claim (error correction): for M_1 ≠ M_2, if b_i(M_1) ≠ b_i(M_2) then z_i'(M_1) ≠ z_i'(M_2).

Zero error case
Computing the capacity is as hard as computing the capacity of Multiple Unicast Network Coding.
- Assume rate k is achievable on N'; we want to show (1,1,...,1) on N.
- Operating at full rate (cut set): 1-1 correspondence between the message M, a_1...a_k, and b_1...b_k.
- Claim (error correction): for M_1 ≠ M_2, if b_i(M_1) ≠ b_i(M_2) then z_i'(M_1) ≠ z_i'(M_2).
- Proof: assume otherwise, i.e., z_i'(M_1) = z_i'(M_2), and consider two settings: M_1 transmitted with an error on x_1, and M_2 transmitted with an error on y_1. The cut values are equal, so B_1 cannot distinguish between M_1 and M_2, and hence the terminal cannot distinguish between M_1 and M_2.
- This gives a 1-1 correspondence between b_i and z_i'.

Zero error case
Computing the capacity is as hard as computing the capacity of Multiple Unicast Network Coding.
- Assume rate k is achievable on N'; we want to show (1,1,...,1) on N.
- Operating at full rate: 1-1 between the message M, a_1...a_k, and b_1...b_k.
- 1-1 correspondence between b_i and z_i'.
- Same technique: 1-1 correspondence between a_i, x_i, y_i, z_i.
- Also a 1-1 correspondence between b_i and x_i.
- All in all: 1-1 between z_i, x_i, b_i, z_i'.
- This implies a connection between z_i and z_i': Multiple Unicast.

Network equivalence
The first explicit reductive paradigm for network communication [Koetter Effros Médard].
"Simple" network: replace individual independent memoryless components by corresponding noiseless components (i.e., Network Coding).
[Figure: a "complex" network N bounded between "simple" networks N_in and N_out]

Example: upper bound
- Replace independent memoryless (noisy) components by upper-bounding noiseless components.
- Replace the noisy component by a Network Coding component.
- Prove: any rate tuple R in the capacity region of the original network is also in that of the upper-bounding network.
[Figure: the "complex" network N and the "simple" network N_out]

What is known?
Point-to-point channels [Koetter Effros Médard]: if the component is a noisy point-to-point channel, then it can be replaced with a "bit pipe" of corresponding capacity.
This may sound intuitive but is definitely not trivial:
- One must prove that any coding scheme that allows communication on N can be converted to one for N_out: end-to-end network emulation.
- One must take into account that the link may appear in the middle of the network and its output could be used in "crazy" ways; reliable communication over N does not imply reliable communication over all components of N.
Nevertheless, for point-to-point channels, preserving component-wise communication suffices for network emulation [Koetter Effros Médard].

What is known?
Multiple source/terminal channels: what if the component is, e.g., a broadcast channel? [Koetter Effros Médard]
- In this case (and others) it is known that preserving component-wise communication does not suffice for network emulation.
- Major question: which properties are needed from the bounding component to allow network emulation?
[Figure: a broadcast channel with input X and outputs Y1, Y2]

Examples

The edge removal problem
Assume rate (R_1,...,R_k) is achievable on network N. Consider the network N\e obtained by removing an edge e of capacity ε. What can be said regarding the achievable rate on the new network?
What is the guarantee on the loss in rate when experiencing a link failure? [Ho Effros Jalali]
[Figure: network N with edge e, and network N\e, each with sources S1,...,S4 and terminals T1,...,T4]

Edge removal
What is the loss in rate when removing an ε-capacity edge?
- There exist simple instances in which removing an edge of capacity ε decreases each rate by an additive ε.
- E.g., the butterfly network with a bottleneck consisting of 1/ε edges of capacity ε: R = (1,1) is achievable, and after removing one edge R = (1-ε, 1-ε) is achievable.
- What is the "price of edge removal" in general?
[Figure: butterfly network with sources S1, S2, terminals T1, T2, and a bottleneck carrying S1+S2]
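The arithmetic of the parallel-edge butterfly example can be sketched as follows (illustrative only; the function name is invented):

```python
# Sketch of the parallel-edge butterfly example: the unit bottleneck is
# split into n = 1/eps edges of capacity eps each. Removing one such edge
# lowers the bottleneck from 1 to 1 - eps, so the achievable rate pair
# drops from (1, 1) to (1 - eps, 1 - eps).
def rates_after_removal(eps):
    n = round(1 / eps)
    bottleneck = n * eps            # total bottleneck capacity = 1
    after = bottleneck - eps        # one eps-capacity edge removed
    return (after, after)

assert rates_after_removal(0.25) == (0.75, 0.75)
```

The open question is whether this additive-ε loss is the worst case for all NC instances, or whether removing a tiny edge can ever cost much more.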

Price of "edge removal".
In several special instances, the removal of a δ-capacity edge causes at most an additive δ decrease in rate [HoEffrosJalali]:
Multicast: ≤ δ decrease in rate.
Collocated sources: ≤ δ decrease in rate.
Linear codes: ≤ δ decrease in rate.
Is this true for all NC instances? Is the decrease in rate continuous as a function of δ?
A seemingly simple problem, but currently open.

Edge removal in noisy networks.
In the case of noisy networks, the edge removal statement does not hold.
Adversarial noise (jamming): point-to-point communication over y = x + e. Adding a side channel of negligible capacity allows X to send a hash of the message x to Y, turning list decoding into unique decoding [Guruswami] [Langberg]. Hence a significant difference in rate when the edge is removed.
Memoryless noise: multiple access channel p(y|x_1 x_2). Adding edges of negligible capacity (a "cooperation facilitator") allows a significant increase in communication rate [Noorzad Effros Langberg Ho].
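The hash-over-a-side-channel idea can be illustrated with a toy sketch. The names `short_hash` and `unique_decode` are hypothetical, and SHA-256 merely stands in for the hash family of [Guruswami] [Langberg]: the list decoder produces a short list of candidates, and the negligible-capacity side channel carries a few hash bits that single out the transmitted message (with high probability, since a collision inside a short list is rare):

```python
import hashlib


def short_hash(msg: bytes, bits: int = 16) -> int:
    """Toy side-channel hash: the first `bits` bits of SHA-256."""
    digest = hashlib.sha256(msg).digest()
    return int.from_bytes(digest[:4], "big") % (1 << bits)


def unique_decode(candidate_list, side_hash):
    """Return the unique list element matching the side-channel hash.

    Returns None on a (rare) hash collision within the list.
    """
    matches = [m for m in candidate_list if short_hash(m) == side_hash]
    return matches[0] if len(matches) == 1 else None
```

The side channel carries only `bits` bits regardless of the message length, which is why its capacity can be negligible yet still change the achievable rate.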

What is the price of "edge removal"?
For network coding: not known, even for a relaxed (asymptotic) statement.
Challenge: designing a code for N\{e} given one for N.
Nevertheless, one may study the implications if the statement is true … or false.
Will show implications on: reliability in network communication; the assumed topology of the underlying network; the assumed demand structure in communication; advantages of cooperation in network communication.

1. Reliability: zero vs. ε error.
Assume rate (R_1,…,R_k) is achievable on network N with some small probability of error ε > 0.
What can be said regarding the achievable rate when insisting on zero error?
What is the cost in rate of assuring zero-error communication as opposed to ε-error?

Reliability: zero vs. ε error.
Can one obtain a higher communication rate when allowing an ε-error, as opposed to zero-error?
In general communication models, when source information is dependent, the answer is YES! [SlepianWolf] [Witsenhausen].
What about the network coding scenario, in which source information is independent and the network is noiseless? Is there an advantage of ε over zero error for general NC?

Price of zero error. What's known:
Multicast: statement is true [Li Yeung Cai] [Koetter Medard].
Collocated sources: statement is true [Chan Grant] [Langberg Effros].
Linear codes: statement is true [Wong Langberg Effros].
Is the statement true in general? Is the loss in rate continuous as a function of ε?

Edge removal ⇒ zero error!
Edge removal is true iff zero ~ ε error in NC.
Edge removal ⇒ zero error [Chan Grant] [Langberg Effros]:
Assume: network N is R=(R_1,…,R_k)-feasible with ε error.
Assume: the asymptotic edge removal statement holds.
Prove: network N is (R−δ)-feasible with zero error, for every δ > 0.

2. Topology of networks.
Recent studies have shown that any network coding instance (NC) can be reduced to a simple instance referred to as index coding (IC) [ElRouayheb Sprintson Georghiades], [Effros ElRouayheb Langberg].
An efficient reduction that allows one to solve NC using any scheme that solves IC.
Network communication is challenging: it combines topology with information. The reduction separates information from topology.
Index coding has only one network node performing encoding.
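A toy index coding instance helps make the "single encoding node" point concrete. This sketch (hypothetical function names, not from the slides) covers the special case where each of k receivers wants one message bit and already holds all the others as side information; a single XOR broadcast then satisfies every receiver:

```python
from functools import reduce
from operator import xor


def index_code_broadcast(messages):
    """The single encoding node broadcasts the XOR of all message bits."""
    return reduce(xor, messages)


def decode(broadcast, side_info):
    """A receiver XORs its side information back out of the broadcast,
    recovering the one message bit it is missing."""
    return reduce(xor, side_info, broadcast)
```

One broadcast symbol serves k demands here, versus k symbols without side information; general index coding instances interpolate between these extremes.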

Connecting NC to IC.
Theorem: NC is R-feasible iff IC is R'=f(R)-feasible.
Reduction in code design: a code for IC corresponds to a code for NC.
Related question: can one determine the capacity region of NC from that of IC?
Surprisingly: currently no! The reduction breaks down under the closure operation.

Edge removal resolves the question: if the (asymptotic) edge removal statement holds, then the capacity region of NC can be determined from that of IC [Wong Langberg Effros].

"Edge removal" implies:
Zero ~ ε error in network coding.
Reduction in capacity vs. reduction in code design.
Advantages of cooperation in network communication.
Assumed demand structure in communication.

3. Source dependence.
Let N be a directed acyclic multiple unicast network. Up to now we considered independent sources.
In general, if source information is dependent, it is "easier" to communicate (i.e., cooperation).
Assume rate (R_1,…,R_k) is achievable when the source information S_1,…,S_k is slightly dependent: Σ_i H(S_i) − H(S_1,…,S_k) ≤ δ.
What can be said regarding the achievable rate when the source information is independent?
What are the rate benefits of shared information/cooperation?
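The dependence measure Σ_i H(S_i) − H(S_1,…,S_k) can be computed directly for a toy pair of slightly correlated bits. The parameterization by a correlation rho is an illustrative assumption, not from the slides:

```python
import math


def entropy(pmf):
    """Shannon entropy in bits of a probability mass function."""
    return -sum(p * math.log2(p) for p in pmf if p > 0)


def dependence(rho):
    """Sum of marginal entropies minus joint entropy for two bits whose
    joint pmf is P(0,0)=P(1,1)=(1+rho)/4, P(0,1)=P(1,0)=(1-rho)/4."""
    joint = [(1 + rho) / 4, (1 - rho) / 4, (1 - rho) / 4, (1 + rho) / 4]
    # both marginals are uniform, so H(S_1) = H(S_2) = 1 bit each
    return 2.0 - entropy(joint)
```

For rho = 0 the sources are independent and the measure is 0; for small rho it grows like rho²/(2 ln 2), so small rho captures exactly the "slightly dependent" regime δ → 0.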

Price of "independence".
In several cases there is a limited loss in rate when comparing δ-dependent (Σ_i H(S_i) − H(S_1,…,S_k) ≤ δ) and independent source information [Langberg Effros]:
Multicast: ≤ δ decrease in rate.
Collocated sources: ≤ δ decrease in rate.
Is this true for all NC instances? Is the decrease in rate continuous as a function of δ?

Edge removal ⇒ source independence [Langberg Effros].

"Edge removal" implies:
Zero = ε error in network coding.
Reduction in capacity vs. reduction in code design.
Advantages of cooperation in network communication.
Multiple unicast NC can be reduced to 2-unicast.

4. Network demands.
Recent studies have reduced any network communication instance with multiple multicast demands to a multiple unicast instance:
Network coding [Dougherty Zeger] (zero-error setting).
Linear index coding [Maleki Cadambe Jafar].
General (noisy) networks [Wong Langberg Effros].

Network demands.
For the case of network coding one can further reduce to 2-unicast! [Kamath Tse Wang].
Holds only in the limited setting of code design (not capacity) and only for zero error.
Can one determine the capacity of multiple multicast networks using 2-unicast networks?
Again, the reduction breaks down in the general setting.
Let's connect to edge removal …

Network demands.
The asymptotic edge removal statement is true iff the reduction of [Kamath Tse Wang] holds in capacity [Wong Effros Langberg].
NC: multiple multicast capacity can be determined from 2-unicast capacity.

"Edge removal" is equivalent to:
Zero = ε error in network coding.
Reduction in capacity vs. reduction in code design.
Limited dependence in network coding implies limited capacity advantage.
Multiple unicast NC can be reduced to 2-unicast.
All forms of slackness are equivalent: reliability, closure, dependence, edge capacity.