Correction of Adversarial Errors in Networks
Sidharth Jaggi, Michael Langberg, Tracey Ho, Michelle Effros
Submitted to ISIT 2005
Motivation
- Greater throughput
- Robust against random errors
- Aha! Network coding!
[Figure: Xavier transmits to Yvonne over a network; the adversary Zorba controls some unknown links.]
Background: noisy channel models (Shannon, ...)
Binary Symmetric Channel with noise parameter p: capacity C = 1 − H(p), which reaches zero at p = 0.5.
[Figure: capacity C vs. p for the BSC.]
Binary Erasure Channel with erasure probability p: capacity C = 1 − p.
[Figure: capacity C vs. p for the BEC.]
Background: adversarial channel models
- "Limited-flip" adversary (Hamming, Gilbert-Varshamov, McEliece et al., ...)
- Variants: shared randomness, private key, computationally bounded adversary, ...
[Figure: capacity C vs. noise parameter p.]
Model 1 – Setup
- Xavier transmits to Yvonne over |E| directed unit-capacity links.
- Zorba (hidden to Xavier and Yvonne) controls a set Z of |Z| links; p = |Z|/|E|.
- Xavier and Yvonne share no resources (no private key, no shared randomness).
- Zorba is computationally unbounded; Xavier and Yvonne can only perform "simple" computations.
- Zorba knows the protocols and already knows almost all of Xavier's message (everything except Xavier's private coin tosses).
Model 1 – Xavier and Yvonne's goal
Knowing |Z| but not Z, come up with an encoding/decoding scheme that allows a maximal rate of information to be decoded correctly with high probability. The rate is normalized by the number of links |E|.
Model 1 – Results
[Figure: capacity C vs. noise parameter p. For p < 0.5 the scheme achieves rate C = 1 − p; for p ≥ 0.5 no positive rate is achievable, since Zorba can force a probability of error of 0.5.]
Model 1 – Encoding
- Message X: an (|E| − |Z|) × n(1 − ε) matrix of symbols over the finite field F_q (block length n; ε is a rate fudge-factor).
- Apply an MDS code: multiply X by an |E| × (|E| − |Z|) Vandermonde matrix to obtain T, an |E| × n(1 − ε) matrix with rows T_1, ..., T_|E|, one per link.
- The remaining nε symbols on each link carry "easy to use" consistency information.
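The MDS step above can be sketched in code. This is a minimal illustration, not the paper's implementation: the field size q, the number of links E, and the evaluation points are all assumed for the example; a Vandermonde matrix over F_q is used because any |E| − |Z| of its rows are invertible, which is exactly the MDS property the scheme needs.

```python
# Sketch of the MDS encoding step over GF(q), q prime, with assumed
# toy parameters: E = 7 links, Z = 2 adversarial, so k = E - Z = 5.
q = 101          # prime field size (arithmetic mod q is a field)
E, Z = 7, 2
k = E - Z        # number of message rows: |E| - |Z|

# Vandermonde matrix: row i is (1, a_i, a_i^2, ..., a_i^{k-1}) for
# distinct a_i, so any k rows form an invertible matrix (MDS code).
alphas = list(range(1, E + 1))
V = [[pow(a, j, q) for j in range(k)] for a in alphas]

def encode(column):
    """Encode one length-k message column into |E| symbols, one per link."""
    return [sum(V[i][j] * column[j] for j in range(k)) % q for i in range(E)]

msg_col = [3, 1, 4, 1, 5]       # one column of the message matrix X
codeword = encode(msg_col)      # |E| symbols, one per link
```

Encoding is applied column by column, so each link i ends up carrying row T_i of the encoded matrix.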
Model 1 – Encoding (consistency information)
In the nε consistency symbols on each edge i, Xavier sends a random symbol r_i ∈ F_q together with hash symbols D_i1, ..., D_i|E|, where

D_ij = T_j(1)·1 + T_j(2)·r_i + ... + T_j(n(1−ε))·r_i^{n(1−ε)−1},

i.e., D_ij is the polynomial whose coefficient vector is T_j, evaluated at edge i's random point r_i.
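The hash D_ij is just a polynomial evaluation over F_q. A minimal sketch, with assumed toy values for q, T_j, and r_i (the real scheme uses a much larger field and block length):

```python
# Sketch of the per-edge consistency hash over GF(q).
q = 101

def hash_D(T_j, r_i):
    """D_ij = T_j(1)*1 + T_j(2)*r_i + ... + T_j(m)*r_i^(m-1) mod q."""
    return sum(c * pow(r_i, k, q) for k, c in enumerate(T_j)) % q

T_j = [7, 0, 13, 42]   # hypothetical codeword symbols on edge j
r_i = 29               # edge i's private random symbol
D_ij = hash_D(T_j, r_i)
```

Each edge i carries one such hash for every edge j, so the consistency overhead per edge is |E| + 1 symbols, which fits in the nε fudge-factor budget for large n.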
Model 1 – Transmission
Zorba may alter everything on the links he controls: Yvonne receives possibly corrupted values T_1', ..., T_|E|', r_1', ..., r_|E|', and D_11', ..., D_|E||E|'.
Model 1 – Decoding (consistency graph)
Yvonne runs a quick consistency check between every pair of received edges: edge i is consistent with edge j if both

D_ij' = T_j'(1)·1 + T_j'(2)·r_i' + ... + T_j'(n(1−ε))·r_i'^{n(1−ε)−1} and
D_ji' = T_i'(1)·1 + T_i'(2)·r_j' + ... + T_i'(n(1−ε))·r_j'^{n(1−ε)−1}

hold. The consistent pairs form a consistency graph on the |E| edges (self-loops are trivially consistent and not important).
Model 1 – Decoding (selection)
Detection: select the vertices connected to at least |E|/2 other vertices in the consistency graph. Decode, via the MDS code, using the T_i's on the corresponding edges.
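The selection rule is a simple degree threshold on the consistency graph. A sketch with an assumed precomputed relation (in the real decoder, `consistent[i][j]` would be the result of the two hash checks above; here it is hard-coded for a hypothetical run with 5 honest and 2 corrupted edges):

```python
# Sketch of the decoder's vertex-selection rule: keep every edge that is
# consistent with at least |E|/2 other edges.
E = 7
# consistent[i][j] == True iff edges i and j passed both hash checks.
# Hypothetical outcome: edges 0..4 are honest and mutually consistent,
# edges 5 and 6 are corrupted and consistent with nobody.
consistent = [[(i < 5 and j < 5) for j in range(E)] for i in range(E)]

selected = [i for i in range(E)
            if sum(consistent[i][j] for j in range(E) if j != i) >= E / 2]
```

With |Z| < |E|/2, the honest edges always clear the threshold (they are consistent with all other honest edges), while a corrupted edge clears it only by passing hash checks against honest edges, which the proof sketch below shows is unlikely.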
Model 1 – Proof sketch
Suppose Zorba tampers with edge j (so T_j' ≠ T_j) but not with edge i, so Yvonne receives the true r_i and the true hash D_ij = T_j(1)·1 + T_j(2)·r_i + ... + T_j(n(1−ε))·r_i^{n(1−ε)−1}. For the tampered edge j to pass the check against edge i, Zorba needs

∑_k (T_j(k) − T_j'(k))·r_i^{k−1} = 0,

a nonzero polynomial in r_i of degree less than n over F_q. Since the value of r_i is unknown to Zorba (it is one of Xavier's private coin tosses), the probability that r_i happens to be a root is less than n/q << 1.
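The key step is the standard fact that a nonzero degree-d polynomial over F_q has at most d roots, so a random unknown evaluation point fools the check with probability at most d/q. A numeric sanity check with assumed toy values (the tampered vector T_jp is hypothetical):

```python
# Exhaustively count the r values that would fool the consistency check
# for one hypothetical tampering, and confirm the n/q-style bound.
q = 101
T_j  = [7, 0, 13, 42]
T_jp = [7, 5, 13, 40]   # hypothetical tampered version, T_jp != T_j

def poly_eval(coeffs, r):
    return sum(c * pow(r, k, q) for k, c in enumerate(coeffs)) % q

diff = [(a - b) % q for a, b in zip(T_j, T_jp)]   # nonzero polynomial
fooling = [r for r in range(q) if poly_eval(diff, r) == 0]
frac = len(fooling) / q   # at most deg(diff)/q = 3/101 here
```

In the actual scheme q is chosen much larger than n, so n/q is negligible, and a union bound over all edge pairs keeps the overall error probability small.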
Variations – Feedback
[Figure: capacity C vs. p with feedback.]
Variations – Know thy enemy
[Figure: two capacity curves, C vs. p.]
Variations – Random noise
[Figure: capacity C vs. p, with C_N the capacity under random noise. Separation.]
Model 2 – Multicast
A source S multicasts to receivers R_1, ..., R_|T| over a network with min-cut h; Zorba controls a set Z of links, and p = |Z|/h.
Model 2 – Results
[Figure: capacity C (normalized by h) vs. p = |Z|/h; the transition is at p = 0.5.]
Model 2 – Sketch of proof
- Lemma 1 (easy): there exists a simple random design of network codes such that, for any Z of size < h/2, if Z is known, each decoder can decode.
- Lemma 2 (hard): using consistency-check arguments similar to those in Model 1, Z can be detected.
THE END