Correction of Adversarial Errors in Networks Sidharth Jaggi Michael Langberg Tracey Ho Michelle Effros Submitted to ISIT 2005.

Presentation transcript:

1 Correction of Adversarial Errors in Networks. Sidharth Jaggi, Michael Langberg, Tracey Ho, Michelle Effros. Submitted to ISIT 2005.

2

3

4 Greater throughput. Robust against random errors. Aha! Network coding!

5

6 ? ? ?

7

8 Xavier, Yvonne, Zorba. [Network diagram with unknown links marked "?".]

9 Background. Noisy channel models (Shannon, …): the Binary Symmetric Channel. [Plot: capacity C = 1 − H(p) versus noise parameter p, falling from 1 at p = 0 to 0 at p = 0.5.]

10 Background. Noisy channel models (Shannon, …): the Binary Symmetric Channel and the Binary Erasure Channel. [Plot: BEC capacity C = 1 − p versus noise parameter p.]

11 Background. Adversarial channel models: the "limited-flip" adversary (Hamming, Gilbert-Varshamov, McEliece et al., …); shared randomness, private key, computationally bounded adversary, … [Plot: capacity C versus noise parameter p, with the threshold p = 0.5 marked.]

12 Model 1. [Network diagram: Xavier transmits to Yvonne; Zorba controls hidden links.] |E| directed unit-capacity links. Zorba (hidden to Xavier and Yvonne) controls a set Z of |Z| links; p = |Z|/|E|. Xavier and Yvonne share no resources (no private key, no common randomness). Zorba is computationally unbounded; Xavier and Yvonne can only perform "simple" computations. Zorba knows the protocols and already knows almost all of Xavier's message (everything except Xavier's private coin tosses).

13 Model 1 - Xavier/Yvonne's Goal. Knowing |Z| but not Z, come up with an encoding/decoding scheme that allows a maximal rate of information to be decoded correctly with high probability. Rates are "normalized": divide by the number of links |E|.

14-17 Model 1 - Results. [Plot, built up over four slides: capacity C versus noise parameter p, with the threshold p = 0.5 marked. At p = 0.5 the probability of error is 0.5.]

18 Model 1 - Results. Eureka! [Plot: the achievable capacity curve.]

19 Model 1 - Encoding. [Diagram: |E|−|Z| message rows mapped onto |E| links.]

20 Model 1 - Encoding. The message X (|E|−|Z| rows x_1, …, each of block length n(1−ε) over the finite field F_q) is multiplied by a Vandermonde matrix to produce |E| coded rows T_1, …, T_|E| (an MDS code). The remaining nε symbols of each length-n row (the rate "fudge factor") carry "easy to use consistency information"; every entry is a symbol from F_q.
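The Vandermonde expansion can be sketched as follows. This is an illustrative toy, not the paper's construction: the field size, dimensions, and message below are made-up values.

```python
# Toy sketch: expand k = |E|-|Z| message rows into |E| coded rows with a
# Vandermonde matrix over a prime field. All parameters are hypothetical.
q = 257                      # field size (prime, so Z/qZ is a field)
E, k = 5, 3                  # |E| links, k = |E|-|Z| message rows
n = 4                        # symbols per row (block length n(1-eps))

message = [[7, 11, 13, 2], [5, 0, 9, 1], [3, 8, 4, 6]]   # k x n over F_q

# Vandermonde matrix V[i][j] = alpha_i^j with distinct alpha_i: any k of
# its |E| rows form an invertible k x k Vandermonde system, so the code
# is MDS -- any k received rows determine the message.
alphas = list(range(1, E + 1))
V = [[pow(a, j, q) for j in range(k)] for a in alphas]

# Coded row T_i = sum_j V[i][j] * message[j]  (mod q)
T = [[sum(V[i][j] * message[j][s] for j in range(k)) % q for s in range(n)]
     for i in range(E)]
```

Since alpha_1 = 1, the first coded row is just the column-wise sum of the message rows mod q, which is a quick sanity check on the construction.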

21 Model 1 - Encoding. [Diagram: the |E| rows T_1, …, T_|E| of length n(1−ε), each followed by nε symbols of "easy to use consistency information".]

22-23 Model 1 - Encoding. The nε consistency symbols on row i are a random point r_i together with the hashes D_i1, …, D_i|E|, where

D_ij = T_j(1)·1 + T_j(2)·r_i + … + T_j(n(1−ε))·r_i^{n(1−ε)−1},

i.e., D_ij is the polynomial with coefficient vector T_j evaluated at the point r_i.
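The hash computation can be sketched in a few lines; toy field size and dimensions, and `poly_eval` is a hypothetical helper name, not the paper's notation.

```python
# Toy sketch: each link i draws a secret random point r_i, and its overhead
# carries, for every link j, the hash
#   D_ij = T_j(1) + T_j(2)*r_i + ... + T_j(m)*r_i^(m-1)  (mod q),
# i.e. the polynomial with coefficient vector T_j evaluated at r_i.
import random

q = 257                              # toy field size
E, m = 4, 6                          # |E| links, m = n(1-eps) symbols per row
random.seed(1)
T = [[random.randrange(q) for _ in range(m)] for _ in range(E)]
r = [random.randrange(q) for _ in range(E)]   # one secret point per link

def poly_eval(coeffs, x, q):
    """Horner evaluation of sum_k coeffs[k] * x^k mod q."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % q
    return acc

D = [[poly_eval(T[j], r[i], q) for j in range(E)] for i in range(E)]
```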

24 Model 1 - Transmission. Xavier sends (T_i, r_i, D_i1, …, D_i|E|) on link i; Yvonne receives possibly corrupted values (T_i′, r_i′, D_i1′, …, D_i|E|′).

25 Model 1 - Decoding. "Quick consistency check": verify whether

D_ij′ = T_j′(1)·1 + T_j′(2)·r_i′ + … + T_j′(n(1−ε))·(r_i′)^{n(1−ε)−1}.

26 Model 1 - Decoding. The "quick consistency check" is run in both directions:

D_ij′ = T_j′(1)·1 + T_j′(2)·r_i′ + … + T_j′(n(1−ε))·(r_i′)^{n(1−ε)−1} ?
D_ji′ = T_i′(1)·1 + T_i′(2)·r_j′ + … + T_i′(n(1−ε))·(r_j′)^{n(1−ε)−1} ?

27 Model 1 - Decoding. Edge i is consistent with edge j if both checks pass:

D_ij′ = T_j′(1)·1 + … + T_j′(n(1−ε))·(r_i′)^{n(1−ε)−1} and
D_ji′ = T_i′(1)·1 + … + T_i′(n(1−ε))·(r_j′)^{n(1−ε)−1}.

This defines the consistency graph on the edges.

28 Model 1 - Decoding. [Diagram: consistency graph on vertices 1-5. Self-loops are not important.]

29 Model 1 - Decoding. Detection: select the vertices connected to at least |E|/2 other vertices in the consistency graph; decode using the T_i's on the corresponding edges.

30 Model 1 - Proof. Suppose Zorba alters T_j to T_j′ ≠ T_j but leaves D_ij untouched. The check passes only if

T_j(1)·1 + … + T_j(n(1−ε))·r_i^{n(1−ε)−1} = T_j′(1)·1 + … + T_j′(n(1−ε))·r_i^{n(1−ε)−1},

i.e., ∑_k (T_j(k) − T_j′(k))·r_i^{k−1} = 0. This is a nonzero polynomial in r_i of degree less than n over F_q, and the value of r_i is unknown to Zorba, so the probability of error is < n/q ≪ 1.
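The error bound is just root counting: a nonzero polynomial of degree < n has fewer than n roots, so a uniformly random evaluation point exposes tampering except with probability < n/q. A small self-check with made-up toy values (the difference polynomial below is hypothetical):

```python
# Toy check of the root-counting bound over a small prime field.
q, m = 101, 5

def poly_eval(coeffs, x, q):
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % q
    return acc

diff = [3, 0, 7, 1, 9]    # hypothetical nonzero T_j - T_j'  (mod q)

# Exhaustively count evaluation points where the difference vanishes,
# i.e. points r_i at which the adversary would survive the check.
roots = sum(poly_eval(diff, x, q) == 0 for x in range(q))
fool_prob = roots / q
```

Since `diff` has degree m − 1 = 4, it has at most 4 roots in F_101, so `fool_prob` is below m/q no matter which nonzero difference the adversary induces.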

31 Variations - Feedback. [Plot: capacity C versus noise parameter p.]

32 Variations - Know thy enemy. [Two plots: capacity C versus noise parameter p.]

33 Variations - Random Noise. [Plot: capacity C versus noise parameter p, with the random-noise capacity C_N marked.] Separation.

34 Model 2 - Multicast. [Network diagram with unknown links marked "?".]

35 Model 2 - Results. [Plot: capacity C (normalized by the min-cut h) versus p = |Z|/h, with the threshold p = 0.5 marked. Network diagram: source S, receivers R_1, …, R_|T|, min-cut h, adversarial links Z.]

36 Model 2 - Results. [Plot: capacity C (normalized by h) versus p = |Z|/h. Network diagram: source S, receivers R_1, …, R_|T|.]

37 Model 2 - Sketch of Proof. [Network diagram: source S, receivers R_1, …, R_|T|, and candidate adversarial sets S′_1, S′_2, …, S′_|Z|.] Lemma 1 (easy): there exists a simple random design of network codes such that, for any Z of size < h/2, if Z is known then each decoder can decode. Lemma 2 (hard): using consistency-check arguments similar to those of Model 1, Z can be detected.

38 THE END

