1
FIGHTING ADVERSARIES IN NETWORKS
Michelle Effros, Michael Langberg, Tracey Ho, Philip Chou, Kamal Jain, Muriel Médard, Dina Katabi, Peter Sanders, Ludo Tolhuizen, Sebastian Egner, Sidharth Jaggi (MIT)
2
Network Coding “Justification”
R. Ahlswede, N. Cai, S.-Y. R. Li and R. W. Yeung, "Network information flow," IEEE Trans. on Information Theory, vol. 46, pp. 1204-1216, 2000.
http://tesla.csl.uiuc.edu/~koetter/NWC/Bibliography.html
≈ 200 papers in 3 years; NetCod workshops, DIMACS working group, ISIT 2005 (4+ sessions), tutorials, …
Several patents, theses…
3
Network Coding... what is it?
“The core notion of network coding is to allow and encourage mixing of data at intermediate network nodes.” (Network Coding Homepage)
4
Justifications - I: Throughput [ACLY00]
(Figure: the butterfly network. Source s multicasts bits b_1 and b_2 to sinks t_1 and t_2; the bottleneck edge carries the coded bit b_1 + b_2, so each sink recovers both of (b_1, b_2).)
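To make the throughput gain concrete, here is a minimal sketch of the butterfly example in Python (the layout is the standard butterfly from [ACLY00]; the variable names are mine): with plain routing the bottleneck edge can forward only one of the two bits per use, but letting it forward the XOR gives both sinks both bits.

```python
def butterfly_multicast(b1: int, b2: int):
    """Simulate one use of the butterfly network with XOR coding at the bottleneck."""
    left, right = b1, b2          # source sends b1 on the left branch, b2 on the right
    bottleneck = left ^ right     # the single shared edge forwards the XOR
    t1 = (left, bottleneck ^ left)       # sink t1 sees (b1, b1^b2) and recovers b2
    t2 = (bottleneck ^ right, right)     # sink t2 sees (b1^b2, b2) and recovers b1
    return t1, t2

for b1 in (0, 1):
    for b2 in (0, 1):
        t1, t2 = butterfly_multicast(b1, b2)
        assert t1 == (b1, b2) and t2 == (b1, b2)
print("both sinks recover (b1, b2) in every case")
```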
5
Gap Without Coding... [JSCEEJT05]
(Figure: an example network with source s. Coding capacity = h; routing capacity ≤ 2.)
6
Multicasting
Examples: webcasting, P2P networks, sensor networks.
(Figure: sources s_1 … s_|S| communicating with sinks t_1 … t_|T| over a network.)
7
Background
Upper bound on the multicast capacity C: C ≤ min_i {C_i}, where C_i is the min-cut capacity from s to sink t_i.
(Figure: source s, sinks t_1 … t_|T|, min-cuts C_1, …, C_|T|.)
[ACLY00] - achievable!
[LYC02] - linear codes suffice!!
[KM01] - “finite field” linear codes suffice!!!
8
Background: F(2^m)-linear network [KM01]
Source: group together m bits into one symbol of F(2^m).
Every node: perform linear combinations over the finite field F(2^m).
(Figure: bits b_1, b_2, …, b_m grouped into a symbol; a node weights its incoming symbols by coefficients β_1, β_2, …, β_k.)
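As a concrete illustration (a minimal sketch, not code from the talk): with m = 8 a symbol is one byte, addition is XOR, and multiplication is carry-less multiplication reduced by an irreducible polynomial; a node's outgoing packet is then a β-weighted combination of its incoming packets, symbol by symbol. The reduction polynomial 0x11B and the packet/coefficient values below are illustrative choices.

```python
def gf256_mul(a: int, b: int, poly: int = 0x11B) -> int:
    """Multiply two elements of F(2^8) represented as bytes (reduction polynomial is illustrative)."""
    result = 0
    while b:
        if b & 1:
            result ^= a
        a <<= 1
        if a & 0x100:          # reduce modulo the degree-8 irreducible polynomial
            a ^= poly
        b >>= 1
    return result

def combine(packets, coeffs):
    """A node's output: the beta-weighted F(2^8) combination of its input packets."""
    out = [0] * len(packets[0])
    for pkt, beta in zip(packets, coeffs):
        for i, sym in enumerate(pkt):
            out[i] ^= gf256_mul(beta, sym)   # addition in F(2^m) is XOR
    return out

p1, p2 = [0x01, 0xAB, 0x37], [0xFF, 0x02, 0x90]   # two incoming packets (illustrative)
print(combine([p1, p2], coeffs=[0x03, 0x1D]))      # one outgoing coded packet
```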
9
Background
(Figure: source s, sinks t_1 … t_|T|, min-cuts C_1, …, C_|T|.)
[ACLY00] - achievable!
[LYC02] - linear codes suffice!!
[KM01] - “finite field” linear codes suffice!!!
[JCJ03],[SET03] - polynomial time code design!!!!
[HKMKE03],[JCJ03] - random distributed code design!!!!!
10
Justifications - II: Robustness / Distributed design
(Figure: the butterfly network with source s and sinks t_1, t_2; one link breaks.)
11
Justifications - II: Robustness / Distributed design
(Figure: source s sends b_1 and b_2; each sink receives two independent linear combinations, (b_1 + b_2) and (b_1 + 2·b_2) (finite field arithmetic), from which (b_1, b_2) can be recovered.)
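A minimal sketch of why two independent combinations suffice (the small prime field q = 7 is an illustrative choice; the slide only requires some finite field in which the two combinations are linearly independent):

```python
q = 7   # illustrative prime field

def encode(b1: int, b2: int):
    """The two coded symbols a sink might receive: b1+b2 and b1+2*b2 over F_q."""
    return (b1 + b2) % q, (b1 + 2 * b2) % q

def decode(c1: int, c2: int):
    """Invert the 2x2 system [[1,1],[1,2]] (determinant 1, so always invertible)."""
    b2 = (c2 - c1) % q
    b1 = (c1 - b2) % q
    return b1, b2

for b1 in range(q):
    for b2 in range(q):
        assert decode(*encode(b1, b2)) == (b1, b2)
print("every (b1, b2) pair is recovered from the two combinations")
```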
12
Random Robust Codes
(Figure: the original network, with source s, sinks t_1 … t_|T|, and min-cuts C_1, …, C_|T|.)
C = min_i {C_i}
13
Random Robust Codes
(Figure: the faulty network, with reduced min-cuts C_1', …, C_|T|'.)
C' = min_i {C_i'}
If the value of C' is known to s, the same code can achieve rate C'! (interior nodes oblivious)
14
Random Robust Codes
Choose random [β] at each node - decentralized design.
Percolate the overall transfer function down the network.
With high probability, it is invertible.
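The heart of the argument is that a matrix with entries drawn at random from a large field is invertible with high probability. A minimal sketch of that fact (the field size p = 257, dimension h = 4, and trial count are illustrative; the true end-to-end transfer matrix is not uniformly random, but the same large-field intuition applies):

```python
import random

def is_invertible_mod_p(M, p):
    """Gaussian elimination over the prime field F_p; True iff M has full rank."""
    M = [row[:] for row in M]
    n = len(M)
    for col in range(n):
        pivot = next((r for r in range(col, n) if M[r][col] % p != 0), None)
        if pivot is None:
            return False
        M[col], M[pivot] = M[pivot], M[col]
        inv = pow(M[col][col], p - 2, p)
        for r in range(col + 1, n):
            factor = (M[r][col] * inv) % p
            for c in range(col, n):
                M[r][c] = (M[r][c] - factor * M[col][c]) % p
    return True

p, h, trials = 257, 4, 2000    # illustrative field size, min-cut, and trial count
hits = sum(
    is_invertible_mod_p([[random.randrange(p) for _ in range(h)] for _ in range(h)], p)
    for _ in range(trials)
)
print(f"fraction of random transfer matrices that are invertible: {hits / trials:.4f}")
```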
15
Justifications - III: Security [JLHE05],[JLHKM06?]
An evil adversary hiding in the network: eavesdropping, injecting false information.
(Figure: the butterfly network with source s and sinks t_1, t_2.)
16
Model 1 - Encoding
Each edge j carries a length-n(1−ε) data block T_j; edge i additionally carries a random symbol r_i and hash symbols D_i1, …, D_i|E| in a header of length nε, where
D_ij = T_j(1)·1 + T_j(2)·r_i + … + T_j(n(1−ε))·r_i^{n(1−ε)}.
18
Model 1 - Transmission
The transmitted packets (T_1, …, T_|E|; r_1, …, r_|E|; D_11, …, D_|E||E|) may be corrupted in transit and are received as (T_1', …, T_|E|'; r_1', …, r_|E|'; D_11', …, D_|E||E|').
19
Model 1 - Decoding
“Quick consistency check”: for received edges i and j, test
D_ij' = T_j(1)'·1 + T_j(2)'·r_i' + … + T_j(n(1−ε))'·r_i'^{n(1−ε)} ?
20
Model 1 - Decoding
“Quick consistency check”, in both directions:
D_ij' = T_j(1)'·1 + T_j(2)'·r_i' + … + T_j(n(1−ε))'·r_i'^{n(1−ε)} ?
D_ji' = T_i(1)'·1 + T_i(2)'·r_j' + … + T_i(n(1−ε))'·r_j'^{n(1−ε)} ?
21
Model 1 - Decoding: Consistency graph
Edge i is consistent with edge j if both checks pass:
D_ij' = T_j(1)'·1 + T_j(2)'·r_i' + … + T_j(n(1−ε))'·r_i'^{n(1−ε)}
D_ji' = T_i(1)'·1 + T_i(2)'·r_j' + … + T_i(n(1−ε))'·r_j'^{n(1−ε)}
22
Model 1 - Decoding: Consistency graph
(Figure: an example consistency graph on edges 1-5; vertices are edges, and an edge joins i and j when they are mutually consistent. Self-loops… not important.)
Edge i is consistent with edge j if the checks on D_ij' and D_ji' (previous slide) both pass.
23
Model 1 - Decoding: Detection
(Figure: the example consistency graph on edges 1-5.)
Detection - select the vertices connected to at least |E|/2 other vertices in the consistency graph. Decode using the T_i's on the corresponding edges.
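A minimal sketch of the detection step in Python (the field size, packet layout, and toy corruption pattern are illustrative choices, not parameters from the talk): build the pairwise consistency graph from the received (T, r, D) triples, then keep exactly the edges whose vertices have degree at least |E|/2.

```python
import random

p = (1 << 31) - 1          # illustrative prime field (a Mersenne prime); the talk only needs q >> n

def poly_hash(T, r):
    """D = T[0]*1 + T[1]*r + ... + T[k-1]*r^(k-1) over F_p, via Horner's rule."""
    acc = 0
    for sym in reversed(T):
        acc = (acc * r + sym) % p
    return acc

def consistency_graph(packets):
    """packets[i] = (T_i, r_i, D_i), where D_i[j] is claimed to equal poly_hash(T_j, r_i)."""
    E = len(packets)
    adj = {i: set() for i in range(E)}
    for i in range(E):
        for j in range(E):
            if i == j:
                continue
            T_i, r_i, D_i = packets[i]
            T_j, r_j, D_j = packets[j]
            if D_i[j] == poly_hash(T_j, r_i) and D_j[i] == poly_hash(T_i, r_j):
                adj[i].add(j)
    return adj

def detect(packets):
    """Keep exactly the edges consistent with at least |E|/2 other edges."""
    adj = consistency_graph(packets)
    E = len(packets)
    return [i for i in range(E) if len(adj[i]) >= E / 2]

# Toy run: 5 edges with honest data, then Zorba rewrites the data block on edge 4.
E, k = 5, 8
T = [[random.randrange(p) for _ in range(k)] for _ in range(E)]
r = [random.randrange(p) for _ in range(E)]
packets = [(T[i], r[i], [poly_hash(T[j], r[i]) for j in range(E)]) for i in range(E)]
packets[4] = ([random.randrange(p) for _ in range(k)],) + packets[4][1:]
print("edges accepted for decoding:", detect(packets))   # w.h.p. [0, 1, 2, 3]
```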
24
Model 1 - Proof
(Figure: the received packets and the example consistency graph, as before.)
Suppose a corrupted block T_j' ≠ T_j still passes the check against an untouched pair (r_i, D_ij). Then
D_ij = T_j(1)'·1 + T_j(2)'·r_i + … + T_j(n(1−ε))'·r_i^{n(1−ε)} and
D_ij = T_j(1)·1 + T_j(2)·r_i + … + T_j(n(1−ε))·r_i^{n(1−ε)},
so ∑_k (T_j(k) − T_j(k)')·r_i^k = 0.
This is a nonzero polynomial in r_i of degree at most n over F_q, and the value of r_i is unknown to Zorba.
Probability of error < n/q << 1.
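For a feel of how small n/q can be (the block length and field size here are illustrative choices, not values from the talk): with n = 2^20 symbols per block and q = 2^64, a single check is fooled with probability at most n/q = 2^-44 ≈ 5.7×10^-14, and even a union bound over all |E|^2 pairwise checks in a network with |E| = 100 edges leaves the overall failure probability below about 6×10^-10.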
25
Greater throughput.
Robust against random errors...
Aha! Network Coding!!!
27
? ? ?
28
(Figure: source Xavier, sinks Yvonne_1 … Yvonne_|T|, and a hidden adversary Zorba somewhere in the network.)
29
Setup: who knows what, stage by stage
1. Scheme: X, Y, Z
2. Network: Z
3. Message: X, Z
4. Code: Z
5. Bad links: Z
6. Coin: X
7. Transmit: Y, Z
8. Decode: Y (Eureka)
Links: wired or wireless (packet losses, fading); eavesdropped links Z_I, attacked links Z_O.
30
Setup
Zorba sees M_I links Z_I and controls M_O links Z_O; p_I = M_I/C, p_O = M_O/C.
Xavier and the Yvonnes share no resources (private key, randomness).
Zorba is computationally unbounded; Xavier and the Yvonnes perform only “simple” computations.
Zorba knows the protocols and already knows almost all of Xavier's message (except Xavier's private coin tosses).
Zorba (hidden) knows the network; Xavier and the Yvonnes don't.
Distributed design (interior nodes oblivious / overlay to network coding).
Goal: transmit at “high” rate and w.h.p. decode correctly.
(Figure: Xavier, the Yvonnes, and Zorba in a network of capacity C.)
31
Background: noisy channel models (Shannon, …)
Binary Symmetric Channel.
(Plot: capacity C = 1 − H(p) versus the noise parameter p, dropping from 1 at p = 0 to 0 at p = 0.5.)
32
Background: noisy channel models (Shannon, …)
Binary Symmetric Channel; Binary Erasure Channel.
(Plot: BEC capacity C = 1 − p versus the noise parameter p.)
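The two reference curves are the classical Shannon capacities; a minimal sketch to reproduce the numbers (the sample p values are illustrative):

```python
from math import log2

def bsc_capacity(p: float) -> float:
    """Binary symmetric channel: C = 1 - H(p)."""
    if p in (0.0, 1.0):
        return 1.0
    H = -p * log2(p) - (1 - p) * log2(1 - p)
    return 1 - H

def bec_capacity(p: float) -> float:
    """Binary erasure channel: C = 1 - p."""
    return 1 - p

for p in (0.0, 0.1, 0.25, 0.5):
    print(f"p={p:.2f}  BSC C={bsc_capacity(p):.3f}  BEC C={bec_capacity(p):.3f}")
```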
33
Background: adversarial channel models
“Limited-flip” adversary, p_I = 1 (Hamming, Gilbert-Varshamov, McEliece et al. …)
Large alphabets (F_q instead of F_2).
Shared randomness, cryptographic assumptions…
(Plot: capacity C versus the noise parameter p_O.)
34
Upper bounds
(Plot: the upper bound C ≤ 1 − p_O versus the noise parameter p_O.)
35
Upper bounds
(Plot: capacity C versus the noise parameter p_O, with question marks over the unresolved region.)
36
Unicast [JLHE05]
(Plot: capacity C versus p_I = p_O, the “noise parameter” = “knowledge parameter”.)
37
Unicast [Folklore] (“knowledge parameter” p_I = 1)
(Plot: capacity C versus the noise parameter p_O.)
38
Upper bounds (“knowledge parameter” p_I = 1)
(Plot: the upper bound C ≤ 1 − 2p_O versus the noise parameter p_O, reaching 0 at p_O = 0.5.)
39
Upper bounds (“knowledge parameter” p_I > 0.5)
(Plot: capacity C versus the noise parameter p_O; the achievable curve is marked with question marks.)
40
Upper bounds
(Two plots: capacity C versus the noise parameter p_O, one for “knowledge parameter” p_I > 0.5 and one for p_I < 0.5.)
41
Ignorant Zorba
1. Code (X, Y, Z)
2. Message X_p, X_s (X)
3. Bad links (Z)
4. Coin (X)
5. Transmission (Y, Z)
6. Decode correctly (Y, Z) (Eureka)
I(Z; X_s) = 0
42
General Multicast Networks
p = |Z|/h; capacity C normalized by h.
(Plot/figure: source S, receivers R_1 … R_|T|, and an adversarial edge set Z in a network of min-cut h.)
Slightly more intricate proof.
43
Unicast - Encoding
(Figure: a message block of |E|−|Z| rows is expanded to |E| rows, one per edge.)
44
Unicast - Encoding
Message X: |E|−|Z| rows x_1, …, each of block-length n over the finite field F_q (every entry a symbol from F_q); n(1−ε) symbols per row are data, and the remaining nε symbols (a rate fudge-factor) carry “easy to use consistency information”.
An MDS code given by a Vandermonde matrix maps X to T: |E| rows T_1, …, T_|E|, one per edge.
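A minimal sketch of the Vandermonde/MDS expansion (the field size p = 257, the dimensions, and the evaluation points 1, …, |E| are illustrative choices): each output row is the message, viewed column-wise as a degree-(|E|−|Z|−1) polynomial, evaluated at a distinct field point, and any |E|−|Z| rows determine X because the corresponding square Vandermonde submatrix has nonzero determinant.

```python
import random
from functools import reduce

p = 257                     # illustrative prime field (the talk only needs F_q large enough)
E, Z = 8, 3                 # |E| edges, up to |Z| of them adversarial (illustrative)
k = E - Z                   # rows of the message X
n = 12                      # illustrative block length

# Message X: k rows of n symbols from F_p.
X = [[random.randrange(p) for _ in range(n)] for _ in range(k)]

# Vandermonde expansion: the row for evaluation point alpha is sum_j alpha^j * X[j].
alphas = list(range(1, E + 1))             # E distinct evaluation points in F_p
T = [[sum(pow(a, j, p) * X[j][c] for j in range(k)) % p for c in range(n)] for a in alphas]

# MDS property: any k of the E rows determine X, because every k x k Vandermonde
# submatrix has determinant prod_{i<j} (alpha_j - alpha_i) != 0 in F_p.
subset = sorted(random.sample(alphas, k))
det = reduce(lambda acc, f: acc * f % p,
             [(b - a) % p for idx, a in enumerate(subset) for b in subset[idx + 1:]], 1)
print("random k x k Vandermonde submatrix invertible:", det != 0)   # always True
```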
45
Unicast - Encoding
Within its nε consistency symbols, every edge carries the same random symbol r and the hashes D_1, …, D_|E|, where
D_i = T_i(1)·1 + T_i(2)·r + … + T_i(n(1−ε))·r^{n(1−ε)}.
46
Unicast - Transmission
The transmitted (T_1, …, T_|E|; r; D_1, …, D_|E|) may be corrupted in transit and are received as (T_1', …, T_|E|'; r'; D_1', …, D_|E|').
47
Unicast - Quick Decoding
Take the majority value of (r, D_1, …, D_|E|) over the received copies.
For each edge i, check: D_i = T_i(1)'·1 + T_i(2)'·r + … + T_i(n(1−ε))'·r^{n(1−ε)} ? If so, accept T_i'; else reject it.
If a corrupted T_i' were accepted, then ∑_k (T_i(k) − T_i(k)')·r^k = 0, a nonzero polynomial in r of degree at most n over F_q, and the value of r is unknown to Zorba; probability of error < n/q << 1.
Use the accepted T_i's to decode.
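A minimal sketch of the accept/reject step (field size, packet sizes, and the particular corruption pattern are illustrative; in this sketch the (r, D_1, …, D_|E|) header is assumed to be replicated on every edge so the receiver can take a majority vote):

```python
import random
from collections import Counter

p = (1 << 31) - 1                 # illustrative prime field; the scheme needs q >> n

def poly_hash(T, r):
    """D = T[0] + T[1]*r + ... + T[k-1]*r^(k-1) over F_p (Horner's rule)."""
    acc = 0
    for sym in reversed(T):
        acc = (acc * r + sym) % p
    return acc

E, k = 6, 10
T = [[random.randrange(p) for _ in range(k)] for _ in range(E)]
r = random.randrange(p)
header = (r, tuple(poly_hash(t, r) for t in T))       # (r, D_1, ..., D_|E|), replicated on every edge

# Transmission: Zorba rewrites the data on edges 1 and 4, and the header copy on edge 4.
received_T = [row[:] for row in T]
received_T[1][0] ^= 1
received_T[4] = [random.randrange(p) for _ in range(k)]
header_copies = [header] * E
header_copies[4] = (random.randrange(p), tuple(random.randrange(p) for _ in range(E)))

# Decoding: take the majority header copy, then accept exactly the T_i' that match their hash.
r_hat, D_hat = Counter(header_copies).most_common(1)[0][0]
accepted = [i for i in range(E) if poly_hash(received_T[i], r_hat) == D_hat[i]]
print("accepted edges:", accepted)                    # w.h.p. every edge except 1 and 4
# The accepted rows then feed the MDS (Vandermonde) decoder from the encoding step.
```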
48
Distributed Design [HKMKE03]
Choose random [β] at each node - decentralized design.
Percolate the overall transfer function down the network.
With high probability, it is invertible.
49
Distributed Design [HKMKE03]
Rate h = C. The source's data is arranged into blocks x_b(1), …, x_b(h); slice j across the blocks is the length-h vector x_s(j), with h << n slices per block. An h×h identity matrix is prepended to the blocks.
Each sink receives y_s(j) = T·x_s(j); the identity header arrives as the transfer matrix T itself, so the sink decodes every slice as x_s(j) = T^{-1}·y_s(j).
(Figure: source S, sinks t_1 … t_|T|, and a node mixing with random coefficients β_1, …, β_h.)
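A minimal end-to-end sketch of the header trick (my own toy setup, not code from the talk: h = 2 so the inverse can be written explicitly, a single random linear “network” stage, and an illustrative prime field p = 257):

```python
import random

p = 257                       # illustrative prime field
h = 2                         # min-cut / rate, kept tiny so the 2x2 inverse is explicit
n_slices = 5                  # illustrative number of payload slices per block

# Source blocks with an h x h identity header prepended; column j of the payload is the slice x_s(j).
X = [[1, 0] + [random.randrange(p) for _ in range(n_slices)],
     [0, 1] + [random.randrange(p) for _ in range(n_slices)]]

# The network applies some linear map T (unknown to the sink; here drawn at random) to the rows.
T = [[random.randrange(p) for _ in range(h)] for _ in range(h)]
Y = [[sum(T[i][k] * X[k][c] for k in range(h)) % p for c in range(len(X[0]))] for i in range(h)]

# Sink: the first h columns of Y equal T * I = T, so T is read straight off the header.
T_hat = [row[:h] for row in Y]
det = (T_hat[0][0] * T_hat[1][1] - T_hat[0][1] * T_hat[1][0]) % p
assert det != 0, "transfer matrix not invertible -- retry the random draw"
det_inv = pow(det, p - 2, p)
T_inv = [[T_hat[1][1] * det_inv % p, -T_hat[0][1] * det_inv % p],
         [-T_hat[1][0] * det_inv % p, T_hat[0][0] * det_inv % p]]

# Decode every slice: x_s(j) = T^{-1} y_s(j).
X_hat = [[sum(T_inv[i][k] * Y[k][c] for k in range(h)) % p for c in range(len(Y[0]))] for i in range(h)]
assert X_hat == X
print("all slices (including the header) recovered")
```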
50
Achievability - 1
(Plot: capacity C, normalized by h, versus p_O. Figure: source S, receivers R_1 … R_|T|, and the adversary's injections modeled as extra sources S'_1, S'_2, …, S'_|Z|.)
Observation 1: Can treat adversaries as new sources.
51
Achievability - 1
Received slices: y'_s(j) = T·x_s(j) + T'·x'_s(j); the corrupted term T'·x'_s(j) is unknown.
Observation 2: w.h.p. over the network code design, {T·x_s(j)} and {T'·x'_s(j)} do not intersect (robust codes…).
(Figure: a supersource SS combining S with the adversarial sources.)
52
Achievability - 1
y'_s(j) = T·x_s(j) + T'·x'_s(j); add ε redundancy in the form of known parity checks on the message slices.
If x_s(2) + x_s(5) − x_s(3) = 0, then y'_s(2) + y'_s(5) − y'_s(3) = a vector in {T'·x'_s(j)}.
If x_s(3) + 2·x_s(9) − 5·x_s(1) = 0, then y'_s(3) + 2·y'_s(9) − 5·y'_s(1) = another vector in {T'·x'_s(j)}.
53
Achievability - 1
y'_s(j) = T·x_s(j) + T'·x'_s(j), with ε redundancy.
Repeat M_O times to discover {T'·x'_s(j)}.
“Zero out” {T'·x'_s(j)}.
Estimate T (the redundant x_s(j) are known).
Decode.
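Here is a minimal sketch of the “discover” step only (field size, dimensions, and the two parity checks are the illustrative ones from the previous slides; the zero-out, T-estimation, and final decoding steps are summarized in comments rather than implemented): applying a known parity check to the received slices cancels the honest part, leaving a vector inside the adversary's image space.

```python
import random

p = 257                                  # illustrative prime field
C, MO, N = 5, 2, 9                       # min-cut, adversarial links, number of slices (illustrative)

def mat_vec(M, v):
    return [sum(M[i][k] * v[k] for k in range(len(v))) % p for i in range(len(M))]

def vec_add(u, v):
    return [(a + b) % p for a, b in zip(u, v)]

# Transfer matrices, unknown to the decoder: T for the honest source, Tp (= T') for the adversary.
T  = [[random.randrange(p) for _ in range(C)]  for _ in range(C)]
Tp = [[random.randrange(p) for _ in range(MO)] for _ in range(C)]

# Message slices x_s(1..N), constructed so that two *known* parity checks hold:
#   x_s(2) + x_s(5) - x_s(3) = 0   and   x_s(3) + 2*x_s(9) - 5*x_s(1) = 0   (the slide's examples).
x = {j: [random.randrange(p) for _ in range(C)] for j in range(1, N + 1)}
x[3] = vec_add(x[2], x[5])
inv2 = pow(2, p - 2, p)
x[9] = [((5 * a - b) * inv2) % p for a, b in zip(x[1], x[3])]

# Zorba injects arbitrary slices x'_s(j) on its MO links.
xp = {j: [random.randrange(p) for _ in range(MO)] for j in range(1, N + 1)}

# Received slices: y'_s(j) = T x_s(j) + T' x'_s(j).
y = {j: vec_add(mat_vec(T, x[j]), mat_vec(Tp, xp[j])) for j in range(1, N + 1)}

# Apply the known parity checks to the *received* slices: the honest part cancels,
# so each residue is T' applied to some combination of the injections --
# i.e. a vector inside the adversary's image space {T' x'_s(j)}.
e1 = [(y[2][i] + y[5][i] - y[3][i]) % p for i in range(C)]
e2 = [(y[3][i] + 2 * y[9][i] - 5 * y[1][i]) % p for i in range(C)]
assert e1 == mat_vec(Tp, [(xp[2][k] + xp[5][k] - xp[3][k]) % p for k in range(MO)])
assert e2 == mat_vec(Tp, [(xp[3][k] + 2 * xp[9][k] - 5 * xp[1][k]) % p for k in range(MO)])
print("each parity residue lies in {T' x'_s(j)}")
# With M_O independent checks these residues w.h.p. span that space; the decoder can then
# "zero it out", estimate T from the known redundant slices, and decode, as on the slide.
```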
54
Achievability - 1
y'_s(j) = T·x_s(j) + T'·x'_s(j).
If x_s(2) + x_s(5) − x_s(3) = 0, then y'_s(2) + y'_s(5) − y'_s(3) = a vector in {T'·x'_s(j)}.
But if Zorba's injections also satisfy x'_s(2) + x'_s(5) − x'_s(3) = 0, then y'_s(2) + y'_s(5) − y'_s(3) = 0, and that check reveals nothing about {T'·x'_s(j)}.
55
Secret Uncorrupted ε-rate Channels
Useful abstraction: [r, (∑_j x_s(j)·r^j)] - secret, correct hashes of the x_s(j).
Zorba doesn't know how to hide.
Will return to this…
56
Achievability - 2: “Distributed Network Error-correcting Code” (knowledge parameter p_I > 0.5)
[CY06] - bounds, high complexity construction.
[JHLMK06?] - tight, poly-time construction.
(Plot: capacity C versus the noise parameter p_O.)
57
Achievability - 2
y'_s(j) = T·x_s(j) + T'·x'_s(j), where T'·x'_s(j) is the error vector.
(Plot: rate 1 − 2p_O versus the noise parameter p_O.)
58
Achievability - 2
y'_s(j) = T''·x_s(j) + T'·x'_s(j), with a modified transfer matrix T''.
59
Achievability - 2
y'_s(j) = T''·x_s(j) + T'·x''_s(j), with a modified transfer matrix T'' and modified injections x''_s(j).
(Figure: error vectors e, e'.)
60
Achievability - 2
y'_s(j) = T·x_s(j) + T'·x'_s(j)
        = (T + T'·L)·x_s(j) + T'·(x'_s(j) − L·x_s(j))
        = T''·x_s(j) + T'·x''_s(j), with T'' known.
Any set of M_O + 1 of the {x''_s(j)} is linearly dependent.
Let T'·x''_s(1) = a(1), …, T'·x''_s(M_O) = a(M_O), and A = [a(1) … a(M_O)].
Then y'_s(j) = T''·x_s(j) + A·c(j), with T'' known: a linearized equation, the size of A is finite, and it is handled via redundancy.
61
Achievability - 1.5
Requirement: M_I + 2M_O < C, i.e., M_I < C − 2M_O - not quite the target 2M_O < C, 2M_I < C.
(Figure: the capacity C partitioned between network error-correcting codes (2M_O) and Zorba's observations (M_I).)
Using network error-correcting codes as a small header, we can transmit secret, correct information…
… which can be used for the first scheme!
62
Achievability - 1: 2M_O < C, 2M_I < C
Working on it…
“Slightly” non-linear codes.
Use the fact that T, T' are in general unknown to the adversary.
63
Ignorant Zorba - Results
(Plot: rate versus the noise parameter p; curves labeled X_p + X_s at 1 and X_s at 1 − 2p, reaching 0 at p = 0.5.)
64
Overview
Hidden, eavesdropping, malicious, computationally unbounded adversary.
Network topology unknown.
Polynomial time decoding, overlaid on a network code, achieves “almost optimal” performance.
65
Ignorant Zorba - Results
(Plot as before: curves labeled X_p + X_s at 1 and X_s at 1 − 2p.)
MDS code example: a+b+c, a+2b+4c, a+3b+9c.
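The three symbols on the slide are the polynomial a + b·x + c·x^2 evaluated at x = 1, 2, 3 - a Vandermonde (Reed-Solomon style) MDS encoding, so any three evaluations at distinct points determine (a, b, c). A minimal sketch of the recovery, done over the integers for readability (the real scheme works over a finite field):

```python
def encode(a: int, b: int, c: int):
    """The slide's three MDS symbols: a+b+c, a+2b+4c, a+3b+9c (evaluations at x = 1, 2, 3)."""
    return a + b + c, a + 2 * b + 4 * c, a + 3 * b + 9 * c

def decode(v1: int, v2: int, v3: int):
    """Invert the 3x3 Vandermonde system by taking first and second differences."""
    c = (v1 - 2 * v2 + v3) // 2      # the second difference of a quadratic isolates c
    b = (v2 - v1) - 3 * c
    a = v1 - b - c
    return a, b, c

for triple in [(1, 2, 3), (0, 5, 7), (4, 0, 9)]:
    assert decode(*encode(*triple)) == triple
print("the three evaluations determine (a, b, c)")
```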
66
THE END