1  Covert Channels and Anonymizing Networks
Ira S. Moskowitz --- NRL
Richard E. Newman --- UF
Daniel P. Crepeau --- NRL
Allen R. Miller --- just hanging out
2  Motivation
Anonymity --- what do you think/say? An optional desire, or a mandated necessity.
Our interest is in hiding who is sending what to whom.
Yet even if we have this type of "anonymity", one might still be able to leak information.
Is this a failure to truly obtain anonymity, or an inherent flaw in the model/design?
3  Covert Channels
The information is leaked via a covert channel (a communication channel that was neither designed nor intended to transfer information at all).
A paranoid threat? Yes, but....
This paper is a first step (for us) in tying anonymity and covert channels together.
4  MIXes
A MIX is a device intended to hide source/message/destination associations.
A MIX can use crypto, delay, shuffling, padding, etc. to accomplish this.
Others have studied ways to "beat the MIX":
-- active attacks to flush the MIX
-- passive attacks may study probabilities
You all know this better than I :-)
5  Our Scenario
MIX firewalls separate two enclaves: Alice and the Clueless_i send from Enclave 1 to Enclave 2, and Eve observes the traffic between the enclaves.
Timed MIX, total flush per tick.
Eve counts the number of messages per tick; she is perfectly synchronized with the MIX and knows the number of Clueless_i.
The Clueless_i are IID; p = probability that a Clueless_i does not send a message in a tick.
Alice is clueless w.r.t. the Clueless_i.
The overt channel gives rise to an anonymous covert channel.
6  Toy Scenario -- only Clueless_1
Alice can either not send a message (0) or send one (0_c): only two input symbols to the (covert) channel.
What does Eve see? A count in {0, 1, 2}:
-- Alice sends 0: Eve sees 0 with probability p, or 1 with probability q = 1 - p.
-- Alice sends 0_c: Eve sees 1 with probability p, or 2 with probability q.
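As a concrete illustration, here is a minimal sketch of one MIX tick in the toy scenario (the function name and structure are ours; only the probabilities come from the slide):

import random

def eve_count_toy(alice_sends: bool, p: float) -> int:
    """One MIX tick: Eve observes only the total number of
    messages flushed, i.e. Alice's plus Clueless_1's."""
    clueless_sends = random.random() >= p   # Clueless_1 is silent with prob. p
    return int(alice_sends) + int(clueless_sends)

random.seed(0)
# Eve sees a count in {0, 1, 2}; only the count 1 is ambiguous.
print([eve_count_toy(alice_sends=True, p=0.5) for _ in range(10)])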
7  Discrete Memoryless Channel
Channel matrix (rows: Alice's input X; columns: Eve's observation Y):

         Y=0   Y=1   Y=2
X=0       p     q     0
X=0_c     0     p     q

X --> anonymizing network --> Y
X is the random variable representing Alice, the transmitter to the covert channel; X has probability distribution P(X=0) = x, P(X=0_c) = 1-x.
Y represents Eve; its distribution is derived from X and the channel matrix.
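In code, the matrix and the output distribution it induces look like this (a sketch assuming NumPy; the names are ours):

import numpy as np

def toy_channel(p: float) -> np.ndarray:
    """Channel matrix: rows = Alice's input (0, 0_c),
    columns = Eve's observed count (0, 1, 2)."""
    q = 1.0 - p
    return np.array([[p,   q,   0.0],
                     [0.0, p,   q  ]])

def output_dist(x: float, p: float) -> np.ndarray:
    """Eve's distribution: P(Y=k) = sum_i P(X=i) * W[i,k],
    with P(X=0) = x and P(X=0_c) = 1 - x."""
    return np.array([x, 1.0 - x]) @ toy_channel(p)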
8  In general
P(X = x_i) = p(x_i), and similarly p(y_k).
Entropy of X:  H(X) = -Σ_i p(x_i) log p(x_i)
Conditional entropy:  H(X|Y) = -Σ_k p(y_k) Σ_i p(x_i|y_k) log p(x_i|y_k)
Mutual information:  I(X,Y) = H(X) - H(X|Y) = H(Y) - H(Y|X)   (we use the latter)
Capacity is the maximum of I over all distributions of X.
For the toy scenario:
C = max_x { -( px log(px) + [qx + p(1-x)] log[qx + p(1-x)] + q(1-x) log[q(1-x)] ) - h(p) }
where h(p) = -{ p log p + (1-p) log(1-p) }.
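A numerical check of the capacity expression above, maximizing H(Y) - h(p) over the input bias x by grid search (a sketch in the same notation; a proper optimizer would also work):

import numpy as np

def H(dist: np.ndarray) -> float:
    """Shannon entropy in bits, skipping zero entries."""
    d = dist[dist > 0]
    return float(-np.sum(d * np.log2(d)))

def toy_capacity(p: float, grid: int = 10001) -> tuple[float, float]:
    """Return (C, argmax x) for the toy channel.
    H(Y|X) = h(p) regardless of Alice's input, so I(X;Y) = H(Y) - h(p)."""
    q = 1.0 - p
    W = np.array([[p, q, 0.0], [0.0, p, q]])
    hp = H(np.array([p, q]))
    xs = np.linspace(0.0, 1.0, grid)
    caps = [H(np.array([x, 1 - x]) @ W) - hp for x in xs]
    i = int(np.argmax(caps))
    return caps[i], xs[i]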
9  [figure slide -- no recoverable text]
10  General Scenario -- N Clueless_i
Eve now observes a count in {0, 1, ..., N, N+1}. The channel matrix rows are shifted binomial distributions:

         Y=0    Y=1            ...   Y=N            Y=N+1
X=0      p^N    N p^(N-1) q    ...   q^N            0
X=0_c    0      p^N            ...   N q^(N-1) p    q^N

i.e. P(Y=k | X=0) = C(N,k) p^(N-k) q^k, and P(Y=k+1 | X=0_c) = C(N,k) p^(N-k) q^k.
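The same computation generalizes: each row is a binomial distribution, shifted by one column when Alice sends (a sketch assuming scipy.stats.binom; the slide only gives the matrix entries):

import numpy as np
from scipy.stats import binom

def general_channel(N: int, p: float) -> np.ndarray:
    """2 x (N+2) channel matrix for N IID clueless senders.
    Column k = probability that Eve counts k messages in a tick."""
    q = 1.0 - p
    clueless = binom.pmf(np.arange(N + 1), N, q)  # k of N clueless send
    W = np.zeros((2, N + 2))
    W[0, :N + 1] = clueless   # Alice silent: Eve sees k
    W[1, 1:] = clueless       # Alice sends:  Eve sees k + 1
    return W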
11  [figure slide -- no recoverable text]
12  Conclusions
1. Highest capacity occurs when clueless traffic is very low or very high.
2. Capacity, as a function of p, is bounded below by C(0.5).
3. Capacity monotonically decreases to 0 as N grows.
4. C(p) is a continuous function of p.
5. Alice's optimal bias is a function of p, and is always near 0.5.
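These claims can be spot-checked numerically by combining the earlier sketches: build the N-sender matrix and grid-search the input bias (illustrative only; it reuses H() and general_channel() from above):

import numpy as np

def capacity(N: int, p: float, grid: int = 2001) -> float:
    """Capacity of the N-clueless channel in bits per tick.
    Both rows of W have the same entropy, so H(Y|X) = H(W[0])."""
    W = general_channel(N, p)
    hyx = H(W[0])
    xs = np.linspace(0.0, 1.0, grid)
    return max(H(np.array([x, 1 - x]) @ W) - hyx for x in xs)

# Capacity should fall toward 0 as N grows, and rise for extreme p.
for N in (1, 2, 4, 8):
    print(N, [round(capacity(N, p), 3) for p in (0.1, 0.5, 0.9)])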
13  Future Work
1. One MIX firewall -- distinguishable receivers.
2. Relax the IID assumption on the Clueless_i.
3. If Alice has knowledge of the Clueless_i behavior...