Protocol Examples: Key Establishment and Anonymity
18739A: Foundations of Security and Privacy
Dilsun Kaynar (substituting for Anupam Datta), CMU, Fall 2009
Outline
Just Fast Keying (JFK)
  Shared secret creation
  Mutual authentication with identity protection
  Protection against DoS
Protocols for anonymous communication
  High-latency: Chaum Mixes as a building block
  Low-latency: Onion Routing and Tor
  Hidden location servers
Part I: Just Fast Keying (JFK) Protocol
JFK in this course
Just Fast Keying (JFK): a state-of-the-art key establishment protocol [Aiello, Bellovin, Blaze, Canetti, Ioannidis, Keromytis, Reingold CCS 2002].
"Rational derivation" of the JFK protocol: combine known techniques for shared secret creation, authentication, identity protection and anti-DoS protection [Datta, Mitchell, Pavlovic Tech report 2002].
Modeling JFK in the applied pi calculus, with security properties specified as equivalences [Abadi, Fournet POPL 2001] [Abadi, Blanchet, Fournet ESOP 2004] (later lecture).
JFK improves on IKE from the IPSec suite, which needs a high number of rounds, is vulnerable to DoS, and is complex; several years after its initial adoption by the IETF, there are still non-interoperating IKE implementations.
Design Objectives for Key Exchange
Shared secret: create and agree on a secret which is known only to the protocol participants. Security property: no one other than the participants may have access to the generated key.
Authentication: participants need to verify each other's identity.
Identity protection: an eavesdropper should not be able to infer the participants' identities by observing protocol execution.
Protection against denial of service: a malicious participant should not be able to exploit the protocol to cause the other party to waste resources.
JFK also aims at simplicity and perfect forward secrecy (compromise of a host at time t does not allow an adversary to determine session keys that were generated and subsequently deleted before time t).
Ingredient 1: Diffie-Hellman
A → B: g^a
B → A: g^b
Shared secret: g^ab
Diffie-Hellman guarantees perfect forward secrecy, but by itself it provides no authentication, no identity protection, and no DoS protection.
G is a finite cyclic group and g a generating element (g can be small); the exponents a and b are typically large. A passive eavesdropper cannot recover the shared secret: the Diffie-Hellman problem (computing g^ab from g, g^a, and g^b) is believed to be hard.
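To make the exchange concrete, here is a minimal Python sketch of unauthenticated Diffie-Hellman (not from the slides); it uses the textbook toy parameters p = 23, g = 5, so the group size and the final key-derivation step are illustrative assumptions only.

```python
import secrets
import hashlib

# Toy Diffie-Hellman sketch (illustrative only; real deployments use
# standardized 2048-bit groups, these tiny parameters are just for reading).
p, g = 23, 5                      # public group parameters: prime modulus and generator

a = secrets.randbelow(p - 2) + 1  # A's secret exponent
b = secrets.randbelow(p - 2) + 1  # B's secret exponent

ga = pow(g, a, p)                 # A -> B: g^a mod p
gb = pow(g, b, p)                 # B -> A: g^b mod p

# Both sides compute the same shared secret g^ab mod p
secret_a = pow(gb, a, p)
secret_b = pow(ga, b, p)
assert secret_a == secret_b

# Hash the shared secret down to a symmetric key (an assumed, common convention)
key = hashlib.sha256(str(secret_a).encode()).digest()
```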
Ingredient 2: Challenge-Response
A → B: m, A
B → A: n, sig_B{m, n, A}
A → B: sig_A{m, n, B}
Signature-based challenge-response is a standard mechanism for mutual authentication; m and n are fresh numbers (nonces).
Shared secret: no. Authentication: yes — A receives his own fresh number m signed with B's private key and deduces that B is on the other end; similarly for B. Identity protection and DoS protection: no.
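A minimal sketch of this signature-based challenge-response in Python, assuming the third-party `cryptography` package and Ed25519 keys; the message encoding (plain concatenation) and the identity labels are simplifications, not part of the original protocol description.

```python
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Long-term signing keys; each party is assumed to already know
# the other's public key.
sk_a, sk_b = Ed25519PrivateKey.generate(), Ed25519PrivateKey.generate()
pk_a, pk_b = sk_a.public_key(), sk_b.public_key()

# A -> B: fresh challenge m (plus A's identity, simplified here)
m = os.urandom(16)

# B -> A: fresh challenge n and a signature over (m, n, "A")
n = os.urandom(16)
sig_b = sk_b.sign(m + n + b"A")

# A verifies B's signature, then answers with its own over (m, n, "B")
pk_b.verify(sig_b, m + n + b"A")   # raises InvalidSignature on failure
sig_a = sk_a.sign(m + n + b"B")

# B verifies A's response, completing mutual authentication
pk_a.verify(sig_a, m + n + b"B")
```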
DH + Challenge-Response
ISO protocol:
A → B: g^a, A
B → A: g^b, sig_B{g^a, g^b, A}
A → B: sig_A{g^a, g^b, B}
Shared secret: g^ab
Obtained by substituting the Diffie-Hellman exponentials for the fresh values m and n in the challenge-response protocol (m := g^a, n := g^b). Provides a shared secret and authentication, but still no identity protection or DoS protection.
Ingredient 3: Encryption
Encrypt the signatures to protect identities:
A → B: g^a, A
B → A: g^b, E_K{sig_B{g^a, g^b, A}}
A → B: E_K{sig_A{g^a, g^b, B}}
Shared secret: g^ab (the encryption key K is derived from it).
Authentication, plus identity protection for the responder only: B's identity is protected, but only against passive adversaries. Still no DoS protection.
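As a sketch of the encryption step, assuming K is derived by hashing the Diffie-Hellman secret and that AES-GCM (from the `cryptography` package) stands in for E_K; the slides do not fix a particular cipher or key-derivation function.

```python
import hashlib
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Continuing the toy DH example above: derive an encryption key K from the
# shared secret g^ab (this derivation is an illustrative assumption).
shared_secret = 1234567            # stands in for g^ab computed earlier
k = hashlib.sha256(str(shared_secret).encode()).digest()[:16]

aead = AESGCM(k)
nonce = os.urandom(12)

sig_b = b"sig_B{g^a, g^b, A}"      # placeholder for B's signature bytes
ciphertext = aead.encrypt(nonce, sig_b, None)   # E_K{sig_B{...}} sent on the wire
assert aead.decrypt(nonce, ciphertext, None) == sig_b
```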
Refresher: Anti-DoS Cookie
Typical protocol:
  Client sends a request (message #1) to the server
  Server sets up the connection and responds with message #2
  Client may complete the session or not (potential DoS)
Cookie version:
  Client sends a request to the server
  Server sends hashed connection data (a cookie) back; message #2 is postponed until the client confirms
  Client confirms by returning the hashed data
  An extra step is needed to send the postponed message #2
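A minimal sketch of such a stateless cookie, using Python's standard `hmac` module; the field layout and the names (`SERVER_KEY`, `make_cookie`) are illustrative assumptions.

```python
import hmac
import hashlib
import os

SERVER_KEY = os.urandom(32)        # server-side secret; never sent to clients

def make_cookie(client_ip: str, client_nonce: bytes) -> bytes:
    """Keyed hash over connection data; the server keeps no per-client state."""
    data = client_ip.encode() + client_nonce
    return hmac.new(SERVER_KEY, data, hashlib.sha256).digest()

def check_cookie(client_ip: str, client_nonce: bytes, cookie: bytes) -> bool:
    """Recompute the hash and compare in constant time."""
    return hmac.compare_digest(make_cookie(client_ip, client_nonce), cookie)

# Round 1: client request arrives; the server answers only with a cookie.
cookie = make_cookie("203.0.113.7", b"nonce-from-client")

# Round 2: the client echoes the cookie; only now does the server
# create state and do expensive work (e.g. Diffie-Hellman, signatures).
assert check_cookie("203.0.113.7", b"nonce-from-client", cookie)
```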
Ingredient 4: Anti-DoS Cookie
"Almost-JFK" protocol:
A → B: g^a, A
B → A: g^b, hash_Kb{g^b, g^a}
A → B: g^a, g^b, hash_Kb{g^b, g^a}, E_K{sig_A{g^a, g^b, B}}
B → A: g^b, E_K{sig_B{g^a, g^b, A}}
Shared secret g^ab, authentication, identity protection. DoS protection? It doesn't quite work yet: B must still remember his DH exponent b for every connection.
A cookie mechanism is a standard way of preventing blind denial-of-service attacks: B returns a keyed hash of the Diffie-Hellman exponential he receives in A's first message, and only after A returns the cookie in the third message does B create state and perform expensive public-key operations. One can use an unforgeable "token" to reconstruct the missing state.
Additional Features of JFK
Keep the g^a, g^b values medium-term and use (g^a, nonce) pairs: the same Diffie-Hellman value is reused for every connection (which helps against DoS) and updated every 10 minutes or so, while a fresh nonce guarantees freshness. This is more efficient, because computing g^a, g^b and g^ab is costly.
Two variants: JFKr and JFKi.
  JFKr protects the identity of the responder against active attacks and of the initiator against passive attacks.
  JFKi protects only the initiator's identity from active attack.
The responder may keep an authorization list and reject a connection after learning the initiator's identity.
JFKr Protocol [Aiello et al.]
Notation: I = initiator, R = responder; xi = g^di and xr = g^dr are the Diffie-Hellman exponentials; Ni, Nr are nonces.
Message 1, I → R: Ni, xi    (works if the initiator knows the DH group g in advance)
Message 2, R → I: Ni, Nr, xr, gr, tr    where tr = hash_Kr(xr, Nr, Ni, IPi) is R's cookie; R uses the same dr for every connection
Message 3, I → R: Ni, Nr, xi, xr, tr, ei, hi    where ei = enc_Ke(IDi, ID'r, sai, sig_Ki(Nr, Ni, xr, xi, gr)) and hi = hash_Ka("i", ei); ID'r is a "hint" telling the responder which identity to use
Message 4, R → I: er, hr    where er = enc_Ke(IDr, sar, sig_Kr(xr, Nr, xi, Ni)) carries the real identity of the responder, and hr = hash_Ka("r", er)
Shared secret: x = xi^dr = xr^di. A set of keys is derived from the shared secret and the nonces: Ka,e,v = hash_x(Ni, Nr, {"a","e","v"}).
Only 2 round trips; R performs no expensive operations at message 2, identities are revealed only in the later (encrypted) messages, and the keyed hashes hi, hr allow integrity to be checked before decrypting.
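The key-derivation line Ka,e,v = hash_x(Ni, Nr, {"a","e","v"}) could be realized roughly as below, assuming HMAC-SHA256 as the keyed hash; JFK's exact encoding differs, so treat this as a sketch of the idea only.

```python
import hmac
import hashlib

def derive_keys(shared_secret: bytes, ni: bytes, nr: bytes) -> dict:
    """K{a,e,v} = hash_x(Ni, Nr, tag): one derived key per single-byte tag."""
    return {
        tag: hmac.new(shared_secret, ni + nr + tag.encode(), hashlib.sha256).digest()
        for tag in ("a", "e", "v")   # authentication, encryption, verification keys
    }

keys = derive_keys(b"x = g^(di*dr), encoded as bytes", b"Ni", b"Nr")
ka, ke, kv = keys["a"], keys["e"], keys["v"]
```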
Part II: Protocols for Anonymous Communication
Privacy on Public Networks
The Internet is designed as a public network: machines on your LAN may see your traffic, and network routers see all traffic that passes through them.
Routing information is public: IP packet headers identify source and destination, so even a passive observer can easily figure out who is talking to whom.
Encryption does not hide identities: it hides the payload, but not the routing information. Even IP-level encryption (tunnel-mode IPSec/ESP) reveals the IP addresses of the IPSec gateways.
Public networks are therefore vulnerable to traffic analysis.
Applications of Anonymity (I)
Privacy: hide online transactions, Web browsing, etc. from intrusive governments, marketers and archivists.
Untraceable electronic mail: corporate whistle-blowers, political dissidents, socially sensitive communications (e.g. an online AA meeting), confidential business negotiations.
Law enforcement and intelligence: sting operations and honeypots, secret communications on a public network.
Applications of Anonymity (II)
Digital cash: electronic currency with the properties of paper money (online purchases unlinkable to the buyer's identity).
Anonymous electronic voting.
Censorship-resistant publishing.
What is Anonymity?
Anonymity is the state of being not identifiable within a set of subjects. You cannot be anonymous by yourself! You hide your activities among others' similar activities.
Unlinkability of action and identity: for example, a sender and his email are no more related after observing the communication than they were before.
Unobservability (hard to achieve): any item of interest (message, event, action) is indistinguishable from any other item of interest.
Attacks on Anonymity
Passive traffic analysis: infer from network traffic who is talking to whom. To hide your own traffic, you must carry other people's traffic!
Active traffic analysis: inject packets or put a timing signature on a packet flow.
Compromise of network nodes: the attacker may compromise some routers, and it is not obvious which nodes have been compromised (the attacker may just be passively logging traffic). Better not to trust any individual router: assume that some fraction of routers is good, but you don't know which. This gives everyone an incentive to share the burden for the sake of anonymity.
Chaum's Mix
Early proposal for anonymous email: David Chaum, "Untraceable electronic mail, return addresses, and digital pseudonyms", Communications of the ACM, February 1981.
Public-key crypto + a trusted remailer (the Mix); the communication medium itself is untrusted. Public keys are used as persistent pseudonyms.
Modern anonymity systems use the Mix as their basic building block.
(Before spam, people thought anonymous email was a good idea.)
Basic Mix Design
Senders (A, C, D) encrypt their messages for the final recipients (B, E) and then for the Mix, which strips the outer layer and forwards, e.g.:
A → Mix: {r1, {r0, M}pk(B), B}pk(mix)      Mix → B: {r0, M}pk(B)
C → Mix: {r2, {r3, M'}pk(E), E}pk(mix)      Mix → E: {r3, M'}pk(E)
D → Mix: {r4, {r5, M''}pk(B), B}pk(mix)     Mix → B: {r5, M''}pk(B)
The random values r0, r1, ... keep an observer from matching inputs to outputs by comparing ciphertexts. The adversary knows all senders and all receivers, but cannot link a sent message with a received message.
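A sketch of the sender-side wrapping and the mix's unwrapping, assuming the PyNaCl library's SealedBox for the public-key encryptions (the slides do not prescribe a cipher); the `to:B|` framing is an ad-hoc illustration of the embedded next-hop address.

```python
from nacl.public import PrivateKey, SealedBox

# Key pairs for the mix and for recipient B (normally only the
# public halves would be known to the sender).
mix_sk, b_sk = PrivateKey.generate(), PrivateKey.generate()

# Sender A wraps message M for B, then wraps the result for the mix.
# SealedBox adds fresh randomness per encryption, playing the role of r0, r1.
inner = SealedBox(b_sk.public_key).encrypt(b"M: hello B")
outer = SealedBox(mix_sk.public_key).encrypt(b"to:B|" + inner)

# The mix peels its layer, learns only the next hop, and forwards the rest.
peeled = SealedBox(mix_sk).decrypt(outer)
next_hop, payload = peeled.split(b"|", 1)

# B finally recovers M.
assert SealedBox(b_sk).decrypt(payload) == b"M: hello B"
```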
Anonymous Return Addresses
M includes {K1, A}pk(mix) and K2, where K2 is a fresh public key.
A → Mix: {r1, {r0, M}pk(B), B}pk(mix)      Mix → B: {r0, M}pk(B)
Response path:
B → Mix: {K1, A}pk(mix), {r2, M'}K2      Mix → A: A, {{r2, M'}K2}K1
K1 is created by A, so the response can only be decrypted by A; K2 provides secrecy for the reply M'.
This gives secrecy without authentication (good for an online confession service).
Mix Cascade Messages are sent through a sequence of mixes
Can also form an arbitrary network of mixes ("mixnet").
Some of the mixes may be controlled by the attacker, but even a single good mix guarantees anonymity.
Pad and buffer traffic to foil correlation attacks.
Disadvantages of Basic Mixnets
Public-key encryption and decryption at each mix are computationally expensive, so basic mixnets have high latency: OK for email, not OK for anonymous Web browsing.
Challenge: a low-latency anonymity network.
  Use public-key cryptography to establish a "circuit" with pairwise symmetric keys between hops on the circuit.
  Then use symmetric decryption and re-encryption to move data messages along the established circuits.
  Each node still behaves like a mix; anonymity is preserved even if some nodes are compromised.
A simple idea: Basic Anonymizing Proxy
Channels appear to come from the proxy, not the true originator.
Appropriate for Web connections etc.; uses SSL/TLS (lower-cost symmetric encryption).
Example: The Anonymizer.
Simple, and it focuses lots of traffic through one point, giving more anonymity.
Main disadvantage: a single point of failure, compromise, and attack.
Another Idea: Randomized Routing
Hide the message source by routing it randomly.
Popular technique: Crowds, Freenet, onion routing.
Routers don't know for sure whether the apparent source of a message is the true sender or another router.
Onion Routing
[Reed, Syverson, Goldschlag '97]
(Diagram: Alice's traffic to Bob hops through a random sequence of onion routers, e.g. R1 → R2 → R3 → R4, chosen from a larger set of routers R.)
The sender chooses a random sequence of routers: some routers are honest, some are controlled by the attacker, and the sender controls the length of the path.
Route Establishment
(Diagram: Alice builds a route to Bob through R1 → R2 → R3 → R4.)
Alice sends a layered "onion" in which the routing info for each link is encrypted with that router's public key, so each router learns only the identity of the next router. From the outside in:
{R2, k1}pk(R1), { {R3, k2}pk(R2), { {R4, k3}pk(R3), { {B, k4}pk(R4), { {M}pk(B) }k4 }k3 }k2 }k1
Notice that as the onion is peeled, Alice establishes a symmetric key (k1, ..., k4) with each router.
Tor
Second-generation onion routing network, running since October 2003.
Developed by Roger Dingledine, Nick Mathewson and Paul Syverson.
Specifically designed for low-latency anonymous Internet communications.
100 nodes on four continents, thousands of users.
"Easy-to-use" client proxy, freely available; can use it for anonymous browsing.
Tor Circuit Setup (1)
The client proxy establishes a symmetric session key and circuit with Onion Router #1.
Tor Circuit Setup (2)
The client proxy extends the circuit by establishing a symmetric session key with Onion Router #2, tunneling through Onion Router #1.
Tor Circuit Setup (3)
The client proxy extends the circuit by establishing a symmetric session key with Onion Router #3, tunneling through Onion Routers #1 and #2.
Using a Tor Circuit
Client applications connect and communicate over the established Tor circuit; datagrams are decrypted and re-encrypted at each link.
The client has established a symmetric key with each node in the circuit, so it can form an "onion" using only these symmetric keys; at each node the corresponding layer is peeled off with that node's key. This is an improvement over the old onions, which used public-key encryption to move data over the circuit.
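A sketch of that layered symmetric encryption, with Fernet (from the `cryptography` package) standing in for the per-hop session ciphers; Tor's real cell format, ciphers and integrity checks differ.

```python
from cryptography.fernet import Fernet

# One symmetric session key per hop, established during circuit setup.
hop_keys = [Fernet.generate_key() for _ in range(3)]   # R1, R2, R3

def wrap(message: bytes, keys: list[bytes]) -> bytes:
    """Client encrypts innermost-first, so the first hop's layer is outermost."""
    for key in reversed(keys):
        message = Fernet(key).encrypt(message)
    return message

def peel(cell: bytes, key: bytes) -> bytes:
    """Each relay removes exactly one layer with its own session key."""
    return Fernet(key).decrypt(cell)

cell = wrap(b"GET / HTTP/1.1", hop_keys)
for key in hop_keys:            # R1, then R2, then R3
    cell = peel(cell, key)
assert cell == b"GET / HTTP/1.1"
```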
Tor Management Issues
Many applications can share one circuit: multiple TCP streams over one anonymous connection.
A Tor router doesn't need root privileges, which encourages people to set up their own routers: more participants means better anonymity for everyone.
Directory servers maintain lists of active onion routers, their locations, current public keys, etc., and control how new routers join the network ("Sybil attack": an attacker creates a large number of routers). The directory servers' keys ship with the Tor code.
Location Hidden Servers
Goal: deploy a server on the Internet that anyone can connect to without knowing where it is or who runs it.
  Accessible from anywhere
  Resistant to censorship
  Can survive a full-blown DoS attack
  Resistant to physical attack: you can't find the physical server!
Creating a Location Hidden Server
The server creates onion routes to "introduction points".
The server gives the intro points' descriptors and addresses to a service lookup directory.
A client obtains the service descriptor and intro point address from the directory.
Using a Location Hidden Server
The client creates an onion route to a "rendezvous point".
The client sends the address of the rendezvous point, and any authorization if needed, to the server through an intro point.
If the server chooses to talk to the client, it connects to the rendezvous point.
The rendezvous point mates the circuits from client and server.
Deployed Anonymity Systems
The Free Haven project has an excellent bibliography on anonymity, linked from the reference section of the course website.
Tor: an overlay, circuit-based anonymity network; best for low-latency applications such as anonymous Web browsing.
Mixminion: a network of mixes; best for high-latency applications such as anonymous email.
Dining Cryptographers
A clever idea for making a message public in a perfectly untraceable manner: David Chaum, "The dining cryptographers problem: unconditional sender and recipient untraceability", Journal of Cryptology, 1988.
Guarantees information-theoretic anonymity for message senders. This is an unusually strong form of security: it defeats an adversary with unlimited computational power.
Impractical: it requires a huge amount of randomness; in a group of size N, N random bits are needed to send 1 bit.
Three-Person DC Protocol
Three cryptographers are having dinner. Either the NSA is paying for the dinner, or one of them is paying but wishes to remain anonymous.
1. Each diner flips a coin and shows it to his left neighbor, so every diner sees two coins: his own and his right neighbor's.
2. Each diner announces whether the two coins are the same. If he is the payer, he lies (says the opposite).
3. An odd number of "same" announcements means the NSA is paying; an even number means one of the diners is paying. But a non-payer cannot tell which of the other two is paying!
Regarding step 3: an odd number of "same" arises only if everyone tells the truth (non-payers always tell the truth).
Non-Payer’s View: Same Coins
(Diagram: the non-payer's two visible coins are the same; the coin toss between the other two diners is hidden "inside the cloud", and either of them could be the payer.)
An even number of "same" announcements, so I know someone is paying, but who? I cannot see inside the cloud. If the hidden coin were H, then the diner on the left must be lying; if T, then the one on the right. But H and T are equally likely. Without knowing the coin toss between the other two, the non-payer cannot tell which of them is lying.
Non-Payer’s View: Different Coins
(Diagram: the non-payer's two visible coins are different; again the coin toss between the other two diners is hidden, and either of them could be the payer.)
Without knowing the coin toss between the other two, the non-payer cannot tell which of them is lying.
Superposed Sending This idea generalizes to any group of size N
For each bit of the message, every user generates 1 random bit and sends it to 1 neighbor, so every user learns 2 bits (his own and his neighbor's).
Each non-sending user announces (own bit XOR neighbor's bit); the sender announces (own bit XOR neighbor's bit XOR message bit).
The XOR of all announcements equals the message bit: every randomly generated bit occurs in this sum twice (and is cancelled by XOR), while the message bit occurs once.
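A small simulation of superposed sending for one message bit, written as a Python sketch; the ring arrangement and the function name `dc_round` are illustrative assumptions.

```python
import secrets

def dc_round(n_users: int, sender: int, message_bit: int) -> int:
    """One round of superposed sending: users sit in a ring, and each
    shares one random bit with the next user around the ring."""
    shared = [secrets.randbelow(2) for _ in range(n_users)]  # bit i is shared by users i and i+1
    announcements = []
    for i in range(n_users):
        own, neighbor = shared[i], shared[(i - 1) % n_users]
        bit = own ^ neighbor
        if i == sender:               # the sender additionally XORs in the message bit
            bit ^= message_bit
        announcements.append(bit)
    # Every shared bit appears exactly twice in the XOR, so only the message survives.
    result = 0
    for bit in announcements:
        result ^= bit
    return result

assert dc_round(n_users=5, sender=2, message_bit=1) == 1
assert dc_round(n_users=5, sender=2, message_bit=0) == 0
```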
DC-Based Anonymity is Impractical
Requires secure pairwise channels between group members (otherwise the random bits cannot be shared), as well as massive communication overhead and large amounts of randomness.
Still, a DC-net (a group of dining cryptographers) is robust even if some members collude: it guarantees perfect anonymity for the other members.
Acknowledgement
Part 1 of this lecture was based on slides by Anupam Datta. Part 2 was based on slides by Vitaly Shmatikov.