
1 IPAM ’06 Tutorial Security and Composition of Cryptographic Protocols Ran Canetti IBM Research

2

3 Nice sun...

4 yeah...

5 You know, I lost more than you in the stock market.

6 No way. How much did you lose?

7 I won’t tell you… How much did you lose?

8 You tell first!

9 No, you tell first!

10 No, you tell first!

11 I lost X $ is X>Y? I lost Y $

12 I lost X $ is X>Y? I lost Y $ The millionaires problem [Yao82]

13 Cryptographic protocol problems Two or more parties want to perform some joint computation, while guaranteeing “security” against “adversarial behavior”.

14 Some security concerns Correctness of local outputs: –As a function of all inputs –Distributional and bias guarantees –Unpredictability Secrecy of local data and inputs Privacy Fairness Accountability Availability

15 Cryptographic protocol problems: Secure Communication Two (or more) parties want to communicate “securely”: Authentication: Recipient will accept only messages that were sent by the specified sender. Secrecy: Contents of messages should remain unknown to third parties. Related tasks: Key-exchange: The parties agree on a random key that remains secret to eavesdroppers. Encryption (shared key, public key) Digital signatures (shared key, public key) (Here the adversary is an “external entity”) Very prevalent in practice (SSL,TLS,IPSEC,SSH,PGP,…)

16 Cryptographic protocol problems: Two-party tasks Coin tossing [Blum 82] Generate a common uniformly distributed bit (or string). The output should be “unbiased” and “unpredictable”. Zero-Knowledge [Goldwasser-Micali-Rackoff 85] P proves to V that x ∈ L “without revealing extra info”. Commitment [Blum 82] Two-stage process: - C gives x to V “in an envelope” (i.e., x is fixed but remains unknown to V). - C enables V to “open the envelope” (i.e., retrieve x). (Here the parties typically don't trust each other.)

17 Cryptographic protocol problems: Multiparty tasks and applications Electronic voting: Correctness, accountability, privacy, coercion-freeness... “E-commerce”: Fairness, accountability –On-line auctions, trading and financial markets, shopping On-line gambling: Unpredictability, accountability... Computations on databases: Privacy –Private information retrieval –Database pooling Secure distributed storage: Availability, integrity, secrecy –Centrally controlled –Open, peer-to-peer systems

18 A plethora of cryptographic protocols Many cryptographic protocols were developed over the years: General constructions (given authenticated communication): can evaluate any function of the inputs of the parties “in a secure way” [Y86,GMW87,BGW88,RB89,...] Obtaining authenticated and secure communication [DH78,NS78,B+91,BR93...] More efficient constructions for specific problems Deployed systems

19 But, what does it really mean that a protocol is “secure”? Rigorously capturing the intuitive notion of security is a tricky business… Main stumbling points: Unexpected inter-dependence between security requirements Unexpected bad interactions between different protocol instances in a system ➔ Security is very sensitive to the execution environment.

20 Developing notions of security Initially, notions of security were problem-specific (e.g., Encryption, Coin-tossing, Zero-Knowledge, Signatures...) Subsequently, general frameworks for capturing security of tasks were developed, e.g. [Goldwasser-Levin90, Micali-Rogaway91, Beaver91, C95, C00, Dodis-Micali00, Pfitzmann-Waidner00&01, C01, Mitchell-Scedrov-Ramanathan-Teague01, Mateus-Mitchell-Scedrov03, Kuesters06] –All of these frameworks follow (in one way or another) a single paradigm: “The trusted party paradigm”.

21 In favor of a general notion of security Provides better understanding of security concerns, various primitives, and the relations among them. Provides a basis for making claims about behavior of protocols in unknown environments: –“A protocol that realizes a task can be used in conjunction with any protocol that uses the task” –“Protocols that realize this task continue to realize it in any execution environment”

22 In this tutorial: Part 1: Basic security Motivate and present the “Trusted party paradigm”. Review a basic formalization of the approach (based on [C, JoC 00]). See examples. Discuss feasibility. Part 2: Security and composition Discuss secure protocol composition: –Show what can go wrong –Discuss settings and requirements Demonstrate the limited compositional properties of the basic definition Part 3: Universally Composable security Present the notion (based on [C, FOCS'01]) Demonstrate secure composability properties Discuss feasibility, possible relaxations. Explore connections to “formal analysis” of protocols.

23 Defining security: First attempts Let's start with a simple setting: Two parties Function evaluation: –There is a known function f –Party Pi (i=1,2) has input xi –Both parties wish to “securely” obtain y = f(x1,x2). The only potentially adversarial entities are the parties themselves.

24 What are the security requirements?

25 Correctness: The honest party (parties) output a correct function value of the inputs of the parties. Secrecy: Each party learns only the function value. (“The view of each party can be generated given only its input and output.”)

26 What are the security requirements? Correctness: The honest party (parties) output a correct function value of the inputs of the parties. Secrecy: Each party learns only the function value. (“The view of each party can be generated given only its input and output.”) Are we done?

27 But... The “function value” depends on the inputs provided by the parties. How to define these inputs? –Given from above? Unrealizable... –Chosen during the run of the protocol? Dangerous...

28 Example Function: f(x1,x2) = x1 xor x2 Protocol: –P1 sends x1 to P2. –P2 sends x2 to P1. –Both parties output x1 xor x2. Is the protocol secure? P2 can decide the function value as a function of x1. But: the protocol satisfies: –Correctness: The parties output the correct function of the inputs –Secrecy: trivial, since x1 is computable from x2 and x1 xor x2.
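A toy sketch (hypothetical Python, not from the slides) of why this protocol is unsatisfying even though correctness and secrecy hold: a corrupted P2, who speaks second, can fix the joint output.

```python
import secrets

def bad_xor_protocol(x1, p2_strategy):
    """Toy run of the insecure XOR protocol: P1 sends x1 first,
    then P2 answers with an x2 that may depend on x1."""
    x2 = p2_strategy(x1)      # a corrupted P2 picks x2 after seeing x1
    return x1 ^ x2            # both parties output x1 xor x2

x1 = secrets.randbits(1)      # honest P1's random input bit

# A corrupted P2 that wants the joint output to always be 0:
force_zero = lambda seen_x1: seen_x1   # choose x2 = x1, so x1 xor x2 = 0

assert bad_xor_protocol(x1, force_zero) == 0
```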

29 Conclusions: The definition should also limit the way the inputs to the computation are chosen. Secrecy and correctness are “woven together”: –Correctness requires some flavor of secrecy –Secrecy depends on the correctness

30 Conclusions: The definition should also limit the way the inputs to the computation are chosen. Secrecy and correctness are “woven together”: –Correctness requires some flavor of secrecy –Secrecy depends on the correctness What about the case where parties are guaranteed to follow the protocol? Here the inputs are well defined and correctness seems straightforward. Is the definition adequate there?

31 Another example Function: f(x1,x2) = r ←R {0,1}^k (a uniformly random k-bit string, independent of the inputs) Protocol: –Let f be a one-way permutation on {0,1}^k. –P1 chooses s ←R {0,1}^k, and sends r = f(s) to P2. –Both parties output r. Is the protocol secure? P1 knows “trapdoor information” about r (the preimage s), which P2 cannot feasibly compute. (Why is this bad?) But: the protocol satisfies: –Correctness: The output is distributed uniformly in {0,1}^k. –Secrecy: Trivial, there are no inputs.
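A toy sketch of the problem (hypothetical Python; SHA-256 stands in for the one-way permutation purely for illustration and is neither a permutation nor part of the slides):

```python
import hashlib, secrets

def f(s: bytes) -> bytes:
    """Stand-in for the one-way permutation used by the protocol."""
    return hashlib.sha256(s).digest()

s = secrets.token_bytes(16)   # chosen by P1 alone
r = f(s)                      # announced to P2; both parties output r

# r is uniformly distributed from P2's point of view, yet P1 holds
# "trapdoor" information (the preimage s) that P2 cannot feasibly compute.
# Any later use of r that assumes nobody knows a preimage is broken.
print("common output:", r.hex(), "-- preimage known only to P1")
```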

32 Conclusion: The definition should also specify the process by which the outputs are chosen, not just the distribution.

33 The general definitional approach [Goldreich-Micali-Wigderson87] ‘A protocol is secure for some task if it “emulates” an “ideal process” where the parties hand their inputs to a “trusted party”, who locally computes the desired outputs and hands them back to the parties.’ But, how to formalize?

34 A basic formalization (based on [Goldwasser-Levin90,Micali-Rogaway91,Beaver91,C95,C00]) Formalize the process of protocol execution in presence of an adversary Formalize the “ideal process” for realizing the functionality Formalize the notion of “a protocol emulates the ideal process for a functionality.”

35 The model for protocol execution: [diagram: parties P1…P4 and adversary A] Participants: Parties P1…Pn; Adversary A, controlling the corrupted parties.

36 The model for protocol execution: [diagram: parties P1…P4 and adversary A] Participants: Parties P1…Pn; Adversary A, controlling the corrupted parties. The parties and the adversary get external input.

37 The model for protocol execution: [diagram: parties P1…P4 and adversary A] Participants: Parties P1…Pn; Adversary A, controlling the corrupted parties. The parties and the adversary get external input. The parties and adversary interact (parties run the protocol; A interferes according to the model).

38 The model for protocol execution: [diagram: parties P1…P4 and adversary A] Participants: Parties P1…Pn; Adversary A, controlling the corrupted parties. The parties and the adversary get external input. The parties and adversary interact (parties run the protocol; A interferes according to the model). The parties and the adversary generate their local outputs.

39 The ideal process (for evaluating function F): [diagram: parties P1…P4 and adversary S] Participants: Parties P1…Pn; Adversary S, controlling the corrupted parties.

40 The ideal process (for evaluating function F): [diagram: parties P1…P4 and adversary S] Participants: Parties P1…Pn; Adversary S, controlling the corrupted parties. The parties and the adversary get external input.

41 The ideal process (for evaluating function F): [diagram: parties P1…P4, functionality F, and adversary S] Participants: Parties P1…Pn; Adversary S, controlling the corrupted parties. The parties and the adversary get external input. The parties and adversary hand their inputs to a “trusted party”. The trusted party locally evaluates F on the parties' inputs and hands each party its prescribed output.

42 The ideal process (for evaluating function F): [diagram: parties P1…P4, functionality F, and adversary S] Participants: Parties P1…Pn; Adversary S, controlling the corrupted parties. The parties and the adversary get external input. The parties and adversary hand their inputs to a “trusted party”. The trusted party locally evaluates F on the parties' inputs and hands each party its prescribed output. The parties output the value received from F; the adversary outputs an arbitrary value.

43 Definition of security A protocol π emulates the ideal process for evaluating F if for any (PPT) adversary A there exists a (PPT) adversary S such that, for any set of external inputs: [The outputs of the uncorrupted parties running π with A] ~ [The outputs of the uncorrupted parties in the ideal process with S], and [The output of A] ~ [The output of S]. In this case, we say that π securely realizes functionality F.

44 Implications of the definition Correctness: In the ideal process the parties get the “correct” outputs, based on the inputs of all parties. Consequently, the same must happen in the protocol execution (or else the first condition will be violated). Secrecy: In the ideal process the adversary learns nothing other than the outputs of bad parties. Consequently, the same must happen in the protocol execution (or else the second condition will be violated). “Input independence”: The bad parties cannot choose their inputs based on the inputs of the honest parties (since they cannot do so in the ideal process). …

45 Example: The millionaires functionality 1. Receive x1 from party P1 2. Receive x2 from party P2 3. Set b = (x1 > x2); output b to both parties. Note: Both parties are assured that they receive the correct bit. Neither party learns any information other than the bit b.
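Written as code, the trusted party is trivial, which is exactly the point; a hypothetical sketch:

```python
def millionaires_functionality(x1: int, x2: int) -> bool:
    """Hypothetical sketch of the trusted party: it sees both inputs but
    reveals only the comparison bit to the two parties."""
    b = x1 > x2
    return b   # sent to both P1 and P2; nothing else leaks

# P1 lost 7M, P2 lost 9M: each learns only that (x1 > x2) is False.
assert millionaires_functionality(7, 9) is False
```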

46 Example: The xor function 1. Receive x1 from party P1 2. Receive x2 from party P2 3. Set b = (x1 xor x2); output b to both parties.

47 Example: The xor function 1. Receive x1 from party P1 2. Receive x2 from party P2 3. Set b = (x1 xor x2); output b to both parties. What about the above “bad protocol”?

48 Reminder: The bad protocol –P1 sends x1 to P2. –P2 sends x2 to P1. –Both parties output x1 xor x2.

49 Example: The xor function 1. Receive x1 from party P1 2. Receive x2 from party P2 3. Set b = (x1 xor x2); output b to both parties. The above “bad protocol” is no longer secure. Assume x1 is random: In the protocol execution A (controlling P2) can always force the output of P1 to be 0. In the ideal process, P1's output is always random (since P2's input is independent of x1).

50 Example: The coin tossing functionality 1. Receive “start” from P1 2. Receive “start” from P2 3. Choose r ←R {0,1}^k, output r to the parties.

51 Example: The coin tossing functionality 1. Receive “start” from P1 2. Receive “start” from P2 3. Choose r ←R {0,1}^k, output r to the parties. What about the above “bad protocol”?

52 Reminder: The bad protocol –Let f be a one-way permutation on {0,1}^k. –P1 chooses s ←R {0,1}^k, and sends r = f(s) to P2. –Both parties output r.

53 Example: The coin tossing functionality 1. Receive “start” from P1 2. Receive “start” from P2 3. Choose r ←R {0,1}^k, output r to the parties. The bad protocol still satisfies the definition: The uncorrupted party (P2) outputs a random value in both cases. In the ideal process, S gets r from F. But it can ignore r, and instead choose a random s and output (s, f(s)). This would have the right distribution... So, what's wrong?

54 Reminder: Definition of security A protocol π emulates the ideal process for evaluating F if for any (PPT) adversary A there exists a (PPT) adversary S such that, for any set of external inputs: [The outputs of the uncorrupted parties running π with A] ~ [The outputs of the uncorrupted parties in the ideal process with S], and [The output of A] ~ [The output of S].

55 Reminder: Definition of security A protocol π emulates the ideal process for evaluating F if for any (PPT) adversary A there exists a (PPT) adversary S such that, for any set of external inputs: [The outputs of the uncorrupted parties running π with A] ~ [The outputs of the uncorrupted parties in the ideal process with S], and [The output of A] ~ [The output of S]. Weakness: Need to “tie together” the outputs of the adversary and of the uncorrupted parties.

56 Corrected definition of security A protocol π emulates the ideal process for evaluating F if for any (PPT) adversary A there exists a (PPT) adversary S such that for any set of external inputs: [The outputs of all parties and A] ~ [The outputs of all parties and S]
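For readers who prefer symbols, the corrected requirement can be written as a single indistinguishability statement; the ensemble names EXEC and IDEAL below are conventional shorthands assumed here for illustration, not notation from the slides.

```latex
% EXEC collects the joint outputs of all parties and of A in a run of pi,
% IDEAL collects the joint outputs of all parties and of S in the ideal
% process for F; both are indexed by the external-input vector x.
\[
  \{\mathrm{EXEC}_{\pi,\mathcal{A}}(\vec{x})\}_{\vec{x}}
  \;\approx\;
  \{\mathrm{IDEAL}_{F,\mathcal{S}}(\vec{x})\}_{\vec{x}}
\]
% where $\approx$ denotes computational indistinguishability.
```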

57 Coin tossing once more 1. Receive “start” from P1 2. Receive “start” from P2 3. Choose r ←R {0,1}^k, output r to the parties. The bad protocol no longer satisfies the definition: The “global output” of the protocol execution is (r, f^-1(r)). In the ideal process, S gets r from F; to satisfy the definition, S now has to output f^-1(r)...

58 An alternative formulation: [two diagrams: a protocol execution with parties P1…P4 and adversary A, and an ideal process with parties P1…P4, functionality F, and adversary S; an environment E interacts with each] π securely realizes F if: For any A there is an S such that no environment E can tell if it interacts with: –a run of π with A, or –a run with F and S.

59 Another example: The ZK functionality, F_ZK (for relation R) 1. Receive (x,w) from party P (“the prover”) 2. Receive (x) from party V (“the verifier”) 3. Output R(x,w) to V. Note: V is assured that it accepts only if R(x,w)=1 (soundness) P is assured that V learns nothing but R(x,w) (Zero-Knowledge)

60 Comparison with the traditional formulation Traditionally ZK is defined using three separate requirements (Completeness, Soundness, Zero-Knowledge) Here there is an apparent “proof of knowledge” requirement Still the two formalizations are “essentially equivalent”: Theorem: A protocol securely realizes F_ZK for relation R if and only if it is a computationally sound ZK proof of knowledge for R (with non-black-box extractors). (Assume that there is an input # s.t. R(x,#)=0 for all x, else augment R accordingly.)

61 Additional “definitional details”: What is the model of communication: –Asynchronous? Synchronous? –Secret? Authenticated? Unauthenticated? –Point-to-point? broadcast? How do parties address each other? Can we model “open systems” where parties join during the computation? Can we model reactive tasks? How to model PPT computation in a meaningful way? Need a better model of computation...

62 The basic model: Highlights (based on [C, iacr eprint 2000/067, Dec 05]) The basic computing unit: PPT Interactive TM –Externally writable tapes: Input, incoming comm, subroutine output –Identity tape (identity “in software”) –Polytime in input length, minus length of inputs written to others ITMs can: – invoke other ITMs (specifying code and identity of invokee) –write to tapes of other ITMs (one write per activation). (subject to restrictions specified by a “control function”) Order of activations: An initial ITM is specified. ITM whose tape is last written to is activated next.

63 The protocol execution model: Highlights Environment: Invokes the adversary and as many parties as it wants, gives inputs and identities, reads outputs. Parties: Run their code, send messages only to the adversary. Adversary: Delivers arbitrary messages to parties. Notes: Captures asynchronous, unauthenticated comm. (To capture other comm. models, add structure) Party corruptions modeled as special messages from adv. The environment gives one input to, and reads one output from, the adversary.

64 The ideal process: Highlights Modeled as a special protocol within the general model: –All parties copy their inputs to a special ITM (“Ideal functionality”) F, –Copy the outputs from F to environment Can capture reactive tasks in a natural way (interactive F) F can interact directly with adversary, allowing for finer-grain specification of security properties: –Messages from Adv capture “allowed influence” –Messages to Adv capture “allowed info leakage” Pro: very expressive. Con: very expressive.

65 Example: The commitment functionality 1. Upon receiving (“commit”,C,V,x) from party C, record x, and send (C, “receipt”) to V. 2. Upon receiving (“open”) from C, send (C,x) to V and halt. Note: V is assured that the value it received in step 2 was fixed in step 1. C is assured that V learns nothing about x before it is opened.

66 Example: The commitment functionality 1. Upon receiving (“commit”,C,V,x) from party C, record x, and send (C, “receipt”) to V. 2. Upon receiving (“open”) from C, send (C,x) to V and halt. But, need to allow the adversary to delay outputs of V, otherwise the requirement is unrealistically strong...

67 Example: The commitment functionality 1.Upon receiving (“commit”,C,V,x) from party C, record x, and send (C, “receipt”) to adv. When adv says “OK”, send (C, “receipt”) to V. 2.Upon receiving (“open”) from C, send (C,x) to V and halt.

68 Example: The commitment functionality 1.Upon receiving (“commit”,C,V,x) from party C, record x, and send (C, “receipt”) to V. 2.Upon receiving (“open”) from C, send (C,x) to V and halt. 3.Upon receiving a “corrupt C” from Adv, hand x to Adv.

69 Example: The commitment functionality 1. Upon receiving (“commit”,C,V,x) from party C, record x, and send (C, “receipt”) to V. When adv says “OK”, send (C, “receipt”) to V. 2. Upon receiving (“open”) from C, send (C,x) to V and halt. But, what if the committer gets corrupted?

70 Example: The commitment functionality 1.Upon receiving (“commit”,C,V,x) from party C, record x, and send (C, “receipt”) to V. When adv says “OK”, send (C, “receipt”) to V. 2.Upon receiving (“open”) from C, send (C,x) to V and halt. 3.Upon receiving a “corrupt C” from Adv, hand x to Adv. Is that enough?

71 Example: The commitment functionality 1.Upon receiving (“commit”,C,V,x) from party C, record x, and send (C, “receipt”) to V. When adv says “OK”, send (C, “receipt”) to V. 2.Upon receiving (“open”) from C, send (C,x) to V and halt. 3.Upon receiving a “corrupt C” from Adv, hand x to Adv. Is that enough? What if the committer gets corrupted immediately after it was invoked, and the adversary changes the committed bit?

72 Example: The commitment functionality 1. Upon receiving (“commit”,C,V,x) from party C, record x, and send (C, “receipt”) to V. When adv says “OK”, send (C, “receipt”) to V. 2. Upon receiving (“open”) from C, send (C,x) to V and halt. 3. Upon receiving a “corrupt C” from Adv, hand x to Adv. If V didn't yet get “receipt” then allow Adv to change x. Is this enough?

73 Example: The commitment functionality 1. Upon receiving (“commit”,C,V,x) from party C, record x, and send (C, “receipt”) to V. When adv says “OK”, send (C, “receipt”) to V. 2. Upon receiving (“open”) from C, send (C,x) to V and halt. 3. Upon receiving a “corrupt C” from Adv, hand x to Adv. If V didn't yet get “receipt” then allow Adv to change x. Is this enough? Seems so...
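Pulling the pieces of the preceding slides together, a hypothetical event-style sketch of the resulting functionality might look as follows (names and method structure are illustrative, not part of the tutorial):

```python
class Fcom:
    """Hypothetical sketch of the commitment functionality as assembled on
    the preceding slides: adversarially delayed receipt, and the committed
    value may be replaced upon corruption of C before V gets the receipt."""
    def __init__(self):
        self.x = None
        self.receipt_delivered = False

    def commit(self, x):                  # ("commit", C, V, x) from C
        self.x = x                        # record x, notify the adversary
        return "receipt sent to adversary"

    def adversary_ok(self):               # adversary allows delivery
        self.receipt_delivered = True
        return "receipt delivered to V"

    def open(self):                       # ("open") from C
        return ("opened", self.x)         # V learns x, functionality halts

    def corrupt_committer(self, new_x=None):
        leaked = self.x                   # adversary learns x
        if new_x is not None and not self.receipt_delivered:
            self.x = new_x                # may change x only before receipt
        return ("leaked", leaked)
```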

74 General feasibility results (with respect to this definition) [Yao86]: Can realize any two-party functionality for honest-but-curious parties. [GMW87]: Given authenticated communication, can realize any ideal functionality: –with any number of faults (without guaranteed termination) –with up to n/2 faults (with guaranteed termination) [Ben-Or-Goldwasser-Wigderson88, Chaum-Crepeau-Damgard88]: Assuming secure channels, can do: –Unconditional security with n/3 faults –Adaptive security Many improvements and extensions [RB89,CFGN95,GRR97,…]

75 The basic technique Represent the code of the trusted party as a circuit. Evaluate the circuit gate by gate in a secure way against honest-but-curious faults. “Compile” the protocol to obtain resilience to malicious faults: –[GMW87]: Use general ZK proofs –[BGW88,CCD88]: Use special-purpose algebraic proofs
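To give a flavor of the gate-by-gate idea, here is a minimal sketch (toy Python, not from the slides) of 2-out-of-2 XOR secret sharing, under which XOR gates of the circuit can be evaluated locally on shares; AND gates, input distribution, and the compilation to the malicious model need substantially more machinery:

```python
import secrets

def share(bit):
    """2-out-of-2 additive (XOR) sharing: each share alone is uniform."""
    s1 = secrets.randbits(1)
    return s1, bit ^ s1

def xor_gate(a_share, b_share):
    """XOR gates are evaluated locally on shares, with no interaction."""
    return a_share ^ b_share

a1, a2 = share(1)         # wire a = 1, shared between the two parties
b1, b2 = share(0)         # wire b = 0

c1 = xor_gate(a1, b1)     # party 1's share of a xor b
c2 = xor_gate(a2, b2)     # party 2's share of a xor b
assert c1 ^ c2 == 1       # reconstruction gives the correct gate output
```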

76 Part II: Security under protocol composition

77

78

79

80

81

82

83 So far, we considered only a single protocol execution, in isolation. Does security in this setting imply security in a multi-execution setting? Is security preserved under protocol composition?

84 So far, we considered only a single protocol execution, in isolation. Does security in this setting imply security in a multi-execution setting? Why should we care? –We can always directly analyze the entire system... Is security preserved under protocol composition?

85 Modular design and analysis of protocols: –Break down a complex system into simpler chunks. –Analyze protocols for each of the chunks (as stand-alone). –Deduce the security of the original, composite system. Security in unknown environments: –Guarantee security when the protocol is running alongside potentially unknown protocols (that may even be designed later, depending on the analyzed protocol). Two benefits of security-preserving composition of protocols

86 Security under composition: 1st example Parallel composition of ZK protocols [Goldreich-Krawczyk88]: Assume the following gadget. The verifier V can generate “puzzles” such that: –The prover P can solve puzzles –V cannot distinguish a solution from a random element (even for puzzles that it generated). Can be constructed either assuming P is unbounded, or under computational assumptions [Feige89].

87 Take a ZK protocol and add the following: –P sends a puzzle p to V –If V gives a solution s to p then reveal the secret witness. –Else, if V returns a puzzle p' then send a solution s' for p'. Parallel composition of ZK protocols

88 Take a ZK protocol and add the following: –P sends a puzzle p to V –If V gives a solution s to p then reveal the secret witness. –Else, if V returns a puzzle p' then send a solution s' for p'. If the original protocol is ZK, then so is the modified protocol: –V will never solve the challenge puzzle p –The solution s' can be simulated by a random string (Indeed, in a stand-alone setting, the addition is harmless.) Parallel composition of ZK protocols

89 Parallel composition of ZK protocols Assume V interacts with P in two sessions concurrently. Then there is an attack: V obtains p1 from P in session 1. V gives p1 as its “p'” in session 2, and gets a solution s1. V gives s1 to P in session 1, and obtains the witness... Conclusion: Running even two instances of the same protocol in parallel is not secure...
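A toy simulation of the relay attack (hypothetical Python; the "puzzle" is a modeling fiction in which, by fiat, only the prover's code may read the stored solution):

```python
import secrets

class Puzzle:
    """Toy puzzle: the verifier can only check candidate solutions;
    reading _solution directly is reserved, by fiat, for the prover."""
    def __init__(self):
        self._solution = secrets.token_bytes(8)
    def check(self, s):
        return s == self._solution

def prover_step(session, msg, witness):
    """The added 'gadget' moves, one prover step per incoming message."""
    if msg == "start":
        session["p"] = Puzzle()
        return ("puzzle", session["p"])           # send puzzle p to V
    kind, val = msg
    if kind == "solution" and session["p"].check(val):
        return ("witness", witness)               # V solved p: reveal witness
    if kind == "puzzle":
        return ("solution", val._solution)        # prover solves V's puzzle p'

# A cheating verifier running two concurrent sessions with the prover:
w = b"secret witness"
s1, s2 = {}, {}
_, p1 = prover_step(s1, "start", w)               # session 1: receive puzzle p1
_, sol = prover_step(s2, ("puzzle", p1), w)       # session 2: offer p1 as "p'"
kind, leaked = prover_step(s1, ("solution", sol), w)
assert kind == "witness" and leaked == w          # witness extracted
```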

90 2 nd example: Key exchange and secure channels

91 Authenticated Key Exchange The goal: Two parties want to generate a common, random and secret key over an untrusted network. The main use is to set up a secure communication session: Each message is encrypted and authenticated using the generated key. AB

92 The basic security requirements Key agreement: If two honest parties locally generate keys associated with each other then the keys are identical. Key secrecy: The key must be unknown to an adversary.

93 Encryption-based protocol [based on Needham-Schroeder-Lowe, 78+95] A (knows B's encryption key EB); B (knows A's encryption key EA). –A chooses a random k-bit N_A and sends ENC_EB(N_A, A, B) to B. –If decryption and identity checks are ok, B chooses a random k-bit N_B and sends ENC_EA(N_A, N_B, A, B) to A. –If identity and nonce checks are ok, A outputs N_B and sends ENC_EB(N_B) to B. –If the nonce check is ok, B outputs N_B.
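A runnable sketch of the message flow (toy Python; enc/dec are stand-ins that merely tag the intended recipient, not real encryption):

```python
import secrets

def enc(recipient, payload):
    """Toy stand-in for public-key encryption: only the named recipient
    will read the payload (a modeling fiction, not real crypto)."""
    return {"to": recipient, "payload": payload}

def dec(me, ct):
    assert ct["to"] == me, "only the intended recipient can decrypt"
    return ct["payload"]

# "A" and "B" stand in for the parties' keys EB and EA.
NA = secrets.token_bytes(16)
msg1 = enc("B", (NA, "A", "B"))                     # A -> B

na, frm, to = dec("B", msg1)                        # B's identity checks
NB = secrets.token_bytes(16)
msg2 = enc("A", (na, NB, "A", "B"))                 # B -> A

na2, nb, frm, to = dec("A", msg2)
assert na2 == NA and (frm, to) == ("A", "B")        # A's nonce/identity checks
msg3 = enc("B", nb)                                 # A -> B; A outputs nb
assert dec("B", msg3) == NB                         # B's nonce check; B outputs NB
```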

94 The protocol satisfies the requirements: Key agreement: If A, B locally output a key with each other, then this key must be N B. (Follows from the “untamperability” of the encryption.) Key secrecy: The adversary only sees encryptions of the key, thus the key remains secret. (Follows from the secrecy of the encryption.) (Indeed, the protocol securely realizes the KE functionality under the basic definition)

95 Attack against the protocol: A and B run the exchange as above (ENC_EB(N_A, A, B), ENC_EA(N_A, N_B, A, B), ENC_EB(N_B)). Assume that A uses the generated key to encrypt a buy/sell message M, using a one-time pad: C = N_B + M.

96 Attack against the protocol: A and B run the exchange as above, and A sends the one-time-pad ciphertext C = N_B + M. The attacker knows that either C = N_B + “sell” or C = N_B + “buy”. This can be checked: send ENC_EB(C + “sell”) in place of A's last protocol message. If B accepts the exchange then M = “sell”...
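The arithmetic of the attack, with xor playing the role of "+" (toy sketch, assuming a 1-bit buy/sell encoding):

```python
import secrets

NB = secrets.randbits(128)            # the key B output in the exchange
encode = {"buy": 0, "sell": 1}        # toy 1-bit message encoding

M = "sell"
C = NB ^ encode[M]                    # A's one-time-pad ciphertext

# The attacker cannot read C, but it can submit ENC_EB(C xor encode("sell"))
# in place of A's last message and observe whether B accepts the exchange:
candidate = C ^ encode["sell"]        # equals NB exactly when M == "sell"
assert (candidate == NB) == (M == "sell")   # B's accept/reject reveals M
```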

97 The problem: The adversary uses B as an “oracle” for whether it has the right key. But the weakness comes into play only in conjunction with another protocol (which gives the adversary two possible candidates for the key...)

98 Example III: Malleability of commitment [Dolev-Dwork-Naor91] A naive auction protocol using commitments: Phase 1: Each bidder publishes a commitment to its bid. Phase 2: Bidders open their commitments.

99 Example III: Malleability of commitments [Dolev-Dwork-Naor91] A naive auction protocol using commitments: Phase 1: Each bidder publishes a commitment to its bid. Phase 2: Bidders open their commitments. Attack: In phase 1, P1 publishes C = Com(v) and the attacker A publishes a related commitment C'. In phase 2, P1 opens to v, and A opens C' to v+1, outbidding P1 by exactly one.
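A toy sketch of the mauling (the "commitment" below is deliberately trivial so the additive relation is visible; real malleable schemes hide the bid but can still admit such related commitments):

```python
import secrets

def commit(v):
    """Toy, blatantly malleable commitment: Com(v) = v + r, opened by
    revealing r.  (Real schemes hide v, but can still be malleable.)"""
    r = secrets.randbits(64)
    return v + r, r                   # (commitment, opening information)

c1, r1 = commit(100)                  # honest bidder P1 commits to bid 100

c2 = c1 + 1                           # attacker publishes a related commitment

# Phase 2: P1 opens; the attacker simply reuses P1's opening value.
v1 = c1 - r1
v2 = c2 - r1                          # = v1 + 1: the attacker outbids by one
assert (v1, v2) == (100, 101)
```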

100 The problem: The stand-alone definition does not guarantee that the committed values in different instances are independent from each other. This is a whole new security concern, that does not exist in the stand-alone model...

101 Non-malleable commitments [DDN91] Guarantee “input independence” for commitments in the case where two instances of the same commitment protocol run concurrently.

102 Non-malleable commitments [DDN91] Guarantee “input independence” for commitments in the case where two instances of the same commitment protocol run concurrently. What about multiple instances? Different protocols? Seems hopeless: Given a commitment protocol C, define the protocol C': –To commit to x, run C on x-1. Now, all the attacker has to do is to claim it uses C' and copy the commitment and decommitment messages...

103 Ways to compose protocols: [diagram]

104 Ways to compose protocols: Salient parameters Timing coordination: –Sequential, Non-concurrent, Parallel, Concurrent Input coordination: –Same input, Fixed inputs, Adaptively chosen inputs Protocol coordination: –Self composition, General composition State coordination: –Independent states, Shared state Number of instances: –Fixed, Bounded, Unbounded

105 Universal composition (Idea originates in [Micali-Rogaway91]) The idea: Generalize “subroutine substitution” of sequential algorithms to distributed protocols. Start with: Protocol ρ that uses calls to protocol φ, and a protocol π that emulates φ. Construct the composed protocol ρ^{π/φ}: Each call to φ is replaced with a call to π. Each value returned from π is treated as coming from φ.
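Operationally, universal composition is just subroutine substitution. A minimal sketch in Python (hypothetical names rho and pi; all distributed-systems detail such as identities, scheduling, and the adversary is collapsed away):

```python
import secrets

def compose(rho, pi):
    """Hypothetical sketch of the composition operation: run rho unchanged,
    but answer each of its subroutine calls by running pi instead of phi."""
    return lambda inp: rho(inp, subroutine=pi)

# Example calling protocol: ask the subroutine for a coin, then output it.
def rho(inp, subroutine):
    return subroutine(inp)

# A protocol standing in for one that realizes the "coin toss" subroutine.
pi = lambda _inp: secrets.randbits(1)

composed = compose(rho, pi)
print("output of the composed protocol:", composed(None))
```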

106 Universal Composition (single subroutine call) [diagram]

107 ➔  ➔

108 Universal Composition (many subroutine calls) [diagram]

109 Why study universal composition? Can represent any of the composition scenarios discussed in the literature, by using the appropriate “calling protocol” ρ. ➔ Enough to consider a single composition operation...

110 What should be the security requirements under protocol composition?

111 What should be the security requirements under protocol composition? Can have a list of properties to be preserved: –Correctness –Secrecy –Input independence –... ➔ But, again, how do we know we got it all...

112 A proposed security requirement: Recall the notion of protocol emulation: π emulates φ if for any adversary A there exists an adversary S such that no environment E can tell if it runs with π and A or with φ and S. Definition: π emulates φ with ρ-composable security if protocol ρ^{π/φ} emulates protocol ρ. Goal: Find a definition that provides ρ-composable security for as many calling protocols ρ as possible.

113 What are the composability properties of basic security? Intuitively, basic security should be composable: it is explicitly required that no environment can tell the difference between running π and running φ... As long as subroutine calls are non-concurrent, the intuition holds:

114 The non-concurrent composition theorem: [C. 00] If π emulates φ, and in ρ no two subroutine copies are running concurrently, then protocol ρ^{π/φ} emulates protocol ρ. Corollary: If ρ securely realizes functionality G then so does ρ^{π/φ}. (A similar composition theorem was proven in [Micali-Rogaway91] with respect to their notion, for information-theoretic function evaluation.)

115 Proof idea: –Given an adversary A against ρ^{π/φ}, construct an adversary A' against a single instance of π. –Obtain a simulator S' for A'. –Construct an adversary that interacts with ρ, given A and S'. –Show validity by contradiction...

116 What about concurrent subroutine calls? We already saw counterexamples: –Zero-Knowledge –Key Exchange –Commitment Other examples exist, even with information-theoretic security, and even with a single subroutine call... [Lindell-Lysyanskaya-Rabin02, Kushilevitz-Lindell-Rabin06]

117 Part III: Universally Composable Security

118 Why isn't basic security preserved under concurrent composition? Recall the definition...

119 Basic security (reminder): [two diagrams: a protocol execution with parties P1…P4 and adversary A, and an ideal process with parties P1…P4, functionality F, and adversary S; the environment E only provides inputs and reads outputs] π securely realizes F if: For any A there is an S such that no environment E can tell if it interacts with: –a run of π with A, or –a run with F and S.

120 Why isn't basic security preserved under concurrent composition? The “information flow” between the external environment and the adversary is limited: –Initial input –Final output Instead, in concurrent executions there is often “circular adversarial information flow” among executions: execution 1 -> execution 2 -> execution 1. These flows are not captured...

121 Universally Composable Security [C. 98,01] The main difference from the basic notion: The environment interacts with the adversary in an arbitrary way throughout the protocol execution. Also, add structure to the model to facilitate distinguishing among protocol instances in a system. (Add “a session identifier (SID)” as part of each ID.) Similar ideas appear in [Pfitzmann-Waidner00,01]

122 UC security: [two diagrams: a protocol execution with parties P1…P4 and adversary A, and an ideal process with parties P1…P4, functionality F, and adversary S; in both, the environment E interacts with the adversary throughout the execution]

123 UC security: [diagrams as on the previous slide] Protocol π UC-realizes F if: For any adversary A there exists an adversary S such that no environment E can tell whether it interacts with: –a run of π with A, or –an ideal run with F and S.

124 The universal composition theorem: [C. 01] If π UC-emulates φ then protocol ρ^{π/φ} UC-emulates protocol ρ. Corollary: If ρ UC-realizes functionality G then so does ρ^{π/φ}.

125 Proof idea: Same outline as the non-concurrent case: –Given an adversary A against ρ^{π/φ}, construct an adversary A' against a single instance of π. –Obtain a simulator S' for A'. –Construct an adversary that interacts with ρ, given A and multiple instances of S'. –Show validity by contradiction...

126 Implications of the UC theorem Modular protocol design and analysis Enabling sound formal and symbolic analysis Representing communication models as “helper” ideal functionalities

127 Questions: Are known protocols UC-secure? (Do these protocols realize the ideal functionalities that represent the corresponding tasks?) How to design UC-secure protocols?

128 Positive results: secure communication Can capture: –Message authentication –Entity authentication –Secure communication –Key exchange In a way that “accepts reasonable protocols” and matches (roughly) known definitions [C-Krawczyk*]. (MANY details swept under the carpet...)

129 Positive results: Signatures and encryption Can also capture: –CCA-secure encryption –EU-CMA signatures

130 Positive results: General functionalities with honest majority Thm: If the corrupted parties are a minority then can realize any functionality. (e.g. use the protocols of [BenOr-Goldwasser-Wigderson88, Rabin-BenOr89,Canetti-Feige-Goldreich-Naor96] ).

131 What about two-party functionalities? Known protocols do not work (“black-box simulation with rewinding” cannot be used). Many interesting functionalities (commitment, ZK, coin tossing, Oblivious Transfer, etc.) cannot be realized at all, even if authenticated communication is ideally given. Impossibility holds for large classes of functionalities [C-Kushilevitz-Lindell03, Datta et al. 06]. Same for general multi-party computation without honest majority.

132 Theorem: There exists no two-party protocol that UC-realizes F_com, even given authenticated communication. Proof Idea: Let P be a protocol that realizes F_com in the plain model, and let S be an ideal-process adversary for P, for the case that the committer is corrupted. Recall that S has to explicitly give the committed bit to F_com before the opening phase begins. This means that S must be able to somehow “extract” the committed value b from the corrupted committer. However, in the UC framework S has no advantage over the real-life verifier. Thus, a corrupted verifier can essentially run S and extract the committed bit b from an honest committer, before the opening phase begins, in contradiction to the secrecy of the commitment.

133 Is UC-security too strong? Can we have a weaker notion of security that would still meet our notion of composable security for any calling protocol, but will allow realization of, say, F_com? Note: In ρ-composable security we only required that ρ^{π/φ} emulates ρ, whereas here we get that π UC-emulates φ.

134 Is UC-security too strong? Can we have a weaker notion of security that would still meet our notion of composable security for any calling protocol, but will allow realization of, say, F_com? Note: In ρ-composable security we only required that ρ^{π/φ} emulates ρ, whereas here we get that π UC-emulates φ. But: Claim: Let π and φ be protocols such that ρ^{π/φ} emulates ρ for any protocol ρ. Then π UC-emulates φ. (Similar results in [Lindell 04].) So, if we relax UC-security, we either give up on composability or on basic security.

135 Getting around the impossibility... Main approaches: Relax the formulation of the functionality (good for specific tasks...) Add set-up assumptions Relax the notion of security

136 Adding set-up assumptions The idea: Assume that the parties have access to some initial information that is generated “in a trusted way”. (Quite common in cryptography, e.g. PKI for secure communication.) Formally, add to the model an ideal functionality that provides the appropriate service to the parties.

137 The common reference string set-up The idea: All parties have access to a string that is drawn from a given distribution “in a trusted way”. Formalization: Functionality F_CRS that simply chooses a string c with the specified distribution and hands it to all parties and the adversary.
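As a sketch, F_CRS is essentially a one-line functionality (hypothetical Python rendering, parameterized by the sampling distribution):

```python
import secrets

class Fcrs:
    """Hypothetical sketch of F_CRS: sample one string from the specified
    distribution, then hand that same string to every party and the adversary."""
    def __init__(self, distribution=lambda: secrets.token_bytes(32)):
        self.crs = distribution()
    def get(self, requester_id):
        return self.crs               # everyone sees the identical string

f = Fcrs()
assert f.get("P1") == f.get("P2") == f.get("adversary")
```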

138 Feasibility in the CRS model Can realize commitment, ZK [C-Fischlin 01] Can have non-interactive ZK [C-Lindell-Ostrovsky-Sahai02] Can realize any functionality with any number of faults [CLOS02, based on GMW87]

139 Drawbacks of the CRS model Requires putting much trust in a single construct (entity). The modeling is problematic: Implicitly assumes that the CRS is used only by a single protocol instance. One way to resolve: Design protocols in a specific way so that multiple instances can use the same CRS (use a “UC with joint state” theorem [C-Rabin03]). But what about composition with arbitrary other protocols that use the same CRS? (A good example: Deniability)

140 The CRS model: Alternative formulation To capture the fact that the CRS can be used by other protocols, let F_CRS give the chosen string also directly to the environment. Now can demonstrate that deniability is not a problem... But: Thm: The impossibility result for commitment holds even in the (modified) CRS model... [C-Dodis-Pass-Walfish06] Are we back to square one?

141 An alternative set-up assumption: The “key registration model” Each party registers a (secret, public) key pair with a “PKI authority”. The authority makes public keys available to anyone, and keeps secret keys to itself. Can relax: –Parties can copy keys of others –Honest parties need not know their secret keys Trust can be “distributed”

142 Feasibility in the KR model: Can reproduce the general feasibility results [Barak-C-Nielsen-Pass04] Can show that the results hold even if the same setup is used by arbitrary other protocols [CDPW06]

143 Another set-up assumption: Signature cards [Hofheinz, Müller-Quade, Unruh 06]

144 Relaxing the notion of security [Prabhakaran-Sahai04, Barak-Sahai05, Malkin-Moriarty-Yakovenko06, ...] The basic idea: Allow the adversary in the ideal model to run in super-polynomial time. Problems: Composability may no longer hold... But: can fine-tune the extra computational power so that a “UC-style theorem” holds. Even basic security may no longer hold... But: For most interesting ideal functionalities, the ideal process guarantees security even when the adversary is unbounded... Still, a weaker requirement for the “environment”...

145 Connections with formal methods for protocol analysis A popular method for analyzing cryptographic protocols using formal methods: Model the cryptographic operations (encryption, signature, etc) as “symbolic operations” that assume “perfect cryptography”. A quintessential example: The [Dolev-Yao81] modeling of public-key encryption. Many follow-ups exist.

146 The “Dolev-Yao style” formal modeling Main advantage: Analysis is much simpler. In particular, it is amenable to automation (e.g., via tools for automated program verification). Main drawback: Lack of soundness. There is no security guarantee once the symbolic operations are replaced with real cryptographic protocols.

147 Using UC analysis to obtain cryptographically sound formal modeling The main idea [PW00,C01,Backes-PW 03,C-Herzog04]: Formalize ideal functionalities that capture the primitives in use (e.g., encryption, signatures). Translate the ideal functionalities to symbolic protocol moves. Use the universal composition theorem to deduce that properties of the symbolic protocol are retained by the concrete protocol.

148 Analysis strategy [diagram: Concrete protocol → (UC security) → Symbolic protocol → Symbolic property]

149 Analysis strategy (expanded) [CH04] [diagram: concrete protocol, single-instance setting, symbolic single-instance protocol, symbolic property; transitions labeled “UC w/ joint state”, “ideal cryptography”, “UC concrete security”, “security using UC encryption”, “security for multiple instances”, “simplify”, and “UC theorem”]

150 Summary Reviewed a basic and general notion of security for cryptographic protocols. Motivated the need for security notions that guarantee security under composition. Showed that the basic notion guarantees secure composability in the non-concurrent case, but not in the concurrent case. Reviewed a general notion that guarantees security- preserving concurrent composition, feasibility results and potential relaxations for that notion. Explored connections with formal analysis methods.

151 Further Research Find better notions of security for cryptographic protocols: –More relaxed, while guaranteeing the desired properties. –Easier to work with. Find better set-up assumptions Find better proof techniques: –Use modularity –Use automated tools Make security analysis of protocols ubiquitous.

152 Final Word Security analysis of protocols is only as good as the security notion used --- and subtleties abound. It is imperative to use a security notion that is appropriate for the relevant setting.

153

