Foundations of Cryptography Lecture 2 Lecturer: Moni Naor
Recap of last week’s lecture
Key idea of cryptography: use the intractability of some problems to construct secure systems
The identification problem
Shannon entropy and min-entropy
Good source on information theory: T. Cover and J. A. Thomas, Elements of Information Theory
One-way functions
Are one-way functions essential to the two guards password problem?
Precise definition of security:
– for every probabilistic polynomial-time algorithm A controlling Eve and Charlie,
– every polynomial p(·),
– and all sufficiently large n:
Prob[Bob moves | Alice does not approve] ≤ 1/p(n)
Recall the observation: what Bob and Charlie received in the setup phase might as well be public
Claim: we can get rid of interaction:
– given an interactive identification protocol, it is possible to construct a noninteractive one.
In the new protocol:
– Alice’ sends Bob’ the random bits Alice used to generate the setup information
– Bob’ simulates the conversation between Alice and Bob in the original protocol and accepts only if the simulated Bob accepts.
The probability of cheating is the same
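A minimal sketch of the resulting noninteractive check, assuming hypothetical callbacks setup(r) and simulate_original_protocol(r, y_bob) that stand in for the original protocol’s setup phase and an honest replay of the conversation (neither interface is specified in the lecture):

```python
def noninteractive_identification(setup, simulate_original_protocol, random_bits):
    """Sketch of the noninteractive protocol built from an interactive one.

    setup(r) -> (y_bob, y_charlie): Alice's setup-phase mapping (hypothetical).
    simulate_original_protocol(r, y_bob) -> bool: replays the interactive
    conversation between Alice (using randomness r) and Bob (using y_bob)
    and returns the simulated Bob's verdict (hypothetical).
    """
    # Setup phase is unchanged: Alice draws r and hands out y_bob, y_charlie.
    r = random_bits
    y_bob, _y_charlie = setup(r)

    # Identification phase: Alice' sends r itself.  Bob' replays the whole
    # original conversation in his head and accepts iff the simulated Bob
    # accepts -- no interaction is needed, and the cheating probability is unchanged.
    return simulate_original_protocol(r, y_bob)
```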
One-way functions are essential to the two guards password problem
Are we done? Given a noninteractive identification protocol, we want to define a one-way function
Define f(r) as the mapping Alice computes in the setup phase from her random bits r to the information y given to Bob and Charlie
Problem: the function f(r) is not necessarily one-way…
– There can be unlikely ways to generate y, and these can be exploited to invert.
– Example: Alice chooses x, x’ ∈R {0,1}^n; if x’ = 0^n set y = x, otherwise set y = f(x)
– The protocol is still secure, but with probability 1/2^n it is not complete
– The resulting function f(x, x’) is easy to invert: given y ∈ {0,1}^n, output the inverse (y, 0^n)
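A toy sketch of this counterexample, with SHA-256 standing in (purely hypothetically) for the underlying mapping and n fixed to 32 bytes; the point is only that the trivial preimage (y, 0^n) always verifies:

```python
import hashlib
import os

N = 32  # block length in bytes (stands in for n bits)

def f0(x: bytes) -> bytes:
    """Hypothetical stand-in for the underlying mapping used by the protocol."""
    return hashlib.sha256(x).digest()

def padded_f(x: bytes, x_prime: bytes) -> bytes:
    # The contrived setup function: on the rare event x' = 0^n it leaks x
    # in the clear, otherwise it behaves like f0.
    if x_prime == bytes(N):
        return x
    return f0(x)

def invert_padded_f(y: bytes):
    # Easy inversion: (y, 0^n) is always a preimage, since padded_f(y, 0^n) = y.
    return y, bytes(N)

# The "inverse" always verifies:
x, x_prime = os.urandom(N), os.urandom(N)
y = padded_f(x, x_prime)
assert padded_f(*invert_padded_f(y)) == y
```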
One-way functions are essential to the two guards password problem…
However: it is possible to estimate the probability that Bob accepts on a given string from Alice
Second attempt: define f(r) as
– the mapping Alice computes in the setup phase from her random bits r to the information given to Bob and Charlie,
– plus a bit indicating whether the probability that Bob accepts given r is greater than 2/3
Theorem: the two guards password problem has a solution if and only if one-way functions exist
Examples of One-way Functions
Examples of hard problems:
– Subset sum
– Discrete log
– Factoring (numbers, polynomials) into prime components
How do we get a one-way function out of them?
Subset Sum
Subset sum problem: given
– n numbers 0 ≤ a_1, a_2, …, a_n < 2^m
– a target sum T
– find a subset S ⊆ {1,...,n} such that ∑_{i∈S} a_i = T
(n,m)-subset sum assumption: for uniformly chosen
– a_1, a_2, …, a_n ∈R {0, …, 2^m − 1} and S ⊆ {1,...,n},
– for any probabilistic polynomial-time algorithm, the probability of finding S’ ⊆ {1,...,n} such that ∑_{i∈S} a_i = ∑_{i∈S’} a_i (mod 2^m) is negligible, where the probability is over the random choice of the a_i’s, of S, and the internal coin flips of the algorithm
Subset sum one-way function f: {0,1}^{mn+n} → {0,1}^{mn+m}
f(a_1, a_2, …, a_n, b_1, b_2, …, b_n) = (a_1, a_2, …, a_n, ∑_{i=1}^{n} b_i·a_i mod 2^m)
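A small sketch of evaluating the subset sum candidate one-way function (the parameter choice n = m = 64 is only for illustration):

```python
import random

def subset_sum_f(a, b, m):
    """Evaluate f(a_1,...,a_n, b_1,...,b_n) = (a_1,...,a_n, sum_{b_i=1} a_i mod 2^m).

    a: list of n integers in {0, ..., 2^m - 1}
    b: list of n selection bits encoding the subset S = {i : b_i = 1}
    Note the a_i's are output in the clear; only the subset is hidden.
    """
    t = sum(ai for ai, bi in zip(a, b) if bi) % (1 << m)
    return tuple(a), t

# Example usage (n = m is believed to be the hardest setting):
n = m = 64
a = [random.randrange(1 << m) for _ in range(n)]
b = [random.randrange(2) for _ in range(n)]
print(subset_sum_f(a, b, m))
```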
Homework
Show that if the subset sum assumption holds, then the subset sum function is one-way
Show that the hardest case is when n = m:
– if there is some function g such that the (n, g(n))-subset sum assumption holds for m = g(n), then the (n, n)-subset sum assumption holds
Show a function f such that
– if f is polynomial-time invertible on all inputs, then P = NP
– f is not one-way
Discrete Log Problem
Let G be a group and g an element of G. Let y = g^z, and let x be the minimal non-negative integer satisfying y = g^x; x is called the discrete log of y to base g.
Example: y = g^x mod p in the multiplicative group of Z_p
In general: exponentiation is easy via repeated squaring
– consider the binary representation of the exponent
What about discrete log?
– If it is difficult, f(g, x) = (g, g^x) is a one-way function
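The easy direction as code: a short sketch of exponentiation by repeated squaring; the one-wayness is the assumption that the reverse direction (recovering x from g^x) is hard.

```python
def mod_exp(g, x, p):
    """Compute g^x mod p with repeated squaring, scanning the binary
    representation of x; only O(log x) multiplications are needed."""
    result = 1
    base = g % p
    while x > 0:
        if x & 1:                 # current bit of x is 1: multiply it in
            result = (result * base) % p
        base = (base * base) % p  # square for the next bit
        x >>= 1
    return result

# Sanity check against Python's built-in modular pow:
assert mod_exp(5, 117, 19) == pow(5, 117, 19)
```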
Integer Factoring
Consider f(x, y) = x·y
Easy to compute
Is it one-way?
– No: if f(x, y) is even we can output the inverse (f(x,y)/2, 2)
If factoring a number into prime factors is hard:
– specifically, given N = P·Q, the product of two random large (n-bit) primes, it is hard to factor N
– then f is somewhat hard to invert: by the density of primes, a non-negligible fraction (~1/n^2) of the inputs are pairs of primes
– hence f is a weak one-way function
Alternatively:
– let g(r) be a function mapping random bits into random primes;
– the function f(r_1, r_2) = g(r_1)·g(r_2) is one-way
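A sketch of why plain multiplication is at best weakly one-way: a random product is even with probability 3/4, and then a preimage can be written down directly (the hard instances are products of two large primes):

```python
def mult(x: int, y: int) -> int:
    # The candidate function f(x, y) = x * y: easy to compute.
    return x * y

def trivial_invert(z: int):
    """Return some preimage of z under mult whenever z is even.

    For random n-bit x, y the product is even with probability 3/4,
    so this trivial attack already inverts f on most inputs; only the
    roughly 1/n^2 fraction of prime*prime inputs is (conjecturally) hard.
    """
    if z % 2 == 0 and z >= 2:
        return 2, z // 2
    return None  # odd products are not handled by this trivial attack

assert mult(*trivial_invert(mult(6, 35))) == 6 * 35
```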
Weak One-way Function
A function f: {0,1}^n → {0,1}^n is called a weak one-way function if
– f is a polynomial-time computable function
– there exists a polynomial p(·) such that for every probabilistic polynomial-time algorithm A and all sufficiently large n:
Prob[A(f(x)) ∈ f^{-1}(f(x))] ≤ 1 − 1/p(n)
where x is chosen uniformly in {0,1}^n and the probability is also over the internal coin flips of A
Homework: weak one-way functions exist if strong ones exist
Show that if strong one-way functions exist, then there exists a function which is a weak one-way function but not a strong one
What about the other direction?
Given a function f that is guaranteed to be weak one-way,
– let p(n) be such that Prob[A(f(x)) ∈ f^{-1}(f(x))] ≤ 1 − 1/p(n),
– can we construct a function g that is (strong) one-way?
An instance of a hardness amplification problem
Simple idea: repetition. For some polynomial q(n) define
g(x_1, x_2, …, x_{q(n)}) = (f(x_1), f(x_2), …, f(x_{q(n)}))
To invert g one needs to succeed in inverting f in all q(n) places
– if q(n) = p^2(n) this seems unlikely: (1 − 1/p(n))^{p^2(n)} ≈ e^{−p(n)}
– but how do we show it? The sequential-repetition intuition is not a proof.
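A sketch of the repetition (direct product) construction, with SHA-256 as a purely hypothetical stand-in for the given weak one-way function f:

```python
import hashlib
import os

def f_weak(x: bytes) -> bytes:
    """Hypothetical stand-in for the given weak one-way function f."""
    return hashlib.sha256(x).digest()

def g(xs):
    """g(x_1,...,x_q) = (f(x_1),...,f(x_q)): apply f independently to each
    block, so inverting g requires inverting f at every one of the q places."""
    return tuple(f_weak(x) for x in xs)

# Example with q = 8 independent 32-byte blocks:
blocks = [os.urandom(32) for _ in range(8)]
image = g(blocks)
```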
Want: inverting g with low probability implies inverting f with high probability
Given a machine A that inverts g, we want a machine A’
– operating in similar time bounds
– that inverts f with high probability
Idea: given y = f(x), plug it into some place in g and generate the rest of the locations at random:
z = (y, f(x_2), …, f(x_{q(n)}))
Ask machine A to invert g at the point z
The probability of success should be at least (in fact exactly) A’s probability of inverting g at a random point
Once is not enough. How to amplify?
– Repeat while keeping y fixed
– Put y at a random position (or sort the inputs to g)
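A sketch of one such attempt: plant y at a random position of a fresh g-image and hand it to a hypothetical inverter A for g (A, f_weak and the block length are illustrative assumptions, not part of the lecture):

```python
import hashlib
import os
import random

def f_weak(x: bytes) -> bytes:
    """Hypothetical stand-in for the weak one-way function f."""
    return hashlib.sha256(x).digest()

def one_attempt(y: bytes, A, q: int, block_len: int = 32):
    """One attempt at inverting f on y via a hypothetical inverter A for g.

    Build z by planting y at a random position and filling the other q-1
    positions with fresh f-images, ask A for a preimage of z, and check
    the planted slot.  Returns a preimage of y under f, or None.
    """
    pos = random.randrange(q)
    z = [f_weak(os.urandom(block_len)) for _ in range(q)]
    z[pos] = y
    candidate = A(tuple(z))          # A may fail and return None
    if candidate is not None and f_weak(candidate[pos]) == y:
        return candidate[pos]
    return None
```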
Proof of Amplification for Repetition of Two
Concentrate on repetition of two: g(x_1, x_2) = (f(x_1), f(x_2))
Goal: show that the probability of inverting g is roughly the square of the probability of inverting f, just as it would be for sequential repetition
Claim: Let α(n) be a function that for some polynomial p(n) satisfies 1/p(n) ≤ α(n) ≤ 1 − 1/p(n), and let ε(n) be any inverse-polynomial function.
Suppose that for every polynomial-time A and sufficiently large n
Prob[A(f(x)) ∈ f^{-1}(f(x))] ≤ α(n)
Then for every polynomial-time B and sufficiently large n
Prob[B(g(x_1, x_2)) ∈ g^{-1}(g(x_1, x_2))] ≤ α^2(n) + ε(n)
Proof of Amplification for Repetition of Two
Suppose not; then given an algorithm B for g that succeeds with probability better than α^2 + ε, construct the following inversion algorithm A(y) for f:
– Repeat t times (the “inner loop”):
   – Choose x’ at random and compute y’ = f(x’)
   – Run B(y, y’) and check the result
   – If correct, halt with success
– Output failure
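A sketch of the algorithm A(y) above, where B is a hypothetical inverter for g(x_1, x_2) = (f(x_1), f(x_2)) returning a candidate pair or None, and SHA-256 again stands in for f:

```python
import hashlib
import os

def f(x: bytes) -> bytes:
    """Hypothetical stand-in for the weak one-way function f."""
    return hashlib.sha256(x).digest()

def invert_f(y: bytes, B, t: int, block_len: int = 32):
    """Inversion algorithm A(y) built from a (hypothetical) inverter B for g."""
    for _ in range(t):                   # the inner loop, repeated t times
        x_prime = os.urandom(block_len)  # choose x' at random
        y_prime = f(x_prime)             # compute y' = f(x')
        candidate = B((y, y_prime))      # run B on (y, y')
        if candidate is not None:
            x1, _x2 = candidate
            if f(x1) == y:               # check the result
                return x1                # halt with success
    return None                          # output failure
```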
Probability of Success
Define S = { y = f(x) | Prob[inner loop successful | y] > β }
Since the choices of x’ are independent,
Prob[A succeeds | y ∈ S] > 1 − (1 − β)^t
Taking t = n/β means that when y ∈ S, A will almost surely invert it (since (1 − β)^{n/β} ≤ e^{−n})
Hence we want to show that Prob[y ∈ S] > α(n)
The success of B
Fix the random bits of B. Define P = {(y_1, y_2) | B succeeds on (y_1, y_2)}
[Figure: the square of all pairs (y_1, y_2), with the success region P marked]
P = (P ⋂ {(y_1, y_2) | y_1, y_2 ∈ S}) ⋃ (P ⋂ {(y_1, y_2) | y_1 ∉ S}) ⋃ (P ⋂ {(y_1, y_2) | y_2 ∉ S})
S is the only success…
But Prob[B(y_1, y_2) ∈ g^{-1}(y_1, y_2) | y_1 ∉ S] ≤ β
and similarly Prob[B(y_1, y_2) ∈ g^{-1}(y_1, y_2) | y_2 ∉ S] ≤ β
so Prob[(y_1, y_2) ∈ P and y_1, y_2 ∈ S] ≥ Prob[(y_1, y_2) ∈ P] − 2β ≥ α^2 + ε − 2β
Setting β = ε/3 we have
Prob[(y_1, y_2) ∈ P and y_1, y_2 ∈ S] ≥ α^2 + ε/3
Contradiction
But Prob[(y_1, y_2) ∈ P and y_1, y_2 ∈ S] ≤ Prob[y_1 ∈ S]·Prob[y_2 ∈ S] = Prob^2[y ∈ S]
So Prob[y ∈ S] ≥ √(α^2 + ε/3) > α
Hence A inverts f with probability greater than α(n), contradicting the assumption.