1 Cryptography and Privacy Preserving Operations Lecture 2: Pseudo-randomness Lecturer: Moni Naor Weizmann Institute of Science

2 Recap of Lecture 1 Key idea of cryptography: use computational intractability to your advantage. One-way functions are necessary and sufficient to solve the two-guard identification problem –Notion of reduction between cryptographic primitives. Amplification of weak one-way functions –Things are a bit more complex in the computational world (than in the information-theoretic one). Encryption: easy when you share very long strings. Started with the notion of pseudo-randomness.

3 Is there an ultimate one-way function? If f₁:{0,1}^* → {0,1}^* and f₂:{0,1}^* → {0,1}^* are guaranteed to: –Be polynomial time computable –At least one of them is one-way, then we can construct a function g:{0,1}^* → {0,1}^* which is one-way: g(x₁, x₂) = (f₁(x₁), f₂(x₂)). If a 5n² time one-way function is guaranteed to exist, can construct an O(n² log n) time one-way function g: –Idea: enumerate Turing machines and make sure each runs at most 5n² steps: g(x₁, x₂, …, x_log n) = M₁(x₁), M₂(x₂), …, M_log n(x_log n). If a one-way function is guaranteed to exist, then there exists a 5n² time one-way function: –Idea: concentrate on the prefix (1/p(n))

4 Conclusions Be careful what you wish for. Problems with the resulting one-way function: –Cannot learn about behavior on large inputs from small inputs –The whole rationale of considering asymptotic results is eroded. Construction does not work for non-uniform one-way functions

5 The Encryption problem: Alice wants to send a message m ∈ {0,1}^n to Bob –Set-up phase is secret. They want to prevent Eve from learning anything about the message.

6 The encryption problem Relevant both in the shared key and in the public key setting Want to use many times Also add authentication… Other disruptions by Eve

7 What does `learn’ mean? Whatever knowledge Eve has of m should remain the same: –Probability of guessing m (min-entropy of m) –Probability of guessing whether m is m₀ or m₁ –Probability of computing some function f of m. Ideally: the message sent is independent of the message m –Implies all the above. Shannon: achievable only if the entropy of the shared secret is at least as large as the entropy of the message m –If there is no special knowledge about m, then this is |m|. Achievable with the one-time pad: –Let r ∈_R {0,1}^n –Think of r and m as elements in a group –To encrypt m send r+m –To decrypt z compute m=z−r
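The one-time pad above can be sketched in a few lines of Python, with XOR playing the role of the group operation on bit strings (function names here are illustrative, not from the lecture):

```python
import secrets

def otp_encrypt(m: bytes, r: bytes) -> bytes:
    # "send r + m": XOR is the group operation on bit strings
    assert len(r) == len(m)
    return bytes(ri ^ mi for ri, mi in zip(r, m))

def otp_decrypt(z: bytes, r: bytes) -> bytes:
    # "m = z - r": in the XOR group every element is its own inverse
    return otp_encrypt(z, r)

r = secrets.token_bytes(16)   # the shared random pad
m = b"attack at dawn!!"       # a 16-byte message
z = otp_encrypt(m, r)
assert otp_decrypt(z, r) == m
```

Note that each pad r must be used once only: reusing it leaks the XOR of the two messages.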

8 Pseudo-random generators Would like to stretch a short secret (seed) into a long one The resulting long string should be usable in any case where a long string is needed –In particular: as a one-time pad Important notion: Indistinguishability Two probability distributions that cannot be distinguished –Statistical indistinguishability: distances between probability distributions –New notion: computational indistinguishability

9 Computational Indistinguishability Definition: two sequences of distributions {D_n} and {D′_n} on {0,1}^n are computationally indistinguishable if for every polynomial p(n) and sufficiently large n, for every probabilistic polynomial time adversary A that receives input y ∈ {0,1}^n and tries to decide whether y was generated by D_n or D′_n: |Prob[A=‘0’ | D_n] − Prob[A=‘0’ | D′_n]| < 1/p(n). Without the restriction to probabilistic polynomial time tests: equivalent to the variation distance being negligible, ∑_{β ∈ {0,1}^n} |Prob[D_n = β] − Prob[D′_n = β]| < 1/p(n)
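For intuition, the variation distance in the last line can be estimated from samples; the helper below is a small sketch (the 1/2 factor is the common normalization, which the slide's sum omits):

```python
from collections import Counter

def variation_distance(samples_a, samples_b):
    # empirical (1/2) * sum over beta of |Pr_A[beta] - Pr_B[beta]|
    pa, pb = Counter(samples_a), Counter(samples_b)
    na, nb = len(samples_a), len(samples_b)
    support = set(pa) | set(pb)
    return 0.5 * sum(abs(pa[s] / na - pb[s] / nb) for s in support)

# identical empirical distributions: distance 0; disjoint supports: distance 1
assert variation_distance(["00", "11"], ["11", "00"]) == 0.0
assert variation_distance(["00"], ["11"]) == 1.0
```

Computational indistinguishability is the weaker requirement: the distributions may be far apart in this metric, yet no efficient test notices.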

10 Pseudo-random generators Definition: a function g:{0,1}^* → {0,1}^* is said to be a (cryptographic) pseudo-random generator if It is polynomial time computable It stretches the input: |g(x)|>|x| –denote by ℓ(n) the length of the output on inputs of length n If the input is random, the output is indistinguishable from random: for any probabilistic polynomial time adversary A that receives input y of length ℓ(n) and tries to decide whether y=g(x) or y is a random string from {0,1}^ℓ(n), for any polynomial p(n) and sufficiently large n |Prob[A=`rand’ | y=g(x)] − Prob[A=`rand’ | y ∈_R {0,1}^ℓ(n)]| < 1/p(n). Important issues: Why is the adversary bounded by polynomial time? Why is the indistinguishability not perfect?

11 Pseudo-random generators Definition: a function g:{0,1}^* → {0,1}^* is said to be a (cryptographic) pseudo-random generator if It is polynomial time computable It stretches the input: |g(x)|>|x| –denote by ℓ(n) the length of the output on inputs of length n If the input (seed) is random, then the output is indistinguishable from random: for any probabilistic polynomial time adversary A that receives input y of length ℓ(n) and tries to decide whether y=g(x) or y is a random string from {0,1}^ℓ(n), for any polynomial p(n) and sufficiently large n |Prob[A=`rand’ | y=g(x)] − Prob[A=`rand’ | y ∈_R {0,1}^ℓ(n)]| < 1/p(n). Want to use the output of a pseudo-random generator whenever long random strings are used –Especially encryption; have not defined the desired properties yet. “Anyone who considers arithmetical methods of producing random numbers is, of course, in a state of sin.” J. von Neumann

12 Important issues Why is the adversary bounded by polynomial time? Why is the indistinguishability not perfect?

13 Construction of pseudo-random generators Idea: given a one-way function there is a hard decision problem hidden there If balanced enough: looks random Such a problem is a hardcore predicate Possibilities: –Last bit –First bit –Inner product

14 Hardcore Predicate Definition: let f:{0,1}^* → {0,1}^* be a function. We say that h:{0,1}^* → {0,1} is a hardcore predicate for f if It is polynomial time computable For any probabilistic polynomial time adversary A that receives input y=f(x) and tries to compute h(x), for any polynomial p(n) and sufficiently large n |Prob[A(y)=h(x)] − 1/2| < 1/p(n), where the probability is over the choice of y and the random coins of A. Sources of hardcoreness: –not enough information about x: not of interest for generating pseudo-randomness –enough information about x, but hard to compute it

15 Exercises Assume one-way functions exist Show that the last bit/first bit are not necessarily hardcore predicates Generalization: show that for any fixed function h:{0,1} * → {0,1} there is a one-way function f:{0,1} * → {0,1} * such that h is not a hardcore predicate of f Show a one-way function f such that given y=f(x) each input bit of x can be guessed with probability at least 3/4

16 Single bit expansion Let f:{0,1}^n → {0,1}^n be a one-way permutation Let h:{0,1}^n → {0,1} be a hardcore predicate for f Consider g:{0,1}^n → {0,1}^{n+1} where g(x)=(f(x), h(x)) Claim: g is a pseudo-random generator Proof: can use a distinguisher for g to guess h(x), i.e. to distinguish (f(x), h(x)) from (f(x), 1−h(x))
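A toy sketch of the construction (the permutation and predicate below are stand-ins chosen for readability; at this size nothing is actually one-way or hardcore):

```python
P = 257  # small prime, so multiplication by a unit permutes Z_P

def toy_perm(x: int) -> int:
    # stand-in for the one-way permutation f (NOT one-way at this size)
    return (3 * x) % P

def toy_predicate(x: int) -> int:
    # stand-in for the hardcore predicate h
    return x & 1

def g(x: int):
    # single-bit expansion: g(x) = (f(x), h(x)), n bits -> n+1 bits
    return toy_perm(x), toy_predicate(x)
```

The point of the claim is that the extra bit h(x) looks random even given f(x), because guessing it from f(x) would contradict hardcoreness.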

17 Hardcore Predicate With Public Information Definition: let f:{0,1}^* → {0,1}^* be a function. We say that h:{0,1}^* × {0,1}^* → {0,1} is a hardcore predicate for f if h(x,r) is polynomial time computable For any probabilistic polynomial time adversary A that receives input y=f(x) and public randomness r and tries to compute h(x,r), for any polynomial p(n) and sufficiently large n |Prob[A(y,r)=h(x,r)] − 1/2| < 1/p(n), where the probability is over the choice of y, of r, and the random coins of A. Alternative view: can think of the public randomness as modifying the one-way function f: f′(x,r)=(f(x),r)

18 Example: weak hardcore predicate Let h(x,i)=x_i, i.e. h selects the i-th bit of x For any one-way function f, no polynomial time algorithm A(y,i) can have probability of success better than 1−1/2n of computing h(x,i) Exercise: let c:{0,1}^* → {0,1}^* be a good error correcting code: –|c(x)| is O(|x|) –the distance between any two codewords c(x) and c(x′) is a constant fraction of |c(x)| –it is possible to correct in polynomial time errors in a constant fraction of |c(x)|. Show that for h(x,i)=c(x)_i and any one-way function f, no polynomial time algorithm A(y,i) can have probability of success better than a constant of computing h(x,i)

19 Inner Product Hardcore bit The inner product bit: choose r ∈_R {0,1}^n and let h(x,r) = r∙x = ∑ x_i r_i mod 2 Theorem [Goldreich-Levin]: for any one-way function the inner product is a hardcore predicate Proof structure: Algorithm A′ for inverting f –There are many x’s for which A returns a correct answer (r∙x) on ½+ε of the r’s Reconstruction algorithm R: take an algorithm A that guesses h(x,r) correctly with probability ½+ε over the r’s and output a list of candidates for x – the main step! –No use of the y info by R (except feeding it to A) Choose from the list the/an x such that f(x)=y
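The Goldreich-Levin predicate itself is one line on machine words, and the reconstruction argument repeatedly uses its linearity in r, checked here as well:

```python
def gl_bit(x: int, r: int) -> int:
    # h(x, r) = <x, r> mod 2: parity of the bitwise AND of x and r
    return bin(x & r).count("1") % 2

# linearity in r: h(x, r1 xor r2) = h(x, r1) xor h(x, r2)
x, r1, r2 = 0b1101, 0b1010, 0b0110
assert gl_bit(x, r1 ^ r2) == gl_bit(x, r1) ^ gl_bit(x, r2)
```

This linearity is what lets the reconstruction algorithm combine guessed values z_S for subset sums r_S of the query vectors.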

20 Why list? Cannot have a unique answer! Suppose A has two candidates x and x’ –On query r it returns at `random’ either r ∙x or r ∙x’ Prob[A(y,r) = r ∙x ] =½ +½Prob[r∙x = r∙x’] = ¾

21 A: algorithm for guessing r∙x. R: reconstruction algorithm that outputs a list of candidates for x. A′: algorithm for inverting f on a given y. [Diagram: on input y, R queries A on (y,r₁), (y,r₂), …, (y,r_k), receiving guesses z₁=r₁∙x, z₂=r₂∙x, …, z_k=r_k∙x; from z₁, z₂, …, z_k it outputs candidates x₁, x₂, …, x_k; A′ checks whether f(x_i)=y and outputs the x_i for which this holds.]

22 Warm-up (1) If A returns a correct answer on 1−1/2n of the r’s: Choose r₁, r₂, …, r_n ∈_R {0,1}^n Run A(y,r₁), A(y,r₂), …, A(y,r_n) –Denote the responses z₁, z₂, …, z_n If r₁, r₂, …, r_n are linearly independent, then there is a unique x satisfying r_i∙x = z_i for all 1 ≤ i ≤ n Prob[z_i = A(y,r_i) = r_i∙x] ≥ 1−1/2n –Therefore the probability that all the z_i’s are correct is at least ½ –Do we need complete independence of the r_i’s? `one-wise’ independence is sufficient: Can choose r ∈_R {0,1}^n and set r_i = r+e_i, where e_i = 0^{i−1}10^{n−i} All the r_i’s are linearly independent Each one is uniform in {0,1}^n

23 Warm-up (2) If A returns a correct answer on 3/4+ε of the r’s, can amplify the probability of success! Given any r ∈ {0,1}^n, procedure A′(y,r): Repeat for j=1, 2, … –Choose r′ ∈_R {0,1}^n –Run A(y,r+r′) and A(y,r′), denote the sum of the responses by z_j Output the majority of the z_j’s Analysis: Pr[z_j = r∙x] ≥ Pr[A(y,r′)=r′∙x ∧ A(y,r+r′)=(r+r′)∙x] ≥ ½+2ε –Does not work for ½+ε, since success on r′ and r+r′ is not independent Each one of the events ‘z_j = r∙x’ is independent of the others Therefore by taking sufficiently many j’s can amplify to a value as close to 1 as we wish –Need roughly 1/ε² examples Idea for improvement: fix a few of the r′’s
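The amplification step can be simulated: below, a noisy oracle answers r∙x correctly with probability 0.9 (above the 3/4+ε threshold), and the majority over pairs A(r⊕r′), A(r′) recovers the bit. This is a simulation sketch, not part of the lecture's proof:

```python
import random

def make_noisy_oracle(x: int, p_correct: float, rng: random.Random):
    # simulated A: answers the inner product r.x correctly with prob p_correct
    def A(r: int) -> int:
        bit = bin(x & r).count("1") % 2
        return bit if rng.random() < p_correct else 1 - bit
    return A

def amplified_guess(A, r: int, n_bits: int, trials: int, rng: random.Random) -> int:
    # z_j = A(r xor r') xor A(r'): correct whenever both calls are correct,
    # i.e. with probability >= 1/2 + 2*eps; output the majority of the z_j's
    ones = 0
    for _ in range(trials):
        rp = rng.getrandbits(n_bits)
        ones += A(r ^ rp) ^ A(rp)
    return int(2 * ones > trials)

rng = random.Random(0)
x = 0b1011001110
A = make_noisy_oracle(x, 0.9, rng)
for r in (0b1, 0b1111100000, 0b1010101010):
    assert amplified_guess(A, r, 10, 301, rng) == bin(x & r).count("1") % 2
```

Self-correction through random shifts r′ is the heart of the warm-up: each individual query to A is on a uniformly distributed point, so the local error at r itself does not matter.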

24 The real thing Choose r₁, r₂, …, r_k ∈_R {0,1}^n Guess for j=1, 2, …, k the value z_j=r_j∙x –Go over all 2^k possibilities For all nonempty subsets S ⊆ {1,…,k}: –Let r_S = ∑_{j∈S} r_j –The implied guess is z_S = ∑_{j∈S} z_j For each position x_i: –for each nonempty S ⊆ {1,…,k} run A(y,e_i−r_S) –output the majority value of {z_S+A(y,e_i−r_S)} Analysis: Each one of the vectors e_i−r_S is uniformly distributed –A(y,e_i−r_S) is correct with probability at least ½+ε Claim: for every pair of nonempty subsets S ≠ T ⊆ {1,…,k}, the two vectors r_S and r_T are pairwise independent Therefore the variance is as in completely independent trials –If I is the number of correct A(y,e_i−r_S), then VAR(I) ≤ 2^k(½+ε) –Use Chebyshev’s Inequality: Pr[|I−E(I)| ≥ λ√VAR(I)] ≤ 1/λ² Need 2^k = n/ε² to get the probability of error down to 1/n –So the process is successful simultaneously for all positions x_i, i ∈ {1,…,n}

25 Analysis Number of invocations of A: 2^k ∙ n ∙ (2^k−1) (guesses × positions × subsets) = poly(n, 1/ε) ≈ n³/ε⁴ Size of the resulting list of candidates for x: for each guess of z₁, z₂, …, z_k there is a unique x, so 2^k = poly(n, 1/ε) ≈ n/ε² Conclusion: single bit expansion of a one-way permutation, x ↦ (f(x), h(x,r)), from n bits to n+1 bits, is a pseudo-random generator

26 Reducing the size of the list of candidates Idea: bootstrap Given any r ∈ {0,1}^n, procedure A′(y,r): Choose r₁, r₂, …, r_k ∈_R {0,1}^n Guess for j=1, 2, …, k the value z_j=r_j∙x –Go over all 2^k possibilities For all nonempty subsets S ⊆ {1,…,k}: –Let r_S = ∑_{j∈S} r_j –The implied guess is z_S = ∑_{j∈S} z_j –for each S run A(y,r−r_S) Output the majority value of {z_S+A(y,r−r_S)} For 2^k = 1/ε² the probability of error is, say, 1/8 Fix the same r₁, r₂, …, r_k for subsequent executions –They are good for 7/8 of the r’s Run warm-up (2) Size of the resulting list of candidates for x is ≈ 1/ε²

27 Application: Diffie-Hellman The Diffie-Hellman assumption: let G be a group and g an element in G. Given g, a=g^x and b=g^y, it is hard to find c=g^{xy} for random x and y, i.e. the probability that a poly-time machine outputs g^{xy} is negligible –More accurately: a sequence of groups –Don’t know how to verify, given c′, whether it is equal to g^{xy} Exercise: show that under the DH assumption, given a=g^x, b=g^y and r ∈ {0,1}^n, no polynomial time machine can guess r∙g^{xy} with advantage 1/poly, for random x, y and r

28 Application: if subset sum is one-way, then it is a pseudo-random generator Subset sum problem: given –n numbers 0 ≤ a₁, a₂, …, a_n ≤ 2^m –a target sum y –find a subset S ⊆ {1,…,n} with ∑_{i∈S} a_i = y Subset sum one-way function f:{0,1}^{mn+n} → {0,1}^{mn+m}: f(a₁, a₂, …, a_n, x₁, x₂, …, x_n) = (a₁, a₂, …, a_n, ∑_{i=1}^n x_i a_i mod 2^m) If m<n then we get out fewer bits than we put in; if m>n then we get out more bits than we put in Theorem: if for m>n subset sum is a one-way function, then it is also a pseudo-random generator
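The subset sum function itself is straightforward to write down; this sketch keeps the a_i as a list and x as a 0/1 vector:

```python
def subset_sum_f(a, x, m):
    # f(a_1..a_n, x_1..x_n) = (a_1..a_n, sum of the chosen a_i mod 2^m)
    assert len(a) == len(x) and all(xi in (0, 1) for xi in x)
    return list(a), sum(ai for ai, xi in zip(a, x) if xi) % (2 ** m)

# n = 3 numbers with m = 4 bits: x picks a_1 and a_3, so the sum is 3 + 6 = 9
assert subset_sum_f([3, 5, 6], [1, 0, 1], 4) == ([3, 5, 6], 9)
```

With m>n the map stretches its input, which is why one-wayness alone can be leveraged into pseudo-randomness here.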

29 Subset Sum Generator Idea of proof: use the distinguisher A to compute r∙x For simplicity: do the computation mod P for a large prime P Given r ∈ {0,1}^n and (a₁, a₂, …, a_n, y), generate a new problem (a′₁, a′₂, …, a′_n, y′): Choose c ∈_R Z_P Let a′_i = a_i if r_i = 0 and a′_i = a_i+c mod P if r_i = 1 Guess k ∈_R {0,…,n} – the value of ∑ x_i r_i, the number of locations where both x and r are 1 Let y′ = y+kc mod P Run the distinguisher A on (a′₁, a′₂, …, a′_n, y′) –output what A says, XORed with parity(k) Claim: if k is correct, then (a′₁, a′₂, …, a′_n, y′) is pseudo-random Claim: for any incorrect k, (a′₁, a′₂, …, a′_n, y′) is random, since y′ = z + (k−h)c mod P, where z = ∑_{i=1}^n x_i a′_i mod P and h = ∑ x_i r_i Since Prob[A=‘0’|pseudo] = ½+ε and Prob[A=‘0’|random] = ½, the probability of guessing r∙x correctly is 1/n∙(½+ε) + (n−1)/n∙(½) = ½+ε/n

30 Interpretations of the Goldreich-Levin Theorem A tool for constructing pseudo-random generators –The main part of the proof: a mechanism for translating `general confusion’ into randomness –Diffie-Hellman example List decoding of Hadamard Codes –works in the other direction as well (for any code with good list decoding) –List decoding, as opposed to unique decoding, allows getting much closer to the distance –`Explains’ unique decoding when the prediction was 3/4+ε Finding all linear functions agreeing with a function given as a black-box –Learning all Fourier coefficients larger than ε –If the Fourier coefficients are concentrated on a small set, can find them –True for AC0 circuits –Decision trees

31 Composing PRGs Composition: Let g₁ be an (ℓ₁, ℓ₂)-pseudo-random generator and g₂ an (ℓ₂, ℓ₃)-pseudo-random generator Consider g(x) = g₂(g₁(x)) Claim: g is an (ℓ₁, ℓ₃)-pseudo-random generator Proof: consider three distributions on {0,1}^ℓ₃ –D₁: y uniform in {0,1}^ℓ₃ –D₂: y=g(x) for x uniform in {0,1}^ℓ₁ –D₃: y=g₂(z) for z uniform in {0,1}^ℓ₂ Suppose there is a distinguisher A between D₁ and D₂; by the triangle inequality, A must either distinguish between D₁ and D₃ – can use A to distinguish g₂ – or distinguish between D₂ and D₃ – can use A to distinguish g₁

32 Composing PRGs When composing a generator secure against advantage ε₁ and a generator secure against advantage ε₂, we get security against advantage ε₁+ε₂ When composing the single bit expansion generator n times: if the result can be distinguished with advantage ε, some single step can be distinguished with advantage ε/n Hybrid argument: to prove that two distributions D and D′ are indistinguishable, suggest a collection of distributions D = D₀, D₁, …, D_k = D′ such that if D and D′ can be distinguished, there is a pair D_i and D_{i+1} that can be distinguished –A difference of ε between D and D′ means ε/k between some D_i and D_{i+1} –Use such a distinguisher to derive a contradiction

33 From single bit expansion to many bit expansion [Diagram: input x; internal configuration f(x), f⁽²⁾(x), f⁽³⁾(x), …; output bits h(x,r), h(f(x),r), h(f⁽²⁾(x),r), …, h(f⁽ᵐ⁻¹⁾(x),r).] Can make r and f⁽ᵐ⁾(x) public –But not any other internal state Can make m as large as needed
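The iteration above can be sketched directly; toy_perm and gl_bit are small stand-ins (not one-way), used only to show the data flow:

```python
def expand(x: int, r: int, m: int, perm, predicate):
    # output bits h(x,r), h(f(x),r), ..., h(f^(m-1)(x), r);
    # the final state f^(m)(x) may be made public, earlier states may not
    bits = []
    for _ in range(m):
        bits.append(predicate(x, r))
        x = perm(x)
    return bits, x

P = 101
toy_perm = lambda x: (7 * x) % P                 # stand-in, NOT one-way
gl_bit = lambda x, r: bin(x & r).count("1") % 2  # inner-product predicate

bits, final_state = expand(5, 0b101101, 8, toy_perm, gl_bit)
assert len(bits) == 8 and set(bits) <= {0, 1}
```

Each output bit costs one application of f, which is why this construction is inherently sequential.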

34 Exercise Let {D n } and {D’ n } be two distributions that are –Computationally indistinguishable –Polynomial time samplable Suppose that {y 1,… y m } are all sampled according to {D n } or all are sampled according to {D’ n } Prove: no probabilistic polynomial time machine can tell, given {y 1,… y m }, whether they were sampled from {D n } or {D’ n }

35 Existence of PRGs What we have proved: Theorem : if pseudo-random generators stretching by a single bit exist, then pseudo-random generators stretching by any polynomial factor exist Theorem : if one-way permutations exist, then pseudo-random generators exist A harder theorem to prove Theorem [HILL] : if one-way functions exist, then pseudo- random generators exist Exercise : show that if pseudo-random generators exist, then one-way functions exist

36 Next-bit Test Definition: a function g:{0,1}^* → {0,1}^* is said to pass the next bit test if It is polynomial time computable It stretches the input: |g(x)|>|x| –denote by ℓ(n) the length of the output on inputs of length n If the input (seed) is random, then the output passes the next-bit test: for any prefix length 0 ≤ i < ℓ(n), for any probabilistic polynomial time adversary A that receives the first i bits of y=g(x) and tries to guess the next bit, for any polynomial p(n) and sufficiently large n |Prob[A(y₁,y₂,…,y_i)=y_{i+1}] − 1/2| < 1/p(n) Theorem: a function g:{0,1}^* → {0,1}^* passes the next bit test if and only if it is a pseudo-random generator

37 Next-block Unpredictability Suppose that the function G maps a given seed into a sequence of blocks; let ℓ(n) be the number of blocks for a seed of length n If the input (seed) is random, then the output passes the next-block unpredictability test: for any prefix length 0 ≤ i < ℓ(n), for any probabilistic polynomial time adversary A that receives the first i blocks of y=G(x) and tries to guess the next block y_{i+1}, for any polynomial p(n) and sufficiently large n Prob[A(y₁,y₂,…,y_i)=y_{i+1}] < 1/p(n) Exercise: show how to convert a next-block unpredictable generator into a pseudo-random generator.

38 Pseudo-Random Generators, concrete version G_n: {0,1}^m → {0,1}^n A cryptographically strong pseudo-random sequence generator: passes all polynomial time statistical tests (t,ε)-pseudo-random: no test A running in time t can distinguish with advantage ε

39 Three Basic issues in cryptography Identification Authentication Encryption Solve in a shared key environment: both parties hold the same key S

40 Identification – Remote login using a pseudo-random sequence A and B share key S ∈ {0,1}^k In order for A to identify itself to B: Generate the sequence G_n(S) For each identification session, send the next block of G_n(S)

41 Problems... More than two parties Malicious adversaries – add noise Coordinating the location (block number) Better approach: Challenge-Response

42 Challenge-Response Protocol B selects a random location and sends it to A A sends the value at that location

43 Desired Properties Very long string – prevents repetitions Random access to the sequence Unpredictability – cannot guess the value at a random location –even after seeing values at many parts of the string of the adversary’s choice –Pseudo-randomness implies unpredictability –Not the other way around for blocks

44 Authenticating Messages A wants to send message M ∈ {0,1}^n to B B should be confident that A is indeed the sender of M One-time application: S=(a,b), where a,b ∈_R {0,1}^n To authenticate M: supply the tag aM+b –Computation is done in GF[2^n]
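The tag aM+b can be computed via carry-less multiplication modulo an irreducible polynomial. For brevity this sketch works in GF(2^8) with the AES polynomial x^8+x^4+x^3+x+1; the lecture's scheme would use n = |M|:

```python
def gf_mult(a: int, b: int, poly: int = 0x11B, nbits: int = 8) -> int:
    # carry-less multiply, then reduce modulo the irreducible polynomial
    res = 0
    for i in range(nbits):
        if (b >> i) & 1:
            res ^= a << i
    for i in range(2 * nbits - 2, nbits - 1, -1):
        if (res >> i) & 1:
            res ^= poly << (i - nbits)
    return res

def one_time_mac(m: int, a: int, b: int) -> int:
    # tag = a*M + b in GF(2^n); addition in GF(2^n) is XOR
    return gf_mult(a, m) ^ b

# sanity check: x * x^7 = x^8 = x^4 + x^3 + x + 1 modulo the AES polynomial
assert gf_mult(0x02, 0x80) == 0x1B
```

The pair (a,b) acts as a pairwise independent hash of M, which is what makes a single use unconditionally secure.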

45 Problems and Solutions Problems – same as for identification If a very long random string is available –can use it for one-time authentication –Works even if a,b are only random looking

46 Encryption of Messages A wants to send message M ∈ {0,1}^n to B Only B should be able to learn M One-time application: S=a, where a ∈_R {0,1}^n To encrypt M: send a ⊕ M

47 Encryption of Messages If a very long random looking string is available –can use it as in one-time encryption

48 Pseudo-random Functions, concrete treatment: F: {0,1}^k × {0,1}^n → {0,1}^m (key × domain → range) Denote Y=F_S(X) A family of functions Φ_k = {F_S | S ∈ {0,1}^k} is (t,ε,q)-pseudo-random if it is Efficiently computable – random access and…

49 (t, ,q)- pseudo-random The tester A that can choose adaptively –X 1 and get Y 1 = F S (X 1 ) –X 2 and get Y 2 = F S (X 2 )  … –X q and get Y q = F S (X q ) Then A has to decide whether – F S  R  Φ k  or – F S  R R n  m =  F | F :  0,1  n   0,1  m 

50 (t, ,q)- pseudo-random For a function F chosen at random from (1) Φ k ={F S | S  0,1  k  (2) R n  m =  F | F :  0,1  n   0,1  m  For all t -time machines A that choose q locations and try to distinguish (1) from (2)  Prob  A  ‘1’  F  R F k  - Prob  A  ‘1’  F  R R n  m    

51 Equivalent/Non-Equivalent Definitions Instead of next bit test: for X ∉ {X₁,X₂,…,X_q} chosen by A, decide whether a given Y is –Y=F_S(X) or –Y ∈_R {0,1}^m Adaptive vs. non-adaptive Unpredictability vs. pseudo-randomness A pseudo-random sequence generator g:{0,1}^m → {0,1}^n is –a pseudo-random function on the small domain {0,1}^{log n} → {0,1} with key in {0,1}^m

52 Application to the basic issues in cryptography Solution using a shared key S: Identification: B to A: X ∈_R {0,1}^n; A to B: Y=F_S(X); B verifies Authentication: A to B: Y=F_S(M) –vulnerable to replay attack Encryption: A chooses X ∈_R {0,1}^n; A to B:

53 Goal Construct an ensemble {Φ_k | k ∈ L} such that for any {t_k, 1/ε_k, q_k | k ∈ L} polynomial in k, for all but finitely many k’s, Φ_k is a (t_k, ε_k, q_k)-pseudo-random family

54 Construction Construction via Expansion –Expand n or m Direct constructions

55 Effects of Concatenation Given ℓ functions F₁, F₂, …, F_ℓ, decide whether they are –ℓ random and independent functions OR –F_{S₁}, F_{S₂}, …, F_{S_ℓ} for S₁, S₂, …, S_ℓ ∈_R {0,1}^k Claim: if Φ_k = {F_S | S ∈ {0,1}^k} is (t,ε,q)-pseudo-random, then one cannot distinguish the two cases –using q queries –in time t′ = t − ℓ∙q –with advantage better than ℓ∙ε

56 Proof: Hybrid Argument i=0: F_{S₁}, F_{S₂}, …, F_{S_ℓ} – probability p₀ … i: R₁, R₂, …, R_{i−1}, F_{S_i}, F_{S_{i+1}}, …, F_{S_ℓ} – probability p_i … i=ℓ: R₁, R₂, …, R_ℓ – probability p_ℓ If |p_ℓ − p₀| ≥ ε, then for some i, |p_{i+1} − p_i| ≥ ε/ℓ

57 ...Hybrid Argument Can use this i to distinguish whether –F_S ∈_R Φ_k or F_S ∈_R R_{n→m}: Generate F_{S_{i+1}}, …, F_{S_ℓ} Answer queries to the first i−1 functions at random (consistently) Answer queries to F_{S_i} using the (black box) input Answer queries to functions i+1 through ℓ with F_{S_{i+1}}, …, F_{S_ℓ} Running time of the test: t′ + ℓ∙q

58 Doubling the domain Suppose F^(n): {0,1}^k × {0,1}^n → {0,1}^m which is (t,ε,q)-p.r. Want F^(n+1): {0,1}^k × {0,1}^{n+1} → {0,1}^m which is (t′,ε′,q′)-p.r. Use G: {0,1}^k → {0,1}^{2k} which is (t,ε)-p.r.; write G(S) = G₀(S)∘G₁(S) Let F^(n+1)_S(b∘x) = F^(n)_{G_b(S)}(x)

59 Claim If G is (t+q, ε₁)-p.r. and F^(n) is (t+2q, ε₂, q)-p.r., then F^(n+1) is (t, ε₁+2ε₂, q)-p.r. Proof: three distributions (1) F^(n+1) (2) F^(n)_{S₀}, F^(n)_{S₁} for independent S₀, S₁ (3) Random –Distinguishing advantage between (1) and (3) is at most ε₁+2ε₂

60 ...Proof If (1) and (3) can be distinguished with advantage ε₁+2ε₂, then either (1) and (2) can be distinguished with advantage ε₁ –so G can be distinguished with advantage ε₁ or (2) and (3) can be distinguished with advantage 2ε₂ –so F^(n) can be distinguished with advantage ε₂ Running time of the test: t′+q

61 Getting from G to F^(n) Idea: use the recursive construction F^(n)_S(b_n b_{n−1} … b_1) = F^(n−1)_{G_{b_1}(S)}(b_n b_{n−1} … b_2) = G_{b_n}(G_{b_{n−1}}(… G_{b_1}(S) …)) Each evaluation of F^(n)_S(x): n invocations of G
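The recursion amounts to a walk down a binary tree keyed by the input bits. In this sketch the SHA-256-based length doubler is only a heuristic stand-in for a proven pseudo-random generator G:

```python
import hashlib

def G(s: bytes) -> bytes:
    # stand-in length doubler: 32-byte seed -> 64 bytes (heuristic, not a proven PRG)
    return hashlib.sha256(s + b"\x00").digest() + hashlib.sha256(s + b"\x01").digest()

def ggm_prf(seed: bytes, x_bits: str) -> bytes:
    # F_S(x): at each level keep the half of G(S) selected by the next input bit
    s = seed
    for b in x_bits:                 # one invocation of G per input bit
        out = G(s)
        s = out[32:] if b == "1" else out[:32]
    return s

k = b"\x00" * 32
assert ggm_prf(k, "0101") == ggm_prf(k, "0101")   # deterministic in the key
assert ggm_prf(k, "0") != ggm_prf(k, "1")         # distinct leaves
```

Note the cost stated on the slide shows up directly: evaluating on an n-bit input calls G exactly n times, once per tree level.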

62 Tree Description [Diagram: a binary tree with root labeled S; its children are labeled G₀(S) and G₁(S), the next level G₀(G₀(S)), etc., down to leaves such as G₁(G₀(G₀(S))).] Each leaf corresponds to an input X. The label on the leaf is the value of the pseudo-random function.

63 Security claim If G is (t+qn, ε)-p.r., then F^(n) is (t, ε′=n∙q∙ε, q)-p.r. Proof: hybrid argument by levels D_i: –truly random labels for nodes at level i –pseudo-random from level i down Each D_i is a collection of q functions For some i, |p_{i+1} − p_i| ≥ ε′/n = q∙ε

64 Hybrid [Diagram: in hybrid D_i, the nodes at level i (i levels below the root, n−i levels above the leaves) carry truly random labels S₀, S₁, …; the labels below them are derived with G, e.g. G₀(S₀), G₁(G₀(S₀)).]

65 …Proof of Security Can use this i to distinguish the concatenation of q sequence generators G from random The concatenation is (t, q∙ε)-p.r. Therefore the construction is (t, ε′, q)-p.r.

66 Disadvantages Expensive – n invocations of G Sequential Deterioration of ε But does the job! From any pseudo-random sequence generator we can construct a pseudo-random function. Theorem: one-way functions exist if and only if pseudo-random functions exist.

67 Applications of Pseudo-random Functions Learning Theory - lower bounds –Cannot PAC learn any class containing pseudo-random function Complexity Theory - impossibility of natural proofs for separating classes. Any setting where huge shared random string is useful Caveat: what happens when the seed is made public?

68 References Blum-Micali : SIAM J. Computing 1984 Yao: Blum, Blum, Shub: SIAM J. Computing, 1988 Goldreich, Goldwasser and Micali: J. of the ACM, 1986

69 ...References Books: O. Goldreich, Foundations of Cryptography – a book in three volumes –Vol 1, Basic Tools, Cambridge, 2001 (pseudo-randomness, zero-knowledge) –Vol 2, about to come out (encryption, secure function evaluation) –Other volumes in www.wisdom.weizmann.ac.il/~oded/books.html M. Luby, Pseudorandomness and Cryptographic Applications, Princeton University Press

70 References Web material/courses S. Goldwasser and M. Bellare, Lecture Notes on Cryptography, http://www-cse.ucsd.edu/~mihir/papers/gb.html Wagner/Trevisan, Berkeley, www.cs.berkeley.edu/~daw/cs276 Ivan Damgard and Ronald Cramer, Cryptologic Protocol Theory, http://www.daimi.au.dk/~ivan/CPT.html Salil Vadhan, Pseudorandomness, http://www.courses.fas.harvard.edu/~cs225/Lectures-2002/ Naor, Foundations of Cryptography and Estonian Course, www.wisdom.weizmann.ac.il/~naor

