1
On Necessary and Sufficient Cryptographic Assumptions: the Case of Memory Checking
Lecture 4: Lower Bound on Memory Checking
Lecturer: Moni Naor, Weizmann Institute of Science
Web site of lectures: www.wisdom.weizmann.ac.il/~naor/COURSE/ens.html
2
Recap of Lecture 3
The memory checking problem
– The online vs. offline versions
– The relationship to the sub-linear authentication problem
A good offline protocol based on hash functions that can be computed on the fly
– Small-biased probability spaces
– Hash functions for set equality
A good computational solution for the online problem, assuming one-way functions
– Two solutions, both tree based: using pseudo-random tags, and using families of UOWHFs
– The small memory need only be reliable
The Consecutive Messages (CM) protocol model
– Tight Θ(√n) bound for equality: t(n) · s(n) is Ω(n)
– Similar to the simultaneous messages model
– But: sublinear protocols exist iff one-way functions exist
3
This Lecture
Learning distributions
– Static and adaptive case
Lower bounds on memory checking
– Existence of sublinear protocols implies one-way functions
4
Learning Distributions
We are given many samples w_1, w_2, …, w_m, each distributed according to a distribution D.
We would like to `learn' D.
– What does that mean? Large parts of statistics are devoted to this question…
In computational learning theory two notions exist:
– Learn by generation: come up with a probabilistic circuit whose output has distribution D provided its inputs are random (approximation is allowed).
– Learn by evaluation: given x, compute (or approximate) Pr_D[x].
5
Learning Distributions
Suppose D is determined by a string k of s `secret' bits; everything else is known.
If one-way functions exist, there are circuits C for which it is computationally hard to learn the output distribution:
Let F_k be a pseudo-random function keyed by k. C's output is x ∘ F_k(x) for a random x.
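To make this concrete, here is a minimal Python sketch (my illustration, not from the lecture) of such a hard-to-learn distribution, with HMAC-SHA256 standing in for the pseudo-random function F_k:

import hmac, hashlib, os

def sample(k: bytes) -> bytes:
    # One sample from the circuit C: a random x concatenated with F_k(x).
    x = os.urandom(16)                             # random input x
    tag = hmac.new(k, x, hashlib.sha256).digest()  # F_k(x); HMAC as a PRF stand-in
    return x + tag

k = os.urandom(16)  # the s secret bits determining D
samples = [sample(k) for _ in range(10)]

Learning this distribution, by generation or by evaluation, would require distinguishing F_k from a random function, contradicting PRF security.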
6
Learning Adaptively Changing Distributions
Learning to predict and imitate the distribution of probabilistic, adaptively changing processes.
E.g. the T-1000 robot can "imitate anything it touches … anything it samples".
7
Examples of Adaptively Changing Distributions
Impersonation
– Alice and Bob agree on a secret and engage in a sequence of identification sessions.
– Eve wants to learn to imitate one (or both) of the parties.
– How long should she wait?
Classification of a changing bacterium
– How long must the scientist observe before making the classification?
Catching up: Sam and Sally are listening to a visiting professor's lecture, and Sam falls asleep for a while.
– How long would it take Sam to catch up with Sally?
8
Learning Adaptively Changing Distributions
What happens if the generating circuit C changes over time, reacting to events and to the environment?
The process has a secret state and a public state, and a transition function D: S_p × S_s × R → S_p × S_s.
The sizes of the secret and public states are not restricted, but the size of the initial secret is restricted to s bits.
How long would it take us to learn the distribution of the next public state, given the sequence of past public states?
First answer: it may be impossible to learn.
– Example: the next public state may be the current secret state, with each current secret state chosen at random.
So we want to be competitive with a party that knows the initial secret state (which is chosen at random).
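A minimal interface sketch of the ACD model in Python (names and types are mine, for illustration):

import os, random
from typing import Callable, Tuple

# An ACD is a transition function D: (public state, secret state, randomness)
#   -> (new public state, new secret state).
ACD = Callable[[bytes, bytes, random.Random], Tuple[bytes, bytes]]

def run_acd(D: ACD, p0: bytes, s0: bytes, rounds: int) -> list:
    # Activate the ACD repeatedly; an observer sees only the public states.
    rng = random.Random(os.urandom(8))
    p, s = p0, s0
    history = [p0]
    for _ in range(rounds):
        p, s = D(p, s, rng)
        history.append(p)
    return history  # the observer's view; the secret state s stays hidden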
9
Definition of Learning an Adaptively Changing Distribution
Let D be an adaptively changing distribution (ACD), D: S_p × S_s × R → S_p × S_s.
Given public states p_0, p_1, …, p_k and the initial secret s_0, there is an induced distribution D_k on the next public state.
Definition: A learning algorithm (ε, δ)-learns the ACD if
– it always halts and outputs a hypothesis h;
– with probability at least 1−δ we have Δ(D_k, h) ≤ ε,
where the probability is over the random secret and the randomness in the evolution of the state.
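Here Δ denotes statistical (total variation) distance; for distributions given explicitly as dictionaries it can be computed as:

def statistical_distance(P: dict, Q: dict) -> float:
    # Total variation distance: (1/2) * sum over x of |P(x) - Q(x)|.
    support = set(P) | set(Q)
    return 0.5 * sum(abs(P.get(x, 0.0) - Q.get(x, 0.0)) for x in support)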
10
Algorithm for Learning ACDs
Theorem: For any ε, δ > 0 and any ACD, there is an algorithm that activates the system for O(s) rounds and (ε, δ)-learns the ACD.
Repeat until success (or give up):
– If there is a very high weight subset A of initial secret states whose induced distributions are close
  (close = distance less than ε; high weight = 1−δ/2),
  then pick as h the distribution induced by any member of A and halt.
– Else activate the ACD and obtain the next public state.
Claim: if the algorithm terminates in the loop, then with probability at least 1−δ/2, Δ(D_k, h) ≤ ε, conditioned on the public states seen so far.
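A schematic Python rendering of the loop, under the simplifying assumption that the candidate initial secrets can be enumerated (feasible only for toy state spaces; `induced_dist` and `activate` are placeholders supplied by the caller):

def learn_acd(secrets, induced_dist, activate, eps, delta, max_rounds):
    # induced_dist(s): distribution (a dict) of the next public state if the
    # initial secret were s, conditioned on the public history seen so far.
    # activate(): activates the ACD, extending the public history.
    def dist(P, Q):  # statistical distance, as defined above
        keys = set(P) | set(Q)
        return 0.5 * sum(abs(P.get(k, 0.0) - Q.get(k, 0.0)) for k in keys)

    weight = {s: 1.0 / len(secrets) for s in secrets}  # uniform initial secret
    for _ in range(max_rounds):
        for s in secrets:
            h = induced_dist(s)
            # Weight of the cluster of secrets whose distributions are eps-close to h.
            w = sum(weight[t] for t in secrets if dist(induced_dist(t), h) < eps)
            if w >= 1 - delta / 2:
                return h  # high-weight cluster found: output its distribution
        activate()        # otherwise draw another public state
        # (a full implementation would also re-weight `weight` by
        #  conditioning on the observed public state)
    return None           # gave up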
11
Analysis
Main parameter for showing that the algorithm advances: the entropy of the initial secret.
Key lemma: if the high-weight condition does not hold, then the expected entropy drop of the initial secret is high, at least Ω(ε²δ²) (consistent with the O(s(n)/ε²δ²) bound on the number of activations below).
After O(s) iterations not much entropy is left (the constant depends on ε and δ).
The (Shannon) entropy of X is H(X) = −∑_{x∈Γ} P_X(x) log P_X(x).
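For reference, the Shannon entropy in a few lines of Python:

import math

def shannon_entropy(P: dict) -> float:
    # H(X) = -sum over x of P(x) * log2 P(x), skipping zero-probability points.
    return -sum(p * math.log2(p) for p in P.values() if p > 0)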
12
Efficiency of the Algorithm
We would like to be able to learn all ACDs where D is an efficiently computable function.
Theorem: One-way functions exist iff there is an efficiently computable ACD D and some ε, δ > 0 for which it is (ε, δ)-hard to learn D.
13
Connection to Memory Checking and Authentication: Learning the Access Pattern Distribution
Corollary (from the ACD learning theorem): For any ε, δ > 0 and any x, when
– E is activated on x, with secret output s_x,
– the adversary activates V at most O(s(n)/ε²δ²) times,
– the adversary learns a secret encoding s_L:
Let p_x be the final public encoding reached, and let D_p(s) be the access pattern distribution in the next activation on public encoding p with initial secret encoding s. Randomness is over the activations of E and V.
Guarantee: with probability at least 1−δ, the distributions D_{p_x}(s_x) and D_{p_x}(s_L) are ε-close (statistically).
14
Memory Checkers
How to check a large and unreliable memory.
A memory checker handles store and retrieve requests to a large adversarial memory, a vector in {0,1}^n.
It detects whether the answer to any retrieve was different from the last store.
It uses a small, secret, reliable memory of s(n) bits: the space complexity.
It makes its own store and retrieve requests, t(n) bits per user request: the query complexity.
[Diagram: user U ↔ checker C; C uses a secret memory S of s(n) bits and reads/writes t(n) bits of the public memory P per request.]
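An interface sketch in Python (the names are mine): the checker mediates between the user and the untrusted memory, keeping only its s(n) secret bits.

from abc import ABC, abstractmethod

class TamperingDetected(Exception):
    """Raised when a retrieve is inconsistent with the last store."""

class MemoryChecker(ABC):
    """Sits between the user and an untrusted public memory, keeping only
    s(n) reliable secret bits of its own and reading/writing t(n) bits of
    the public memory per user request."""

    @abstractmethod
    def store(self, index: int, bit: int) -> None: ...

    @abstractmethod
    def retrieve(self, index: int) -> int:
        """Returns the stored bit, or raises TamperingDetected."""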
15
Computational Assumptions and Memory Checkers
For offline memory checkers, no computational assumptions are needed:
– probability of failing to detect errors: ε
– query complexity: t(n) = O(1) (amortized)
– space complexity: s(n) = O(log n + log 1/ε)
For online memory checkers, good schemes are known under computational assumptions:
– query complexity: t(n) = O(log n)
– space complexity: s(n) = n^ε (for any ε > 0)
– probability of not detecting errors: negligible
Main result: computational assumptions are necessary for sublinear schemes.
16
Recall: Memory Checker → Authenticator
If there exists an online memory checker with space complexity s(n) and query complexity t(n), then there exists an authenticator with space complexity O(s(n)) and query complexity O(t(n)).
Strategy for the lower bound on memory checking: prove it for authenticators.
17
The Lower Bound
Theorem 1 [tight lower bound]: For any online memory checker (authenticator) secure against a computationally unbounded adversary, s(n) × t(n) is Ω(n).
18
Memory Checkers and One-Way Functions
Breaking the lower bound implies one-way functions.
Theorem 2: If there exists an online memory checker (authenticator)
– working in polynomial time,
– secure against polynomial-time adversaries,
– with query and space complexity s(n) × t(n) ∈ o(n),
then there exist functions that are hard to invert for infinitely many input lengths ("almost one-way" functions).
19
Program for Showing the Lower Bound
Prove the lower bound by a reduction to the consecutive messages model:
– first a simple case,
– then the generalized reduction.
20
Simultaneous Messages Protocols
For the equality function: |m_A| × |m_B| = Ω(n) [Babai, Kimmel 1997]
[Diagram: Alice holds x ∈ {0,1}^n and sends m_A; Bob holds y ∈ {0,1}^n and sends m_B; Carol receives both messages and outputs f(x,y).]
21
Consecutive Messages Protocols
Theorem: For any CM protocol that computes the equality function, if |m_p| ≤ n/100 then |m_A| × |m_B| = Ω(n).
[Diagram: Alice holds x ∈ {0,1}^n and sends a public message m_p (with public randomness r_p) and a private message m_A of s(n) bits; Bob holds y ∈ {0,1}^n, sees m_p and r_p, and sends m_B of t(n) bits; Carol outputs f(x,y).]
22
The Reduction
Idea: use an authenticator to construct a CM protocol for equality testing.
23
Recall: Authenticators
How to authenticate a large and unreliable memory using a small and secret memory.
An encoder E maps x ∈ {0,1}^n (with randomness r) to a public encoding p_x = E_public(x, r) and a secret encoding s_x = E_secret(x, r) of s(n) bits.
A verifier V reads the secret encoding and t(n) bits of the (possibly modified) public encoding p_y, and accepts or rejects.
A decoder D recovers x from the public encoding.
[Diagram: E produces (p_x, s_x); the adversary may change p_x into p_y; V reads s_x and t(n) bits of p_y and outputs accept/reject.]
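A matching interface sketch for authenticators (again with illustrative names):

from abc import ABC, abstractmethod
from typing import Tuple

class Authenticator(ABC):
    """(E, D, V) as above: encode x into a public part and an s(n)-bit
    secret part; later verify, reading only t(n) bits of the public part."""

    @abstractmethod
    def encode(self, x: bytes, r: bytes) -> Tuple[bytes, bytes]:
        """E: returns (p_x, s_x) = (E_public(x, r), E_secret(x, r))."""

    @abstractmethod
    def verify(self, public: bytes, secret: bytes) -> bool:
        """V: probes at most t(n) bits of `public`; accepts iff it is
        consistent with the original x (w.h.p.)."""

    @abstractmethod
    def decode(self, public: bytes) -> bytes:
        """D: recovers x from an unmodified public encoding."""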
24
A Simple(r) Construction
Simplifying assumption: V chooses which indices of the public encoding to access independently of the secret encoding.
In particular, the adversary knows the access pattern distribution.
25
[Diagram: the CM protocol built from the authenticator. Alice, holding x ∈ {0,1}^n, runs E and sends the secret encoding s_x (s(n) bits) privately to Carol. Bob, holding y ∈ {0,1}^n, computes the public encoding p_y and sends the t(n) bits of it that V would read. Carol runs V on those bits with s_x and outputs accept/reject.]
26
To Show It Works
Must show:
– When x = y, the CM protocol accepts (otherwise the authenticator would reject even when no changes were made).
– How to translate an adversary for the CM protocol that makes it accept when x ≠ y into an adversary that cheats the verifier.
27
Why It Works (1)
Claim: if (E, D, V) is an authenticator then the CM protocol is good.
Correctness when x = y: Alice and Bob should use the same public encoding of x. To do this, they use the public randomness r_pub as the randomness for the encoding.
28
Why It Works (2): Rerandomizing
Security: suppose an adversary for the CM protocol breaks it, i.e., makes Carol accept when x ≠ y. We want to show it can break the authenticator as well.
Tricky: the "CM adversary" sees r_pub! This might leak information, since s_x is chosen as E_secret(x, r_pub).
Solution: for s_x, Alice selects different real randomness giving the same public encoding:
– choose r′ uniformly at random from {r : E_public(x, r) = E_public(x, r_pub)};
– let s_x = E_secret(x, r′).
Exactly the same information is then available to the authenticator adversary as in a regular execution:
– the public encoding p_x = E_public(x, r);
– hence the probability of cheating is the same.
Conclusion: s(n) × t(n) is Ω(n).
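A toy Python sketch of the rerandomization step, with brute-force preimage sampling (only feasible for toy parameters; E_public and E_secret are placeholders that take a bit-tuple as randomness):

import itertools, random

def rerandomize(E_public, E_secret, x, r_pub, r_bits):
    # Choose r' uniformly from {r : E_public(x, r) = E_public(x, r_pub)}
    # and return E_secret(x, r'). Brute force over all r of length r_bits.
    target = E_public(x, r_pub)
    preimages = [r for r in itertools.product((0, 1), repeat=r_bits)
                 if E_public(x, r) == target]
    r_prime = random.choice(preimages)
    return E_secret(x, r_prime)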
29
The Rerandomizing Technique
Always choose `at random' the random coins consistent with the information you have.
30
Why It Doesn't Always Work
What if the verifier uses the secret encoding to determine its access pattern distribution?
The simple lower bound applies to "one-time" authenticators, where the adversary sees only a single verification.
Is the bound true without the simplifying assumption?
31
"One-Time Authenticators"
There is a one-time authenticator with space complexity O(log n) and query complexity O(1), so the Ω(n) bound fails in the one-time setting.
Lesson: use the fact that V is secure when run many times.
[Diagram: E produces a public encoding E(x) and a short secret encoding; V reads O(1) bits and accepts.]
32
Progress
Prove lower bounds:
– first a simple case,
– then the generalized reduction.
A discussion of one-way functions.
33
Authenticators: Access Pattern
Access pattern: the indices accessed by V and the bit values read.
Access pattern distribution: the distribution of the access pattern in V's next activation, given
– E's initial secret string,
– past access patterns.
Randomness is over V's coin tosses in all its activations.
We want to be able to state that the adversary knows the access pattern distribution, even though it cannot see E's secret output.
34
[Diagram: E's secret output s_x, together with the public encoding and the past access patterns, induces the distribution of V's next access pattern; shown both as D_x on the unmodified encoding p_x and as D_xy on a modified encoding p_y.]
35
Learning the Access Pattern Distribution
Important lesson: if the adversary doesn't know the access pattern distribution, then V is "home free".
In the "one-time" example, a single verification exposes the secret indices!
Lesson: activate V many times and "learn" its distribution.
Recall: learning adaptively changing distributions.
36
Connection to Memory Checking and Authentication: Learning the Access Pattern Distribution
Corollary (from the ACD learning theorem): For any ε, δ > 0 and any x, when
– E is activated on x, with secret output s_x,
– the adversary activates V at most O(s(n)/ε²δ²) times,
– the adversary learns a secret encoding s_L:
Let p_x be the final public encoding reached, and let D_p(s) be the access pattern distribution in the next activation on public encoding p with initial secret encoding s. Randomness is over the activations of E and V.
Guarantee: with probability at least 1−δ, the distributions D_{p_x}(s_x) and D_{p_x}(s_L) are ε-close (statistically).
37
[Diagram: the CM protocol with learning, for the case y = x. Alice, holding x, runs E with randomness r_pub and sends s_x privately to Carol. Bob, also holding x, plays the adversary's learning phase: he activates V repeatedly, obtaining the past access patterns a and a learned secret encoding s_L, and forwards them. Carol runs V and accepts.]
38
Sampling by s_L, Simulating by s_x
The access pattern distributions under s_L and s_x are ε-close:
– Bob generates an access pattern a using s_L.
– Carol selects a random string r from those that give a on secret input s_x (rerandomization).
– Carol simulates V using the random string r.
Claim: the distribution of r is ε-close to uniform.
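A toy sketch of this sample-by-s_L, simulate-by-s_x step, in the same brute-force style as the earlier rerandomization sketch (V_pattern and V_decide are placeholders):

import itertools, random

def sample_and_simulate(V_pattern, V_decide, s_L, s_x, coin_bits):
    # V_pattern(s, r): the access pattern V produces with secret s and coins r.
    # V_decide(s, r): V's accept/reject decision with secret s and coins r.
    r_bob = tuple(random.randint(0, 1) for _ in range(coin_bits))
    a = V_pattern(s_L, r_bob)                  # Bob samples a pattern using s_L
    consistent = [r for r in itertools.product((0, 1), repeat=coin_bits)
                  if V_pattern(s_x, r) == a]   # coins giving a under s_x
    # Assumes a has nonzero probability under s_x, which holds w.h.p.
    # when the two distributions are eps-close.
    r = random.choice(consistent)              # rerandomization
    return V_decide(s_x, r)                    # Carol simulates V with s_x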
39
Does It Work?
Security? The adversary sees s_L! Not a problem: it could have learned this on its own.
What about y ≠ x?
40
Recap (1)
The adversary wants to know the access pattern distribution.
1. It can learn the access pattern distribution.
2. We saw a protocol that accepts when y = x.
3. What about y ≠ x?
41
[Diagram: the same CM protocol with y ≠ x. Bob holds y and answers from the modified public encoding p_y; whether Carol can still decide correctly is exactly the question.]
42
Does It Work? (2)
Will this also work when y ≠ x? No!
Big problem for the adversary:
– it can learn the access pattern distribution on the correct, unmodified public encoding…
– but it really wants the distribution on a different, modified encoding!
The distributions under s_x and s_L may be:
– very close on the unmodified encoding (p_x),
– very far on any other (e.g. p_y).
The adversary can't hope to learn the distribution on a modified public encoding: not enough information/iterations.
43
Back to The Terminator:
TERMINATOR: What's the dog's name?
JOHN: Max.
TERMINATOR: Hey, Janelle, what's wrong with Wolfy? I can hear him barking. Is he okay?
T-1000 (impersonating Janelle): Wolfy's fine, honey. Where are you?
Terminator (hangs up): Your foster parents are dead. Let's go.
44
Recap (2)
The adversary wants to know the access pattern distribution.
1. It can learn the access pattern distribution.
2. We saw a protocol that accepts when y = x.
3. What about y ≠ x?
4. Big problem: the adversary can't "learn" the access pattern distribution in this case!
45
Bait and Switch (1)
Intuition: if Carol knows s_x and s_L, and they give different distributions, then she can reject.
Concrete idea: Bob always uses s_L to determine the access pattern, and Carol checks whether the distributions are close or far.
This is a "weakening" of the verifier; we need to show it is still secure.
46
Bait and Switch (2)
Give Carol access to s_x and to s_L, and also give her the previous access patterns (a).
Bob got public encoding p.
Recall D_p(s_x) and D_p(s_L): the access pattern distributions on public encoding p with s_x and s_L as the initial secret encodings.
47
[Diagram: the access pattern distribution D_p(s) on a public encoding p, induced by an initial secret s, given the randomness and the past access patterns.]
48
Bait and Switch (3)
If only Carol could compute D_p(s_x) and D_p(s_L), she could check whether they are ε-close:
– if they are far, then p cannot be the "real" public encoding: reject;
– if they are close, then use s_L to determine the access pattern and simulate V with s_x on that access pattern.
49
Bait and Switch (4)
Last problem: the weakened verifier V′ cannot compute the distributions D_p(s_x) and D_p(s_L) without reading all of p (V may be adaptive).
Observation: V′ can compute the probability of any access pattern for which all the bits read from p are known.
Solution: sample O(1) access patterns by D_p(s_L) and use them to approximate the distance between the distributions.
(This is the only operation used that is not a random inverse.)
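A sketch of the sampling-based closeness test (my formulation; the number of samples and the rejection threshold are placeholders). It uses the identity Δ(P, Q) = E_{a~Q}[max(0, 1 − P(a)/Q(a))], which needs only samples from Q and point probabilities under both distributions:

def estimate_distance(sample_Q, prob_P, prob_Q, num_samples=100):
    # Estimate the statistical distance between P and Q from samples of Q:
    # Delta(P, Q) = E_{a ~ Q}[ max(0, 1 - P(a)/Q(a)) ].
    total = 0.0
    for _ in range(num_samples):
        a = sample_Q()                    # sample an access pattern by s_L
        total += max(0.0, 1.0 - prob_P(a) / prob_Q(a))
    return total / num_samples

# Carol rejects if the estimate is well above epsilon, and otherwise
# proceeds to simulate V.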
50
[Diagram: the final CM protocol. Alice sends s_x and r_pub; Bob, holding y, uses s_L to sample access patterns from the public encoding p_y and forwards them with a and s_L; Carol estimates the distance between D_p(s_x) and D_p(s_L): if far, reject; if close, simulate V with s_x and output its accept/reject decision.]
51
Analysis of the Protocol
If the public encoding does not change, the distributions will be ε-close w.h.p., and when Carol simulates V she accepts w.h.p.
If decoding x from the public encoding is impossible, there are two cases:
– if the distributions are far, Carol's approximate distance test rejects w.h.p.;
– if the distributions are close, then when Carol simulates V, it rejects w.h.p.
52
Recap (3)
The adversary wants to know the access pattern distribution.
1. It can learn the access pattern distribution.
2. We saw a protocol that accepts when y = x.
3. What about y ≠ x?
4. Big problem: the adversary can't "learn" the access pattern distribution in this case!
5. Solution: bait and switch.
53
Program for This Talk
Define authenticators and online memory checkers.
Review some past results.
Define communication complexity model(s).
Prove lower bounds:
– first a simple case,
– then the generalized reduction.
A discussion of one-way functions.
54
Recall: One-Way Functions
A function f is one-way if:
– it is computable in poly-time;
– the probability of successfully finding an inverse in poly-time is negligible (on a random input).
A function f is distributionally one-way if:
– it is computable in poly-time;
– no poly-time algorithm can successfully find a random inverse (on a random input).
Theorem [Impagliazzo, Luby 1989]: distributionally one-way functions exist iff one-way functions exist.
55
Authenticator → One-Way Function
Recall Theorem 2. Two steps:
– If there are no one-way functions, build an explicit efficient adversary "fooling" any CM protocol that breaks the lower bound and has poly-time Alice and Bob.
– If there are no one-way functions, modify the reduction so that Alice and Bob run in poly-time.
Together: a contradiction!
56
Recall: Sublinear CM Protocols Imply One-Way Functions
Theorem: a CM protocol for equality where
– all parties are polynomial time, and
– t(n) · s(n) ∈ o(n) and |m_p| ∈ o(n),
exists iff one-way functions exist.
Proof: consider the function
f(x, r_A, r_p, r_B^1, r_B^2, …, r_B^k) = (r_p, m_p, r_B^1, r_B^2, …, r_B^k, m_B^1, m_B^2, …, m_B^k)
where m_p = M_p(x, r_A, r_p) and m_B^i = M_B(x, r_B^i, m_p, r_p).
Here M_p is the function that maps Alice's input to the public message m_p, and M_B is the function that maps Bob's input to the private message m_B.
Main lemma: the function f is distributionally one-way.
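A schematic Python rendering of f (M_p and M_B are placeholders for the protocol's message functions):

def f(M_p, M_B, x, r_A, r_p, r_Bs):
    # The function from the proof: output everything public in k simulated
    # Bob-interactions, hiding Alice's input x and her private coins r_A.
    m_p = M_p(x, r_A, r_p)
    m_Bs = [M_B(x, r_B, m_p, r_p) for r_B in r_Bs]
    return (r_p, m_p, tuple(r_Bs), tuple(m_Bs))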
57
CM Protocol Implies One-Way Functions
The adversary selects a random x for Alice; Alice sends the public information m_p, r_pub.
The adversary generates a multiset T_x of s(n) Bob-messages.
Claim: w.h.p., for every Alice message, T_x approximates Carol's output.
The adversary randomly inverts the function f and w.h.p. finds x′ ≠ x such that T_x characterizes Carol when Bob's input is either x or x′.
Why? T_x is of length much smaller than n, since s(n) · t(n) + |m_p| is not too large!
Since Carol's behavior is similar on x and on x′ ≠ x, the protocol cannot have high success probability.
58
Running Alice and Bob in Poly-Time
If we can randomly invert any efficiently computable function, then we can run Alice and Bob in poly-time.
This needs the tight ACD learning result.
Theorem: if one-way functions don't exist, then ACDs can be learned efficiently with few samples.
Interesting point: we don't make Carol efficient (nor do we need to).
59
Conclusion
Settled the complexity of online memory checking.
Characterized the computational assumptions required for good online memory checkers.
Open questions:
– Do we need logarithmic query complexity for online memory checking?
– Showing one-way functions are essential for other cryptographic tasks.
– The equivalence of estimating the distance between distributions and one-way functions.