RANDOMNESS VS. MEMORY: Prospects and Barriers
Omer Reingold, Microsoft Research and Weizmann
With insights courtesy of Moni Naor, Ran Raz, Luca Trevisan, Salil Vadhan, Avi Wigderson, and many more …
Randomness In Computation (1)
- Distributed computing (breaking symmetry)
- Cryptography: secrets, semantic security, …
- Sampling, simulations, …
Randomness In Computation (2)
- Communication complexity (e.g., equality; a sketch follows below)
- Routing (on the cube [Valiant]): drastically reduces congestion
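The equality example can be made concrete. Below is a minimal Python sketch (not from the slides) of the standard fingerprinting protocol: each party reduces its n-bit input modulo a random small prime, so only O(log n) bits are exchanged per trial. The function names and parameter choices are illustrative.

```python
import random

def _random_prime(lo, hi):
    # Naive rejection sampling with trial division; fine for a demo.
    while True:
        p = random.randrange(lo | 1, hi, 2)
        if all(p % d for d in range(3, int(p ** 0.5) + 1, 2)):
            return p

def probably_equal(x: int, y: int, n_bits: int, trials: int = 20) -> bool:
    """Randomized equality: compare n-bit inputs by exchanging only
    O(log n) bits per trial (a residue mod a random small prime).
    One-sided error: unequal inputs collide rarely in each trial."""
    lo = max(n_bits ** 2, 4)
    for _ in range(trials):
        p = _random_prime(lo, 2 * lo)
        if x % p != y % p:
            return False   # a witness: definitely unequal
    return True            # equal, or unlucky in every trial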
Randomness In Computation (3)
- In algorithms: a useful design tool, but often one can derandomize (e.g., PRIMES is in P). Is that always the case?
- RL = L means that every randomized algorithm can be derandomized with only a constant-factor increase in memory.
Talk's Premise: Many Frontiers of RL=L
- RL = L, RL ⊆ L^{3/2}, and beyond
- Barriers of previous proofs ⇒ a wealth of excellent research problems
RL (⊆ NL) ⊆ L² [Savitch 70]
[Figure: the configuration graph of an RL algorithm P on input x: ≤ poly(|x|) configurations per layer, transitions on the current random bit, layers duplicated (running time T) ≤ poly(|x|) times; s = start configuration, t = accept configuration.]
- x ∈ P ⇒ a random walk from s ends at t w.p. ≥ ½
- x ∉ P ⇒ t is unreachable from s
- Enumerating all possible paths is too expensive.
- Main idea: the 1st half of the computation transmits only log n bits to the 2nd half (a sketch of the resulting recursion follows below).
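A toy Python rendering of the midpoint idea behind Savitch's theorem (not from the slides; `adj` maps each configuration to its set of successors). Only the guessed midpoint crosses between the two halves, and the recursion depth is logarithmic in the path length, giving O(log² n) space on configuration graphs:

```python
def reach(adj, s, t, steps):
    """Savitch-style midpoint recursion: is there an s -> t path of
    length <= steps?  Each frame stores one midpoint (~log n bits)
    and the depth is O(log steps): O(log^2) space overall."""
    if steps <= 1:
        return s == t or t in adj[s]
    half = steps // 2
    return any(reach(adj, s, m, half) and reach(adj, m, t, steps - half)
               for m in range(len(adj)))

# e.g. reach([{1}, {2}, set()], 0, 2, 4) -> True (path 0 -> 1 -> 2)
```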
Oblivious Derandomization of RL
- Pseudorandom generators that fool space-bounded algorithms [AKS 87, BNS 89, Nisan 90, NZ 93, INW 94]
- Nisan's generator has seed length log² n
- Proof that RL ⊆ L² via oblivious derandomization
- Major tool in the study of RL vs. L
- Applications beyond [Ind 00, Siv 02, KNO 05, …]
- Open problem: PRGs with reduced seed length
Randomness Extractors @ Your Service
- Basic idea [NZ93] (related to Nisan's generator): let a log-space machine A read a random 100 log n bit string x. Since A remembers at most log n bits, x still contains (roughly) 99 log n bits of entropy (independent of A's state).
- Can therefore recycle x: G(x, y) = (x, Ext(x, y)) (a sketch follows below)
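A minimal sketch of the recycling step (not from the slides). The extractor here is the textbook leftover-hash-lemma construction via a pairwise-independent hash; the 127-bit field size and the names `ext`/`recycle` are illustrative choices, not the paper's parameters:

```python
PRIME = (1 << 127) - 1   # a Mersenne prime; the field for the hash family

def ext(x: int, y: int, m: int) -> int:
    """Seeded extractor via the leftover hash lemma: the 254-bit seed
    y = (a, b) selects the pairwise-independent hash h(x) = a*x + b
    mod PRIME; keep the low m bits.  If x has enough entropy given
    the observer's view, the output is close to uniform."""
    a, b = y >> 127, y & ((1 << 127) - 1)
    return ((a * x + b) % PRIME) & ((1 << m) - 1)

def recycle(x: int, y: int, m: int):
    """The basic step G(x, y) = (x, Ext(x, y)): x is reused even after
    a log-space machine has read it, since the machine retains too
    few bits about x to notice."""
    return x, ext(x, y, m)
```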
Randomness Extractors @ Your Service
- NZ generator: G(x, y_1, y_2, …) = (x, Ext(x, y_1), Ext(x, y_2), …) (sketched below)
- Possible setting of parameters: x is O(log n) long; each y_i is O(log^{1/2} n) long, and there are log^{1/2} n of the y_i's.
- Expands O(log n) bits to O(log^{3/2} n) (can get any polynomial)
- Error ≫ 1/n ([AKS87] gets almost log² n bits w. error 1/n)
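The full NZ-style expansion is then a one-liner, reusing `ext` from the sketch above (again with illustrative parameters: a 127-bit x and 254-bit seeds rather than the paper's O(log n) and O(log^{1/2} n)):

```python
def nz_expand(x: int, seeds, m: int):
    """NZ-style expansion: output x followed by Ext(x, y_i) for each
    short seed y_i.  A space-bounded test reading the blocks in order
    never remembers enough about x to tell the next block from uniform."""
    return [x] + [ext(x, y, m) for y in seeds]
```

For instance, with one 127-bit x and three 254-bit seeds, `nz_expand` yields four pseudorandom blocks from a single pool of entropy.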
Randomness Extractors @ Your Service
- NZ generator: G(x, y_1, y_2, …) = (x, Ext(x, y_1), Ext(x, y_2), …)
- Error ≫ 1/n ([AKS87] gets almost log² n bits w. error 1/n)
- Open: get any polynomial expansion w. error 1/n
- Open: superpolynomial expansion with logarithmic seed and constant error (partial result [RR99]).
Nisan/INW Generators via Extractors
- Recall the basic generator: G(x, y) = (x, Ext(x, y))
- Let's flip it …
Nisan/INW Generators via Extractors
- Flipped generator: (x, y) ↦ x ∘ Ext(x, y), applied recursively.
- Given the state of the machine in the middle, Ext(x, y) is still ε-random.
- Loss at each level: log n (possible entropy in the state) + log 1/ε' for the extractor seed, where ε' = ε/n.
- Altogether: seed length = log² n (see the sketch below).
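A toy rendering of the INW recursion (not from the slides), reusing `ext` from the earlier sketch; the block width `m` and the 127-bit recursion width are illustrative:

```python
def inw(x: int, ys, m: int = 16):
    """INW-style recursion: with k seeds the output has 2^k blocks of
    m bits.  The left half is generated from x itself; the right half
    from Ext(x, y_k), which still looks random given the ~log n bits
    of machine state that survive the midpoint.  Total seed: x plus k
    extractor seeds, i.e. O(log^2 n) bits in the intended regime."""
    if not ys:
        return [x & ((1 << m) - 1)]
    left = inw(x, ys[:-1], m)
    right = inw(ext(x, ys[-1], 127), ys[:-1], m)
    return left + right
```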
Nisan/INW + NZ ⇒ RL = L?
- Let M be an RL machine.
- Using [Nisan], get M' that uses only log² n random bits.
- Fully derandomize M' using [NZ]. Or does it work?
- M' is not an RL machine (access to the seed of [Nisan, INW] is not read-once).
- Still, a natural approach: derandomize the seed of [Nisan].
- Can we build PRGs from read-once ingredients? Not too promising …
RL ⊆ L^{3/2} [SZ95]: a "derandomized" [Nisan]
- Nisan's generator has the following properties:
- The seed is divided into h (length log² n) and x (length log n).
- Given h on the input tape, the generator runs in L.
- For every M, w.h.p. over h, fixing h and ranging over x yields a good generator for M.
- h is shorter if we generate fewer than n bits.
[SZ95]: basic idea
- Fix h and divide the run of M into segments:
- Enumerate over x and estimate all transition probabilities.
- Replace each segment with a single transition.
- Recurse using the same h. Now M' depends on h.
- M' is close to some t-th power of M; [SZ] perturb M' to eliminate the dependency on h (see the sketch below).
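A loose numerical sketch of the perturb-and-round step (not from the slides; in the real algorithm each squaring is only estimated by enumerating seeds of Nisan's generator with a fixed h, whereas here we square exactly, so this only illustrates why rounding after a random shift decouples the result from small estimation errors):

```python
import numpy as np

def sz_power(M, levels, bits=12, rng=None):
    """Saks-Zhou flavoured powering: approximate M^(2^levels).
    After each (estimated) squaring, shift down by a random offset and
    round to `bits` bits.  W.h.p. over the offset, small estimation
    errors never change the rounded value, so the outcome is
    independent of the particular hash h used for the estimates."""
    rng = rng or np.random.default_rng(0)
    scale = 2.0 ** bits
    for _ in range(levels):
        M = M @ M                                  # stand-in for the estimate
        M = M - rng.uniform(0.0, 2.0 ** -bits)     # one random offset per level
        M = np.floor(np.clip(M, 0.0, 1.0) * scale) / scale
    return M
```

Rows become slightly substochastic after the downward rounding; that slack is exactly the per-level error the analysis must control.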
[SZ95]: further progress
- Open: translate [SZ] into a better generator against space-bounded algorithms!
- Potentially, one could then recursively apply [SZ] and get a better derandomization of RL (after a constant number of iterations, perhaps RL ⊆ L^{1+ε}).
- Armoni showed an interesting extrapolation between [NZ] and [INW], and as a result got a slight improvement (RL ⊆ L^{3/2}/(log L)^{1/2}).
Thoughts on Improving INW
- Recall: (x, y) ↦ x ∘ Ext(x, y); loss at each level: log n (possible entropy in the state) + log 1/ε' for the extractor seed, where ε' = ε/n.
- Avoiding the loss due to entropy in the state: [RR99] recycle the entropy of the states. Challenge: how to do so when the state probabilities are unknown?
- Open: better PRGs against constant-width branching programs.
- Even for combinatorial rectangles we do not know "optimal" PRGs.
Thoughts on Improving INW
- Recall: (x, y) ↦ x ∘ Ext(x, y); loss at each level: log n (possible entropy in the state) + log 1/ε' for the extractor seed, where ε' = ε/n.
- Avoiding the loss due to extractor seeds: can we recycle y from the previous computation? Challenge: containing the dependencies …
- Do we need a seed at all? Use seedless extractors instead: x ↦ x ∘ Ext(x, y(x))?
Thoughts on Improving INW
- Recall: (x, y) ↦ x ∘ Ext(x, y); loss at each level: log n (possible entropy in the state) + log 1/ε' for the extractor seed, where ε' = ε/n.
- The extractor seed is long because we need to work with small error ε' = ε/n. Error reduction for PRGs?
- If we use error ε' = ε/(log n), the sequence still has some unpredictability property; is that usable? (Yes for SL [R04, RozVad05]!)
Final Comment on Improving INW
- Perhaps instead of reducing the loss per level we should reduce the number of levels?
- This means that at each level the number of pseudorandom strings should increase more rapidly (e.g., quadratically), as in the sketch below.
- A specific approach is based on ideas from cryptography (constructions of PRFs from pseudorandom synthesizers [NR]); it is more complicated to apply here.
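A schematic of the quadratic-growth idea (not from the slides): one synthesizer level maps k strings to roughly (k/2)² by combining all cross pairs, so only O(log log) levels are needed instead of the O(log) levels of the INW tree. The combiner `synth` is a placeholder for an actual pseudorandom synthesizer:

```python
def synthesizer_expand(strings, synth, levels):
    """NR-synthesizer-style expansion: each level splits the current
    pool in two and applies S to every cross pair, so k strings become
    (k/2)^2: quadratic growth per level."""
    for _ in range(levels):
        half = len(strings) // 2
        xs, ys = strings[:half], strings[half:]
        strings = [synth(x, y) for x in xs for y in ys]
    return strings

# purely illustratively, with the earlier extractor standing in for S:
# synthesizer_expand(pool, lambda x, y: ext(x, y, 127), levels=2)
```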
It's All About Graph Connectivity
- Directed connectivity captures NL.
- Undirected connectivity is in L [R04].
- Oblivious derandomization: pseudo-converging walks for consistently labelled regular digraphs [R04, RTV05].
- Where is RL on this scale? Connectivity for digraphs w/ polynomial mixing time [RTV05].
- Outgoing edges have labels; a consistent labelling means that each label forms a permutation on the vertices (checked in the sketch below).
- A walk on a consistently labelled graph cannot lose entropy.
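A small Python check of the consistency condition (not from the slides; `rotation[v][a]` is assumed to give the vertex reached from v on edge label a):

```python
def is_consistently_labelled(rotation):
    """The labelling is consistent iff every fixed label a acts as a
    permutation of the vertices, i.e. v -> rotation[v][a] is injective.
    Then each walk step applies a doubly stochastic matrix to the
    current distribution, so entropy never decreases."""
    n = len(rotation)
    return all(len({rotation[v][a] for v in range(n)}) == n
               for a in range(len(rotation[0])))

# e.g. is_consistently_labelled([[1, 0], [0, 1]]) -> True
```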
Towards RL vs. L? A ladder of problems:
- Connectivity for undirected graphs [R04]
- Connectivity for regular digraphs [RTV05]
- Pseudo-converging walks for consistently-labelled regular digraphs [R04, RTV05]
- Pseudo-converging walks for regular digraphs [RTV05] (would suffice to prove RL = L)
- Connectivity for digraphs w/ polynomial mixing time [RTV05] (equivalent to RL ⊆ L)
- It is not about reversibility but about regularity; in fact, it is about having estimates on the stationary probabilities [CRV07].
Some More Open Problems
- Pseudo-converging walks on an (inconsistently labelled) clique. (Similarly, universal traversal sequences.)
- Undirected Dirichlet problem: Input: an undirected graph G, a vertex s, a set B of vertices, and a function f: B → [0, 1]. Output: an estimate of the expected value of f(b), where b is the entry point of a random walk into B. (A Monte Carlo version is sketched below.)
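The randomized baseline that the open problem asks to derandomize in small space is straightforward; here is a sketch (not from the slides; `adj` is an adjacency list and `f_on_boundary` maps each vertex of B to its f-value):

```python
import random

def dirichlet_estimate(adj, s, f_on_boundary, walks=10_000, cap=100_000):
    """Monte Carlo for the undirected Dirichlet problem: run random
    walks from s until they first hit B = f_on_boundary.keys(), and
    average f over the entry points."""
    total, hits = 0.0, 0
    for _ in range(walks):
        v = s
        for _ in range(cap):          # cap guards against rare long walks
            if v in f_on_boundary:
                total += f_on_boundary[v]
                hits += 1
                break
            v = random.choice(adj[v])
    return total / hits if hits else None
```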
Conclusions
- A richness of research directions and open problems towards RL = L and beyond:
- PRGs against space-bounded computations
- Directed connectivity
- Even if you think that NL = L is plain crazy, there are many interesting questions and some beautiful research …