Pseudorandom generators for group products. Michal Koucký (Institute of Mathematics, Prague), Prajakta Nimbhorkar (IMSc, Chennai), Pavel Pudlák (Institute of Mathematics, Prague)


2 Branching programs: p_ij = Pr[ reaching state j from state i ]. Branching programs model randomized space-bounded computation: space s corresponds to width w ≈ 2^{O(s)}; the program has width w and length t.
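The probabilities p_ij can be computed by brute force on a toy instance. The following sketch uses a hypothetical width-3, length-4 permutation branching program (the transition tables are made up for illustration, not taken from the slides):

```python
import itertools

# Hypothetical toy example: delta[t][b][i] = state reached from state i
# when reading bit b in layer t. In each layer, both the 0-edges and the
# 1-edges form a permutation of the states (a permutation b.p.).
delta = [
    ([1, 2, 0], [0, 1, 2]),
    ([2, 0, 1], [1, 2, 0]),
    ([0, 2, 1], [2, 1, 0]),
    ([1, 0, 2], [0, 2, 1]),
]

def run(delta, x, start):
    """Follow the edges labeled by input x, starting from state `start`."""
    s = start
    for t, b in enumerate(x):
        s = delta[t][b][s]
    return s

def p_exact(delta, i, j):
    """p_ij = Pr over uniform x in {0,1}^t of reaching j from i."""
    t = len(delta)
    hits = sum(run(delta, x, i) == j for x in itertools.product((0, 1), repeat=t))
    return hits / 2 ** t

print([p_exact(delta, 0, j) for j in range(3)])
```

A pseudorandom generator replaces the uniform x with the output of F on a short seed while keeping the p_ij estimates within ε.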

3 Goal: Estimate the probabilities p_ij (up to additive error ε) in small space. Possible solution: Find a small set F ⊆ {0,1}^t so that the p_ij's are well approximated by taking a random path according to a random sample from F. Want: a single set F working for all branching programs of length n, width n, and all i and j. → A random set F of size 2^{O(log n + log 1/ε)} will do.

4 Goal: Find an explicit set F ⊆ {0,1}^n, i.e., F : {0,1}^l → {0,1}^n computable in small space, where l = O(log n + log 1/ε). Our result: An explicit F : {0,1}^l → {0,1}^n, where l = O( (2^{O(w log w)} + log 1/ε) · log n ), that works for all permutation branching programs of width w and length n. (Permutation b.p.: in each layer the 0-edges form a permutation and the 1-edges form a permutation.)

5 Equivalent formulation for group products [MZ]: Fix a group G and elements g_1, g_2, …, g_n ∈ G; approximate the distribution R on G of the product g_1^{r_1} · g_2^{r_2} · · · g_n^{r_n}, where r_1, r_2, …, r_n ∈_R {0,1}. We have: F : {0,1}^l → {0,1}^n so that r_1, r_2, …, r_n given by the output of F approximate R well for any choice of g_1, g_2, …, g_n ∈ G, with l = O( (|G|^{O(1)} + log 1/ε) · log n ). For G = ({0,1}, +) this recovers ε-biased spaces.
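The distribution R can be tabulated exactly for a small group. A minimal sketch, assuming G = Z_3 (the cyclic group under addition mod 3; this particular group and the element sequence are illustrative choices, not from the slides):

```python
from itertools import product

def product_distribution(gs, m=3):
    """Exact distribution of g_1^{r_1} * ... * g_n^{r_n} over uniform
    r in {0,1}^n, where g^0 is the identity and the group operation
    is addition mod m."""
    dist = {g: 0.0 for g in range(m)}
    n = len(gs)
    for r in product((0, 1), repeat=n):
        acc = 0
        for g, b in zip(gs, r):
            acc = (acc + b * g) % m
        dist[acc] += 1 / 2 ** n
    return dist

R = product_distribution([1, 2, 1, 1])
print(R)  # {0: 0.3125, 1: 0.375, 2: 0.3125}
```

A PRG for group products must produce bit strings r whose induced distribution on G is ε-close to this table, for every choice of the g_i.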

6 Known results:
Width n and length n: [Nisan92] l = O(log^2 n); [INW94] l = O(log^2 n).
Width w and length n (permutation/regular): [BV10] l = O( (w^4 log log n + log 1/ε) log n ); [BRRY10] l = O( (log w + log log n + log 1/ε) log n ); ours: l = O( (2^{O(w log w)} + log 1/ε) log n ).
Other combinatorial structures: [LRTV10, MZ09, GMRZ11] l = O( log n + log^{O(1)} 1/ε ) for cyclic groups; …

7 Techniques: convolution *. For probability distributions R_1, R_2 on G, R_1 * R_2 is the probability distribution on G such that for any g ∈ G, (R_1 * R_2)(g) = ∑_{h ∈ G} R_1(h) · R_2(h^{-1} g). Examples: the distribution of g_1^{r_1} · · · g_n^{r_n} is the convolution of the distributions of g_1^{r_1} · · · g_{n/2}^{r_{n/2}} and g_{n/2+1}^{r_{n/2+1}} · · · g_n^{r_n}; iterating, it is the n-fold convolution of the distributions of the individual g_i^{r_i}.
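The convolution formula translates directly into code. A minimal sketch over G = Z_3 (an illustrative choice), where h^{-1} g = (g - h) mod 3:

```python
def convolve(R1, R2, m=3):
    """(R1 * R2)(g) = sum over h in G of R1(h) * R2(h^{-1} g),
    with the group Z_m, so h^{-1} g = (g - h) mod m."""
    return [sum(R1[h] * R2[(g - h) % m] for h in range(m)) for g in range(m)]

R1 = [0.5, 0.25, 0.25]
uniform = [1 / 3, 1 / 3, 1 / 3]
print(convolve(R1, uniform))   # convolving with uniform gives uniform
print(convolve(R1, R1))        # still a probability distribution
```

Convolution preserves total mass, and convolving any distribution with the uniform distribution yields the uniform distribution, a fact used on slide 10.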

8 Recursive convolution (~INW): D_1 and D_2 are the distributions of g_1 · · · g_{n/2} and g_{n/2+1} · · · g_n on inputs a_1 … a_{n/2} and a_{n/2+1} … a_n obtained using F_{n/2} : {0,1}^l → {0,1}^{n/2}.
1. F_n(s, s') = F_{n/2}(s) ◦ F_{n/2}(s') → D_1 * D_2; leads to F_n : {0,1}^{O(n)} → {0,1}^n.
2. F_n(s, d) = F_{n/2}(s) ◦ F_{n/2}(s(d)) → D_1 *_γ D_2; leads to F_n : {0,1}^{O(k log n)} → {0,1}^n, where s(d) is the d-th neighbor of s in a k-regular expander on 2^l vertices.
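The recursive structure of item 2 can be sketched as follows. This is only a skeleton of the recursion: `neighbor` is a toy 2-regular map standing in for a real k-regular expander on the 2^l seed vertices (it is NOT an expander), and the bit extraction at the leaves is an illustrative placeholder:

```python
def neighbor(s, d, l):
    """Toy stand-in for the d-th neighbor of seed s in a 2-regular
    graph on 2^l vertices. A real construction uses an expander."""
    return (2 * s + d) % (2 ** l)

def F(n, s, dirs, l):
    """F_n(s, d_1 .. d_{log n}): output the left half on seed s and the
    right half on the seed s(d), the d-th neighbor of s, reusing the
    remaining direction bits at the next level of the recursion."""
    if n == 1:
        return [s & 1]
    d, rest = dirs[0], dirs[1:]
    return F(n // 2, s, rest, l) + F(n // 2, neighbor(s, d, l), rest, l)

out = F(8, 5, [1, 0, 1], 4)
print(out, len(out))
```

The seed consists of one l-bit vertex name plus one direction per level, log n directions in total, which is how the seed length stays O(k log n) instead of O(n).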

9 The approximate convolution *_γ satisfies ‖ D_1 * D_2 − D_1 *_γ D_2 ‖ < γ.
Thm: If R_1, R_2, …, R_N are distributions obtained from group products, F is a formula built from R_1, R_2, …, R_N using *, and F' is obtained from F by replacing * with *_γ, then ‖ D_F − D_{F'} ‖ < γ · 2^{c|G|}.
(Figure: the same formula tree over R_1, …, R_4, once with * giving F and once with *_γ giving F'.)

10 Proof ideas: D_1, D_2, R_1, R_2 distributions on G. Write D_1 = R_1 + ε_1 and D_2 = R_2 + ε_2, where ∑_{h ∈ G} ε_1(h) = 0 and ∑_{h ∈ G} ε_2(h) = 0. Then
D_1 * D_2 = R_1 * R_2 + ε_1 * R_2 + R_1 * ε_2 + ε_1 * ε_2, and
D_1 *_γ D_2 = … + ε_γ, where ‖ε_γ‖ < γ.
1. If R_2 is uniform then ε_1 * R_2 = 0.
2. If R_2 is close to uniform then ε_1 * R_2 is close to 0.
3. If the support of R_2 is the whole group G then ‖ε_1 * R_2‖ < (1 − δ) ‖ε_1‖.
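Item 1 is a one-line computation: each entry of ε_1 * R_2 is (1/|G|) ∑_h ε_1(h) = 0. A numeric check over G = Z_3 (an illustrative choice, with a made-up signed error vector summing to 0):

```python
def convolve(A, B, m=3):
    """Convolution over Z_m; works for signed vectors, not only
    probability distributions: (A * B)(g) = sum_h A(h) * B((g - h) mod m)."""
    return [sum(A[h] * B[(g - h) % m] for h in range(m)) for g in range(m)]

eps1 = [0.1, -0.04, -0.06]       # error vector: entries sum to 0
uniform = [1 / 3, 1 / 3, 1 / 3]
print(convolve(eps1, uniform))   # every entry is (1/3) * sum(eps1) = 0
```

This is why the cross terms ε_1 * R_2 and R_1 * ε_2 vanish (or shrink, items 2 and 3) and the total error does not blow up across the recursion.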

11 Open problems:
Improve the dependence on the width of the branching program / the group size, and on the error ε.
Remove the restrictions on the branching programs.