The story of superconcentrators: The missing link


The story of superconcentrators: The missing link
Michal Koucký, Institute of Mathematics, Prague

Computational complexity
How many computational resources (time, space, etc.) do we need to compute various functions?
Upper bounds (algorithms). Lower bounds.

Lower bound techniques
We have very little understanding of actual computation.
- Diagonalization (Gödel, Turing, …)
- Information theory (Shannon, Kolmogorov, …)
- Other special techniques: random restrictions, approximation by polynomials (Ajtai, Sipser, Razborov, …)

Integer Addition
[Figure: a circuit takes two n-bit inputs a and b and outputs the (n+1)-bit sum c = a + b.]
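The function itself is easy to compute; a minimal Python sketch of bit-by-bit (ripple-carry) addition, not from the slides, shows the n-bit inputs and the (n+1)-bit output:

```python
def add_bits(a, b):
    """Ripple-carry addition of two n-bit numbers given as lists of bits
    (least significant bit first); returns the n+1 output bits."""
    assert len(a) == len(b)
    carry = 0
    out = []
    for x, y in zip(a, b):
        out.append(x ^ y ^ carry)                 # sum bit
        carry = (x & y) | (carry & (x ^ y))       # carry-out
    out.append(carry)                             # (n+1)-st bit
    return out

# e.g. 3 + 1 = 4, LSB first: add_bits([1,1,0], [1,0,0]) == [0,0,1,0]
```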

Circuits
[Figure: a depth-d circuit with inputs x1, …, xm and outputs y1, …, yn.]
Gates are of arbitrary fan-in and may compute arbitrary Boolean functions.
Size of a circuit = number of wires.

Circuits vs. Turing machines
Polynomial-size circuits ~ polynomial-time computation.
Open: show that exponential-time computation cannot be simulated by polynomial-size circuits.

Integer Addition
[Animation: successive slides step through an example, filling in the bits of a, b, and c = a + b.]

Connectivity property
[Figure: input bits of a with a marked set X; output bits of c with a marked set Y.]
For any two interleaving sets X and Y, where X is a set of inputs of a and Y a set of outputs of c, there are |X| = |Y| vertex-disjoint paths between X and Y in any circuit computing integer addition.

Superconcentrators [Valiant '75]
[Figure: a graph with n inputs and n outputs; f(X,Y) = the maximum number of vertex-disjoint paths from X to Y.]
For any k, any X, and any Y with |X| = |Y| = k: f(X,Y) = k.
Can be built using O(n) wires. Oooopss!
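The quantity f(X,Y), the maximum number of vertex-disjoint X-to-Y paths, can be computed by unit-capacity max-flow after splitting every vertex in two (Menger's theorem). A small illustrative sketch, not part of the talk (the graph representation and names are assumptions):

```python
from collections import defaultdict, deque

def vertex_disjoint_paths(edges, X, Y):
    """Count vertex-disjoint X -> Y paths via unit-capacity max-flow:
    split every vertex v into (v,'in') -> (v,'out') with capacity 1."""
    cap = defaultdict(int)
    adj = defaultdict(set)

    def add_edge(u, v, c):
        cap[(u, v)] += c
        adj[u].add(v)
        adj[v].add(u)              # keep residual direction reachable

    verts = set(X) | set(Y)
    for u, v in edges:
        verts |= {u, v}
    for v in verts:
        add_edge((v, 'in'), (v, 'out'), 1)   # vertex capacity 1
    for u, v in edges:
        add_edge((u, 'out'), (v, 'in'), 1)
    S, T = 'S', 'T'
    for x in X:
        add_edge(S, (x, 'in'), 1)
    for y in Y:
        add_edge((y, 'out'), T, 1)

    flow = 0
    while True:
        parent = {S: None}                   # BFS for an augmenting path
        queue = deque([S])
        while queue and T not in parent:
            u = queue.popleft()
            for v in adj[u]:
                if v not in parent and cap[(u, v)] > 0:
                    parent[v] = u
                    queue.append(v)
        if T not in parent:
            return flow
        v = T
        while parent[v] is not None:         # augment along the path
            u = parent[v]
            cap[(u, v)] -= 1
            cap[(v, u)] += 1
            v = u
        flow += 1
```

For example, if every X-to-Y path must pass through a single bottleneck vertex, the function returns 1 no matter how large X and Y are.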

Relaxed superconcentrators [Dolev et al. '83]
[Figure: a depth-d graph with inputs and outputs; f(X,Y) = the maximum number of vertex-disjoint paths from X to Y.]
For any k, random X, and random Y with |X| = |Y| = k: E_{X,Y}[f(X,Y)] ≥ δk.
Fixed depth requires a superlinear number of wires!

Bounds on relaxed superconcentrators [Dolev, Dwork, Pippenger, and Wigderson '83; Pudlák '92]

depth d           | circuit size Ω(…)
d=2               | n log n
d=3               | n log log n
d=2k or d=2k+1    | n λk(n)

where λ1(n) = log n and λk+1(n) = λk*(n) (g* counts how many times g must be iterated to reach a constant).
Applications: [Chandra, Fortune, and Lipton '83]

Depth-1 circuits for Prefix-XOR
Prefix-XOR: yk = x1 ⊕ x2 ⊕ … ⊕ xk-1 ⊕ xk
[Figure: each output yk is a single ⊕-gate reading inputs x1, …, xk.]
→ total size Θ(n²)

Depth-2 circuits for Prefix-XOR
[Figure: outputs y1, …, yn; a middle layer of parity blocks; inputs x1, …, xn.]
For each i = 1, …, log n, the middle layer computes the n/2^i parities of the input blocks of size 2^i; each output yk then XORs the O(log n) block parities covering the prefix x1, …, xk.
→ the total size is O(n log n)
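The construction can be mirrored in a short Python sketch (an illustration, not from the slides): precompute the block parities of the middle layer, then assemble each prefix from O(log n) aligned blocks.

```python
def prefix_xor(x):
    """Prefix parities y_k = x_1 ^ ... ^ x_k, mirroring the depth-2
    construction: block parities first, then O(log n) blocks per output."""
    n = len(x)
    # middle layer: block[i][j] = parity of the j-th aligned block of size 2**i
    block = {0: list(x)}
    i = 0
    while (1 << (i + 1)) <= n:
        prev = block[i]
        block[i + 1] = [prev[2 * j] ^ prev[2 * j + 1]
                        for j in range(len(prev) // 2)]
        i += 1
    # output layer: cover the prefix [0, k) by aligned blocks, largest first
    out = []
    for k in range(1, n + 1):
        y, pos = 0, 0
        for i in sorted(block, reverse=True):
            size = 1 << i
            if pos + size <= k and pos % size == 0:
                y ^= block[i][pos // size]
                pos += size
        out.append(y)
    return out
```

Each middle-layer level reads n wires and each output reads O(log n) block parities, which is where the O(n log n) size bound comes from.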

Variants of superconcentrators
For any k and sets X, Y with |X| = |Y| = k:

any X and any Y:        f(X,Y) = k (≥ δk)         superconcentrators
any X and random Y:     E_Y[f(X,Y)] ≥ δk          middle ground
random X and random Y:  E_{X,Y}[f(X,Y)] ≥ δk      relaxed superconcentrators

Comparison of depth-d superconcentrators

d=2, size Θ(…):
superconcentrators:          n (log n)² / log log n
middle ground:               n (log n / log log n)²
relaxed superconcentrators:  n log n

d=2k or d=2k+1: all variants n λk(n), where λ1(n) = log n and λk+1(n) = λk*(n)

Good error-correcting codes
Constants 0 < ρ, δ < 1, m < n: enc : {0,1}^m → {0,1}^n such that
- for any x ≠ x' ∈ {0,1}^m: dist_Ham(enc(x), enc(x')) ≥ δn;
- m ≥ ρn.
Applications: zillions.
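The definition can be checked concretely on a small linear code; here a sketch with the [7,4] Hamming code (G is one standard generator matrix; this illustrates the distance condition, not an asymptotically "good" family):

```python
from itertools import product

# Generator matrix of the [7,4] Hamming code, systematic form.
G = [
    [1, 0, 0, 0, 0, 1, 1],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 1, 1, 0],
    [0, 0, 0, 1, 1, 1, 1],
]

def enc(x):
    """Encode a 4-bit message over GF(2): enc(x) = x * G mod 2."""
    return [sum(x[i] * G[i][j] for i in range(4)) % 2 for j in range(7)]

def min_distance():
    """Brute-force minimum distance; for a linear code this equals the
    minimum weight of a nonzero codeword."""
    return min(sum(enc(list(x))) for x in product([0, 1], repeat=4) if any(x))

# rate m/n = 4/7, minimum distance 3, relative distance 3/7
```

A "good" code in the slide's sense needs constant ρ and δ as n grows (e.g. Justesen or expander codes); the Hamming family keeps distance 3 only, so it is a toy check of the definition.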

Connectivity of circuits computing codes
[Figure: a circuit with input set X and output set Y; f(X,Y) = the maximum number of vertex-disjoint paths from X to Y.]
For any k, any X, and randomly chosen Y with |X| = |Y| = k: E_Y[f(X,Y)] ≥ δk. [Gál, Hansen, K., Pudlák, Viola '12]

Comparison of depth-d superconcentrators

d=2, size Θ(…):
superconcentrators:          n (log n)² / log log n
middle ground:               n (log n / log log n)²
relaxed superconcentrators:  n log n

d=2k or d=2k+1: all variants n λk(n), where λ1(n) = log n and λk+1(n) = λk*(n)

Single-output functions
[Figure: a circuit with inputs X and a single output y.]
E.g., membership in the regular language (c*ac*b)*c* [K., Pudlák, and Thérien '05]
→ circuits must contain relaxed superconcentrators

Recent improvements
Explicit functions (matrix multiplication) [Cherukhin '08, Jukna '10, Drucker '12]

depth d            | circuit size Ω(…)
d=2                | n^{3/2}
d=3                | n log n
d=4                | n log log n
d=2k+1 or d=2k+2   | n λk(n)

where λ1(n) = log n and λk+1(n) = λk*(n)

Conclusions
Information theory is the strongest lower-bound tool we currently have (unfortunately).