Additive Combinatorics in Theoretical Computer Science Shachar Lovett (UCSD)


1 Additive Combinatorics in Theoretical Computer Science Shachar Lovett (UCSD)

2 What is Additive Combinatorics? Broad interpretation: study of subsets of algebraic structures, relations between various notions of structure (algebraic, combinatorial, statistical) Original motivation: number theory Recently, becoming an influential tool in theoretical computer science

3 Additive Combinatorics in Theoretical Computer Science Why is additive combinatorics useful in computer science? Algebra is very useful: algorithm design and analysis, lower bounds. Additive combinatorics also lets us analyze approximate algebraic objects.

4 Applications List of applications is constantly growing… Arithmetic complexity Communication complexity Cryptography Coding theory Randomness extractors Lower bounds

5 This talk For concreteness, I will focus on one mathematical theme: structure in inner products I will describe applications in 4 domains: Communication complexity Coding theory Cryptography Randomness extractors Applications introduce new mathematical problems

6 Communication complexity Structure of low rank Boolean matrices

7 Communication Complexity Two parties (Alice and Bob), each holding an input, wish to jointly compute a function of their inputs while minimizing communication. [Figure: Alice holds x, Bob holds y; together they compute f(x,y)]

8 Communication Complexity Let x,y ∈ [n], f(x,y) ∈ {-1,+1}. Identify the function f(x,y) with an n × n Boolean matrix.

9 Communication Complexity Let x,y ∈ [n], f(x,y) ∈ {-1,+1}. Identify the function f(x,y) with an n × n Boolean matrix. A (deterministic) protocol is a partition of the matrix into monochromatic rectangles: a c-bit protocol ⇒ a partition into 2^c monochromatic rectangles.

10 The log-rank lower bound If f(x,y) has a c-bit protocol, its matrix can be partitioned into 2^c monochromatic rectangles. Monochromatic rectangles have rank 1, hence #rectangles ≥ rank of f [Mehlhorn-Schmidt '82]
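The lower bound above can be checked on a small example. Below is an illustrative sketch (not from the slides): for the equality function on [n], the ±1 matrix is M = 2I − J, which has full rank n for n ≠ 2, so any deterministic protocol for equality needs at least log₂(n) bits of communication. The `rank` helper is a generic exact-rank routine, not part of the original material.

```python
# Sketch of the log-rank lower bound: a c-bit protocol gives 2^c
# monochromatic rectangles, and #rectangles >= rank(f), so
# communication >= log2(rank(f)).
from fractions import Fraction

def rank(mat):
    """Exact rank over the rationals via Gaussian elimination."""
    m = [[Fraction(x) for x in row] for row in mat]
    rows, cols = len(m), len(m[0])
    r = 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(rows):
            if i != r and m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

n = 8
# equality function as a +-1 matrix: +1 on the diagonal, -1 elsewhere
equality = [[1 if i == j else -1 for j in range(n)] for i in range(n)]
print(rank(equality))  # full rank n, so equality needs >= log2(n) bits
```

The matrix 2I − J has eigenvalues 2 (multiplicity n−1) and 2−n, none of which vanish here, which is why the rank comes out full.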

11 The log-rank conjecture Log-rank conjecture: any Boolean matrix of rank r (over the reals) can be partitioned into exp(log^c r) monochromatic rectangles, for some constant c [Lovász-Saks '88] It is sufficient to prove that all low rank matrices contain at least one large monochromatic rectangle [Nisan-Wigderson '95]

12 The log-rank conjecture Log-rank conjecture: Boolean matrices of rank r can be partitioned into exp(log^c r) monochromatic rectangles [Lovász-Saks '88] Initial conjecture: c=1. We now know c ≥ 1.63 [Kushilevitz '94] Equivalent conjectures have been formulated in other domains: relation between rank and chromatic number of graphs [Nuffelen '76, Fajtlowicz '87]; relation between rank and positive rank for Boolean matrices [Yannakakis '91]

13 Open problem 1: the log-rank conjecture

14 Progress came from different directions Trivial bound: #rectangles ≤ 2^r Graph theory: chromatic number ≤ 2^{0.4r} [Kotlov-Lovász '96, Kotlov '97] Additive combinatorics: assuming the polynomial Freiman-Ruzsa conjecture, #rectangles ≤ exp(r/log r) [BenSasson-L-Zewi '12] Discrepancy theory: #rectangles ≤ exp(r^{1/2} log r) [L '14] Next step: ??? [your name here]

15 References Lovász-Saks '88: Lattices, Möbius Functions and Communication Complexity Nisan-Wigderson '95: On Rank vs. Communication Complexity Kotlov '97: The rank and size of graphs BenSasson-Lovett-Zewi '12: An Additive Combinatorics Approach Relating Rank to Communication Complexity Lovett '14: Communication is bounded by root of rank

16 Coding theory Locally decodable codes

17 Traditional error correcting codes Classic error correcting codes: [Figure: Message → Encoder → Codeword → noise → Received word → Decoder → Message (hopefully)] But what if we want just part of the message? We still need to decode all of it.

18 Locally decodable code Allows decoding parts of the message efficiently. Ideally, to decode S bits of the message, read only O(S) bits of the codeword. [Figure: Message → Encoder → Codeword → noise → Received word → Decoder → Part of message]

19 Example: Hadamard code
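The slide's example can be sketched in code. The Hadamard code encodes a message m ∈ F₂^k as the list of all 2^k inner products ⟨m,x⟩ mod 2. To recover bit i of m despite noise, query two random positions r and r ⊕ e_i and XOR the answers; each pair is correct unless one of its two positions was corrupted, so a majority vote over a few pairs succeeds with high probability. The trial count and noise level below are illustrative choices, not from the slides.

```python
# Minimal sketch of local decoding for the Hadamard code.
import random

def hadamard_encode(m_bits):
    """Codeword position x holds <m, x> mod 2."""
    k = len(m_bits)
    return [sum(m_bits[j] & ((x >> j) & 1) for j in range(k)) % 2
            for x in range(2 ** k)]

def local_decode_bit(word, k, i, trials=51):
    """Estimate bit i of the message with 2 queries per trial."""
    votes = 0
    for _ in range(trials):
        r = random.randrange(2 ** k)
        # word[r] ^ word[r ^ e_i] = <m, e_i> = m_i if both are clean
        votes += word[r] ^ word[r ^ (1 << i)]
    return int(votes * 2 > trials)

random.seed(0)
m = [1, 0, 1, 1]
word = hadamard_encode(m)
# corrupt one of the 16 codeword positions
for pos in random.sample(range(len(word)), 1):
    word[pos] ^= 1
decoded = [local_decode_bit(word, 4, i) for i in range(4)]
print(decoded)
```

Note the tradeoff the next slide discusses: only 2 queries per bit, but codeword length 2^k is exponential in the message length.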

20 Efficient locally decodable codes Challenge: locality and efficiency. Polynomial codes (Reed-Muller) allow encoding n bits to m=O(n) bits, but decoding even a single bit requires reading n^ε bits. Recent multiplicity codes can do this and keep m = (1+α)n. Matching vector codes can achieve O(1) queries per bit, but their length is sub-exponential. Related to structured low rank matrices.

21 Matching vector codes Matching vector family: u 1,…,u n,v 1,…,v n  (Z m ) r such that (i) =0 (ii)  0 if i  j Equivalently, inner product matrix is a rank-r matrix over Z m with zero diagonal, and nonzero off diagonal ( ) 0 2 3 1 4 1 0 5 3 1 2 3 0 1 1 3 3 4 0 1 2 1 3 3 0

22 Matching vector codes Matching vector family: u_1,…,u_n, v_1,…,v_n ∈ (Z_m)^r such that (i) ⟨u_i,v_i⟩ = 0 (ii) ⟨u_i,v_j⟩ ≠ 0 if i ≠ j. Equivalently, the inner product matrix is a rank-r matrix over Z_m with zero diagonal and nonzero off-diagonal. These give codes encoding n symbols to N = m^r symbols, locally decodable with m queries [Yekhanin '08, Efremenko '09, Dvir-Gopalan-Yekhanin '11, …]. Goal: m small (constant), minimize rank r.

23 Matching vector codes Goal: a low rank n × n matrix over Z_m with zero diagonal and nonzero off-diagonal. How low can the rank be? m=2: rank ≥ n−1. m=3: rank ≥ n^{1/2}. m=p prime: rank ≥ n^{1/(p−1)}. BUT m=6: rank ≤ exp(log^{1/2} n)! [Grolmusz '00] m with t prime factors: rank ≤ exp(log^{1/t} n). This is the core of the sub-exponential codes.

24 Are these constructions tight? Fix m=6. What is the minimal rank of an n × n matrix over Z_6 with zero diagonal and nonzero off-diagonal? Grolmusz: rank ≤ exp(log^{1/2} n). Trivial: rank ≥ log n. Assuming the PFR conjecture, one can show rank ≥ log n · loglog n [Bhowmick-Dvir-L '12]. Challenge: bridge the gap!

25 Open question 2: matching vector families Construct lower rank matrices over Z_6 (or any fixed Z_m) with zero diagonal and nonzero off-diagonal, or prove that the Grolmusz construction is optimal. Example:
[ 0 2 3 1 4 ]
[ 1 0 5 3 1 ]
[ 2 3 0 1 1 ]
[ 3 3 4 0 1 ]
[ 2 1 3 3 0 ]

26 References Grolmusz ‘00: Superpolynomial size set-systems with restricted intersections mod 6 and explicit Ramsey graphs Yekhanin ’08: Towards 3-query locally decodable codes of subexponential length Efremenko ’09: 3-query locally decodable codes of subexponential length Dvir-Gopalan-Yekhanin ‘11: Matching vector codes Bhowmick-Dvir-Lovett ‘12: New lower bounds for matching vector codes

27 Cryptography Non-malleable codes

28 Error correcting codes ERRORS are random, caused by stochastic processes (nature). [Figure: Message → Encoder → Codeword → noise → Received word → Decoder → Message (hopefully)]

29 Cryptography ERRORS are caused by an adversary. [Figure: Message → Encrypt → Codeword → noise → Received word → Decrypt → Message (hopefully)]

30 Cryptography How to handle adversarial errors? Common solution: use computational hardness. E.g., assume the adversary is computationally bounded, and build encryptions which cannot be broken by such an adversary. However, these only work based on unproven assumptions. Can we build information-theoretic cryptography?

31 Information-theoretic cryptography If the adversary is not bounded, he can do anything: decode the message, change it to another evil message, re-encode. So, we need to put some information-theoretic limitations. [Figure: Message → Encrypt → Codeword → noise → Received word → Decrypt → Message (hopefully)]

32 Split state model Information is encoded by 2 (or more) codewords. Assumption: each can be arbitrarily corrupted, but without collaboration (no communication between the adversaries). [Figure: Message → Encrypt → Codeword 1 + Codeword 2 → Received word 1 + Received word 2 → Decrypt → Message (hopefully)]

33 Non-malleability What can we not prevent? Adversaries randomly corrupting the message (the decoded message is random; it should be rejected, so we need some CRC on messages). Adversaries can agree ahead of time on a target message m* and replace the codewords with correct codewords for m*. We will then decode m* correctly, BUT m* does not depend on m (the message that was sent). Ideally: adversaries should NOT be able to make us decode m' = func(m). [Figure: split-state encoding and decoding as above]

34 Potential construction Suggestion: m ∈ F, F a finite field. Encoding: choose two random vectors x, y ∈ F^n conditioned on ⟨x,y⟩ = m. Adversaries: replace x with f(x), y with g(y). Decoder outputs m' = ⟨f(x),g(y)⟩. Question: in what ways can ⟨f(x),g(y)⟩ depend on ⟨x,y⟩? [Figure: split-state encoding and decoding as above]

35 Potential construction m ∈ F encoded by x, y ∈ F^n such that ⟨x,y⟩ = m. Decode: m' = ⟨f(x),g(y)⟩. What can the adversaries do? Nothing: m' = m. Random: m' random (only a subset of messages are legal). Constant: f(x)=a, g(y)=b, m' = ⟨a,b⟩ independent of m. Linear: f(x)=2x, g(y)=y, m' = 2m (HMM!) [Figure: split-state encoding and decoding as above]
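The candidate encoding and the "Linear" attack on it can be sketched concretely. The field size and vector length below are illustrative choices; this is a toy model of the construction, not a secure implementation.

```python
# Split-state inner-product encoding sketch: m in F_p is encoded as a
# random pair x, y in F_p^n with <x,y> = m. Then we demonstrate the
# malleability attack f(x) = 2x, g(y) = y, which turns an encoding of
# m into an encoding of 2m without the adversaries knowing m.
import random

P, N = 101, 5  # illustrative prime field size and vector length

def inner(x, y):
    return sum(a * b for a, b in zip(x, y)) % P

def encode(m):
    while True:
        x = [random.randrange(P) for _ in range(N)]
        if x[0] != 0:  # need an invertible coordinate to fix <x,y> = m
            break
    y = [random.randrange(P) for _ in range(N)]
    # shift y[0] so that the inner product becomes exactly m
    y[0] = (y[0] + (m - inner(x, y)) * pow(x[0], -1, P)) % P
    return x, y

def decode(x, y):
    return inner(x, y)

random.seed(1)
m = 42
x, y = encode(m)
print(decode(x, y))          # recovers m
fx = [(2 * a) % P for a in x]  # adversary on side 1 doubles x
print(decode(fx, y))         # decodes to 2m mod P: the code is malleable
```

This is exactly why the theorem on the next slide matters: it shows such affine maps are essentially the only attacks, and those can be caught by an inner code.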

36 Construction is good: inner products of functions Theorem: arbitrary adversaries reduce to affine transformations (and these can be handled by inner codes) [Aggarwal-Dodis-L '14]. F prime field, n large enough (n ≥ poly log |F|), x, y ∈ F^n uniform, f, g: F^n → F^n any functions. Then the joint distribution (⟨x,y⟩, ⟨f(x),g(y)⟩) ≈ (U, aU+b), where U ∈ F is uniform and (a,b) ∈ F^2 has some distribution independent of U. Conjecture: true for n = O(1).

37 More challenges At the end, we can encode n bits to ~O(n^7) bits which two non-communicating adversaries cannot corrupt. Challenges: reduce the encoding length to O(n) bits; handle adversaries with limited communication.

38 References Dziembowski-Pietrzak-Wichs '10: Non-malleable codes Chabanne-Cohen-Flori-Patey '11: Non-malleable codes from the wiretap channel Liu-Lysyanskaya '12: Tamper and leakage resilience in the split-state model Dziembowski-Kazana-Obremski '13: Non-malleable codes from two-source extractors Aggarwal-Dodis-Lovett '14: Non-malleable Codes from Additive Combinatorics Cheraghchi-Guruswami '14: Non-malleable coding against bit-wise and split-state tampering

39 Randomness extraction Can you beat Bourgain?

40 Randomness extraction A "randomness extractor" (usually simply called an extractor) is a deterministic function which takes weak random sources and combines them into an almost perfect random source. Applications: derandomization, cryptography. [Figure: Weak random sources → Extractor → Perfect randomness]

41 Two-source extractor

42 Let us focus on extracting just one random bit (r=1). Then two-source extractors are a strengthening of bipartite Ramsey graphs. [Figure: bipartite graph with parts A, B]

43 The Hadamard extractor

44 Beating the Hadamard extractor The Hadamard extractor fails on orthogonal subspaces. Idea: find large subsets of {0,1}^n which have small intersection with subspaces (actually, such that any large subsets of them grow with addition) [Bourgain '05]
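The failure mode named above is easy to exhibit. A minimal sketch, with illustrative parameters: the Hadamard extractor Ext(x,y) = ⟨x,y⟩ mod 2 is fed two sources supported on orthogonal coordinate subspaces, each with n/2 bits of min-entropy, and its output is stuck at 0.

```python
# The Hadamard (inner product) two-source extractor, and a pair of
# high-entropy sources on orthogonal subspaces where it fails: x is
# uniform over the first n/2 coordinates, y over the last n/2, so
# <x,y> = 0 always even though each source has n/2 bits of entropy.
import random

n = 6

def ext(x, y):
    return sum(a & b for a, b in zip(x, y)) % 2

random.seed(2)
outputs = set()
for _ in range(50):
    x = [random.randrange(2) for _ in range(n // 2)] + [0] * (n // 2)
    y = [0] * (n // 2) + [random.randrange(2) for _ in range(n // 2)]
    outputs.add(ext(x, y))
print(outputs)  # {0}: the output bit is constant, hence not extracted
```

Bourgain's construction, described next, sidesteps exactly this obstruction by restricting the sources to sets with no large additive structure.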

45 Bourgain’s extractor

46 Open problem: beat Bourgain Give an explicit construction of a two-source extractor for lower min-entropy Additive combinatorics seems like a useful tool Constructions are known for Ramsey graphs; maybe these can be extended?

47 References Barak-Kindler-Shaltiel-Sudakov-Wigderson ‘05: Simulating independence: New constructions of condensers, Ramsey graphs, dispersers, and extractors. Bourgain ’05: More on the sum-product phenomenon in prime fields and its applications Barak-Impagliazzo-Wigderson ‘06: Extracting randomness using few independent sources Rao ‘07: An exposition of Bourgain's 2-source extractor BenSasson-Zewi ‘11: From affine to two-source extractors via approximate duality

48 Summary

49 Applications of additive combinatorics We discussed a specific phenomenon, structure in inner products, and saw 4 applications of it. Communication complexity: Boolean inner products. Coding theory: matching vector families. Cryptography: inner products of arbitrary functions. Randomness extraction: explicit sets which behave randomly under inner products.

50 Other applications Here are some applications we haven't discussed. Arithmetic complexity: understanding the minimal number of arithmetic operations (additions, multiplications) required to compute polynomials (for example: matrix multiplication, or the FFT). Tightly related to incidence geometry.

51 Other applications Sub-linear algorithms: algorithms which can analyze global properties of huge objects, based only on local statistics (and hence, they run in time independent of the input size) Graph algorithms are based on graph regularity Algebraic algorithms are based on higher-order Fourier analysis

52 Conclusions Additive combinatorics provides a useful toolbox to attack many problems in theoretical computer science Problems in computer science suggest many new mathematical problems Thank You!

