Hartmut Klauck Centre for Quantum Technologies Nanyang Technological University Singapore.

Two players, Alice and Bob, want to cooperatively compute a function f(x,y)
Alice knows x, Bob knows y
How much communication is needed?

Equality: EQ(x,y)=1 iff x=y
Disjointness: DISJ(x,y)=1 iff x and y are disjoint sets
Inner Product mod 2: IP(x,y)=1 iff ∑_i x_i ∧ y_i is odd
Representation as matrices: the communication matrix M has entries M(x,y)=f(x,y)
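These three functions and their communication matrices are easy to generate for small n. A minimal sketch (function and variable names are my own; sets are encoded as bit masks):

```python
def popcount(v):
    """Number of 1 bits of v."""
    return bin(v).count("1")

def EQ(x, y):
    return int(x == y)

def DISJ(x, y):
    # x and y encode subsets of {1,...,n} as bit masks; disjoint iff no common bit
    return int(x & y == 0)

def IP(x, y):
    # inner product of the bit vectors of x and y, mod 2
    return popcount(x & y) % 2

def comm_matrix(f, n):
    """2^n x 2^n communication matrix with entry M[x][y] = f(x, y)."""
    N = 1 << n
    return [[f(x, y) for y in range(N)] for x in range(N)]
```

For example, comm_matrix(EQ, n) is the 2^n × 2^n identity matrix.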

The set of inputs that share the same message sequence forms a combinatorial rectangle in the communication matrix
Proof sketch: Alice's first message depends only on her input and partitions the rows; then Bob's message partitions the columns, and so on
The messages thus partition the communication matrix into combinatorial rectangles
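A small illustration of this fact, using a hypothetical one-round protocol for EQ (Alice announces x, Bob replies with the answer): grouping inputs by transcript always yields combinatorial rectangles.

```python
from collections import defaultdict

n = 3
N = 1 << n

def EQ(x, y):
    return int(x == y)

# Hypothetical one-round protocol for EQ: Alice announces x, Bob replies EQ(x, y).
def transcript(x, y):
    return (x, EQ(x, y))

# Group all input pairs by the message sequence they produce.
groups = defaultdict(set)
for x in range(N):
    for y in range(N):
        groups[transcript(x, y)].add((x, y))

# Every transcript class is a combinatorial rectangle A x B.
for inputs in groups.values():
    A = {x for x, _ in inputs}
    B = {y for _, y in inputs}
    assert inputs == {(x, y) for x in A for y in B}
```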

For any integer matrix M, the partition number part(M) is the minimum number of monochromatic rectangles needed to partition M
Clearly: D(M) ≥ log part(M)
The covering number is the minimum number of monochromatic rectangles needed to cover the entries of M
This corresponds to nondeterministic protocols

D(EQ)=n+1
Consider the inputs (x,x)
No two inputs (x,x) and (y,y) with x≠y can lie in the same combinatorial rectangle: otherwise (x,y) would also lie in that rectangle!
Hence we need at least 2^n 1-rectangles to cover the 1-inputs of EQ
On the other hand, 2n rectangles are enough to cover the 0-inputs of EQ (for each position i, one rectangle with x_i=0, y_i=1 and one with x_i=1, y_i=0)
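The fooling-set argument above can be checked exhaustively for small n. This sketch (names are illustrative) enumerates all 1-monochromatic rectangles of EQ for n = 2 and confirms each one is a single diagonal cell, so 2^n of them are needed:

```python
from itertools import combinations

n = 2
N = 1 << n
M = [[int(x == y) for y in range(N)] for x in range(N)]  # communication matrix of EQ

def nonempty_subsets():
    return [set(c) for r in range(1, N + 1) for c in combinations(range(N), r)]

def all_ones(A, B):
    return all(M[x][y] == 1 for x in A for y in B)

# Enumerate every 1-monochromatic rectangle A x B of EQ.
ones_rects = [(A, B) for A in nonempty_subsets()
                     for B in nonempty_subsets() if all_ones(A, B)]

# Each one is a single diagonal cell {x} x {x}: a rectangle containing
# (x,x) and (y,y) with x != y would also contain the 0-input (x,y).
for A, B in ones_rects:
    assert len(A) == 1 and A == B
```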

It is known that log part(f) cannot be too far from D(f) for Boolean f
But part(f) is not always easy to determine
We will consider different relaxations of part(f) that are easier to calculate

We can relax the partition requirement for Boolean functions
If M can be partitioned into k 1-rectangles, then M can be written as the sum of k rank-1 matrices, i.e., rank(M) ≤ k
Examples: rank(EQ)=2^n, rank(DISJ)=2^n, rank(IP)=2^n−1
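These ranks can be verified exactly for small n by Gaussian elimination over the rationals (a quick sketch; note that DISJ's matrix is the n-fold tensor power of the invertible matrix [[1,1],[1,0]], so its rank is the full 2^n):

```python
from fractions import Fraction

def popcount(v):
    return bin(v).count("1")

def matrix(f, n):
    N = 1 << n
    return [[f(x, y) for y in range(N)] for x in range(N)]

def rank(M):
    """Rank of an integer matrix over the rationals, by exact Gaussian elimination."""
    A = [[Fraction(v) for v in row] for row in M]
    rows = len(A)
    cols = len(A[0]) if rows else 0
    r = 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if A[i][c] != 0), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        inv = A[r][c]
        A[r] = [v / inv for v in A[r]]
        for i in range(rows):
            if i != r and A[i][c] != 0:
                f = A[i][c]
                A[i] = [a - f * b for a, b in zip(A[i], A[r])]
        r += 1
    return r

n = 3
eq   = matrix(lambda x, y: int(x == y), n)
disj = matrix(lambda x, y: int(x & y == 0), n)
ip   = matrix(lambda x, y: popcount(x & y) % 2, n)
```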

Let M be a Boolean matrix
We know that D(M) ≥ log rank(M)
Conjecture: D(M) ≤ poly(log rank(M))
Best known upper bound is rank(M)
There is a polynomial gap: a function f with D(f)=Ω(n) but log rank(f)=n^0.61
Conjecture: quadratic gap

Observation: 1-rectangles are rank-1 matrices with only nonnegative entries
prank(M) is the minimum k such that M can be written as the sum of k rank-1 matrices with nonnegative entries
Clearly D(M) ≥ log prank(M) for all Boolean M
Now we get a bound that is polynomially tight: D(M) ≤ O(log prank(M) · log prank(J−M)) for all Boolean M, where J is the all-ones matrix

log prank(M) ≤ poly(log rank(M)) for all Boolean M
Every Boolean m×n matrix M of rank r has a monochromatic submatrix of size mn/2^polylog(r)
Every Boolean m×n matrix M of rank r has a submatrix of size mn/2^polylog(r) that has rank < .99r
The rank conjecture has also been related to some open problems in additive combinatorics

Computing prank is NP-hard
It always gives polynomially tight bounds, but these are hard to establish

In general it is hard to estimate the partition number (the number of rectangles in a partition of the communication matrix)
Idea: write it as an integer program, relax to a linear program, estimate the LP value via its dual
The dual is a maximization problem!

Consider the set R of all 1-monochromatic rectangles in M
Every R in this set gets a weight w_R ∈ {0,1}
Minimize ∑ w_R subject to: for all x,y with f(x,y)=1: ∑_{R: (x,y)∈R} w_R = 1
(implicitly, for all x,y with f(x,y)=0: ∑_{R: (x,y)∈R} w_R = 0)
The optimum is the partition number

R: set of all 1-monochromatic rectangles in M
Every R in this set gets a nonnegative real weight w_R
Minimize ∑ w_R subject to: for all x,y with f(x,y)=1: ∑_{R: (x,y)∈R} w_R = 1
(implicitly, for all x,y with f(x,y)=0: ∑_{R: (x,y)∈R} w_R = 0)
The optimum of this LP lower-bounds the partition number

A variant of the LP lower-bounds the (one-sided) nondeterministic CC: minimize ∑ w_R subject to, for all x,y with f(x,y)=1: ∑_{R: (x,y)∈R} w_R ≥ 1
Denote the optimal value B(M)
Use the maximum of B(M), B(J−M) to show lower bounds on deterministic CC
Then: D(M) ≤ O(log B(M) · log B(J−M)) + log² n
Bounds for D(M) obtained this way are never more than quadratically smaller than D(M)

In the dual there is one real variable φ_{x,y} for every input (x,y)
Maximize ∑ φ_{x,y} subject to, for all 1-monochromatic R: ∑_{(x,y)∈R} φ_{x,y} ≤ 1
In other words, put weights on the inputs to "balance" the weight inside each 1-monochromatic rectangle
For the dual of the nondeterministic LP bound the variables must be nonnegative

Maximize ∑ φ_{x,y} subject to, for all 1-monochromatic R: ∑_{(x,y)∈R} φ_{x,y} ≤ 1
Suppose that all φ_{x,y} ≥ 0
After rescaling, the φ_{x,y} can be regarded as a probability distribution
The scaling factor is the inverse of the maximum probability mass of a 1-monochromatic rectangle under that distribution

Equality: weights φ_{x,x} = 1
The only 1-monochromatic rectangles contain exactly one input (x,x)
Thus B(EQ) ≥ 2^n
Inner Product mod 2: consider f(x,y)=1−IP(x,y)
1-monochromatic rectangles A×B satisfy A ⊥ B
Hence dim(A)+dim(B) ≤ n ⇒ |A|·|B| ≤ 2^n
All 1-inputs get weight 1/2^n
There are ≥ 2^{2n−1} 1-inputs
Hence B(f) ≥ 2^n/2

Disjointness: 1-inputs satisfy ∑_i x_i ∧ y_i = 0
There are 3^n 1-inputs
1-monochromatic rectangles A×B consist of pairs of disjoint sets
Hence still |A|·|B| ≤ 2^n
Give each 1-input weight 1/2^n
We get the bound B(DISJ) ≥ 3^n/2^n
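The rectangle-size bounds used in the last two examples (|A|·|B| ≤ 2^n for the 1-monochromatic rectangles of 1−IP and of DISJ) and the count of 3^n 1-inputs of DISJ can be brute-forced for small n. A sketch for n = 2, which is exactly the dual feasibility needed for the weights 1/2^n:

```python
from itertools import combinations

def popcount(v):
    return bin(v).count("1")

n = 2
N = 1 << n

def matrix(f):
    return [[f(x, y) for y in range(N)] for x in range(N)]

not_ip = matrix(lambda x, y: 1 - popcount(x & y) % 2)  # complement of IP
disj   = matrix(lambda x, y: int(x & y == 0))          # disjointness

def nonempty_subsets():
    for r in range(1, N + 1):
        yield from combinations(range(N), r)

def max_one_rectangle(M):
    """Largest |A|*|B| over all 1-monochromatic rectangles A x B of M."""
    best = 0
    for A in nonempty_subsets():
        for B in nonempty_subsets():
            if all(M[x][y] == 1 for x in A for y in B):
                best = max(best, len(A) * len(B))
    return best

ones_disj = sum(v for row in disj for v in row)  # number of 1-inputs of DISJ
```

With weights 1/2^n on the 1-inputs, every 1-monochromatic rectangle carries weight at most 2^n/2^n = 1, so the total weight 3^n/2^n is a feasible dual value.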

Recall the primal program: minimize ∑ w_R subject to, for all x,y with f(x,y)=1: ∑_{R: (x,y)∈R} w_R = 1
We don't allow (x,y) to be "covered too much"
In the dual this means we can use negative weights φ_{x,y}
This makes it easier to satisfy the constraints
We have not used this in our examples

Promise-Nondisjointness: f(x,y)=0 if ∑_i x_i ∧ y_i = 0; f(x,y)=1 if ∑_i x_i ∧ y_i = 1; otherwise f is undefined
The LP for f with the constraint ∑_{R: (x,y)∈R} w_R ≥ 1 has optimum ≤ n
Use the following LP instead: minimize ∑ w_R subject to:
for all x,y with f(x,y)=1: ∑_{R: (x,y)∈R} w_R = 1
for all x,y with f(x,y)=0: ∑_{R: (x,y)∈R} w_R = 0
for all other x,y: ∑_{R: (x,y)∈R} w_R ≤ 1

We have to exhibit a solution to the dual
We should put positive weights on inputs with intersection size 1
Negative weights on inputs with larger intersection size (choose size 2)
Weight 0 elsewhere

The following fact is useful [Razborov 92]: let μ_k denote the uniform distribution on pairs x,y with |x ∩ y| = k and |x|=|y|=n/4
Then for all large enough rectangles R=A×B: μ_1(R) ≥ (1−ε) μ_0(R)
This means all large rectangles are corrupted
Similarly μ_2(R) ≥ (1−ε)² μ_1(R) for all 1-monochromatic R
Hence giving inputs with intersection size 2 a (negative) weight of (1−ε) times the weight on inputs with intersection size 1 is enough to satisfy the constraints for all large 1-monochromatic rectangles
The constraints also hold for small rectangles if the weights are not too large
The total weight is 2^Ω(n)

This readily generalizes to bounded-error protocols
It can also be generalized to deal with quantum protocols
The proof consists of exhibiting a dual solution
For randomized protocols: use constraints ≥ 1−ε and ≤ 1 in the primal program

Primal: minimize ∑ w_R subject to:
for all x,y with f(x,y)=1: ∑_{R: (x,y)∈R} w_R ≥ 1−ε
for all x,y with f(x,y)=0: ∑_{R: (x,y)∈R} w_R ≤ ε
This is the rectangle/corruption bound

Razborov showed that there is a distribution on inputs such that for all large R the fraction of 0-inputs is at least ε times the fraction of 1-inputs
This corresponds to a solution of the dual

Previously we bounded the size of monochromatic rectangles
In the bounded-error scenario we want to bound the size of almost monochromatic rectangles
Often this is much harder!
What about the bias? It is often easier to bound, but usually not good enough

Fix a distribution μ on the inputs
The discrepancy of a rectangle R is |μ(R ∩ f⁻¹(1)) − μ(R ∩ f⁻¹(0))|
For fixed μ, take the maximum of this over all rectangles; Disc(f) is the log of the inverse of that quantity, maximized over μ
It is quite easy to show that Disc(f) is a lower bound on D(f)
This holds even for randomized, quantum, and (weakly) unbounded-error protocols

Disc(IP)=Ω(n): all rectangles are almost balanced
Disc(DISJ)=O(log n): the nondeterministic complexity is small, hence large monochromatic rectangles exist
So the method fails to capture either randomized or quantum communication complexity
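For tiny n the uniform-distribution discrepancy can be computed by enumeration. The sketch below (n = 2, uniform μ, my own helper names) shows the contrast: IP's maximum rectangle discrepancy stays at most 2^{−n/2} (as guaranteed by Lindsey's lemma), while DISJ already has a rectangle of discrepancy 1/4, coming from the all-ones row x = ∅.

```python
from itertools import combinations

def popcount(v):
    return bin(v).count("1")

n = 2
N = 1 << n

ip   = [[popcount(x & y) % 2 for y in range(N)] for x in range(N)]
disj = [[int(x & y == 0) for y in range(N)] for x in range(N)]

def uniform_disc(M):
    """Max over all rectangles A x B of |mu(R and 1s) - mu(R and 0s)|, uniform mu."""
    best = 0.0
    cells = N * N
    subsets = [c for r in range(1, N + 1) for c in combinations(range(N), r)]
    for A in subsets:
        for B in subsets:
            ones = sum(M[x][y] for x in A for y in B)
            size = len(A) * len(B)
            # zeros = size - ones, so the imbalance is |2*ones - size|
            best = max(best, abs(2 * ones - size) / cells)
    return best
```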

Revisit the result about inner product
Indeed it is hard to compute IP even with very small error, i.e., error 1/2 − 2^{−εn}
BUT: many functions are close to IP even if their own discrepancy bound is small
This leads to "generalized" or "smooth" discrepancy

Take the function Maj(x,y)=1 iff ∑_i x_i ∧ y_i ≥ n/2
It is easy to see that Disc(Maj)=O(log n)
Nevertheless Maj is close enough to IP to inherit the Ω(n) lower bound for quantum protocols

Every rectangle R gets a real weight w_R
Minimize ∑ w_R subject to:
for all x,y with f(x,y)=1: ∑_{R: (x,y)∈R} w_R ∈ [1−ε, 1]
for all x,y with f(x,y)=0: ∑_{R: (x,y)∈R} w_R ∈ [0, ε]
Dual: put weights on inputs such that all large rectangles are almost balanced
Difference to the rectangle bound: a two-sided balance condition

Relaxing the rank bound further
Why? Motivated by quantum communication: norm-based methods allow one to deal with entanglement in quantum protocols
We will arrive at (almost) the same quantity as above (the LP bound), via Grothendieck's inequality

D(f) ≥ log rank(M), and for sign matrices rank(A) ≥ mn/‖A‖² (spectral norm)
There is another relaxation of the rank: γ₂(M) = max_{u,v} ‖M ∘ uvᵀ‖_tr over unit vectors u, v
rank(M) ≥ γ₂(M)²
Then define γ₂^α as the minimum γ₂ of any matrix that is α-close to M
This method subsumes all previous methods for lower bounding quantum CC
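Assuming the garbled inequality above is the standard spectral bound rank(A) ≥ mn/‖A‖² for sign matrices, it can be checked exactly on the ±1 matrix of IP, where AᵀA = 2^n·I makes the bound tight:

```python
def popcount(v):
    return bin(v).count("1")

n = 3
N = 1 << n  # matrix dimension m = 2^n

# Sign matrix of inner product: A[x][y] = (-1)^(<x,y> mod 2)
A = [[1 - 2 * (popcount(x & y) % 2) for y in range(N)] for x in range(N)]

# The columns are pairwise orthogonal: A^T A = 2^n * I (exact integer check),
# so every singular value equals 2^(n/2) and ||A||^2 = 2^n.
AtA = [[sum(A[k][i] * A[k][j] for k in range(N)) for j in range(N)]
       for i in range(N)]
assert AtA == [[N if i == j else 0 for j in range(N)] for i in range(N)]

# Spectral lower bound on the rank: mn / ||A||^2 = N*N / N = N = 2^n,
# which is tight here since the matrix is invertible.
spectral_bound = (N * N) // N
```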