Quantum Speed-ups for Semidefinite Programming. Fernando G.S.L. Brandão (MSR -> Caltech), Faculty Summit 2016


Quantum Speed-ups for Semidefinite Programming
Fernando G.S.L. Brandão (MSR -> Caltech)
Faculty Summit 2016
Based on joint work with Krysta Svore (MSR)

Quantum Algorithms
- Exponential speed-ups: simulating quantum physics, factoring big numbers (Shor's algorithm), ...
- Polynomial speed-ups: searching (Grover's algorithm: O(N^(1/2)) vs O(N)), ...
- Heuristics: quantum annealing (adiabatic algorithm), machine learning, ...

Quantum Algorithms
- Exponential speed-ups: simulating quantum physics, factoring big numbers (Shor's algorithm), ...
- Polynomial speed-ups: searching (Grover's algorithm: O(N^(1/2)) vs O(N)), ...
- Heuristics: quantum annealing (adiabatic algorithm), machine learning, ...
This Talk: Solving Semidefinite Programming belongs here.

Semidefinite Programming
... is an important class of convex optimization problems.
Input: n x n, r-sparse matrices C, A_1, ..., A_m and numbers b_1, ..., b_m
Output: X
Some applications: operations research (location problems, scheduling, ...), bioengineering (flux balance analysis, ...), approximating NP-hard problems (max-cut, ...), field theory (conformal bootstrapping), ...
Algorithms:
- Interior points: O((m^2 n r + m n^2) log(1/ε))
- Multiplicative Weights: O(mnr/ε^2)
- Lower bound: no faster than Ω(nm), for constant ε and r

Semidefinite Programming (same input, output, and applications as above)
Algorithms:
- Interior points: O((m^2 n r + m n^2) log(1/ε))
- Multiplicative Weights: O(mnr (ωR)/ε^2), where ω is the "width" and R the "size of solution"
- Lower bound: no faster than Ω(nm), for constant ε, r, ω, R
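
For concreteness, the following is a minimal sketch of the kind of SDP above, assuming the standard primal form max tr(CX) s.t. tr(A_j X) <= b_j, X >= 0 that multiplicative-weights-style solvers target (the slide's own formulas are images). The cvxpy solver, the sizes, and the random instance are illustrative choices, not part of the talk.

import numpy as np
import cvxpy as cp

# Minimal SDP in the standard primal form assumed above (illustrative instance).
n, m = 4, 3
rng = np.random.default_rng(0)

def random_symmetric(k):
    B = rng.standard_normal((k, k))
    return (B + B.T) / 2

C = random_symmetric(n)
A = [random_symmetric(n) for _ in range(m)]
b = rng.uniform(1.0, 2.0, size=m)

X = cp.Variable((n, n), PSD=True)                     # X >= 0 (PSD constraint)
constraints = [cp.trace(A[j] @ X) <= b[j] for j in range(m)]
constraints.append(cp.trace(X) <= 1)                  # keep the toy instance bounded (the talk also assumes a normalization)
prob = cp.Problem(cp.Maximize(cp.trace(C @ X)), constraints)
prob.solve()
print("optimal value:", prob.value)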

Semidefinite Programming
Normalization: we assume ...
Reduction from optimization to decision: ... or ...

SDP Duality Primal: Dual:
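
The primal and dual on this slide are images in the transcript. The LaTeX below is a reconstruction of the standard primal/dual pair used in the Arora-Kale framework; it is consistent with the later slides (a dual vector y >= 0 and the comparison of sum_j y_j A_j with C), but treat it as a reconstruction rather than the slide's exact statement.

% Reconstruction of the standard primal/dual SDP pair (the slide's formulas are images)
\begin{aligned}
\text{Primal:}\quad & \max_{X \succeq 0}\ \operatorname{tr}(CX)
  \quad\text{s.t.}\quad \operatorname{tr}(A_j X) \le b_j,\ \ j = 1,\dots,m,\\
\text{Dual:}\quad & \min_{y \ge 0}\ b^{\mathsf{T}} y
  \quad\text{s.t.}\quad \sum_{j=1}^{m} y_j A_j \succeq C.
\end{aligned}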

Quantum Algorithm for SDP
Thm: There is a quantum algorithm for solving SDPs running in time Õ(n^(1/2) m^(1/2) r δ^(-2) R^2)

Quantum Algorithm for SDP
Thm: There is a quantum algorithm for solving SDPs running in time Õ(n^(1/2) m^(1/2) r δ^(-2) R^2)
Input: n x n, r-sparse matrices C, A_1, ..., A_m and numbers b_1, ..., b_m
Output: samples from y/||y||_2 and ||y||_2; quantum samples from X/tr(X) and tr(X)
Primal: ... Dual: ... or ...

Quantum Algorithm for SDP
Thm: There is a quantum algorithm for solving SDPs running in time Õ(n^(1/2) m^(1/2) r δ^(-2) R^2)
Input: n x n, r-sparse matrices C, A_1, ..., A_m and numbers b_1, ..., b_m
Output: samples from y/||y||_2 and ||y||_2; quantum samples from X/tr(X) and tr(X)
Oracle model: we assume there is an oracle that outputs a chosen non-zero entry of C, A_1, ..., A_m at unit cost (inputs: choice of A_j, row k, index l of the non-zero element).
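
As a concrete picture of this access model, here is a classical stand-in for the entry oracle built on scipy sparse matrices. The indexing convention, the helper name make_entry_oracle, and the test matrices are assumptions made for illustration; in the talk the oracle is a quantum routine that can be queried in superposition.

import numpy as np
from scipy.sparse import csr_matrix

# Classical stand-in for the entry oracle: on input (j, k, l) it returns the
# column index and value of the l-th non-zero element in row k of A_j.
def make_entry_oracle(matrices):
    sparse = [csr_matrix(A) for A in matrices]
    def oracle(j, k, l):
        A = sparse[j]
        start, end = A.indptr[k], A.indptr[k + 1]
        if l >= end - start:
            raise ValueError("row k has fewer than l+1 non-zero entries")
        return int(A.indices[start + l]), float(A.data[start + l])
    return oracle

# Tiny illustrative instance: three 4x4 symmetric matrices with a few non-zeros
A0 = np.array([[2., 1., 0., 0.],
               [1., 0., 0., 3.],
               [0., 0., 1., 0.],
               [0., 3., 0., 2.]])
A1 = np.eye(4)
A2 = np.diag([1., 2., 3., 4.])
oracle = make_entry_oracle([A0, A1, A2])
print(oracle(0, 1, 1))   # second non-zero in row 1 of A_0 -> (3, 3.0)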

Quantum Algorithm for SDP
Thm: There is a quantum algorithm for solving SDPs running in time Õ(n^(1/2) m^(1/2) r δ^(-2) R^2)
Classical lower bound: at least time Ω(max(n, m)) for r, δ, R = Ω(1). Reduction from Search.
Quantum lower bound: at least time Ω(max(n^(1/2), m^(1/2))) for r, δ, R = Ω(1). Reduction from Search.
Ex. (the Ω(m) part): b_i = 1, C = I; either A_j = I for all j (obj = 1), or A_j = 2I for a random j in [m] and A_k = I for k ≠ j (obj = 1/2).

Quantum Algorithm for SDP
Thm: There is a quantum algorithm for solving SDPs running in time Õ(n^(1/2) m^(1/2) r δ^(-2) R^2)
Thm 2: There is a quantum algorithm for solving SDPs running in time Õ(T_Gibbs m^(1/2) r δ^(-2) R^2)
If Gibbs states can be prepared quickly (e.g., by quantum Metropolis), larger speed-ups are possible.

Quantum Algorithm for SDP
The quantum algorithm builds on a classical SDP algorithm due to Arora and Kale (2007), based on the matrix multiplicative weights method. Let's review their method.

The Oracle
Searches for a vector y s.t. i) ... ii) ...
Dual: ...
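
The two conditions i) and ii) are images in the transcript. In the Arora-Kale framework the Oracle is usually specified as below; this is a reconstruction under that assumption, not the slide's exact wording: given the current candidate state ρ^t, return a cheap y that certifies ρ^t.

% Standard Arora-Kale Oracle conditions (reconstruction; the slide's i), ii) are images)
\text{Find } y \ge 0 \text{ such that}\quad
\text{i)}\ \ b^{\mathsf{T}} y \le \alpha,
\qquad
\text{ii)}\ \ \operatorname{tr}\!\Big[\Big(\textstyle\sum_{j} y_j A_j - C\Big)\,\rho^t\Big] \ge 0 .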

Width of SDP

Arora-Kale Algorithm 1. 2. 3. 4.

Why does Arora-Kale work? Since ..., must check ... is feasible. From the Oracle, ... We need: ...

Matrix Multiplicative Weights (MMW)
(Arora, Kale '07) Given n x n matrices M^t and ε < 1/2, with ... (λ_n: min eigenvalue)
2-player zero-sum game interpretation:
- Player A chooses density matrix X^t
- Player B chooses matrix 0 ≤ M^t ≤ I
Pay-off: tr(X^t M^t)
"X^t = ρ^t strategy almost as good as global strategy"
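
A minimal classical sketch of the MMW update this theorem refers to, assuming the standard Arora-Kale loss-matrix convention ρ^t ∝ exp(-ε · sum_{τ<t} M^τ); the regret bound itself is in the slide image and is not restated in code, and the toy feedback matrices are made up.

import numpy as np
from scipy.linalg import expm

# Matrix multiplicative weights update, a minimal classical sketch.
def mmw_states(feedback_matrices, eps):
    n = feedback_matrices[0].shape[0]
    running_sum = np.zeros((n, n))
    states = []
    for M in feedback_matrices:
        W = expm(-eps * running_sum)      # un-normalized weight matrix
        states.append(W / np.trace(W))    # density matrix rho^t
        running_sum += M                  # observe M^t after playing rho^t
    return states

# Toy run with random feedback matrices scaled so that 0 <= M^t <= I
rng = np.random.default_rng(2)
Ms = []
for _ in range(5):
    B = rng.standard_normal((4, 4))
    M = B @ B.T                           # PSD
    Ms.append(M / np.linalg.norm(M, 2))   # spectral norm <= 1
rhos = mmw_states(Ms, eps=0.1)
print(round(float(np.trace(rhos[-1])), 6))   # each rho^t has unit trace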

Arora-Kale Algorithm 1. 2. 3. 4.

Arora-Kale Algorithm 1. 2. 3. 4. How to implement the Oracle?

Implementing the Oracle by Gibbs Sampling
Searches for a vector y s.t. i) ... ii) ...

Implementing the Oracle by Gibbs Sampling
Searches for a (non-normalized) probability distribution y satisfying two linear constraints: ...
Claim: we can take Y to be Gibbs: there are constants N, x, y s.t. ...

Jaynes' Principle
(Jaynes '57) Let ρ be a quantum state s.t. ... Then there is a Gibbs state of the form ... with the same expectation values.
Drawback: no control over the size of the λ_i's.
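
In symbols, the statement is usually written as follows; the exact expressions on the slide are images, so this is a reconstruction consistent with the text above. The finitary version on the next slide trades exact expectation values for control over the λ_i.

% Jaynes' principle, schematic form (reconstruction; the slide's formulas are images)
\operatorname{tr}(\rho A_i) = a_i \ \ \forall i
\quad\Longrightarrow\quad
\exists\ \sigma \;=\; \frac{\exp\!\big(\sum_i \lambda_i A_i\big)}{\operatorname{tr}\exp\!\big(\sum_i \lambda_i A_i\big)}
\ \ \text{with}\ \ \operatorname{tr}(\sigma A_i) = a_i \ \ \forall i .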

Finitary Jaynes' Principle
(Lee, Raghavendra, Steurer '15) Let ρ s.t. ... Then there is a ... with ... s.t. ...
(Note: used to prove limitations of SDPs for approximating constraint satisfaction problems)

Implementing the Oracle by Gibbs Sampling
Claim: There is a Y of the form ... with x, y < log(n)/ε and N < α s.t. ...
N < α: ...

Implementing the Oracle by Gibbs Sampling
Claim: There is a Y of the form ... with x, y < log(n)/ε and N < α s.t. ...
We can implement the oracle by exhaustively searching over x, y, N for a Gibbs distribution satisfying the constraints above (only α log^2(n)/ε^3 different triples need to be checked).
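
A schematic classical sketch of this exhaustive search, under the assumption that the Gibbs vector has a two-parameter form N · exp(-(x·u + y·v))/Z for some fixed direction vectors u, v. The parameterization, the constraint targets, and the grid spacing below are illustrative placeholders (the slide's exact Gibbs form and constraints are images), and the naive grid here is coarser-grained than the α log^2(n)/ε^3 count quoted above.

import itertools
import numpy as np

# Schematic grid search over (x, y, N) for a Gibbs-form vector meeting two
# linear constraints, as described above.
def search_gibbs_triple(u, v, target1, target2, alpha, eps, n):
    grid_xy = np.arange(0.0, np.log(n) / eps, eps)   # x, y < log(n)/eps
    grid_N = np.arange(eps, alpha, eps)              # N < alpha
    for x, y, N in itertools.product(grid_xy, grid_xy, grid_N):
        w = np.exp(-(x * u + y * v))
        Y = N * w / w.sum()                          # Gibbs vector with total mass N
        if u @ Y <= target1 + eps and v @ Y >= target2 - eps:
            return (x, y, N), Y
    return None

# Toy usage with made-up data
rng = np.random.default_rng(0)
m = 10
u, v = rng.random(m), rng.random(m)
result = search_gibbs_triple(u, v, target1=0.6, target2=0.3, alpha=2.0, eps=0.2, n=16)
print(result is not None)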

Arora-Kale Algorithm 1. 2. 3. 4. Using Gibbs Sampling Again, it’s Gibbs Sampling

Quantum Speed-ups for Gibbs Sampling
(Poulin, Wocjan '09; Chowdhury, Somma '16) Given an r-sparse D x D Hamiltonian H, one can prepare exp(H)/tr(exp(H)) to accuracy ε on a quantum computer with O(D^(1/2) r ||H|| polylog(1/ε)) calls to an oracle for the entries of H.
Based on amplitude amplification.
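
To pin down the object being prepared, here is the same Gibbs state computed classically by dense matrix exponentiation for a tiny Hamiltonian. This is purely illustrative (random H, matching the slide's exp(H)/tr(...) sign convention); the point of the quantum routine is precisely to avoid handling the D x D matrix explicitly.

import numpy as np
from scipy.linalg import expm

# The state the quantum routine prepares, computed classically for a tiny H.
def gibbs_state(H):
    G = expm(H)
    return G / np.trace(G)

rng = np.random.default_rng(7)
B = rng.standard_normal((4, 4))
H = (B + B.T) / 2
rho = gibbs_state(H)
print(round(float(np.trace(rho)), 6), bool(np.all(np.linalg.eigvalsh(rho) > 0)))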

Quantizing Arora-Kale 1. 2. 3. 4. Use Gibbs Sampling Again, Gibbs Sampling

Quantizing Arora-Kale
Problem: M^t is not sparse. Solution: sparsify it using the operator Chernoff bound.
1. 2. 3. 4. Using Gibbs sampling. Again, Gibbs sampling.

Sparsification
Suppose Z_1, ..., Z_k are independent n x n Hermitian matrices s.t. E(Z_i) = 0 and ||Z_i|| ≤ λ. Then ...
Applying to ...: sampling j_1, ..., j_k from ... with k = O(log(n)/ε^2).
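
A classical sketch of the sampling step, under the assumption that the dense matrix is a convex combination M = sum_j p_j H_j of bounded Hermitian terms (the slide's exact decomposition and the Chernoff bound itself are images). Averaging k = O(log(n)/ε^2) sampled terms keeps the spectral-norm error around ε.

import numpy as np

# Sparsification by random sampling, a classical sketch.
def sample_approximation(H_list, p, eps, rng):
    n = H_list[0].shape[0]
    k = int(np.ceil(np.log(n) / eps**2))       # k = O(log(n)/eps^2) samples
    idx = rng.choice(len(H_list), size=k, p=p)
    return sum(H_list[j] for j in idx) / k

rng = np.random.default_rng(3)
n, terms = 8, 50
H_list = []
for _ in range(terms):
    B = rng.standard_normal((n, n))
    H = (B + B.T) / 2
    H_list.append(H / np.linalg.norm(H, 2))    # normalize so ||H_j|| <= 1
p = np.full(terms, 1.0 / terms)
M = sum(pj * Hj for pj, Hj in zip(p, H_list))
M_hat = sample_approximation(H_list, p, eps=0.25, rng=rng)
print(np.linalg.norm(M - M_hat, 2))            # spectral-norm error, roughly eps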

Quantizing Arora-Kale 1. 2. 3. 4. Use Gibbs sampling Sparsify by random sampling Again, Gibbs sampling

Quantizing Arora-Kale 1. 2. 3. 4. Use Gibbs sampling: takes Õ(n^(1/2) r) time on a quantum computer. Sparsify by random sampling. Again, Gibbs sampling.

Quantizing Arora-Kale
1. 2. 3. 4. Use Gibbs sampling: needs to prepare exp(∑_i h_i |i><i|)/tr(...) with h_i = x tr(A_i ρ^t) + y b_i.
Can do this quantumly with Õ(m^(1/2)) calls to an oracle for h. Each call to h consists of estimating the value of tr(A_i ρ^t). To estimate it, one needs to prepare copies of ρ^t, which costs Õ(n^(1/2) r).
Overall: Õ(n^(1/2) m^(1/2) r).
Sparsify by random sampling.
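
Putting the figures quoted on this slide together, the per-iteration tally reads as follows (a reconstruction using only the costs stated above):

% Per-iteration cost tally from the figures on this slide
\underbrace{\tilde{O}(m^{1/2})}_{\text{Gibbs sampling over } [m]}
\times
\underbrace{\tilde{O}(n^{1/2} r)}_{\text{copies of } \rho^t \text{ per query to } h}
\;=\;
\tilde{O}\!\left(n^{1/2} m^{1/2} r\right).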

Conclusion and Open Problems
We showed quantum computers can speed up the solution of SDPs.
- One application is solving the Goemans-Williamson SDP for Max-Cut in time Õ(|V|). Are there more applications?
- Can we improve the parameters?
- Can we find (interesting) instances where there are superpolynomial speed-ups?
- Quantum speed-up for interior-point methods?
Thanks!