The Computational Complexity of Linear Optics
Scott Aaronson and Alex Arkhipov (MIT)

Shor's Theorem: QUANTUM SIMULATION has no efficient classical algorithm, unless FACTORING does also.
The Extended Church-Turing Thesis (ECT): Everything feasibly computable in the physical world is feasibly computable by a (probabilistic) Turing machine.

So the ECT is false… what more evidence could anyone want?
Building a QC able to factor large numbers is damn hard! After 16 years, no fundamental obstacle has been found (or even seriously proposed), but who knows?
Can't we meet the physicists halfway, and show computational hardness for quantum systems closer to what they actually work with now?
FACTORING might be in BPP! At any rate, it's an extremely special problem.
Wouldn't it be great to show that if BPP=BQP, then (say) the polynomial hierarchy collapses?

Today: A New Attack on the ECT
We define a model of computation based on linear optics: n identical photons traveling through a network of poly(n) beamsplitters, phase-shifters, etc., then a measurement of where the photons ended up.
Crucial point: No entangling interactions between pairs of photons needed!
Our model is contained in BQP, but seems unlikely to be BQP-complete. We don't know if it solves any decision problems that are hard classically. But for sampling and search problems, the situation is completely different…

Theorem 1. Suppose that for every linear-optics network, the probability distribution over measurement outcomes can be sampled in classical polynomial time. Then P^#P = BPP^NP (so PH collapses).
More generally, let O be any oracle that simulates a linear-optics network A, given a description of A and a random string r. Then P^#P ⊆ BPP^(NP^O).
So even if linear optics can be simulated in BPP^PH, that still collapses PH! (New evidence that QCs have capabilities beyond PH, complementing [A10], [FU10].)
OK, but isn't the real question the hardness of approximate sampling? After all, experiments are noisy, and not even the linear-optics network itself can sample exactly!

Theorem 2. Suppose two plausible conjectures are true: the permanent of a Gaussian random matrix is (1) #P-hard to approximate, and (2) not too concentrated around 0. Let O be any oracle that takes as input a description of a linear-optics network A, a random string r, and 0^(1/ε), and that samples from a distribution ε-close to A's in variation distance. Then P^#P ⊆ BPP^(NP^O).
In other words: if our conjectures hold, then even simulating noisy linear-optics experiments is classically intractable, unless PH collapses.

Particle Physics In One Slide
There are two basic types of particle in the universe: BOSONS and FERMIONS. Their transition amplitudes are given respectively by permanents and determinants (see below). All I can say is, the bosons got the harder job…
Indeed, [Valiant 2002, Terhal-DiVincenzo 2002] showed that noninteracting-fermion systems can be simulated in BPP. But, confirming Avi's joke, we'll argue that the analogous problem for bosons (such as photons) is much harder…
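For reference, the two transition-amplitude formulas are the standard permanent and determinant of an n×n matrix A = (a_ij):

\[
\mathrm{Per}(A)=\sum_{\sigma\in S_n}\prod_{i=1}^{n}a_{i,\sigma(i)} \quad\text{(bosons)},
\qquad
\mathrm{Det}(A)=\sum_{\sigma\in S_n}\mathrm{sgn}(\sigma)\prod_{i=1}^{n}a_{i,\sigma(i)} \quad\text{(fermions)}.
\]

The only difference is the sign sgn(σ), but that sign is exactly what makes the determinant computable in polynomial time (Gaussian elimination) while the permanent is #P-complete (Valiant).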

Linear Optics for Dummies
We'll be considering a special kind of quantum computer, which is not based on qubits.
The basis states have the form |S⟩ = |s_1,…,s_m⟩, where s_i is the number of photons in the i-th mode.
We'll never create or destroy photons. So if there are n photons, then s_1,…,s_m are nonnegative integers summing to n.
Initial state: |I⟩ = |1,…,1,0,…,0⟩. For us, m = poly(n).

You get to apply any m×m unitary matrix U.
If n=1 (i.e., there's only one photon, in a superposition over the m modes), U acts on that photon in the obvious way.
In general, there are M = C(m+n−1, n) ways to distribute n identical photons into m modes. U induces an M×M unitary φ(U) on the n-photon states as follows:
φ(U)_{S,T} = Per(U_{S,T}) / √(s_1!⋯s_m! · t_1!⋯t_m!)
Here U_{S,T} is an n×n submatrix of U (possibly with repeated rows and columns), obtained by taking s_i copies of the i-th row of U and t_j copies of the j-th column, for all i,j.
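A minimal brute-force sketch of this rule (my own illustration, with hypothetical helper names; the naive permanent is fine for the tiny examples that follow):

```python
from itertools import permutations
from math import factorial, prod
import numpy as np

def permanent(M):
    """Naive O(n * n!) permanent: sum over all permutations."""
    n = M.shape[0]
    return sum(prod(M[i, sigma[i]] for i in range(n))
               for sigma in permutations(range(n)))

def submatrix(U, S, T):
    """U_{S,T}: take s_i copies of row i and t_j copies of column j of U."""
    rows = [i for i, s in enumerate(S) for _ in range(s)]
    cols = [j for j, t in enumerate(T) for _ in range(t)]
    return U[np.ix_(rows, cols)]

def amplitude(U, S, T):
    """phi(U)_{S,T} = Per(U_{S,T}) / sqrt(s_1!...s_m! * t_1!...t_m!)."""
    norm = prod(factorial(s) for s in S) * prod(factorial(t) for t in T)
    return permanent(submatrix(U, S, T)) / np.sqrt(norm)
```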

Example: The Hong-Ou-Mandel Dip
Suppose U = (1/√2) [[1, 1], [1, −1]] (a 50/50 beamsplitter), and we start with one photon in each of the two modes. Then Pr[the two photons land in different modes] is |Per(U)|² = 0, while Pr[they both land in the first mode] is |Per([[1/√2, 1/√2], [1/√2, 1/√2]])|² / 2! = 1/2.
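Using the `amplitude` sketch above, the Hong-Ou-Mandel numbers can be checked directly (a usage example, not part of the original slides):

```python
U = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # 50/50 beamsplitter
inp = (1, 1)                                   # one photon in each mode

print(abs(amplitude(U, (1, 1), inp))**2)   # Pr[different modes] -> 0.0 (the "dip")
print(abs(amplitude(U, (2, 0), inp))**2)   # Pr[both in mode 1]  -> 0.5
```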

Beautiful Alternate Perspective
The state of our computer, at any time, is a degree-n polynomial over the variables x = (x_1,…,x_m) (n << m).
Initial state: p(x) := x_1⋯x_n.
We can apply any m×m unitary transformation U to x, to obtain a new degree-n polynomial p_U(x) := p(Ux).
Then on measuring, we see the monomial x_1^{s_1}⋯x_m^{s_m} with probability equal to its squared coefficient in p_U times s_1!⋯s_m!.
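A small sympy sketch of the polynomial picture, again on the Hong-Ou-Mandel example (my own illustration of the probability rule |coefficient|² · s_1!⋯s_m! stated above):

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')

# Initial state p(x) = x_1 x_2; apply U = (1/sqrt(2)) [[1,1],[1,-1]] to the variables
p = sp.expand(((x1 + x2) / sp.sqrt(2)) * ((x1 - x2) / sp.sqrt(2)))   # -> x1**2/2 - x2**2/2

# Pr[outcome (s1, s2)] = |coefficient of x1**s1 * x2**s2|^2 * s1! * s2!
print(abs(p.coeff(x1, 2))**2 * sp.factorial(2))    # 1/2: both photons in mode 1
print(abs(p.coeff(x1, 1).coeff(x2, 1))**2)         # 0:   the Hong-Ou-Mandel dip
```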

OK, so why is it hard to sample the distribution over photon numbers classically?
Given any matrix A ∈ C^{n×n}, we can construct an m×m unitary U (where m ≥ 2n) whose top-left n×n block is εA, for a suitable scaling factor ε > 0.
Suppose we start with |I⟩ = |1,…,1,0,…,0⟩ (one photon in each of the first n modes), apply U, and measure. Then the probability of observing |I⟩ again is
p := |Per(εA)|² = ε^{2n} |Per(A)|².
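One standard way to carry out such an embedding is a unitary (Halmos) dilation of the contraction εA; the following is a sketch of that idea under my own choice of ε, not necessarily the exact construction from the paper:

```python
import numpy as np

def psd_sqrt(M):
    """Square root of a Hermitian PSD matrix via its eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return V @ np.diag(np.sqrt(np.clip(w, 0, None))) @ V.conj().T

def embed_as_unitary(A, eps=None):
    """Return (U, eps): a 2n x 2n unitary U whose top-left n x n block is eps*A."""
    n = A.shape[0]
    if eps is None:
        eps = 0.5 / np.linalg.norm(A, 2)   # any eps with ||eps*A|| <= 1 works
    T = eps * A
    I = np.eye(n)
    U = np.block([[T,                            psd_sqrt(I - T @ T.conj().T)],
                  [psd_sqrt(I - T.conj().T @ T), -T.conj().T                 ]])
    return U, eps

# Starting from |I> = |1,...,1,0,...,0> and applying U, the probability of
# observing |I> again is |Per(eps*A)|^2 = eps**(2*n) * |Per(A)|^2.
```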

Claim 1: p is #P-complete to estimate (up to a constant factor).
Idea: Valiant proved that the PERMANENT is #P-complete. One can use a classical reduction to go from a multiplicative approximation of |Per(A)|² to Per(A) itself.
Claim 2: Suppose we had a fast classical algorithm for linear-optics sampling. Then we could estimate p in BPP^NP.
Idea: Let M be our classical sampling algorithm, and let r be its randomness. Use approximate counting to estimate Pr_r[M(r) outputs |I⟩].
Conclusion: Suppose we had a fast classical algorithm for linear-optics sampling. Then P^#P = BPP^NP.
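Spelled out in my own notation (with ε the scaling factor from the embedding above, M the classical sampler, and r its random string):

\[
p \;=\; \Pr_r\bigl[\,M(r)\ \text{outputs}\ |I\rangle\,\bigr] \;=\; \varepsilon^{2n}\,\bigl|\mathrm{Per}(A)\bigr|^2 .
\]

Approximate counting (Stockmeyer-style, relative to an NP oracle) estimates the left-hand side to within a constant factor in BPP^NP; by Claim 1, such a multiplicative estimate of |Per(A)|² suffices to solve a #P-complete problem, giving P^#P = BPP^NP.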

As I said before, I find this result unsatisfying, since it only talks about the classical hardness of exactly sampling the distribution over photon numbers. What about sampling a distribution that's 1/poly(n)-close in variation distance?
Difficulty: The sampler might adversarially refuse to output the one submatrix whose permanent we care about! That changes the output distribution by only exp(−n), so we still have an excellent sampler… but we can no longer use it to estimate |Per(A)|² in BPP^NP.
To get around this difficulty, it seems we need to smuggle in the matrix A that we care about as a random submatrix of U.

Main Result
Consider applying a Haar-random m×m unitary matrix U to n photons in m = poly(n) modes, giving a distribution D_U over photon numbers.
Suppose there's a classical algorithm to sample a distribution ε-close to D_U in poly(n, 1/ε) time. Then for all ε, δ ≥ 1/poly(n), there's also a BPP^NP algorithm to estimate |Per(X)|² to within additive error ε·n!, with probability 1−δ over a Gaussian random matrix X.
Main technical lemma used in proof: Let m ≥ n^6. Then an n×n submatrix of an m×m Haar unitary matrix, scaled by √m, is Õ(1/n)-close in variation distance to a matrix of independent Gaussians.
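A quick numerical illustration of the technical lemma (my own sketch; m is kept far below n^6 just so it runs quickly): the entries of √m times an n×n corner of a Haar-random m×m unitary should look like i.i.d. standard complex Gaussians.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, trials = 4, 256, 50      # the lemma wants m >= n^6; m = 256 is just for speed

def haar_unitary(m, rng):
    """Haar-random m x m unitary: QR of a complex Gaussian matrix, with phases fixed."""
    Z = rng.normal(size=(m, m)) + 1j * rng.normal(size=(m, m))
    Q, R = np.linalg.qr(Z)
    d = np.diag(R)
    return Q * (d / np.abs(d))

# Pool the rescaled n x n corner entries over many Haar-random unitaries
corner = np.concatenate([(np.sqrt(m) * haar_unitary(m, rng)[:n, :n]).ravel()
                         for _ in range(trials)])

# Standard complex Gaussians N(0,1)_C: real and imaginary parts each ~ N(0, 1/2)
gauss = rng.normal(0, np.sqrt(0.5), corner.size) + 1j * rng.normal(0, np.sqrt(0.5), corner.size)

print(np.mean(np.abs(corner)**2), np.mean(np.abs(gauss)**2))   # both should be close to 1
```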

So the question boils down to this: how hard is it to additively estimate |Per(X)|², with high probability over a Gaussian random matrix X?
We conjecture that it's #P-hard, in which case even approximate classical simulation of our linear-optics experiment would imply P^#P = BPP^NP.
We can decompose this conjecture into two plausible sub-conjectures: that multiplicatively estimating Per(X) is #P-hard for Gaussian X, and that Per(X) is not too concentrated around 0.

The Permanent-of-Gaussians Conjecture (PGC)
The following problem is #P-hard. Given a matrix X ∈ C^{n×n} of i.i.d. Gaussian entries, together with 0^(1/ε) and 0^(1/δ), output an approximation z such that |z − Per(X)| ≤ ε·|Per(X)|, with probability at least 1−δ over X.
We can prove #P-hardness if ε=0 or δ=0. So what makes the PGC nontrivial is really the combination of average-case with approximation.

The Permanent Anti-Concentration Conjecture (PACC)
There exist constants C, D > 0 such that for all n and δ > 0,
Pr_{X ~ Gaussian} [ |Per(X)| < √(n!) · δ^D / n^C ] < δ.
Empirically true! Also, we can prove it with the determinant in place of the permanent.
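The "empirically true" claim can be spot-checked with a small Monte Carlo experiment (my own sketch, reusing the brute-force `permanent` helper from earlier; a sanity check only, not evidence for the conjecture):

```python
import numpy as np
from math import factorial

n, trials = 6, 2000
rng = np.random.default_rng(1)

vals = []
for _ in range(trials):
    # i.i.d. N(0,1)_C entries: real and imaginary parts each ~ N(0, 1/2)
    X = rng.normal(0, np.sqrt(0.5), (n, n)) + 1j * rng.normal(0, np.sqrt(0.5), (n, n))
    vals.append(abs(permanent(X)) / np.sqrt(factorial(n)))   # |Per(X)| / sqrt(n!)
vals = np.array(vals)

for thresh in (0.5, 0.1, 0.01):
    print(thresh, np.mean(vals < thresh))   # fraction of samples below each threshold
```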

Experimental Prospects
It seems well within current technology to do our experiment with (say) n=4 photons and m=20 modes. (Current record: n=2 photons.)
If you can scale to n photons and error ε in variation distance, using poly(n, 1/ε) experimental effort, then modulo our complexity conjectures, the ECT is false.
What would it take to scale to (say) n=20 photons and m=500 modes?
- Reliable single-photon sources (standard laser isn't good enough!)
- Reliable photodetector arrays
- Stable apparatus to ensure that w.h.p., all n photons arrive at the photodetector arrays at exactly the same time
Physicists we consulted: "Sounds hard! But not as hard as building a universal QC."
Remark: No point in scaling this experiment much beyond 20 or 30 photons, since then a classical computer can't even verify the answers!

Open Problems
Similar hardness results for other natural quantum systems (besides linear optics)? Bremner, Jozsa, Shepherd 2010: another system for which exact classical simulation would collapse PH.
Can our linear-optics model solve classically-intractable decision problems? What about problems for which a classical computer can verify the answers?
Do BPP=BQP or PromiseBPP=PromiseBQP have interesting structural complexity consequences?
Prove the PGC ($200) and PACC ($100)!