BosonSampling
Scott Aaronson (MIT)
Based on joint work with Alex Arkhipov
November 20, 2014
arXiv:1011.3245

The Extended Church-Turing Thesis (ECT): Everything feasibly computable in the physical world is feasibly computable by a (probabilistic) Turing machine.
Shor’s Theorem: Quantum Simulation has no efficient classical algorithm, unless Factoring does also.

So the ECT is false… what more evidence could anyone want?
Building a QC able to factor large numbers is damn hard! After 20 years, no fundamental obstacle has been found, but who knows?
Can’t we “meet the physicists halfway,” and show computational hardness for quantum systems closer to what they actually work with now?
Factoring might have a fast classical algorithm! At any rate, it’s an extremely “special” problem.
Wouldn’t it be great to show that if quantum computers can be simulated classically, then (say) P=NP?

BosonSampling (A.-Arkhipov 2011)
A rudimentary type of quantum computing, involving only non-interacting photons.
Classical counterpart: Galton’s Board.
Replacing the balls by photons leads to famously counterintuitive phenomena, like the Hong-Ou-Mandel dip.

nn submatrix of A corresponding to S In general, we consider a network of beamsplitters, with n input “modes” (locations) and m>>n output modes n identical photons enter, one per input mode Assume for simplicity they all leave in different modes—there are possibilities The beamsplitter network defines a column-orthonormal matrix ACmn, such that nn submatrix of A corresponding to S where is the matrix permanent

Example: For the Hong-Ou-Mandel experiment,
Pr[the two photons exit in different modes] = |Per( (1/√2) [[1, 1], [1, −1]] )|^2 = |1/2 − 1/2|^2 = 0.
In general, an n×n complex permanent is a sum of n! terms, almost all of which cancel. How hard is it to estimate the “tiny residue” left over?
Answer (Valiant 1979): #P-complete (meaning: as hard as any combinatorial counting problem). Contrast with nonnegative permanents!
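As a quick sanity check, the permanent() sketch above reproduces the Hong-Ou-Mandel cancellation for a 50/50 beamsplitter:

```python
# Sanity check of the Hong-Ou-Mandel dip with the permanent() sketch above:
# for a 50/50 beamsplitter, the two-photon coincidence amplitude cancels.
import numpy as np

U_bs = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)   # 50/50 beamsplitter
print(abs(permanent(U_bs)) ** 2)   # 0.0 (up to rounding): photons never exit separately
```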

So, Can We Use Quantum Optics to Solve a #P-Complete Problem?
That sounds way too good to be true…
Explanation: If X is sub-unitary, then |Per(X)|^2 will usually be exponentially small. So to get a reasonable estimate of |Per(X)|^2 for a given X, we’d generally need to repeat the optical experiment exponentially many times.

Better idea: Given A ∈ C^(m×n) as input, let BosonSampling be the problem of merely sampling from the same distribution D_A that the beamsplitter network samples from—the one defined by Pr[S] = |Per(A_S)|^2.
Theorem (A.-Arkhipov 2011): Suppose BosonSampling is solvable in classical polynomial time. Then P^#P = BPP^NP.
Upshot: Compared to (say) Shor’s factoring algorithm, we get different/stronger evidence that a weaker system can do something classically hard.
Better Theorem: Suppose we can sample D_A even approximately in classical polynomial time. Then in BPP^NP, it’s possible to estimate |Per(X)|^2, with high probability over a Gaussian random matrix X.
We conjecture that the above problem is already #P-complete. If it is, then even a fast classical algorithm for approximate BosonSampling would have the consequence that P^#P = BPP^NP.
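For intuition about what the exact sampling problem asks (and why brute force is hopeless), here is a hypothetical brute-force sampler, reusing outcome_probability from the sketch above; both the enumeration over (m choose n) outcomes and the permanents inside it blow up exponentially, which is exactly the point:

```python
# Illustrative brute-force classical sampler for D_A (reuses outcome_probability
# from the sketch above).  It enumerates all C(m, n) collision-free outcomes.
import itertools
import numpy as np

def brute_force_boson_sample(A, rng=None):
    rng = rng or np.random.default_rng()
    m, n = A.shape
    outcomes = list(itertools.combinations(range(m), n))
    probs = np.array([outcome_probability(A, S) for S in outcomes])
    probs /= probs.sum()   # renormalize over collision-free outcomes,
                           # which dominate D_A when m >> n^2
    return outcomes[rng.choice(len(outcomes), p=probs)]
```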

Related Work
Valiant 2001, Terhal-DiVincenzo 2002, “folklore”: A QC built of noninteracting fermions can be efficiently simulated by a classical computer.
Knill, Laflamme, Milburn 2001: Noninteracting bosons plus adaptive measurements yield universal QC.
Jerrum-Sinclair-Vigoda 2001: Fast classical randomized algorithm to approximate Per(X) for nonnegative X.
Gurvits 2002: O(n^2/ε^2) classical randomized algorithm to approximate an n-photon amplitude to ±ε additive error (also, to compute the k-mode marginal distribution in n^O(k) time).
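Gurvits's estimator can be sketched via Glynn's formula, Per(A) = E_x[ Π_i x_i · Π_i (Ax)_i ] over uniform x ∈ {−1,+1}^n; the code below is my own illustrative rendering, not code from the talk:

```python
# Illustrative sketch of Gurvits's estimator, via Glynn's formula:
#   Per(A) = E_x[ prod_i x_i * prod_i (A x)_i ],  x uniform in {-1,+1}^n.
# Each sample costs O(n^2) and has magnitude at most ||A||^n, so for
# sub-unitary A, about 1/eps^2 samples give a +-eps additive estimate.
import numpy as np

def gurvits_estimate(A, num_samples, rng=None):
    rng = rng or np.random.default_rng()
    n = A.shape[0]
    total = 0.0 + 0.0j
    for _ in range(num_samples):
        x = rng.choice([-1.0, 1.0], size=n)
        total += np.prod(x) * np.prod(A @ x)
    return total / num_samples
```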

OK, so why is it hard to sample the distribution over photon numbers classically?
Given any matrix X ∈ C^(n×n), we can construct an m×m unitary U (where m = 2n) whose top-left n×n block is εX, for a suitable scaling ε > 0.
Suppose we start with |I⟩ = |1,…,1,0,…,0⟩ (one photon in each of the first n modes), apply U, and measure. Then the probability p of observing |I⟩ again is
p = |Per(εX)|^2 = ε^(2n) |Per(X)|^2.
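One standard way to realize such an embedding (a choice of mine, not spelled out in the talk; any unitary with εX as its top-left block works) is the Halmos unitary dilation of the rescaled matrix:

```python
# Illustrative embedding: the Halmos dilation of Y = eps * X, which is
# unitary whenever ||Y|| <= 1.
import numpy as np
from scipy.linalg import sqrtm

def embed_in_unitary(X):
    n = X.shape[0]
    eps = 1.0 / np.linalg.norm(X, 2)            # spectral-norm rescaling
    Y = eps * X
    U = np.block([
        [Y,                                  sqrtm(np.eye(n) - Y @ Y.conj().T)],
        [sqrtm(np.eye(n) - Y.conj().T @ Y),  -Y.conj().T],
    ])
    return U, eps
# Feeding one photon into each of the first n modes, the probability of seeing
# them all exit from the first n modes is |Per(eps*X)|^2 = eps^(2n) |Per(X)|^2.
```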

Claim 1: p is #P-complete to estimate (up to a constant factor). This follows from Valiant’s famous result.
Claim 2: Suppose we had a fast classical algorithm for boson sampling. Then we could estimate p in BPP^NP—that is, using a randomized algorithm with an oracle for NP-complete problems. This follows from a classical result of Goldwasser-Sipser.
Conclusion: Suppose we had a fast classical algorithm for boson sampling. Then P^#P = BPP^NP.

Unfortunately, this argument hinged on the hardness of estimating a single, exponentially small probability p. As such, it’s not robust to realistic experimental error.
Showing that a noisy BosonSampling device still samples a classically intractable distribution is a much more complicated problem. As mentioned, we can do it, but only under an additional assumption (that estimating Gaussian permanents is #P-complete).
A first step toward proving that conjecture would simply be to understand the distribution of |Per(X)|^2 for Gaussian X. Is it (as we conjecture) approximately lognormal?
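A quick way to probe the lognormality question numerically (again only an illustrative sketch, feasible for small n since it reuses the exponential-time permanent() above) is to histogram log|Per(X)|^2 over i.i.d. standard complex Gaussian matrices:

```python
# Illustrative Monte Carlo probe of the lognormality conjecture.
import numpy as np

def gaussian_log_permanent_samples(n, trials, rng=None):
    rng = rng or np.random.default_rng()
    samples = []
    for _ in range(trials):
        X = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
        samples.append(np.log(abs(permanent(X)) ** 2))
    return np.array(samples)   # compare its histogram against a fitted normal
```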

BosonSampling Experiments
In 2012, groups in Brisbane, Oxford, Rome, and Vienna reported the first 3-photon BosonSampling experiments, confirming that the amplitudes were given by 3×3 permanents. (# of experiments > # of photons!)

Obvious Challenges for Scaling Up:
Reliable single-photon sources (optical multiplexing?)
Minimizing losses
Getting high probability of n-photon coincidence
Goal (in our view): Scale to 10-30 photons. Don’t want to scale much beyond that—both because you probably can’t without fault-tolerance, and a classical computer probably couldn’t even verify the results!

Scattershot BosonSampling
Exciting recent idea, proposed by Steve Kolthammer and others, for sampling a hard distribution even with highly unreliable (but heralded) photon sources, like SPDCs.
The idea: Say you have 100 sources, of which only 10 (on average) generate a photon. Then just detect which sources succeed, and use those to define your BosonSampling instance! The complexity analysis turns out to go through essentially without change.
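In simulation terms, the scattershot trick amounts to something like the following sketch (the names and the simple independent-firing model are mine, not from the talk; U is the fixed m×m interferometer unitary, with one heralded source per input mode):

```python
# Illustrative sketch of the scattershot idea: whichever heralded sources fire
# select the columns of the fixed interferometer unitary U that define this
# run's BosonSampling instance.
import numpy as np

def scattershot_instance(U, fire_prob=0.1, rng=None):
    rng = rng or np.random.default_rng()
    m = U.shape[0]
    fired = np.flatnonzero(rng.random(m) < fire_prob)   # sources that fired
    A = U[:, fired]      # m x n column-orthonormal matrix for this run
    return A, fired
```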

Polynomial-Time Verification of BosonSampling Devices?
Idea 1: Let A_S be the n×n submatrix of A corresponding to output S. Let P_S be the product of squared 2-norms of A_S’s rows. Check whether the observed distribution over P_S is consistent with BosonSampling.
[Figure: histograms of P_S under the uniform distribution (a lognormal random variable) vs. under a BosonSampling distribution]
Idea 2: Let the scattering matrix U be a discrete Fourier transform. Then because of cancellations in the permanent, a ~1/n fraction of outcomes S should have probability 0. Check that these never occur.
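The statistic in Idea 1 is cheap to compute; a minimal sketch (helper name mine):

```python
# Illustrative computation of the Idea-1 statistic:
# P_S = product over the rows of A_S of that row's squared 2-norm.
import numpy as np

def row_norm_statistic(A, S):
    A_S = A[list(S), :]
    return float(np.prod(np.sum(np.abs(A_S) ** 2, axis=1)))
```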

Using Quantum Optics to Prove that the Permanent is #P-Complete [A., Proc. Roy. Soc. 2011]
Valiant showed that the permanent is #P-complete—but his proof required strange, custom-made gadgets. We gave a new, arguably more transparent proof by combining three facts:
(1) n-photon amplitudes correspond to n×n permanents
(2) Postselected quantum optics can simulate universal quantum computation [Knill-Laflamme-Milburn 2001]
(3) Quantum computations can encode #P-complete quantities in their amplitudes

Open Problems
Prove that Gaussian permanent approximation is #P-hard (first step: understand the distribution of Gaussian permanents)
Can the BosonSampling model solve classically-hard decision problems? With verifiable answers?
Can one efficiently sample a distribution that can’t be efficiently distinguished from BosonSampling?
Similar hardness results for other natural quantum systems (besides linear optics)? Bremner, Jozsa, Shepherd 2010: Another system for which exact classical simulation would collapse the polynomial hierarchy.