BosonSampling
Scott Aaronson (University of Texas, Austin)
Conference on Integrated Quantum Photonics, Rome, September 26, 2017
Based mostly on joint work with Alex Arkhipov

The Extended Church-Turing Thesis (ECT)
Everything feasibly computable in the physical world is feasibly computable by a (probabilistic) Turing machine.
Shor's Theorem: Quantum Simulation has no efficient classical algorithm, unless Factoring does also.

So the ECT is false … what more evidence could anyone want?
Objection: Building a QC able to factor large numbers is hard! Response: After 23 years, no fundamental obstacle has been found, but who knows?
Objection: Factoring might have a fast classical algorithm! At any rate, it's an extremely "special" problem. Response: Wouldn't it be great to show that if quantum computers can be simulated classically, then (say) P=NP?
Either way: can't we "meet the physicists halfway," and show computational hardness for quantum systems closer to what they actually work with now?

Our Starting Point
The transition amplitudes of $n$ identical particles are given by $n\times n$ matrix functions: determinants for FERMIONS, which are in P, and permanents for BOSONS, which are #P-complete [Valiant]:
$$\mathrm{Det}(A)=\sum_{\sigma\in S_n}\mathrm{sgn}(\sigma)\prod_{i=1}^{n}a_{i,\sigma(i)}\qquad\qquad\mathrm{Per}(A)=\sum_{\sigma\in S_n}\prod_{i=1}^{n}a_{i,\sigma(i)}$$
Seems unfair that the bosons got the harder job.
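To make the gap concrete, here is a minimal sketch (the helper name is ours): the determinant falls to Gaussian elimination in $O(n^3)$ time, while the best known exact algorithms for the permanent, such as Ryser's inclusion-exclusion formula below, take time exponential in $n$.

```python
import numpy as np
from itertools import combinations

def permanent_ryser(A):
    """Exact permanent via Ryser's inclusion-exclusion formula: O(2^n * n^2) time."""
    n = A.shape[0]
    total = 0.0
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            total += (-1) ** k * np.prod(A[:, list(cols)].sum(axis=1))
    return (-1) ** n * total

A = np.random.randn(6, 6)
print("Det(A):", np.linalg.det(A))        # polynomial time: the fermions' job
print("Per(A):", permanent_ryser(A))      # exponential time: the bosons' job
```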

Can We Use Bosons to Calculate the Permanent?
So if $n$-boson amplitudes correspond to permanents… can we use bosons to calculate the permanent? That sounds way too good to be true: it would let us solve NP-complete problems and more using QC!
Explanation: Amplitudes aren't directly observable. To get a reasonable estimate of $\mathrm{Per}(A)$, you might need to repeat the experiment exponentially many times.

Basic Result: Suppose there were a polynomial-time classical randomized algorithm that took as input a description of a noninteracting-boson experiment, and that output a sample from the correct final distribution over $n$-boson states. Then $P^{\#P}=BPP^{NP}$ and the polynomial hierarchy collapses.
Motivation: Compared to (say) Shor's algorithm, we get "stronger" evidence that a "weaker" system can do interesting quantum computations.

Related Work
Valiant 2001, Terhal-DiVincenzo 2002, "folklore": A QC built of noninteracting fermions can be efficiently simulated by a classical computer.
Knill, Laflamme, Milburn 2001: Noninteracting bosons plus adaptive measurements yield universal QC.
Jerrum-Sinclair-Vigoda 2001: Fast classical randomized algorithm to approximate $\mathrm{Per}(A)$ for nonnegative $A$.
Bremner-Jozsa-Shepherd 2011 (independent of us): Analogous hardness results for simulating "commuting Hamiltonian" quantum computers.

The Quantum Optics Model
A rudimentary subset of quantum computing, involving only non-interacting bosons, and not based on qubits.
Classical counterpart: Galton's Board, on display at many science museums. Using only pegs and non-interacting balls, you probably can't build a universal computer, but you can do some interesting computations, like generating the binomial distribution!
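That classical "computation" takes a few lines to simulate; a toy sketch (the parameters are arbitrary):

```python
import numpy as np

n_rows, n_balls = 10, 100_000
rng = np.random.default_rng(0)

# Each ball bounces left (0) or right (1) at every peg; its final bin is its
# number of rightward bounces, so the bins fill up binomially.
bins = rng.integers(0, 2, size=(n_balls, n_rows)).sum(axis=1)
print(np.bincount(bins, minlength=n_rows + 1) / n_balls)   # ~ Binomial(10, 1/2)
```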

The Quantum Version
Let's replace the balls by identical single photons, and the pegs by beamsplitters. Then we see strange things like the Hong-Ou-Mandel dip: the two photons are now correlated, even though they never interacted! The explanation involves destructive interference of amplitudes: the final amplitude of non-collision is
$$\frac{1}{\sqrt{2}}\cdot\frac{1}{\sqrt{2}}-\frac{1}{\sqrt{2}}\cdot\frac{1}{\sqrt{2}}=0.$$
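The same cancellation in two lines of NumPy, assuming the standard 50:50 beamsplitter matrix (the permanent rule this relies on is formalized on the next slides):

```python
import numpy as np

B = np.array([[1,  1],
              [1, -1]]) / np.sqrt(2)             # 50:50 beamsplitter unitary

# Per(B) = B[0,0]*B[1,1] + B[0,1]*B[1,0] is the |1,1> -> |1,1> amplitude
print(B[0, 0] * B[1, 1] + B[0, 1] * B[1, 0])     # 0.0: the photons always collide
```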

Getting Formal
The basis states have the form $|S\rangle=|s_1,\ldots,s_m\rangle$, where $s_i$ is the number of photons in the $i$th "mode." We'll never create or destroy photons, so $s_1+\cdots+s_m=n$ is constant. For us, $m=n^{O(1)}$.
Initial state: $|I\rangle=|1,\ldots,1,0,\ldots,0\rangle$.

You get to apply any mm unitary matrix U—say, using a collection of 2-mode beamsplitters In general, there are ways to distribute n identical photons into m modes U induces an MM unitary (U) on the n-photon states as follows: Here US,T is an nn submatrix of U (possibly with repeated rows and columns), obtained by taking si copies of the ith row of U and tj copies of the jth column for all i,j

Beautiful Alternate Perspective
The "state" of our computer, at any time, is a degree-$n$ polynomial over the variables $x=(x_1,\ldots,x_m)$ (where $n\ll m$). Initial state: $p(x):=x_1\cdots x_n$. We can apply any $m\times m$ unitary transformation $U$ to $x$, to obtain a new degree-$n$ polynomial. Then on "measuring," we see the monomial $x_1^{s_1}\cdots x_m^{s_m}$ with probability equal to its squared coefficient times $s_1!\cdots s_m!$.
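A sympy sketch of this picture for the beamsplitter example, assuming the substitution direction $x_i\mapsto\sum_j u_{ij}x_j$ (one common convention) and the probability rule stated above:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
r = sp.sqrt(2)

p = x1 * x2                                                    # initial state |1,1>
p = sp.expand(p.subs({x1: (x1 + x2) / r,
                      x2: (x1 - x2) / r}, simultaneous=True))  # apply the beamsplitter

print(p.coeff(x1 * x2))                          # 0: the Hong-Ou-Mandel dip yet again
print(p.coeff(x1 ** 2) ** 2 * sp.factorial(2))   # 1/2 = Pr[both photons in mode 1]
```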

OK, so why is it hard to sample the distribution over photon numbers classically?
Given any matrix $A\in\mathbb{C}^{n\times n}$, we can construct an $m\times m$ unitary $U$ (where $m=2n$) that contains $\varepsilon A$ as its top-left $n\times n$ block, for a suitably small $\varepsilon>0$. Suppose we start with $|I\rangle=|1,\ldots,1,0,\ldots,0\rangle$ (one photon in each of the first $n$ modes), apply $U$, and measure. Then the probability of observing $|I\rangle$ again is
$$p:=\varepsilon^{2n}\left|\mathrm{Per}(A)\right|^2.$$
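One standard way to complete $\varepsilon A$ to a unitary is a block dilation; a minimal sketch, assuming $\|\varepsilon A\|\le 1$ (the function name is ours):

```python
import numpy as np
from scipy.linalg import sqrtm

def embed(A, eps):
    """Return a 2n x 2n unitary whose top-left n x n block is eps*A."""
    n = A.shape[0]
    B = eps * A.astype(complex)
    I = np.eye(n)
    return np.block([[B,                         sqrtm(I - B @ B.conj().T)],
                     [sqrtm(I - B.conj().T @ B), -B.conj().T]])

A = np.random.randn(4, 4)
eps = 0.5 / np.linalg.norm(A, 2)                # any eps with ||eps*A|| <= 1 works
U = embed(A, eps)
print(np.allclose(U @ U.conj().T, np.eye(8)))   # True: U is unitary
```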

Claim 1: $p$ is #P-complete to estimate (up to a constant factor). Idea: Valiant proved that the Permanent is #P-complete, and a classical reduction lets us go from a multiplicative approximation of $|\mathrm{Per}(A)|^2$ to $\mathrm{Per}(A)$ itself.
Claim 2: Suppose we had a fast classical algorithm for boson sampling. Then we could estimate $p$ in $BPP^{NP}$. Idea: Let $M$ be our classical sampling algorithm, and let $r$ be its randomness. Use approximate counting to estimate $\Pr_r[M(r)\text{ outputs }|I\rangle]$.
Conclusion: Suppose we had a fast classical algorithm for boson sampling. Then $P^{\#P}=BPP^{NP}$.

The Elephant in the Room
The previous result hinged on the difficulty of estimating a single, exponentially small probability $p$. But what about noise and error? The "right" question: can a classical computer efficiently sample a distribution with $1/n^{O(1)}$ variation distance from the boson distribution?
Our Main Result: Suppose it can. Then there's a $BPP^{NP}$ algorithm to estimate $|\mathrm{Per}(A)|^2$, with high probability over a Gaussian random matrix $A$.

Our Main Conjecture
Estimating $|\mathrm{Per}(A)|^2$, with high probability over i.i.d. Gaussian $A$, is a #P-hard problem.
If this conjecture holds, then even a noisy $n$-photon experiment could falsify the Extended Church-Turing Thesis, assuming $P^{\#P}\neq BPP^{NP}$! Much of our work was devoted to giving evidence for this conjecture.
What makes the Gaussian ensemble special? Theorem: It arises by considering sufficiently small submatrices of Haar-random unitary matrices.

"Easier" problem: Just show that, if $A$ is an i.i.d. Gaussian matrix, then $|\mathrm{Per}(A)|^2$ is approximately a lognormal random variable (as numerics suggest), and not so concentrated around 0 as to preclude its being hard to estimate.
We can prove this for the determinant in place of the permanent. For the permanent, the best known anti-concentration results [Tao-Vu] are not yet strong enough for us. We can calculate $\mathrm{E}[|\mathrm{Per}(A)|^2]=n!$ and $\mathrm{E}[|\mathrm{Per}(A)|^4]=(n+1)(n!)^2$, but these moments are not strong enough to imply an anti-concentration result.
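The second moment is easy to check numerically; a quick Monte Carlo sketch (an illustration only, not a substitute for an anti-concentration proof), reusing the Ryser helper:

```python
import numpy as np
from math import factorial
from itertools import combinations

def permanent_ryser(A):
    n = A.shape[0]
    total = 0.0
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            total += (-1) ** k * np.prod(A[:, list(cols)].sum(axis=1))
    return (-1) ** n * total

n, trials = 5, 2000
rng = np.random.default_rng(0)
vals = [abs(permanent_ryser((rng.standard_normal((n, n)) +
                             1j * rng.standard_normal((n, n))) / np.sqrt(2))) ** 2
        for _ in range(trials)]
print(np.mean(vals) / factorial(n))     # hovers around 1.0, matching E[...] = n!
print(np.median(vals) / factorial(n))   # well below the mean: a heavy upper tail
```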

BosonSampling Experiments
Initial experiments with 3 photons (groups in Rome, Oxford, Vienna, and Brisbane).
Carolan et al. 2015: 6 photons, but with initial states of the form $|3,3\rangle$.
Wang et al. 2017: 5 photons, with initial states of the form $|1,1,1,1,1\rangle$.

Challenges for Scaling Up:
Reliable single-photon sources (optical multiplexing?)
Minimizing losses
Getting high probability of $n$-photon coincidence
Goal (in our view): scale to 10-30 photons. We don't want to scale much beyond that, both because you probably can't without fault-tolerance, and because a classical computer probably couldn't even verify the results!

Scattershot BosonSampling
An idea, proposed by Steve Kolthammer and others, for sampling a hard distribution even with highly unreliable (but heralded) photon sources, like SPDCs. The idea: say you have 100 sources, of which only 10 (on average) generate a photon. Then just detect which sources succeed, and use those to define your BosonSampling instance! The complexity analysis turns out to go through essentially without change.
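A toy illustration of the bookkeeping, with the slide's illustrative numbers:

```python
import numpy as np

rng = np.random.default_rng(1)
num_sources, p_herald = 100, 0.10   # ~10 of 100 sources fire per run, on average

# Heralding reveals, run by run, exactly which sources fired; those modes then
# define that run's (random, but known) BosonSampling instance.
fired = rng.random(num_sources) < p_herald
print(f"{fired.sum()} photons this run, entering modes {np.flatnonzero(fired)}")
```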

Using Quantum Optics to Prove that the Permanent is #P-Complete [A., Proc. Roy. Soc. 2011]
Valiant showed that the permanent is #P-complete, but his proof required strange, custom-made gadgets. We gave a new, arguably more transparent proof by combining three facts:
(1) $n$-photon amplitudes correspond to $n\times n$ permanents
(2) Postselected quantum optics can simulate universal quantum computation [Knill-Laflamme-Milburn 2001]
(3) Quantum computations can encode #P-complete quantities in their amplitudes

Can BosonSampling Solve Non-Sampling Problems? (Could it even have cryptographic applications?)
Idea: What if we could "smuggle" a matrix $A$ with huge permanent, as a submatrix of a larger unitary matrix $U$? Finding $A$ could be hard classically, but shooting photons into an interferometer network would easily reveal it.
Pessimistic Conjecture: If $U$ is unitary and $|\mathrm{Per}(U)|\ge 1/n^{O(1)}$, then $U$ is "close" to a permuted diagonal matrix, so it "sticks out like a sore thumb."
A.-Nguyen, Israel J. Math 2014: Proof of a weaker version of the pessimistic conjecture, using inverse Littlewood-Offord theory.

BosonSampling with Lost Photons
Suppose we have $n+k$ photons in the initial state, but $k$ are randomly lost. Then the probability of each output has the form
$$\frac{1}{\binom{n+k}{n}}\sum_{S}\left|\mathrm{Per}(A_S)\right|^2,$$
where $S$ ranges over the possible sets of $n$ surviving photons and $A_S$ is the corresponding $n\times n$ submatrix.
A.-Brod 2016: For any constant number of losses, $k=O(1)$, the above quantities are #P-hard to approximate, assuming $|\mathrm{Per}(A)|^2$ itself is. We don't know what happens for larger $k$...
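A sketch of that sum in code, reusing the Ryser helper, under two simplifying assumptions that are ours: losses are uniform over the $n+k$ inputs, and the output is collision-free (so no factorial normalization is needed):

```python
import numpy as np
from math import comb
from itertools import combinations

def permanent_ryser(A):
    n = A.shape[0]
    total = 0.0
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            total += (-1) ** k * np.prod(A[:, list(cols)].sum(axis=1))
    return (-1) ** n * total

def lossy_output_prob(U, n, k, out_modes):
    """Pr[photons exit in out_modes] when k of the n+k input photons are lost:
    an average of |Per|^2 over the binom(n+k, n) possible survivor sets."""
    total = 0.0
    for survivors in combinations(range(n + k), n):
        total += abs(permanent_ryser(U[np.ix_(survivors, out_modes)])) ** 2
    return total / comb(n + k, n)
```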

Summary
Intuition suggests that not merely quantum computers, but many natural quantum systems, should be intractable to simulate on classical computers, because of the exponentiality of the wavefunction.
BosonSampling provides a clear example of how we can formalize this intuition, or at least base it on "standard" conjectures in theoretical computer science. It's also brought QC theory into closer contact with experiment. And it's highlighted the remarkable connection between bosons and the matrix permanent.
Future progress may depend on solving hard open problems about the permanent.