Solving Hard Problems With Light
Scott Aaronson (Assoc. Prof., EECS)
Joint work with Alex Arkhipov
In 1994, something big happened in the foundations of computer science, whose meaning is still debated today… Why exactly was Shor's algorithm important?
Boosters: Because it means we'll build QCs!
Skeptics: Because it means we won't build QCs!
Me: For reasons having nothing to do with building QCs!
Shor's algorithm was a hardness result for one of the central computational problems of modern science: QUANTUM SIMULATION.
Shor's Theorem: QUANTUM SIMULATION is not solvable efficiently (in polynomial time), unless FACTORING is also.
[Figure: use of DoE supercomputers by area, from a talk by Alán Aspuru-Guzik]
Today: a different kind of hardness result for simulating quantum mechanics.
Advantages:
- Based on more generic complexity assumptions than the hardness of FACTORING
- Gives evidence that QCs have capabilities outside the entire polynomial hierarchy
- Requires only a very simple kind of quantum computation: nonadaptive linear optics (testable before I'm dead?)
Disadvantages:
- Applies to relational problems (problems with many possible outputs) or sampling problems, not decision problems
- Harder to convince a skeptic that your computer is indeed solving the relevant hard problem
- Less relevant for the NSA
Bestiary of Complexity Classes
Example of a PH problem: "For all n-bit strings x, does there exist an n-bit string y such that for all n-bit strings z, φ(x,y,z) holds?"
How complexity theorists say such-and-such is damn unlikely: "If such-and-such is true, then PH collapses to a finite level."
Just as they believe P ≠ NP, complexity theorists believe that PH is infinite. So if you can show "such-and-such is true ⟹ PH collapses to a finite level," it's damn good evidence that such-and-such is false.
[Diagram: inclusion map of complexity classes (P, BPP, BQP, NP, PH, P^#P), with FACTORING, PERMANENT, COUNTING, and 3SAT as example problems]
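As a toy illustration of the quantifier structure above, here is a hedged Python sketch (not from the talk) that brute-forces a small Π₃-style statement; the predicate phi is a made-up placeholder, since the point is that the quantifier alternation, not the predicate, is what makes these problems hard:

```python
from itertools import product

def phi(x, y, z):
    # Placeholder polynomial-time predicate, invented for illustration:
    # "the bitwise XOR of x, y, z has even parity."
    return sum(xi ^ yi ^ zi for xi, yi, zi in zip(x, y, z)) % 2 == 0

def pi3_holds(n):
    """Brute-force evaluation of: for all x, exists y, for all z, phi(x,y,z).
    Takes 2^(3n) steps, but note there are only 3 quantifier alternations;
    PH stratifies problems by that number of alternations."""
    strings = list(product([0, 1], repeat=n))
    return all(any(all(phi(x, y, z) for z in strings) for y in strings)
               for x in strings)

print(pi3_holds(3))
```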
Our Results
Suppose the output distribution of any linear-optics circuit can be efficiently sampled by a classical algorithm. Then the polynomial hierarchy collapses. Indeed, even if such a distribution can be sampled by a classical computer with an oracle for the polynomial hierarchy, the polynomial hierarchy still collapses.
Suppose two plausible conjectures are true: the permanent of a Gaussian random matrix is (1) #P-hard to approximate, and (2) not too concentrated around 0. Then the output distribution of a linear-optics circuit can't even be approximately sampled efficiently classically, unless the polynomial hierarchy collapses.
If our conjectures hold, then even a noisy linear-optics experiment can sample from a probability distribution that no classical computer can feasibly sample from.
Particle Physics in One Slide
There are two basic types of particle in the universe: BOSONS and FERMIONS. Their n-particle transition amplitudes are given respectively by permanents and determinants:
Per(A) = Σ_{σ∈S_n} Π_{i=1}^{n} a_{i,σ(i)}        Det(A) = Σ_{σ∈S_n} sgn(σ) Π_{i=1}^{n} a_{i,σ(i)}
All I can say is, the bosons got the harder job.
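To make the contrast concrete: the determinant (fermions) is computable in polynomial time, while the permanent (bosons) is #P-hard in general (Valiant 1979). A minimal Python sketch, not from the talk, computing both for a small random matrix:

```python
import numpy as np
from itertools import permutations

def permanent(a):
    """Permanent from the definition: a sum over all permutations, like the
    determinant but with no minus signs. This is n! time; Ryser's formula
    improves it to O(2^n * n), which is still exponential."""
    n = a.shape[0]
    return sum(np.prod([a[i, sigma[i]] for i in range(n)])
               for sigma in permutations(range(n)))

rng = np.random.default_rng(0)
a = rng.standard_normal((6, 6))
print("Det =", np.linalg.det(a))   # polynomial time: fermion amplitudes
print("Per =", permanent(a))       # #P-hard in general: boson amplitudes
```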
High-Level Idea
Estimating a sum of exponentially many positive or negative numbers: #P-hard.
Estimating a sum of exponentially many nonnegative numbers: still hard, but known to be in PH.
If quantum mechanics could be efficiently simulated classically, then these two problems would become equivalent, thereby placing #P in PH and collapsing PH.
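A toy numerical illustration of this gap, not from the talk: random sampling estimates a sum of nonnegative terms to small relative error, but with signed terms the cancellation leaves a true sum far smaller than the sampling noise:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 10**6
pos = rng.random(N)                      # nonnegative terms
signed = pos * rng.choice([-1, 1], N)    # same magnitudes, random signs

def mc_estimate(terms, samples=10**4):
    """Estimate sum(terms) from a uniform random sample of the terms."""
    idx = rng.integers(0, len(terms), samples)
    return len(terms) * terms[idx].mean()

# Nonnegative: true sum ~5e5, sampling error ~3e3 (small relative error).
# Signed: true sum ~1e3 in magnitude, sampling error ~6e3 (error swamps it).
print("nonnegative: true %.3e  estimate %.3e" % (pos.sum(), mc_estimate(pos)))
print("signed:      true %.3e  estimate %.3e" % (signed.sum(), mc_estimate(signed)))
```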
So why aren't we done? Because real quantum experiments are subject to noise. Would an efficient classical algorithm that simulated a noisy optics experiment still collapse the polynomial hierarchy?
Main Result: Yes, assuming two plausible conjectures about permanents of random matrices (the PCC and the PGC).
Particular experiment we have in mind: take a system of n identical photons with m = O(n²) modes. Put each photon in a known mode, then apply a Haar-random m×m unitary transformation U. Then measure which modes have 1 or more photons in them.
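A minimal numerical sketch of the math behind this experiment, under the standard assumptions that the photons enter the first n modes and that the measured outcome is collision-free (in which case its probability is |Per(U_S)|² for the corresponding n×n submatrix U_S); the sizes here are toy values, not the proposed n = 20, m = 400:

```python
import numpy as np
from itertools import permutations

def permanent(a):
    n = a.shape[0]
    return sum(np.prod([a[i, s[i]] for i in range(n)])
               for s in permutations(range(n)))

def haar_unitary(m, rng):
    """Haar-random m x m unitary via QR of a complex Gaussian matrix."""
    z = (rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))   # fix column phases to get the Haar measure

n, m = 3, 9                      # toy sizes for illustration
rng = np.random.default_rng(42)
U = haar_unitary(m, rng)

# Photons enter modes 0..n-1; the probability of detecting one photon in each
# mode of S is |Per(U_S)|^2, where U_S keeps rows S and the first n columns.
S = (1, 4, 7)                    # an arbitrary collision-free outcome
amp = permanent(U[np.ix_(S, range(n))])
print("Pr[outcome S] =", abs(amp) ** 2)
```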
The Permanent Concentration Conjecture (PCC)
There exists a polynomial p such that for all n, |Per(X)| ≥ √(n!)/p(n) with probability at least 1 − 1/p(n) over an n×n matrix X of independent N(0,1) complex Gaussian entries.
Empirically true! Also, we can prove the analogous statement with the determinant in place of the permanent.
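A hedged empirical spot-check in the spirit of the slide's "Empirically true!" (my own experiment, at toy size): sample complex Gaussian matrices and see how often |Per(X)| falls far below √(n!), the natural scale since E[|Per(X)|²] = n!:

```python
import numpy as np
from itertools import permutations
from math import factorial, sqrt

def permanent(a):
    n = a.shape[0]
    return sum(np.prod([a[i, s[i]] for i in range(n)])
               for s in permutations(range(n)))

rng = np.random.default_rng(7)
n, trials = 5, 2000
ratios = []
for _ in range(trials):
    x = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / sqrt(2)
    ratios.append(abs(permanent(x)) / sqrt(factorial(n)))  # E[ratio^2] = 1

ratios = np.array(ratios)
for t in (0.5, 0.1, 0.01):
    print(f"Pr[|Per(X)|/sqrt(n!) < {t}] ~= {np.mean(ratios < t):.3f}")
```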
The Permanent-of-Gaussians Conjecture (PGC)
Let X be an n×n matrix of independent N(0,1) complex Gaussian entries. Then approximating Per(X) to within a 1/poly(n) multiplicative error, for a 1 − 1/poly(n) fraction of X, is a #P-hard problem.
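A formal restatement of the PGC in LaTeX, with placeholder polynomial names q and r of my own choosing (they correspond to the two separate poly(n) quantities in the prose):

```latex
% PGC, restated: it is #P-hard, given X ~ N(0,1)_C^{n x n}
% (i.i.d. complex Gaussian entries), to output an estimate z with
\[
  |z - \mathrm{Per}(X)| \;\le\; \frac{|\mathrm{Per}(X)|}{r(n)}
  \quad \text{for at least a } 1 - \frac{1}{q(n)} \text{ fraction of } X .
\]
```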
Experimental Prospects
What would it take to implement the requisite experiment?
- Reliable single-photon sources
- Reliable phase-shifters and beamsplitters, to implement an arbitrary unitary on m photon modes
- Photodetector arrays that can reliably distinguish 0 vs. 1 photon
But crucially, no nonlinear optics or postselected measurements!
Our Proposal: concentrate on (say) n = 20 photons and m = 400 modes, so that classical simulation is nontrivial but not impossible.
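A back-of-envelope scale check for the proposed regime, assuming the Reck et al. triangular decomposition of an m×m unitary into m(m−1)/2 beamsplitters plus phase-shifters, and Ryser's O(2^n · n) formula for a single permanent:

```python
from math import comb

n, m = 20, 400  # the proposed regime

# ~8e4 optical elements (Reck et al. decomposition of an arbitrary m x m unitary)
print("optical elements:", m * (m - 1) // 2)
# ~10^33 collision-free outcomes -- far too many to enumerate classically
print("collision-free outcomes:", comb(m, n))
# ~2e7 operations per amplitude via Ryser's formula -- each one checkable
print("Ryser cost per amplitude:", 2**n * n)
```

This is the sense in which the regime is "nontrivial but not impossible": a classical computer can still check any single amplitude, but enumerating the full distribution over ~10³³ outcomes is out of reach.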
Summary
I often say that Shor's algorithm presented us with three choices. Either
(1) the laws of physics are exponentially hard to simulate on any computer today,
(2) textbook quantum mechanics is false, or
(3) quantum computers are easy to simulate classically.
For all intents and purposes?