Black Holes, Firewalls, and the Limits of Quantum Computers
Scott Aaronson (UT Austin)
Simons "Theoretically Speaking" Series, Oct. 18, 2017
Papers and slides at www.scottaaronson.com
Things we never see…
Warp drive
Perpetuum mobile
Übercomputer ("GOLDBACH CONJECTURE: TRUE. NEXT QUESTION.")

The (seeming) impossibility of the first two machines reflects fundamental principles of physics—Special Relativity and the Second Law respectively. So what about the third one?

The starting point for this talk is that there are certain technologies we never see that would be REALLY cool if we had them. The first is warp drive. For this crowd especially, I can say: what's taking you so long? The second is the perpetual-motion machine. The third is what I like to call the übercomputer: a machine where you feed it any well-posed mathematical question and it instantly tells you the answer. Even with the fastest computers today, if you ask them to prove a hard theorem, they could do it eventually, but it might take longer than the age of the universe. That's why there are still human mathematicians. In this talk, I want to convince you that the impossibility of übercomputers is also something physicists should think about, and something that may have implications for physics.
But Turing machines have fundamental limits—even more so if you need the answer in a reasonable amount of time!

P: Polynomial Time
Class of all "decision problems" (infinite sets of yes-or-no questions) solvable by a Turing machine, using a number of steps that scales at most like the size of the question raised to some fixed power
Example: Is it mathematically possible to get between Berkeley and Palo Alto?
NP: Nondeterministic Polynomial Time
Class of all decision problems for which a "yes" answer can be verified in polynomial time, if you're given a witness or proof for it
Example: Does
37976595177176695379702491479374117272627593301950462688996367493665078453699421776635920409229841590432339850906962896040417072096197880513650802416494821602885927126968629464313047353426395204881920475456129163305093846968119683912232405433688051567862303785337149184281196967743805800830815442679903720933
have a divisor ending in 7?

NP stands for Nondeterministic Polynomial-Time. See, I envy the physicists, because even if laypeople don't understand what you're doing, at least you have awesome names, like quark, supersymmetry, black hole. We computer scientists are stuck with P and NP. But it really is just as interesting! NP is the class of problems where, if someone tells you the answer is "yes," there's a short proof of that, which you can check in polynomial time. A famous example involves factoring an enormous number. Let's say I ask you whether this number here has a factor ending in 7. It might take an astronomical amount of time to solve the problem yourself, but if someone TOLD you the factors, you could check them efficiently, and say, "well, I suppose it DID have a factor ending in 7."
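To make the "easy to verify, hard to find" distinction concrete, here is a minimal Python sketch of the verifier for this kind of NP question (the toy instance 91 = 7 × 13 is my own illustration, not from the slides):

    def verify_witness(n, d):
        """Accept iff d is a nontrivial divisor of n that ends in the digit 7."""
        return 1 < d < n and n % d == 0 and d % 10 == 7

    # Toy instance (not the 300-digit number from the slide): 91 = 7 * 13,
    # so d = 7 is a valid witness, and checking it takes a few arithmetic operations.
    print(verify_witness(91, 7))    # True
    print(verify_witness(91, 13))   # False: 13 doesn't end in 7
    print(verify_witness(97, 7))    # False: 7 doesn't divide 97

Finding such a witness from scratch is another matter entirely.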
NP-hard: If you can solve it, then you can solve every NP problem
NP-complete: NP-hard and in NP
Example: Is there a tour that visits each city once?

NP-hard problems are the problems where, if you could solve them, then you could solve EVERY NP problem. A problem is NP-complete if it's both NP-hard and in NP. So if you like, NP-complete problems are the hardest problems for which the answer is easy to check. A great discovery from the '70s is that a HUGE number of practical problems actually have this property of being NP-complete. One famous example is the Hamiltonian cycle problem, where I give you a map and ask: is there a tour that visits each city once and then returns to the starting point? In this particular case, there is one.
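For a rough sense of why such problems blow up, here's a brute-force Hamiltonian-cycle search in Python (my own sketch, not from the talk); it tries all (n-1)! orderings of the cities, which is already hopeless for a few dozen cities:

    from itertools import permutations

    def has_hamiltonian_cycle(n, edges):
        """Brute force: try every ordering of cities 1..n-1, starting and ending at city 0."""
        adj = {(u, v) for u, v in edges} | {(v, u) for u, v in edges}
        for order in permutations(range(1, n)):
            tour = (0,) + order + (0,)
            if all((tour[i], tour[i + 1]) in adj for i in range(n)):
                return True
        return False

    # A 4-city ring: the tour 0-1-2-3-0 exists, so the answer is "yes".
    print(has_hamiltonian_cycle(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))   # True
    # Remove one edge and no tour survives.
    print(has_hamiltonian_cycle(4, [(0, 1), (1, 2), (2, 3)]))           # False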
The (literally) $1,000,000 question: Does P=NP?

"If there actually were a machine with [running time] ~K·n (or even only with ~K·n²), this would have consequences of the greatest magnitude." —Gödel to von Neumann, 1956

The big question is whether P=NP. It's literally a million-dollar question: if you solve it, you get a million dollars from the Clay Math Institute. In my opinion, it's the most important of all seven Clay problems, since if P=NP, then probably you could not only solve that one problem but also the other six: you would simply program your computer to find the proofs for you. I should mention that, because of the blog I write, I get claimed solutions to the P vs. NP problem in my inbox every other week or so. The most recent *relatively serious* claim was in 2010, when Vinay Deolalikar got all over the news claiming to have proved P≠NP. I was on vacation, but eventually it got to the point where I said: listen, if he's right, I'll supplement his million-dollar prize with $200,000 of my own. I took a lot of flak for that, but in case you're wondering, the end result was that I didn't have to pay. This remains one of the hardest and most profound open problems in mathematics.
Most computer scientists believe that P≠NP
But if so, there's a further question: is there any way to solve NP-complete problems in polynomial time, consistent with the laws of physics?
Old proposal: Dip two glass plates with pegs between them into soapy water. Let the soap bubbles form a minimum Steiner tree connecting the pegs—thereby solving a known NP-hard problem “instantaneously”
Relativity Computer

But while we're waiting for scalable quantum computers, we can also base computers on that other great theory of the 20th century: relativity! The idea here is simple: you start your computer working on some really hard problem and leave it on Earth. Then you get on a spaceship and accelerate to close to the speed of light. When you return, billions of years have passed on Earth and all your friends are long dead, but at least you've got the answer to your computational problem. I don't know why more people don't try it!
Zeno's Computer
[Figure: the computation's steps laid out against time in seconds, each step taking half as long as the one before]

Another of my favorites is Zeno's computer. The idea here is also simple: this is a computer that would execute the first step in one second, the next step in half a second, the next in a quarter second, and so on, so that after two seconds it's done an infinite amount of computation. Incidentally, do any of you know why that WOULDN'T work? The problem is that, once you get down to the Planck time of 10^{-43} seconds, you'd need so much energy to run your computer that fast that, according to our best current theories, you'd exceed what's called the Schwarzschild radius, and your computer would collapse to a black hole. You don't want that to happen.
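The arithmetic behind "infinitely many steps in two seconds" is just a geometric series (a standard identity, spelled out here for completeness):

    1 + 1/2 + 1/4 + 1/8 + … = Σ_{k≥0} 2^(-k) = 2 seconds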
Ah, but what about quantum computing? (you knew it was coming) Quantum mechanics: “Probability theory with minus signs” (Nature seems to prefer it that way)
The Famous Double-Slit Experiment

Probability of landing in the "dark patch" = |amplitude|² = |amplitude_Slit1 + amplitude_Slit2|² = 0
Yet if you close one of the slits, the photon can appear in that previously dark patch!
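To put numbers on the cancellation (my own illustrative values, not from the slide): if the two slits contribute amplitudes of equal magnitude 1/√2 but opposite sign, then

    |1/√2 + (-1/√2)|² = 0,   whereas with one slit closed   |1/√2|² = 1/2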
A bit more precisely: the key claim of quantum mechanics is that, if an object can be in two distinguishable states, call them |0⟩ or |1⟩, then it can also be in a superposition a|0⟩ + b|1⟩
Here a and b are complex numbers called amplitudes, satisfying |a|² + |b|² = 1
If we observe, we see |0⟩ with probability |a|², and |1⟩ with probability |b|²
Also, the object collapses to whichever outcome we see
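Here's a tiny Python sketch of that measurement rule (my own illustration, using a = 1/√2, b = -1/√2 as the example state):

    import numpy as np

    # One qubit: amplitudes for |0> and |1>, normalized so |a|^2 + |b|^2 = 1.
    a, b = 1 / np.sqrt(2), -1 / np.sqrt(2)

    rng = np.random.default_rng(0)

    def measure(a, b):
        """Born rule: outcome 0 with probability |a|^2, outcome 1 with probability |b|^2."""
        return 0 if rng.random() < abs(a) ** 2 else 1

    outcomes = [measure(a, b) for _ in range(10_000)]
    print(outcomes.count(0) / len(outcomes))   # ~0.5; the minus sign is invisible to a single measurement,
                                               # but it matters once amplitudes from different paths can cancel.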
Quantum Computing

A general entangled state of n qubits requires ~2^n amplitudes to specify: one amplitude a_x for every n-bit string x
Presents an obvious practical problem when using conventional computers to simulate quantum mechanics

Feynman 1981: So then why not turn things around, and build computers that themselves exploit superposition?
Shor 1994: Such a computer could do more than simulate QM—e.g., it could factor integers in polynomial time

Where we are: A QC has now factored 21 into 3×7, with high probability (Martín-López et al. 2012)
Scaling up is hard, because of decoherence! But unless QM is wrong, there doesn't seem to be any fundamental obstacle

As zany as this sounds, Deutsch's speculations are part of what gave rise to the modern field of quantum computing. So, what's the idea of quantum computing? Well, a general entangled state of n qubits requires 2^n amplitudes to specify, since you need to give an amplitude for every configuration of all n of the bits. That's a staggering amount of information! It suggests that Nature, off to the side somewhere, needs to write down 2^1000 numbers just to keep track of 1000 particles. And that presents an obvious practical problem when people try to use conventional computers to SIMULATE quantum mechanics: they have all sorts of approximate techniques, but even so, something like 10% of supercomputer cycles today are used, basically, for simulating quantum mechanics. In 1981, Richard Feynman said: if Nature is going to all this work, then why not turn it around, and build computers that THEMSELVES exploit superposition? What would such computers be useful for? Well, at least one thing: simulating quantum physics! As tautological as that sounds, I predict that if QCs ever become practical, simulating quantum physics will actually be the main thing they're used for. That has *tremendous* applications to materials science, drug design, understanding high-temperature superconductivity, and so on. But of course, what got everyone excited about this field was Peter Shor's discovery, in 1994, that a quantum computer would be good for MORE than just simulating quantum physics: it could also be used to factor integers in polynomial time, and thereby break almost all of the public-key cryptography currently used on the Internet. (Interesting!) Where we are: after 18 years and more than a billion dollars, I'm proud to say that a quantum computer recently factored 21 into 3×7, with high probability. (For a long time, it was only 15.) Scaling up is incredibly hard because of decoherence: the external environment, as it were, constantly trying to measure the QC's state and collapse it down to classical. With classical computers, it took more than 100 years from Charles Babbage until the invention of the transistor. Who knows how long it will take in this case? But unless quantum mechanics itself is wrong, there doesn't seem to be any fundamental obstacle to scaling this up. On the contrary, we now know that IF the decoherence can be kept below some finite but nonzero level, then there are very clever error-correcting codes that can render its remaining effects insignificant. So I'm optimistic that, if civilization lasts long enough, we'll eventually have practical quantum computers.
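One quick way to feel the 2^n blow-up (a back-of-the-envelope sketch of mine): count the memory a classical simulation needs just to store the full state vector.

    def statevector_bytes(n_qubits, bytes_per_amplitude=16):
        """Memory for a dense n-qubit state: 2^n complex amplitudes at 16 bytes each (complex128)."""
        return (2 ** n_qubits) * bytes_per_amplitude

    for n in (10, 30, 50, 300):
        print(n, "qubits:", statevector_bytes(n), "bytes")
    # 10 qubits: ~16 KB.  30 qubits: ~17 GB.  50 qubits: ~18 petabytes.
    # 300 qubits: more amplitudes than there are atoms in the observable universe.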
Factoring is not believed to be NP-complete!
And today, we don't believe quantum computers can solve NP-complete problems in polynomial time in general (though not surprisingly, we can't prove it)
Bennett et al. 1997: "Quantum magic" won't be enough
If you throw away the problem structure, and just consider an abstract "landscape" of 2^n possible solutions, then even a quantum computer needs ~2^(n/2) steps to find the correct one
(That bound is actually achievable, using Grover's algorithm!)
If there's a fast quantum algorithm for NP-complete problems, it will have to exploit their structure somehow
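To put numbers on that quadratic (rather than exponential) speedup, here's an illustrative calculation of mine, not from the slides:

    def search_steps(n_bits):
        """Unstructured search over 2^n candidates: brute-force vs. Grover-style query counts."""
        classical = 2 ** n_bits              # check candidates one at a time (worst case)
        quantum = 2 ** (n_bits / 2)          # Grover: ~sqrt(N) queries, which is provably optimal
        return classical, quantum

    for n in (40, 80, 128):
        c, q = search_steps(n)
        print(f"n={n}: classical ~{c:.2e}, quantum ~{q:.2e}")
    # Even at n=128, Grover still needs ~2^64 queries: helpful, but no polynomial-time miracle.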
The "Adiabatic Optimization" Approach to Solving NP-Hard Problems with a Quantum Computer

H_i: operation with easily-prepared lowest-energy state
H_f: operation whose lowest-energy state encodes the solution to an NP-hard problem
Problem: the "eigenvalue gap" can be exponentially small
Hope: "quantum tunneling" could give speedups over classical optimization methods at escaping local optima
It remains unclear whether you can get a practical speedup this way over the best classical algorithms. We might just have to build QCs and test it!
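Here's a minimal numpy sketch (mine, on a toy 2-qubit instance) of what the "eigenvalue gap" refers to: interpolate H(s) = (1-s)·H_i + s·H_f and track the gap between the two lowest eigenvalues along the path.

    import numpy as np

    # Toy 2-qubit instance. H_i: transverse field, whose ground state is easy to prepare.
    # H_f: diagonal "cost" operator whose ground state encodes the answer we want (here |11>).
    X = np.array([[0.0, 1.0], [1.0, 0.0]])
    I = np.eye(2)
    H_i = -(np.kron(X, I) + np.kron(I, X))
    H_f = np.diag([3.0, 1.0, 2.0, 0.0])

    def spectral_gap(s):
        """Gap between the two lowest eigenvalues of H(s) = (1-s)*H_i + s*H_f."""
        evals = np.linalg.eigvalsh((1 - s) * H_i + s * H_f)
        return evals[1] - evals[0]

    gaps = [spectral_gap(s) for s in np.linspace(0.0, 1.0, 101)]
    print(f"minimum gap along the path: {min(gaps):.3f}")
    # The adiabatic theorem ties the required runtime to roughly 1/gap^2, so hard instances
    # are exactly the ones where this minimum gap becomes exponentially small.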
"Quantum Supremacy"
Getting a clear quantum speedup for some task—not necessarily a useful one
BosonSampling (with Alex Arkhipov): a proposal for a simple optical quantum computer to sample a distribution that (we think) can't be sampled efficiently classically. Experimentally demonstrated with 6 photons by a group at Bristol
Random Quantum Circuit Sampling: the Martinis group at Google is building a system with 49 high-quality superconducting qubits this year. Lijie Chen and I studied the hardness of sampling its output distribution
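For a sense of what "sampling the output distribution" demands of a classical simulator, here is a brute-force numpy sketch of mine (fine for 10 qubits; around 50 qubits the dense state vector no longer fits in any existing machine):

    import numpy as np

    rng = np.random.default_rng(42)

    def random_unitary(dim):
        """Haar-ish random unitary: QR-decompose a complex Gaussian matrix and fix the phases."""
        z = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
        q, r = np.linalg.qr(z)
        phases = np.diagonal(r) / np.abs(np.diagonal(r))
        return q * phases

    n = 10                                   # toy size; supremacy experiments aim for ~50 qubits
    U = random_unitary(2 ** n)               # stand-in for a random circuit (real ones use local 2-qubit gates)
    state = U[:, 0]                          # U applied to the all-zeros state |00...0>
    probs = np.abs(state) ** 2

    samples = rng.choice(2 ** n, size=5, p=probs / probs.sum())
    print([format(int(s), f"0{n}b") for s in samples])   # five n-bit samples from the output distribution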
Hawking 1970s: What happens to quantum information |ψ⟩ dropped into a black hole?
Stays in the black hole forever: violates quantum mechanics
Comes out in the Hawking radiation: if there's also a copy inside the black hole, seems to violate the "No-Cloning Theorem"
Complementarity (modern view): the inside is just a "re-encoding" of the exterior, so no cloning is needed to have |ψ⟩ in both places
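For reference, the No-Cloning Theorem being invoked is the standard linearity argument (textbook material, not spelled out in the talk): no single unitary U can satisfy U|ψ⟩|0⟩ = |ψ⟩|ψ⟩ for every state |ψ⟩, because

    U(a|0⟩ + b|1⟩)|0⟩ = a|0⟩|0⟩ + b|1⟩|1⟩   by linearity,
    while cloning would require   a²|0⟩|0⟩ + ab|0⟩|1⟩ + ab|1⟩|0⟩ + b²|1⟩|1⟩,

and the two disagree whenever a and b are both nonzero.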
The Firewall Paradox (Almheiri et al. 2012): a refinement of Hawking's information paradox that challenges complementarity
If the black hole interior is "built" out of the same qubits coming out as Hawking radiation, then why can't we do something to those Hawking qubits, then dive into the black hole, and see that we've completely destroyed the spacetime geometry in the interior?
("Entanglement among Hawking photons detected!")
Harlow-Hayden (2013): argued that the requisite computation would take exponential time (~2^(10^70) years) even for a QC—by which time the black hole has already fully evaporated!
Why? Because one can reduce the problem of finding collisions in a cryptographic hash function to the problem of decoding the Hawking radiation. And I showed in my Berkeley PhD thesis that, in the "black-box setting," the former takes exponential time for a quantum computer!
Recently, I strengthened Harlow and Hayden's argument, to show that performing the computation is generically at least as hard as inverting any injective one-way function with a quantum computer
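For a flavor of the collision problem being invoked, here's a toy Python sketch of mine (a truncated SHA-256, nothing like black-hole scale): classically, the birthday paradox finds a collision after roughly 2^(n/2) hash evaluations, while the black-box quantum query bound is still exponential, around 2^(n/3).

    import hashlib
    from itertools import count

    def toy_hash(x, n_bits=24):
        """SHA-256 truncated to n_bits: a stand-in 'cryptographic' hash small enough to attack."""
        digest = hashlib.sha256(str(x).encode()).hexdigest()
        return int(digest, 16) % (1 << n_bits)

    def find_collision(n_bits=24):
        """Birthday attack: hash inputs until a value repeats; expect ~2^(n/2) evaluations."""
        seen = {}
        for x in count():
            h = toy_hash(x, n_bits)
            if h in seen:
                return seen[h], x, h
            seen[h] = x

    x1, x2, h = find_collision()
    print(f"collision: toy_hash({x1}) == toy_hash({x2}) == {h}")   # typically a few thousand tries at 24 bits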
More Ways Computational Complexity Interacts with Physics
Aaronson 2017: If you had the technological capability to verify a Schrödinger cat state, then you'd also necessarily (i.e., with a similar-sized quantum circuit) have the capability to bring a dead cat back to life
Susskind 2013 and many others: quantum circuit complexity as a "dual" of wormhole volume in the AdS/CFT correspondence
Summary
Quantum computers are the most powerful kind of computer allowed by the currently-known laws of physics
There's a realistic prospect of building them
Even quantum computers would have nontrivial limits—which might be the limits of what's efficiently computable in reality
But those limits might help protect the geometry of spacetime!