Quantum Computing and the Limits of the Efficiently Computable

Presentation transcript:

Quantum Computing and the Limits of the Efficiently Computable Thanks so much for inviting me! When I typed “quantum computer” into Google Image Search, that’s the first picture that came up. That’s apparently what they look like. (I should warn you that I’m a theorist rather than an engineer.) Scott Aaronson ‘00 MIT

And when you type MY name in Google, THIS is still one of the first things that comes up. This is me with Charles van Loan in ‘98 or ‘99 or so, when I was an undergrad here, showing off a project I did for Lillian Lee’s natural language class. Cornell did hugely important things for my career, the first of which was to accept me. I was rejected almost everywhere I applied, because I had skipped grades, I had this weird academic trajectory, no sports, no music, no leadership qualities – I STILL don’t have those things, by the way! But Cornell was different. It gave me a chance, for which I remain profoundly grateful to this day. I’m particularly grateful to Bart Selman. (When you type HIS name in Google, autocomplete gives you “Bart sells his soul,” which is a classic Simpsons episode.) Bart was my friend and mentor. Without him, I don’t think I would’ve gone into research at all – so in that sense, I owe him everything. Bart also advised the AI group of Cornell’s RoboCup robot soccer team, which I was part of for two years. Working on RoboCup had a big influence on my engineering career, in that it’s the thing that convinced me not to have one. I mean, meeting deadlines, documenting your code, making it work with other people’s code? I stink at that stuff! Turning toward theory, Jon Kleinberg, who started at Cornell the same year I did, was my idol – I had no higher aspiration in life than to become one-millionth of what Jon Kleinberg was. Actually, I feel like I’m well on my way to achieving that. Daniela Rus told the story of how John Hopcroft told her not to do theory (it’s already done), and she listened to him, and that had a wonderful influence on her career. John Hopcroft was the dean when I was a freshman here, and when I had an “audience” with him, he also told ME not to go into theory. And I DIDN’T listen to him, and that ALSO had a great influence on my career. At least, I think it did. 
My last memory is of taking Ken Birman’s operating systems class, and getting an F on the midterm, and going to see him because I thought I understood the material and simply hadn’t written the solutions the way he wanted.

Moore’s Law I’m sure you’ve all seen this before: the number of transistors per computer has doubled pretty much every two years. This is arguably the main thing that’s driven the progress of human civilization since World War II.

Extrapolating: Robot uprising? First and most obvious is the robot uprising.

But even a killer robot would still be “merely” a Turing machine, operating on principles laid down in the 1930s… But at a fundamental level, all computers we talk about today (even killer robots!) are still just Turing machines, operating on a theoretical model that was invented in the 1930s and that we teach about in our undergraduate courses. Macs, PCs, killer robots: on the inside, they’re all the same stuff. So is there anything else beyond that, that’s more interesting?

And it’s conjectured that thousands of interesting problems are inherently intractable for Turing machines… Is there any feasible way to solve these problems, consistent with the laws of physics?

Old proposal: Dip two glass plates with pegs between them into soapy water. Let the soap bubbles form a minimum Steiner tree connecting the pegs—thereby solving a known NP-hard problem “instantaneously”

Relativity Computer But while we’re waiting for scalable quantum computers, we can also base computers on that other great theory of the 20th century, relativity! The idea here is simple: you start your computer working on some really hard problem, and leave it on Earth. Then you get on a spaceship and accelerate to close to the speed of light. When you get back, billions of years have passed on Earth and all your friends are long dead, but at least you’ve got the answer to your computational problem. I don’t know why more people don’t try it!

Zeno’s Computer Another of my favorites is Zeno’s computer. The idea here is also simple: this is a computer that would execute the first step in one second, the next step in half a second, the next in a quarter second, and so on, so that after two seconds it’s done an infinite amount of computation. Incidentally, do any of you know why that WOULDN’T work? The problem is that, once you get down to the Planck time of 10^{-43} seconds, you’d need so much energy to run your computer that fast that, according to our best current theories, you’d exceed what’s called the Schwarzschild radius, and your computer would collapse to a black hole. You don’t want that to happen.
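The arithmetic behind Zeno’s computer is easy to check directly. A minimal sketch (using the approximate standard value of the Planck time, ~5.39×10^-44 seconds, as an assumed cutoff): the step durations halve each time, so the partial sums approach but never reach 2 seconds, and only finitely many steps fit above the Planck time.

```python
# Sketch: the arithmetic of Zeno's computer.
# Step k takes 2^-(k-1) seconds: 1, 1/2, 1/4, ...
PLANCK_TIME = 5.39e-44  # seconds (approximate standard value, assumed cutoff)

n = 0        # number of steps longer than the Planck time
total = 0.0  # total time spent on those steps
step = 1.0
while step >= PLANCK_TIME:
    n += 1
    total += step
    step /= 2.0

print(n)      # only ~144 steps fit before durations drop below Planck time
print(total)  # the partial sums approach, but never reach, 2 seconds
```

So the “infinite computation in two seconds” runs out of physically meaningful steps almost immediately.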

Time Travel Computer S. Aaronson and J. Watrous. Closed Timelike Curves Make Quantum and Classical Computing Equivalent, Proceedings of the Royal Society A 465:631-647, 2009. arXiv:0808.2669. So OK, how about the TIME TRAVEL COMPUTER! The idea here is that, by creating a loop in time (a so-called “closed timelike curve”) you could force the universe to solve some incredibly hard computational problem, just because that’s the only way to avoid a Grandfather Paradox and keep the laws of physics consistent. It would be like if you went back in time, and you told Shakespeare what plays he was going to write, and then he wrote them, and then you knew what the plays were because he wrote them… like, DUDE. You know, I’ve actually published a paper about this stuff. That was one of my MORE serious papers.

What About Quantum Mechanics? “Like probability, but with minus signs” Probability Theory: Linear transformations that conserve 1-norm of probability vectors: Stochastic matrices Quantum Mechanics: Linear transformations that conserve 2-norm of amplitude vectors: Unitary matrices So, let me first explain quantum mechanics in one slide. See, the physicists somehow convinced everyone that quantum mechanics is complicated and hard. The truth is, QM is unbelievably simple, once you take the physics out. What quantum mechanics IS, fundamentally, is a certain generalization of the laws of probability. In probability theory, you always represent your knowledge of a system using a vector of nonnegative real numbers, which sum to 1 and which are called probabilities. As the system changes, you update your knowledge by applying a linear transformation to the vector. The linear transformation has to preserve the 1-norm; matrices that do that are called stochastic matrices. Quantum mechanics is almost the same, except now you represent your knowledge using a vector of complex numbers, called “amplitudes”. And instead of preserving the 1-norm of the vector, you preserve its 2-norm – which God or Nature seems to prefer over the 1-norm in every situation. Matrices that preserve 2-norm are called unitary matrices.
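The parallel on this slide is easy to verify numerically. A minimal sketch (the particular matrices and vectors are arbitrary examples, not from the talk): a stochastic matrix preserves the 1-norm of a probability vector, while a unitary matrix preserves the 2-norm of an amplitude vector.

```python
import numpy as np

# Stochastic matrix: columns are nonnegative and sum to 1,
# so it preserves the 1-norm of probability vectors.
S = np.array([[0.9, 0.2],
              [0.1, 0.8]])
p = np.array([0.3, 0.7])        # a probability vector (1-norm = 1)
print(np.abs(S @ p).sum())      # the 1-norm is still 1.0

# Unitary matrix: U† U = I, so it preserves the 2-norm of
# amplitude vectors (entries may be negative or complex).
U = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)   # the Hadamard matrix
a = np.array([0.6, 0.8])               # an amplitude vector (2-norm = 1)
print(np.linalg.norm(U @ a))           # the 2-norm is still 1.0
```

Note the unitary case allows the minus signs (interference) that the stochastic case forbids.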

Quantum Computing A general entangled state of n qubits requires ~2^n amplitudes to specify, which presents an obvious practical problem when using conventional computers to simulate quantum mechanics. Where we are: A QC has now factored 21 into 3×7, with high probability (Martín-López et al. 2012). Scaling up is hard, because of decoherence! But unless QM is wrong, there doesn’t seem to be any fundamental obstacle. As zany as this sounds, Deutsch’s speculations are part of what gave rise to the modern field of quantum computing. So, what’s the idea of quantum computing? Well, a general entangled state of n qubits requires 2^n amplitudes to specify, since you need to give an amplitude for every configuration of all n of the bits. That’s a staggering amount of information! It suggests that Nature, off to the side somewhere, needs to write down 2^1000 numbers just to keep track of 1000 particles. And that presents an obvious practical problem when people try to use conventional computers to SIMULATE quantum mechanics – they have all sorts of approximate techniques, but even then, something like 10% of supercomputer cycles today are used, basically, for simulating quantum mechanics. In 1981, Richard Feynman said, if Nature is going to all this work, then why not turn it around, and build computers that THEMSELVES exploit superposition? What would such computers be useful for? Well, at least one thing: simulating quantum physics! As tautological as that sounds, I predict that if QCs ever become practical, simulating quantum physics will actually be the main thing that they’re used for. That actually has *tremendous* applications to materials science, drug design, understanding high-temperature superconductivity, etc. But of course, what got everyone excited about this field was Peter Shor’s discovery, in 1994, that a quantum computer would be good for MORE than just simulating quantum physics.
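The “2^1000 numbers just to keep track of 1000 particles” point can be made concrete with back-of-envelope arithmetic. A sketch, assuming one complex double (16 bytes) per amplitude:

```python
# Memory needed to store the full state vector of n qubits,
# at 16 bytes per amplitude (one complex128 number each).
BYTES_PER_AMPLITUDE = 16

def state_vector_bytes(n_qubits):
    # 2^n amplitudes, one per configuration of the n qubits
    return (2 ** n_qubits) * BYTES_PER_AMPLITUDE

for n in (10, 30, 50):
    print(n, state_vector_bytes(n))

# 30 qubits already need ~17 GB; 50 qubits need ~18 petabytes;
# 1000 qubits would need vastly more bytes than there are atoms
# in the observable universe.
```

This is why exact classical simulation hits a wall at a few dozen qubits, and why the approximate techniques mentioned above are needed.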
Feynman 1981: So then why not turn things around, and build computers that themselves exploit superposition? Shor 1994: Such a computer could do more than simulate QM—e.g., it could factor integers in polynomial time It could also be used to factor integers in polynomial time, and thereby break almost all of the public-key cryptography currently used on the Internet. Where we are: After 18 years and more than a billion dollars, I’m proud to say that a quantum computer recently factored 21 into 3×7, with high probability. (For a long time, it was only 15.) Scaling up is incredibly hard because of decoherence – the external environment, as it were, constantly trying to measure the QC’s state and collapse it down to classical. With classical computers, it took more than 100 years from Charles Babbage until the invention of the transistor. Who knows how long it will take in this case? But unless quantum mechanics itself is wrong, there doesn’t seem to be any fundamental obstacle to scaling this up. On the contrary, we now know that, IF the decoherence can be kept below some finite but nonzero level, then there are very clever error-correcting codes that can render its remaining effects insignificant. So, I’m optimistic that if civilization lasts long enough, we’ll eventually have practical quantum computers.
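The classical half of Shor’s reduction can be sketched without any quantum hardware: once you know the multiplicative order r of a random base a modulo N (finding r is the step the quantum computer speeds up; here it is done by brute force), simple gcd computations reveal the factors. A sketch for N = 21, the record mentioned above:

```python
from math import gcd

def multiplicative_order(a, n):
    """Smallest r > 0 with a^r = 1 (mod n). Brute force here; finding r
    efficiently is exactly what Shor's quantum period-finding provides."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

N, a = 21, 2
r = multiplicative_order(a, N)   # r = 6, which is even, as the method needs
x = pow(a, r // 2, N)            # a^(r/2) mod N
factors = sorted((gcd(x - 1, N), gcd(x + 1, N)))
print(factors)                   # recovers the factors 3 and 7
```

The brute-force loop takes time exponential in the number of digits of N, which is the whole point: replacing it with quantum period-finding is what makes factoring polynomial-time.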

But factoring is not believed to be NP-complete! And today, we don’t believe quantum computers can solve NP-complete problems in polynomial time in general (though not surprisingly, we can’t prove it) Bennett et al. 1997: “Quantum magic” won’t be enough If you throw away the problem structure, and just consider an abstract “landscape” of 2^n possible solutions, then even a quantum computer needs ~2^(n/2) steps to find the correct one (That bound is actually achievable, using Grover’s algorithm!) If there’s a fast quantum algorithm for NP-complete problems, it will have to exploit their structure somehow
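The ~2^(n/2) bound really is achievable, and for a toy instance Grover’s algorithm is easy to simulate with a classical state vector. A minimal sketch for n = 3 qubits, so N = 8 candidate solutions (the marked index is an arbitrary choice): after ~(π/4)·√N ≈ 2 iterations, measuring finds the marked item with high probability.

```python
import numpy as np

n = 3
N = 2 ** n
marked = 5                            # index of the one "correct" solution (arbitrary)

state = np.full(N, 1 / np.sqrt(N))    # start in the uniform superposition

iterations = int(np.pi / 4 * np.sqrt(N))   # ~sqrt(N) Grover iterations, here 2
for _ in range(iterations):
    state[marked] *= -1                    # oracle: flip the marked amplitude's sign
    state = 2 * state.mean() - state       # diffusion: invert all amplitudes about the mean

print(abs(state[marked]) ** 2)        # probability of measuring the marked item (~0.95)
```

Two queries versus the ~4 a classical brute-force search would need on average: exactly the quadratic speedup, no more.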

Quantum Adiabatic Algorithm (Farhi et al. 2000) Start with a Hamiltonian Hi whose ground state is easily prepared, then slowly deform it into a Hamiltonian Hf whose ground state encodes the solution to an NP-complete problem Problem: the “eigenvalue gap” can be exponentially small
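The eigenvalue gap can be watched shrink in a toy instance. A sketch (the cost function below is an arbitrary toy example, not from the talk): interpolate H(s) = (1-s)·Hi + s·Hf, where Hi has the uniform superposition as its ground state and Hf is diagonal with the answer as its minimum, and track the gap between the two lowest eigenvalues along the path.

```python
import numpy as np

n = 3
N = 2 ** n

# Hi: projector penalizing everything orthogonal to the uniform state,
# so its ground state (the uniform superposition) is easy to prepare.
u = np.full(N, 1 / np.sqrt(N))
Hi = np.eye(N) - np.outer(u, u)

# Hf: diagonal cost function whose unique minimum encodes the "answer"
# (an arbitrary toy cost with its minimum at index 5).
costs = np.array([3, 2, 3, 1, 2, 0, 2, 3], dtype=float)
Hf = np.diag(costs)

gaps = []
for s in np.linspace(0, 1, 101):
    evals = np.linalg.eigvalsh((1 - s) * Hi + s * Hf)
    gaps.append(evals[1] - evals[0])

print(min(gaps))   # the minimum gap along the path -- the quantity that
                   # can become exponentially small on hard instances
```

The adiabatic theorem ties the required running time to the inverse square of this minimum gap, which is why an exponentially small gap means an exponentially slow algorithm.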

Some Examples of My Research on Computational Complexity and Physics BosonSampling (with Alex Arkhipov): A proposal for a rudimentary photonic quantum computer, which doesn’t seem useful for anything (e.g. breaking codes), but does seem hard to simulate using classical computers (We showed that a fast, exact classical simulation would collapse the polynomial hierarchy to the third level) Experimentally demonstrated (with 3-4 photons…) by groups in Brisbane, Oxford, Vienna, and Rome!
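The hardness behind BosonSampling comes from the fact that its amplitudes are permanents of matrices, and the permanent, unlike the determinant, has no known efficient algorithm. A sketch of Ryser’s formula, the best known exact method, which takes ~2^n time rather than the n! terms of the naive expansion:

```python
from itertools import combinations

def permanent(A):
    """Permanent of a square matrix via Ryser's formula:
    perm(A) = (-1)^n * sum over nonempty column subsets S of
              (-1)^|S| * prod_i (sum over j in S of A[i][j]).
    Runs in ~2^n time; no polynomial-time algorithm is known."""
    n = len(A)
    total = 0
    for size in range(1, n + 1):
        for cols in combinations(range(n), size):
            prod = 1
            for row in A:
                prod *= sum(row[j] for j in cols)
            total += (-1) ** size * prod
    return (-1) ** n * total

# Like the determinant but with all signs positive: 1*4 + 2*3 = 10
print(permanent([[1, 2], [3, 4]]))
```

The contrast with the determinant (computable in O(n^3) by Gaussian elimination, despite the near-identical definition) is what makes a fast exact classical simulation of BosonSampling so implausible.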

Computational Complexity of Decoding Hawking Radiation Firewall Paradox (2012): Hypothetical experiment that involves waiting outside a black hole for ~10^70 years, collecting all the Hawking photons it emits, doing a quantum computation on them, then jumping into the black hole to observe that your computation “nonlocally destroyed” the structure of spacetime inside the black hole Harlow-Hayden (2013): Argument that the requisite computation would take exponential time (~2^(10^70) years) even for a QC—by which time the black hole has already fully evaporated! Recently, I strengthened Harlow and Hayden’s argument, to show that performing the computation is generically at least as hard as inverting a one-way function

Summary Quantum computing really is one of the most exciting things in science—just not for the reasons you usually hear Quantum computers are not known to provide any practically-important speedups for NP-complete problems (though they might provide modest ones, and they almost certainly provide speedups for problems like factoring and quantum simulation) And building them is hard (though the real shock for physics would be if they weren’t someday possible) On the other hand, one thing quantum computing has already done is create a bridge between computer science and physics, carrying nontrivial insights in both directions