1 The Limits of Computation: Quantum Computers and Beyond
Scott Aaronson, Associate Professor, EECS

2 Moore’s Law I’m sure you’ve all seen this before: the number of transistors per computer has doubled pretty much every two years. This is arguably the main thing that’s driven the progress of human civilization since World War II.

3 Extrapolating: Robot uprising?
First and most obvious is the robot uprising. Someday soon Google is going to become sentient, and will instruct all the computers on the Internet to enslave their owners. Don’t believe me? Google it!

4 But even a killer robot would still be “merely” a Turing machine, operating on principles laid down in the 1930s… But at a fundamental level, all computers we talk about today (even killer robots!) are still just Turing machines – the theoretical device that was invented in the 1930s and that we teach you about in our undergraduate courses. Macs, PCs, killer robots: on the inside, they’re all the same stuff. So is there anything else beyond that, that’s more interesting?

5 And it’s conjectured that thousands of interesting problems are inherently intractable for Turing machines… So if we extrapolate Moore’s Law, what can we look forward to next? Is there any feasible way to solve these problems, consistent with the laws of physics?

6 Relativity Computer
But while we’re waiting for scalable quantum computers, we can also base computers on that other great theory of the 20th century, relativity! The idea here is simple: you start your computer working on some really hard problem, and leave it on Earth. Then you get on a spaceship and accelerate to close to the speed of light. When you get back, billions of years have passed on Earth and all your friends are long dead, but at least you’ve got the answer to your computational problem. I don’t know why more people don’t try it!

7 Zeno’s Computer [slide figure: computation steps plotted against time in seconds]
Another of my favorites is Zeno’s computer. The idea here is also simple: this is a computer that would execute the first step in one second, the next step in half a second, the next in a quarter second, and so on, so that after two seconds it’s done an infinite amount of computation. Incidentally, do any of you know why that WOULDN’T work? The problem is that, once you get down to the Planck time of 10^{-43} seconds, you’d need so much energy to run your computer that fast that, according to our best current theories, you’d exceed what’s called the Schwarzschild radius, and your computer would collapse to a black hole. You don’t want that to happen.
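Just to spell out the arithmetic behind the two-second claim: the step times form a geometric series,

\[ 1 + \tfrac{1}{2} + \tfrac{1}{4} + \tfrac{1}{8} + \cdots = \sum_{k=0}^{\infty} 2^{-k} = 2. \]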

8 Time Travel Computer S. Aaronson and J. Watrous. Closed Timelike Curves Make Quantum and Classical Computing Equivalent, Proceedings of the Royal Society A 465: , arXiv: So OK, how about the TIME TRAVEL COMPUTER! The idea here is that, by creating a loop in time – a so-called “closed timelike curve” -- you could force the universe to solve some incredibly hard computational problem, just because that’s the only way to avoid a Grandfather Paradox and keep the laws of physics consistent. It would be like if you went back in time, and you told Shakespeare what plays he was going to write, and then he wrote them, and then you knew what the plays were because he wrote them … like, DUDE. You know, I’ve actually published a paper about this stuff. That was one of my MORE serious papers.

9 Quantum Computers The first is quantum computers – yes, that’s really what they look like! This happens to be my research area. A quantum computer is a hypothetical machine that would exploit the wave nature of quantum mechanics to solve certain problems, like factoring integers and breaking most of the cryptographic codes used on the Internet, dramatically faster than we know how to solve them with any computer today. So, what’s been the progress so far in quantum computing? After 16 years, more than a billion dollars of investment, and the building of ion-trap and nuclear magnetic resonance devices the size of rooms, we’ve learned that, *with high probability*, 15 = 3 × 5. Alright, so maybe quantum computing still has a ways to go.

10 Quantum Mechanics in 1 Slide
“Like probability theory, but over the complex numbers” Probability Theory: Linear transformations that conserve 1-norm of probability vectors: Stochastic matrices. Quantum Mechanics: Linear transformations that conserve 2-norm of amplitude vectors: Unitary matrices. So, let me first explain quantum mechanics in one slide. See, the physicists somehow convinced everyone that quantum mechanics is complicated and hard. The truth is, QM is unbelievably simple, once you take the physics out. What quantum mechanics IS, fundamentally, is a certain generalization of the laws of probability. In probability theory, you always represent your knowledge of a system using a vector of nonnegative real numbers, which sum to 1 and which are called probabilities. As the system changes, you update your knowledge by applying a linear transformation to the vector. The linear transformation has to preserve the 1-norm; matrices that do that are called stochastic matrices. Quantum mechanics is almost the same, except now you represent your knowledge using a vector of complex numbers, called “amplitudes”. And instead of preserving the 1-norm of the vector, you preserve its 2-norm – which God or Nature seems to prefer over the 1-norm in every situation. Matrices that preserve 2-norm are called unitary matrices.
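As a toy illustration of that 1-norm versus 2-norm distinction, here is a short numpy sketch (my own example, not from the slide; the particular matrices are arbitrary):

import numpy as np

# A stochastic matrix: nonnegative entries, each column sums to 1.
S = np.array([[0.9, 0.2],
              [0.1, 0.8]])
p = np.array([0.3, 0.7])        # a probability vector (1-norm = 1)
print(np.sum(S @ p))            # 1.0 (up to rounding): the 1-norm is conserved

# A unitary matrix (its conjugate transpose is its inverse); here, a 45-degree rotation.
U = np.array([[1, -1],
              [1,  1]]) / np.sqrt(2)
a = np.array([0.6, 0.8j])       # an amplitude vector (2-norm = 1)
print(np.linalg.norm(U @ a))    # 1.0 (up to rounding): the 2-norm is conserved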

11 Interference “The source of all quantum weirdness”
Possible states of a single quantum bit, or qubit: Now, as Richard Feynman liked to stress, the source of ALL the “weirdness” of the quantum world is a single thing: the fact that, whereas probabilities are nonnegative and can only add, amplitudes can be positive OR negative, and so can interfere and cancel each other out. You can already see this by considering a single quantum bit, or “qubit”: a system that can be in two perfectly distinguishable states, which we label 0 and 1, and which physicists like to represent using these little asymmetric brackets called “kets” (you get used to them with time). If we start with a qubit in the state |0>, and apply this unitary transformation here, its effect is to rotate the qubit by 45 degrees counterclockwise in the plane – producing an equal “superposition,” as we say, of the |0> and |1> states. But if you apply the same unitary a second time, then you end up definitely in the state |1>. In fact, this unitary can be seen as “the square root of a NOT gate” – something that’s already impossible in the classical world. You can understand what’s happening in terms of interference. We applied a linear transformation that mapped |0> to a superposition of |0> and |1>, and then mapped the |0> to a superposition of |0> and |1>, and the |1> to a superposition of MINUS |0> and |1>. So, there were two different ways of getting from |0> back to |0>, but those ways “interfered destructively” and cancelled each other out, leaving only the |1>.
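Here is a minimal numpy sketch of that “square root of NOT” behavior (my own illustration; the matrix below is the 45-degree rotation described above):

import numpy as np

# 45-degree counterclockwise rotation: "the square root of a NOT gate"
R = np.array([[1, -1],
              [1,  1]]) / np.sqrt(2)

ket0 = np.array([1.0, 0.0])   # |0>
ket1 = np.array([0.0, 1.0])   # |1>

print(R @ ket0)        # [0.707, 0.707]: an equal superposition of |0> and |1>
print(R @ ket1)        # [-0.707, 0.707]: |1> maps to (-|0> + |1>)/sqrt(2)
print(R @ (R @ ket0))  # ~[0, 1]: the two paths back to |0> cancel, leaving only |1>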

12 Quantum Computing “Quantum Mechanics on Steroids”
Where we are: A QC has now factored 21 into 3 × 7, with high probability (Martín-López et al. 2012). Scaling up is hard, because of decoherence! But unless QM is wrong, there doesn’t seem to be any fundamental obstacle. A general entangled state of n qubits requires ~2^n amplitudes to specify, which presents an obvious practical problem when using conventional computers to simulate quantum mechanics. Feynman 1981: So then why not turn things around, and build computers that themselves exploit superposition? Shor 1994: Such a computer could do more than simulate QM—e.g., it could factor integers in polynomial time.
As zany as this sounds, Deutsch’s speculations are part of what gave rise to the modern field of quantum computing. So, what’s the idea of quantum computing? Well, a general entangled state of n qubits requires 2^n amplitudes to specify, since you need to give an amplitude for every configuration of all n of the bits. That’s a staggering amount of information! It suggests that Nature, off to the side somewhere, needs to write down 2^1000 numbers just to keep track of 1000 particles. And that presents an obvious practical problem when people try to use conventional computers to SIMULATE quantum mechanics – they have all sorts of approximate techniques, but even then, something like 10% of supercomputer cycles today are used, basically, for simulating quantum mechanics. In 1981, Richard Feynman said: if Nature is going to all this work, then why not turn it around, and build computers that THEMSELVES exploit superposition? What would such computers be useful for? Well, at least one thing: simulating quantum physics! As tautological as that sounds, I predict that if QCs ever become practical, simulating quantum physics will actually be the main thing that they’re used for. That actually has *tremendous* applications to materials science, drug design, understanding high-temperature superconductivity, etc. But of course, what got everyone excited about this field was Peter Shor’s discovery, in 1994, that a quantum computer would be good for MORE than just simulating quantum physics. It could also be used to factor integers in polynomial time, and thereby break almost all of the public-key cryptography currently used on the Internet. (Interesting!) Where we are: after 18 years and more than a billion dollars, I’m proud to say that a quantum computer recently factored 21 into 3 × 7, with high probability. (For a long time, it was only 15.) Scaling up is incredibly hard because of decoherence – the external environment, as it were, constantly trying to measure the QC’s state and collapse it down to classical. With classical computers, it took more than 100 years from Charles Babbage until the invention of the transistor. Who knows how long it will take in this case? But unless quantum mechanics itself is wrong, there doesn’t seem to be any fundamental obstacle to scaling this up. On the contrary, we now know that IF the decoherence can be kept below some finite but nonzero level, then there are very clever error-correcting codes that can render its remaining effects insignificant. So I’m optimistic that, if civilization lasts long enough, we’ll eventually have practical quantum computers.
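To make the “2^n amplitudes” point concrete, here is a tiny back-of-the-envelope Python sketch (my own illustration, assuming 16 bytes per complex amplitude, as in numpy’s complex128):

# Memory needed just to WRITE DOWN a general n-qubit state on a classical computer,
# at 16 bytes per complex amplitude.
for n in (10, 20, 30, 40, 50):
    amplitudes = 2 ** n
    gigabytes = amplitudes * 16 / 10**9
    print(f"{n} qubits: 2^{n} = {amplitudes} amplitudes, about {gigabytes:.3g} GB")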

13 But factoring is not believed to be NP-complete!
And today, we don’t believe quantum computers can solve NP-complete problems in polynomial time in general (though not surprisingly, we can’t prove it). Bennett et al. 1997: “Quantum magic” won’t be enough. If you throw away the problem structure, and just consider an abstract “landscape” of 2^n possible solutions, then even a quantum computer needs ~2^(n/2) steps to find the correct one. (That bound is actually achievable, using Grover’s algorithm!) So, is there any quantum algorithm for NP-complete problems that would exploit their structure?
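As an illustration of that ~2^(n/2) scaling, here is a small classical simulation of Grover’s algorithm in numpy (my own sketch, not from the talk): it searches an unstructured space of N = 2^n items using about (π/4)·√N iterations.

import numpy as np

n = 10                    # number of qubits
N = 2 ** n                # size of the unstructured search space
marked = 421              # the one "correct" solution (arbitrary choice for the demo)

state = np.full(N, 1 / np.sqrt(N))       # uniform superposition over all N items

iterations = int(np.pi / 4 * np.sqrt(N))
for _ in range(iterations):
    state[marked] *= -1                  # oracle: flip the sign of the marked item
    state = 2 * state.mean() - state     # diffusion: "inversion about the mean"

print(iterations)                        # about 25 iterations, versus up to N = 1024 classical queries
print(state[marked] ** 2)                # probability of measuring the marked item: close to 1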

14 Quantum Adiabatic Algorithm (Farhi et al. 2000)
Hi: Hamiltonian with an easily-prepared ground state. Hf: Hamiltonian whose ground state encodes the solution to an NP-complete problem. Problem: the “eigenvalue gap” can be exponentially small.
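To make the “eigenvalue gap” issue concrete, here is a toy numpy sketch (my own illustrative example with made-up numbers, not taken from the slide): linearly interpolate from an initial Hamiltonian whose ground state is the uniform superposition to a diagonal “cost” Hamiltonian, and track the gap between the two lowest eigenvalues along the way.

import numpy as np

# Toy adiabatic-algorithm setup on 8 basis states (3 qubits).
N = 8
Hi = np.eye(N) - np.full((N, N), 1.0 / N)             # ground state: uniform superposition, energy 0
costs = np.array([3., 2., 5., 1., 4., 0., 2., 3.])    # made-up costs; the minimum (index 5) is the "solution"
Hf = np.diag(costs)

min_gap = np.inf
for s in np.linspace(0.0, 1.0, 201):
    H = (1 - s) * Hi + s * Hf                 # slow interpolation from Hi to Hf
    eigenvalues = np.linalg.eigvalsh(H)       # sorted in ascending order
    min_gap = min(min_gap, eigenvalues[1] - eigenvalues[0])

# The adiabatic run time grows roughly like 1/gap^2, so an exponentially
# small minimum gap would mean an exponentially slow algorithm.
print(min_gap)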

15 Some of My Recent Research
BosonSampling (with Alex Arkhipov): A proposal for a rudimentary optical quantum computer, which doesn’t seem useful for anything (e.g. breaking codes), but does seem hard to simulate using classical computers Computational Complexity of Decoding Hawking Radiation: Building on a striking recent proposal by Harlow and Hayden—that part of the resolution of the black hole information problem might be that reconstructing the infalling information from the Hawking radiation would require an exponentially long computation

