The Kind of Stuff I Think About
Scott Aaronson (MIT)
LIDS Lunch, October 29, 2013
Abridged version of plenary talk at NIPS’2012

Quantum Mechanics in 1 Slide
“Like probability theory, but over the complex numbers”
Probability Theory: Linear transformations that conserve the 1-norm of probability vectors: Stochastic matrices
Quantum Mechanics: Linear transformations that conserve the 2-norm of amplitude vectors: Unitary matrices
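A minimal numerical sketch of this parallel (not from the talk; assumes numpy):

```python
# Stochastic matrices preserve the 1-norm of probability vectors;
# unitary matrices preserve the 2-norm of amplitude vectors.
import numpy as np

p = np.array([0.25, 0.75])               # probability vector, 1-norm = 1
S = np.array([[0.9, 0.3],
              [0.1, 0.7]])               # columns sum to 1: stochastic
print(np.sum(S @ p))                     # 1.0 -- 1-norm conserved

psi = np.array([0.6, 0.8j])              # amplitude vector, 2-norm = 1
H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)     # Hadamard gate: unitary
print(np.linalg.norm(H @ psi))           # 1.0 -- 2-norm conserved
```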

Interference
“The source of all quantum weirdness”
Possible states of a single quantum bit, or qubit: α|0⟩ + β|1⟩, where α and β are complex amplitudes satisfying |α|² + |β|² = 1

Measurement
Measurement is a “destructive” process: if you ask α|0⟩ + β|1⟩ whether it’s |0⟩ or |1⟩, it answers |0⟩ with probability |α|² and |1⟩ with probability |β|². And it sticks with its answer from then on!
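A minimal sketch of the Born rule and collapse (assumes numpy; not from the talk):

```python
# Measure alpha|0> + beta|1>: outcome 0 with prob |alpha|^2, 1 with prob |beta|^2.
# After measuring, the state collapses, so the answer is repeated forever.
import numpy as np

rng = np.random.default_rng()
alpha, beta = 0.6, 0.8j                          # |alpha|^2 + |beta|^2 = 1

outcome = rng.choice([0, 1], p=[abs(alpha)**2, abs(beta)**2])
state = np.array([1, 0]) if outcome == 0 else np.array([0, 1])
print(outcome, state)                            # collapsed: sticks with its answer
```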

The “deep mystery” of QM: Who decides when a “measurement” happens?
Product state of two qubits: |0⟩|0⟩
Entangled state (can’t be written as a product state): (|0⟩|0⟩ + |1⟩|1⟩)/√2
An “outsider’s view”: The qubit simply gets entangled with your own brain (and lots of other stuff), so that it collapses to |0⟩ or |1⟩ “relative to you.” Taking this seriously leads to the Many-Worlds Interpretation.
The Options, As I See It:
1. Many-Worlds (or some wordier equivalent)
2. Radical new physics (e.g., dynamical collapse)
3. “Shut up and stop asking”
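A minimal sketch of the product-vs-entangled distinction (assumes numpy; the two states are the canonical textbook examples, not necessarily those pictured on the slide):

```python
# A 2-qubit state is a product state iff its 2x2 amplitude matrix has rank 1
# (Schmidt rank 1); Schmidt rank 2 means entangled.
import numpy as np

def schmidt_rank(psi, tol=1e-10):
    """psi: length-4 amplitudes over |00>, |01>, |10>, |11>."""
    svals = np.linalg.svd(psi.reshape(2, 2), compute_uv=False)
    return int(np.sum(svals > tol))

product = np.kron([1, 0], [1, 1]) / np.sqrt(2)   # |0>(|0>+|1>)/sqrt(2)
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)       # (|00>+|11>)/sqrt(2)
print(schmidt_rank(product), schmidt_rank(bell)) # 1 (product), 2 (entangled)
```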

Quantum Computing
“Quantum Mechanics on Steroids”
A general entangled state of n qubits requires ~2^n amplitudes to specify, which presents an obvious practical problem when using conventional computers to simulate quantum mechanics.
Feynman 1981: So then why not turn things around, and build computers that themselves exploit superposition?
Shor 1994: Such a computer could do more than simulate QM—e.g., it could factor integers in polynomial time
Where we are: A QC has now factored 21 into 3×7, with high probability (Martín-López et al. 2012)
Scaling up is hard, because of decoherence! But unless QM is wrong, there doesn’t seem to be any fundamental obstacle
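A quick back-of-the-envelope for the ~2^n blow-up (assuming 16 bytes per complex amplitude):

```python
# Memory needed to store a general n-qubit state as a dense complex128 vector.
for n in (10, 30, 50):
    print(f"{n} qubits: {2**n * 16 / 1e9:.1e} GB")
# 10 qubits: ~1.6e-05 GB; 30 qubits: ~17 GB; 50 qubits: ~1.8e+07 GB
```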

The Limits of Quantum Computers
Contrary to almost every popular article on the subject, a QC would not let you “try all answers in parallel and instantly pick the best one”!
Problem: Measuring just gives you a random answer, with Pr[x] = |α_x|². Need to use interference to give the right answer a large amplitude. Only known how to do that exponentially quickly for special problems like factoring.
Prevailing Belief: NP ⊄ BQP (there is no polynomial-time quantum algorithm for the NP-complete problems)
Bennett et al. 1994: Even a quantum computer needs Ω(√N) steps to search an unstructured list of size N. Actually achievable, using Grover’s algorithm!
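A minimal dense-vector simulation of Grover’s algorithm, showing the Ω(√N) bound is achievable (assumes numpy; not from the talk):

```python
# Grover search over N items: ~(pi/4) sqrt(N) iterations of
# (1) oracle sign-flip on the marked item, (2) inversion about the mean.
import numpy as np

N, marked = 256, 42
psi = np.ones(N) / np.sqrt(N)                # uniform superposition
for _ in range(int(np.pi / 4 * np.sqrt(N))): # 12 iterations for N = 256
    psi[marked] *= -1                        # oracle: flip marked amplitude
    psi = 2 * psi.mean() - psi               # diffusion: invert about the mean
print(abs(psi[marked])**2)                   # ~0.9999: near-certain success
```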

But could a quantum computer solve NP-hard optimization problems—e.g., in machine learning—in polynomial time by exploiting the problems’ structure?
Famous attempt to do so: the Quantum Adiabatic Algorithm (Farhi et al. 1999), “simulated annealing enhanced by quantum tunneling”
H_i: Hamiltonian with easily-prepared ground state
H_f: Hamiltonian whose ground state encodes the solution to an NP-complete problem
The algorithm slowly interpolates from H_i to H_f, hoping to remain in the ground state throughout.

Problem: The “eigenvalue gap” can be exponentially small.
What we know: On some fitness landscapes, the adiabatic algorithm can reach a global minimum exponentially faster than classical simulated annealing. But on other landscapes, it does the same or even worse. To know what sort of behavior predominates in practice, it would help to have a QC to run tests with!
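A minimal sketch of watching the gap along the adiabatic path (assumes numpy; H_i and H_f below are illustrative toy choices, not from the talk):

```python
# Track the gap between the two lowest eigenvalues of
# H(s) = (1-s) H_i + s H_f. Adiabatic runtime scales roughly as 1/gap^2,
# so a tiny minimum gap along the path means a slow algorithm.
import numpy as np

X, I2 = np.array([[0, 1], [1, 0]]), np.eye(2)
H_i = -(np.kron(X, I2) + np.kron(I2, X))   # ground state: uniform superposition
H_f = np.diag([3.0, 1.0, 2.0, 0.0])        # ground state |11> encodes "solution"

for s in np.linspace(0, 1, 5):
    evals = np.linalg.eigvalsh((1 - s) * H_i + s * H_f)   # sorted ascending
    print(f"s = {s:.2f}   gap = {evals[1] - evals[0]:.3f}")
```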

Can n qubits really contain ~2^n classical bits? A machine-learning response…
Theorem (A. 2004): Given an n-qubit state |ψ⟩, suppose you only care about |ψ⟩’s behavior on 2-outcome measurements in a finite set S. There exists a subset T ⊆ S of size O(n log n) such that, if you start with ρ = the maximally mixed state, then postselect on Tr(Mρ) ≈ ⟨ψ|M|ψ⟩ for all M ∈ T, you end up with a state σ such that Tr(Mσ) ≈ ⟨ψ|M|ψ⟩ for all M ∈ S.
Means: We can describe |ψ⟩’s behavior on 2^n measurements using only O(n² log n) classical bits!
Proof Idea: “Darwinian winnowing process,” like boosting

Theorem (A. 2006): Given an n-qubit state |ψ⟩, suppose you only care about |ψ⟩’s behavior on 2-outcome measurements drawn from a distribution D. Given k = O(n) sample measurements M₁,…,M_k drawn independently from D, suppose you can find any “hypothesis state” |φ⟩ such that ⟨φ|M_i|φ⟩ ≈ ⟨ψ|M_i|ψ⟩ for all i ∈ [k]. Then with high probability over M₁,…,M_k, you’ll also have ⟨φ|M|φ⟩ ≈ ⟨ψ|M|ψ⟩ for most M ~ D.
Might have actual applications in quantum state tomography
Proof Idea: Show that, as a hypothesis class, n-qubit states have “γ-fat-shattering dimension” only O(n/γ²)
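A toy illustration of the setup for a single qubit (assumes numpy and scipy; the random projective measurements and least-squares fit are illustrative choices, not the theorem’s actual procedure):

```python
# Fit a "hypothesis state" |phi> to the statistics of k sampled 2-outcome
# measurements of an unknown |psi>, then check agreement on fresh measurements.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def rand_state():
    v = rng.normal(size=2) + 1j * rng.normal(size=2)
    return v / np.linalg.norm(v)

def to_state(x):                         # 4 real params -> unit amplitude vector
    v = x[:2] + 1j * x[2:]
    return v / np.linalg.norm(v)

psi = rand_state()                                   # unknown "true" state
train = [rand_state() for _ in range(20)]            # measurements M = |m><m|
targets = [abs(np.vdot(m, psi))**2 for m in train]   # <psi|M|psi>

def loss(x):
    phi = to_state(x)
    return sum((abs(np.vdot(m, phi))**2 - t)**2 for m, t in zip(train, targets))

phi = to_state(minimize(loss, rng.normal(size=4)).x)  # hypothesis state

for m in (rand_state() for _ in range(3)):            # generalization check
    print(f"{abs(np.vdot(m, psi))**2:.3f}  vs  {abs(np.vdot(m, phi))**2:.3f}")
```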

A.-Dechter 2008

The No-Cloning Theorem: No physical procedure can copy an unknown quantum state
(Closely related to the Uncertainty Principle)
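A minimal sketch of why linearity forbids cloning (assumes numpy; this is the standard CNOT counterexample, not from the talk):

```python
# A CNOT "clones" basis states (|x>|0> -> |x>|x>), but by linearity it maps a
# superposition to an entangled state, not to two independent copies.
import numpy as np

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

plus = np.array([1, 1]) / np.sqrt(2)         # |+> = (|0> + |1>)/sqrt(2)
out = CNOT @ np.kron(plus, [1, 0])           # CNOT applied to |+>|0>
want = np.kron(plus, plus)                   # |+>|+>: two genuine copies
print(np.round(out, 3))                      # [0.707 0 0 0.707]: a Bell state
print(np.round(want, 3))                     # [0.5 0.5 0.5 0.5]: differs
```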

Applications of the No-Cloning Theorem
Quantum money (Wiesner 1969): Could be verified by bank but not copied
(A.-Christiano 2012: Under plausible cryptographic assumptions, quantum money that anyone could verify)
Quantum key distribution (BB84): Already commercial! (Though market remains small)
Quantum copy-protected software (A. 2009, A.-Christiano in progress): A state |ψ_f⟩ that you can use to evaluate some function f on inputs x of your choice, but can’t efficiently use to produce more states that also let you evaluate f

BosonSampling (A.-Arkhipov 2011)
The starting point: Suppose you send n identical photons through a network of beamsplitters. Then the amplitude for the photons to reach some final state is given by the permanent of an n×n matrix of complex numbers:
Per(A) = Σ_{σ∈S_n} Π_{i=1}^n a_{i,σ(i)}
But the permanent is #P-complete (believed even harder than NP-complete)! So how can Nature do such a thing?
Resolution: Amplitudes aren’t directly observable, and require exponentially many probabilistic trials to estimate
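A minimal sketch of computing a permanent via Ryser’s formula (assumes numpy; O(2^n · n²) time, still exponential, consistent with #P-hardness):

```python
# Ryser's formula: Per(A) = sum over nonempty column subsets S of
# (-1)^(n-|S|) * prod_i (sum_{j in S} a_ij).
import numpy as np
from itertools import combinations

def permanent(A):
    n = len(A)
    total = 0.0
    for r in range(1, n + 1):
        for cols in combinations(range(n), r):
            total += (-1) ** (n - r) * np.prod(A[:, list(cols)].sum(axis=1))
    return total

print(permanent(np.array([[1.0, 2], [3, 4]])))   # 1*4 + 2*3 = 10
```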

Recently, however, Arkhipov and I gave evidence that the observed output distribution of such a linear-optical network would be hard to simulate using a classical computer, by indirectly exploiting the #P-completeness of the permanent.
Last year, groups in Brisbane, Oxford, Rome, and Vienna reported the first 3-photon BosonSampling experiments, confirming that the amplitudes were given by 3×3 permanents.
(# of experiments > # of photons!)

Goal (in our view): Scale to a moderate number of photons. Don’t want to scale much beyond that—both because
(1) you probably can’t without fault-tolerance, and
(2) a classical computer probably couldn’t even verify the results!
Obvious Challenges for Scaling Up:
- Reliable single-photon sources
- Minimizing losses
- Getting high probability of n-photon coincidence
Theoretical Challenge: Argue that, even with photon losses and messier initial states, you’re still solving a classically-intractable sampling problem