Online Learning of Quantum States
Scott Aaronson (UT Austin)
Joint work with Xinyi Chen, Elad Hazan, Satyen Kale, and Ashwin Nayak
arXiv / NeurIPS 2018
An n-qubit pure state |ψ⟩ requires 2^n complex numbers to specify, even approximately:
|ψ⟩ = Σ_{x ∈ {0,1}^n} α_x |x⟩
Yet measuring |ψ⟩ yields at most n bits (Holevo's Theorem). So should we say that the 2^n complex numbers are "really there" in a single copy of |ψ⟩, or "just in our heads"?
A probability distribution over n-bit strings also involves 2^n real numbers. But probabilities don't interfere!
Quantum State Tomography
Task: Given lots of copies of an unknown D-dimensional quantum mixed state ρ, produce an approximate classical description of ρ.
O'Donnell-Wright and Haah et al., STOC'2016: Θ̃(D²) copies of ρ are necessary and sufficient for this.
Experimental record (Song et al. 2017): 10 qubits, millions of measurement settings!
Keep in mind: D = 2^n
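As a quick sense of scale (a worked example added here, not from the original slide): already at n = 10 qubits,

```latex
D = 2^{10} = 1024, \qquad D^{2} = 4^{10} = 1{,}048{,}576 \approx 10^{6},
```

which is why the 10-qubit experimental record already involved millions of measurement settings.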
Quantum Occam’s Razor Theorem (A. 2006)
Let ρ be an unknown D-dimensional state. Suppose you just want to be able to estimate the acceptance probabilities Tr(Eρ) of most measurements E drawn from some probability distribution μ. Then it suffices to do the following, for some m = O(log D):
Choose m measurements independently from μ
Go into your lab and estimate the acceptance probabilities of all of them on ρ
Find any "hypothesis state" σ approximately consistent with all measurement outcomes
"Quantum states are PAC-learnable"
Can prove by combining two facts:
(1) The class of [0,1]-valued functions of the form f(E) = Tr(Eρ), where ρ is a D-dimensional mixed state, has ε-fat-shattering dimension O((log D)/ε²).
(ε-fat-shattering dimension: the largest k for which we can find inputs x_1,…,x_k and values a_1,…,a_k ∈ [0,1], such that all 2^k possible behaviors, with each f(x_i) either exceeding a_i by ε or falling below a_i by ε, are realized by some f in the class.)
(2) Any class of [0,1]-valued functions is PAC-learnable using a number of samples linear in its fat-shattering dimension [Alon et al., Bartlett-Long]
To upper-bound the fat-shattering dimension of quantum states:
Use the lower bound for quantum random access codes [Nayak 1999]. Namely: you need Ω(n) qubits to encode n bits x_1,…,x_n into a state ρ_x, so that any x_i of your choice can later be recovered w.h.p. by measuring ρ_x.
Then turn this lemon into lemonade!
How do we find the hypothesis state?
Here's one way: let b_1,…,b_m be the outcomes of measurements E_1,…,E_m. Then choose a hypothesis state σ to minimize
Σ_i (Tr(E_i σ) − b_i)².
This is a convex programming problem, which can be solved in time polynomial in D = 2^n (good enough in practice for n ≈ 15 or so).
Optimized, linear-time iterative method for this problem: [Hazan 2008]
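A minimal numerical sketch of this step (my own illustration; the function names and the projected-gradient approach are mine, not necessarily the method used in the talk): minimize Σ_i (Tr(E_i σ) − b_i)² by gradient descent, projecting back onto the set of density matrices after each step.

```python
import numpy as np

def project_to_density_matrix(M):
    """Euclidean projection of a Hermitian matrix onto the set
    {sigma : sigma >= 0, Tr(sigma) = 1}, by projecting its eigenvalues
    onto the probability simplex."""
    M = (M + M.conj().T) / 2
    vals, vecs = np.linalg.eigh(M)
    u = np.sort(vals)[::-1]                      # eigenvalues in descending order
    css = np.cumsum(u)
    j = np.arange(1, len(u) + 1)
    k = np.nonzero(u + (1.0 - css) / j > 0)[0][-1]
    shift = (1.0 - css[k]) / (k + 1)
    new_vals = np.maximum(vals + shift, 0.0)     # shifted and clipped eigenvalues
    return (vecs * new_vals) @ vecs.conj().T

def fit_hypothesis_state(Es, bs, n_iters=2000, lr=0.05):
    """Find sigma minimizing sum_i (Tr(E_i sigma) - b_i)^2 over density matrices."""
    D = Es[0].shape[0]
    sigma = np.eye(D, dtype=complex) / D         # start at the maximally mixed state
    for _ in range(n_iters):
        grad = sum(2 * (np.trace(E @ sigma).real - b) * E for E, b in zip(Es, bs))
        sigma = project_to_density_matrix(sigma - lr * grad)
    return sigma
```

For small n one could equally hand the same objective, with constraints σ ⪰ 0 and Tr(σ) = 1, to an off-the-shelf convex solver; [Hazan 2008] is the optimized iterative method referred to above.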
You know, PAC-learning is so 1990s. Who wants to assume a fixed, unchanging distribution over the sample data? These days all the cool kids prove theorems about online learning.
What's online learning?
Online Learning
Two-outcome measurements E_1, E_2, … on a D-dimensional state ρ arrive one by one, chosen by an adaptive adversary. No more fixed distribution over measurements, no independence, no nothin'!
For each E_t, you, the learner, are challenged to guess Tr(E_t ρ). If you're off by more than ε/3, you're then told the true value, or at least an ε/3-approximation to it.
Goal: give a learning strategy that upper-bounds the total number of times your guess will ever be more than ε off from the truth (you can't know in advance which times those will be…)
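Schematically, the interaction is the following loop (a sketch; `learner` is any strategy exposing `predict` and `update`, names chosen here for illustration):

```python
def run_online_game(learner, measurements, true_values, eps):
    """Play the online learning game: on each round guess Tr(E_t rho), receive
    feedback whenever the guess is off by more than eps/3, and count the rounds
    on which the guess was off by more than eps ("serious" mistakes)."""
    serious_mistakes = 0
    for E_t, truth in zip(measurements, true_values):
        guess = learner.predict(E_t)          # the learner's guess at Tr(E_t rho)
        if abs(guess - truth) > eps:
            serious_mistakes += 1
        if abs(guess - truth) > eps / 3:
            learner.update(E_t, truth)        # told the (approximate) true value
    return serious_mistakes
```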
Three proof techniques:
Postselected learning
Sequential fat-shattering dimension
Online Convex Optimization / Matrix Multiplicative Weights
Theorem 1: There's an explicit strategy, for online learning of an n-qubit quantum state, that's wrong by more than ε at most O(n/ε²) times (and this is tight).
Theorem 2: Even if the data the adversary gives you isn't consistent with any n-qubit quantum state, there's still an explicit strategy such that your total regret, after T iterations, is at most O(√(Tn)). Tight for L1, possibly not for L2.
Regret = (your total error) − (total error if you'd started with the best hypothesis state from the very beginning)
The error can be either L1 or L2.
My Way: Postselected Learning
In the beginning, the learner knows nothing about ρ, so he guesses it's the maximally mixed state σ_0 = I/2^n.
Each time the learner encounters a measurement E_t on which his current hypothesis σ_{t-1} badly fails, he tries to improve, by letting σ_t be the state obtained by starting from σ_{t-1}, then performing E_t and postselecting on getting the right outcome.
Amplification + the Gentle Measurement Lemma are used to bound the damage caused by these measurements.
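The single postselection step is simple to write down (a simplified sketch of just the update rule, omitting the amplification; `postselect` is a name chosen here): measure {E, I − E} and keep the branch corresponding to the outcome that was reported as correct.

```python
import numpy as np
from scipy.linalg import sqrtm

def postselect(sigma, E, accept):
    """Post-measurement state of sigma after measuring {E, I - E} and
    postselecting on 'accept' (outcome E) or not (outcome I - E)."""
    D = sigma.shape[0]
    M = E if accept else np.eye(D) - E
    root = sqrtm(M)                               # Kraus operator sqrt(M)
    unnormalized = root @ sigma @ root.conj().T
    p = np.trace(unnormalized).real               # probability of that outcome
    return unnormalized / p, p
```

In the actual argument, the measurement is first amplified so that this postselection barely disturbs the hidden "ρ component" of σ_{t-1}; the Gentle Measurement Lemma then bounds the accumulated damage.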
Let ε = const for simplicity.
Crucial Claim: The iterative learning procedure must converge on a good hypothesis for ρ after at most T = O(n log n) serious mistakes.
Proof: Let p_t = Pr[first t postselections all succeed]. Then p_t can't shrink too far overall: the maximally mixed starting state I/2^n "contains" ρ with weight 2^{-n}, and amplification keeps the ρ-component's success probability high. On the other hand, if p_t weren't less than, say, (2/3)p_{t-1}, learning would've ended!
Solving, we find that T = O(n log n).
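Making the last step explicit (just unwinding the geometric decay; the lower bound on p_T is whatever the amplification argument provides):

```latex
p_T \;\le\; \left(\tfrac{2}{3}\right)^{T} p_0
\quad\Longrightarrow\quad
T \;\le\; \log_{3/2}\frac{p_0}{p_T},
```

so a lower bound of the form p_T ≥ 2^{-O(n log n)} yields the stated T = O(n log n) bound on the number of serious mistakes.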
Ashwin’s Way: Sequential Fat-Shattering Dimension
New generalization of Ashwin's random access codes lower bound from 1999: you can store at most m = O(n) bits x = (x_1,…,x_m) in an n-qubit state ρ_x, in such a way that any one x_i of your choice can later be read out w.h.p. by measuring ρ_x, even if the measurement basis is allowed to depend on x_1,…,x_{i-1}.
Implies an O(n/ε²) upper bound for the "online / sequential" version of ε-fat-shattering dimension.
Combined with a general result of [Rakhlin et al. 2015], this automatically implies an online learning algorithm + regret bound.
Elad, Xinyi, Satyen’s Way: Online Convex Optimization
Regularized Follow-the-Leader [Hazan 2015]
Gradient descent using von Neumann entropy
Matrix Multiplicative Weights [Arora and Kale 2016]
Technical work: generalize power tools that already existed for real matrices to complex Hermitian ones
Yields the optimal mistake bound and regret bound
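As a concrete illustration of the multiplicative-weights route (a toy sketch, not the authors' code; here the per-round loss is the squared error, and η is a learning rate the analysis would set to roughly √(n/T)):

```python
import numpy as np
from scipy.linalg import expm

class MMWLearner:
    """Online learner keeping sigma_t proportional to exp(-eta * sum of past
    loss gradients), i.e. a matrix-multiplicative-weights style update."""
    def __init__(self, D, eta):
        self.eta = eta
        self.grad_sum = np.zeros((D, D), dtype=complex)

    def state(self):
        W = expm(-self.eta * self.grad_sum)      # Hermitian, positive definite
        return W / np.trace(W).real              # normalize to a density matrix

    def predict(self, E):
        return np.trace(E @ self.state()).real   # guess for Tr(E rho)

    def update(self, E, truth):
        # gradient of this round's loss (Tr(E sigma) - truth)^2 at the current sigma
        self.grad_sum += 2 * (self.predict(E) - truth) * E
```

An instance like `MMWLearner(D=2**n, eta=...)` plugs directly into the protocol loop sketched earlier.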
Application: Shadow Tomography
The Task (A. 2016): Let ρ be an unknown D-dimensional mixed state. Let E_1,…,E_M be known 2-outcome POVMs. Estimate Pr[E_i accepts ρ] to within ±ε for all i ∈ [M] (the "shadows" that ρ casts on E_1,…,E_M), with high probability, by measuring as few copies of ρ as possible.
Clearly k = O(D²) copies suffice (do ordinary tomography). Clearly k = O(M) copies suffice (apply each E_i to separate copies).
But what if we wanted to know, e.g., the behavior of an n-qubit state on all accept/reject circuits with n² gates? Could we have k = poly(n, log M, 1/ε)?
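The circuit example is exactly the regime where this would pay off: a rough count (added here, not from the slide) of accept/reject circuits with n² gates over any fixed finite gate set gives

```latex
M \;\le\; \bigl(O(1)\cdot n^{O(1)}\bigr)^{n^{2}} \;=\; 2^{O(n^{2}\log n)},
\qquad\text{so}\qquad \log M \;=\; O(n^{2}\log n),
```

which is polynomial in n, whereas both M and D² = 4^n are exponentially large.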
Theorem (A., STOC'2018): It's possible to do Shadow Tomography using only poly(n, log M, 1/ε) copies
Proof (in retrospect): Just combine two ingredients!
The "Quantum OR Bound" [A. 2006, corrected by Harrow-Montanaro-Lin 2017], to repeatedly search for a measurement E_i such that Tr(E_i σ_t) is far from Tr(E_i ρ), without much damaging the copies of ρ. Here σ_t is our current hypothesis state (initially σ_0 = maximally mixed).
Our online learning theorem, to upper-bound the number of updates to σ_t until we converge on a hypothesis state σ_T such that Tr(E_i σ_T) ≈ Tr(E_i ρ) for every i ∈ [M].
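Putting the two ingredients together gives a loop of the following shape (a high-level sketch; `find_violated` is a classical stand-in for the gentle quantum OR-bound search, and the names are chosen here for illustration):

```python
def shadow_tomography_sketch(measurements, true_values, learner, eps):
    """Alternate 'find a badly-predicted E_i' with one online-learning update,
    until the hypothesis is eps-accurate on every E_i."""
    def find_violated():
        # Classical stand-in: the real procedure locates such an i by gently
        # measuring a few fresh copies of rho via the Quantum OR Bound.
        for i, (E, v) in enumerate(zip(measurements, true_values)):
            if abs(learner.predict(E) - v) > eps:
                return i
        return None

    while True:
        i = find_violated()
        if i is None:
            return learner                               # its predictions are the shadows
        learner.update(measurements[i], true_values[i])  # one update to sigma_t
```

The online learning theorem bounds how many times the while-loop can fire, which in turn bounds how many copies of ρ the OR-bound searches consume.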
New Shadow Tomography Protocol! [A.-Rothblum 2019, coming any day to an arXiv near you]
Exploits a new connection that Guy and I discovered between gentle measurement of quantum states and differential privacy (an area of classical CS).
Based on the "Private Multiplicative Weights" algorithm [Hardt-Rothblum 2010].
Uses poly(log M, log D, 1/ε) copies of ρ, while also being online and gentle (properties we didn't have before).
Explicitly uses the online learning theorem as a key ingredient.
Open Problems
Generalize to k-outcome measurements
Optimal regret for the L2 loss
What special cases of online learning of quantum states can be done with only poly(n) computation?
What's the true sample complexity of shadow tomography?