Scott Aaronson (UT Austin) UNM, Albuquerque, October 18, 2018

Gentle Measurement of Quantum States and Differential Privacy
Scott Aaronson (UT Austin)
UNM, Albuquerque, October 18, 2018
Joint work with Guy Rothblum (in preparation)

Gentle Measurement

Measurements in QM are famously destructive… but not always! Information/disturbance tradeoff: if the outcome is predictable given knowledge of ρ, then nobody needs to get hurt.

Given a quantum measurement M on n registers, let's call M α-gentle on a set of states S if one can implement it so that for every ρ∈S and possible outcome y of M, the post-measurement state ρ_y satisfies ‖ρ_y − ρ‖_tr ≤ α.

Typical choice for S: product states ρ = ρ1 ⊗ … ⊗ ρn on n registers

Example: Measuring the total Hamming weight of n unentangled qubits could be extremely destructive. But measuring the Hamming weight plus noise of order >> √n is much safer! A good choice is Laplace noise: add η ~ Lap(σ), with density p(η) ∝ e^(−|η|/σ).
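A small numpy sketch (not from the talk; all parameter choices are illustrative) of the claim above: measure the exact Hamming weight of a small product state vs. the Hamming weight plus Laplace noise, modeling the noisy measurement with Kraus operators M_y = Σ_w √p(y|w) Π_w, and compare the damage.

```python
import numpy as np

n = 6
theta = np.pi / 4                       # each qubit: cos(t)|0> + sin(t)|1>
amp = np.array([np.cos(theta), np.sin(theta)])
psi = amp
for _ in range(n - 1):
    psi = np.kron(psi, amp)             # product state of n qubits

# Hamming weight of each computational basis state
weights = np.array([bin(i).count("1") for i in range(2 ** n)])

def avg_fidelity(sigma):
    """Average fidelity <psi|psi_y>^2 between the state before and after
    the noisy Hamming-weight measurement; sigma=0 is the exact one."""
    ys = np.arange(-4 * n, 5 * n + 1)   # discretized outcome grid
    if sigma == 0:
        P = (ys[:, None] == weights[None, :]).astype(float)
    else:
        P = np.exp(-np.abs(ys[:, None] - weights[None, :]) / sigma)
    P /= P.sum(axis=0, keepdims=True)   # p(y|w), normalized over outcomes y
    fid = 0.0
    for row in P:
        v = np.sqrt(row) * psi          # M_y |psi>, unnormalized
        fid += (psi @ v) ** 2           # Pr[y] * fidelity(psi, psi_y)
    return fid

exact = avg_fidelity(0)                 # badly damages the state
gentle = avg_fidelity(3 * np.sqrt(n))   # noise >> sqrt(n): nearly harmless
```

With exact measurement the state collapses onto a fixed-weight subspace, so the average fidelity is only Σ_w Pr[w]² ≈ 1/√(πn); with noise scale well above √n it stays close to 1.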

Differential Privacy

A recent subfield of classical CS that wants to protect you. Actually used now by Apple and Google…

Given an algorithm A that queries a database X=(x1,…,xn), we call A ε-DP if for every two databases X and X' that differ in only a single xi, and every possible output y of A,
Pr[A(X)=y] ≤ e^ε · Pr[A(X')=y]

Bad: How many of these patients have prostate cancer?
Better: Return the number of patients with prostate cancer, plus Laplace noise of average magnitude σ.

Why it's 1/σ-DP: changing one patient changes the true count by at most 1, and the Laplace density satisfies p(y−c) / p(y−c') ≤ e^(|c−c'|/σ) ≤ e^(1/σ).
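The Laplace mechanism described above can be sketched in a few lines (illustrative code, not from the talk): add Lap(σ) noise to a counting query, and check the 1/σ-DP density-ratio guarantee numerically.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_count(database, predicate, sigma):
    """Number of records satisfying `predicate`, plus Laplace noise
    of average magnitude sigma (so the mechanism is (1/sigma)-DP)."""
    true_count = sum(1 for x in database if predicate(x))
    return true_count + rng.laplace(scale=sigma)

def laplace_density(y, center, sigma):
    return np.exp(-np.abs(y - center) / sigma) / (2 * sigma)

# Neighboring databases change a counting query by at most 1, so the
# output densities differ by a factor of at most e^(1/sigma) everywhere.
sigma = 5.0
c, c_prime = 42, 43                     # counts from neighboring databases
ys = np.linspace(-50.0, 150.0, 1000)
ratios = laplace_density(ys, c, sigma) / laplace_density(ys, c_prime, sigma)
max_ratio = float(np.max(ratios))       # should be <= e^(1/sigma)
```

Larger σ means a smaller privacy parameter 1/σ but a noisier answer: exactly the tradeoff the slides exploit.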

Quantum Differential Privacy

"Protecting the Privacy Rights of Quantum States"

Given a quantum measurement M on n registers, let's call M ε-DP on a set of states S if for every ρ, ρ'∈S that differ by a channel acting on only 1 register, and every possible outcome y of M,
Pr[M(ρ)=y] ≤ e^ε · Pr[M(ρ')=y]

Typical choice for S: product states on n registers

Example: Once again, measuring the total Hamming weight of n unentangled qubits plus a Laplace noise term. Hmmmm…
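The "Hmmmm…" suggests checking that the noisy Hamming weight measurement really is DP. A sketch (illustrative, not from the talk): on product states the outcome distribution depends only on the distribution of the exact weight, so we can compare two states differing in one register classically and bound the density ratio by e^(1/σ).

```python
import numpy as np

n, sigma = 5, 4.0
ys = np.arange(-6 * n, 7 * n)           # discretized outcome grid

def outcome_dist(q):
    """Outcome distribution of 'Hamming weight + Lap(sigma)', where q[w]
    is the probability that the exact weight equals w."""
    P = np.exp(-np.abs(ys[:, None] - np.arange(n + 1)[None, :]) / sigma)
    P /= P.sum(axis=0, keepdims=True)   # normalize p(y|w) over outcomes y
    return P @ q

def weight_dist(p):
    """Distribution of the Hamming weight of independent qubits,
    where qubit i is |1> with probability p[i]."""
    q = np.array([1.0])
    for pi in p:
        q = np.convolve(q, [1 - pi, pi])
    return q

p = np.array([0.3, 0.7, 0.5, 0.2, 0.9])
p2 = p.copy()
p2[0] = 0.95                            # alter a single register

d1 = outcome_dist(weight_dist(p))
d2 = outcome_dist(weight_dist(p2))
ratio = float(np.max(np.maximum(d1 / d2, d2 / d1)))
```

Changing one register shifts the weight by at most 1, and Lap(σ) densities at centers 1 apart differ by a factor ≤ e^(1/σ), so the ratio stays below e^(1/σ) (up to discretization).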

Our DP ⇄ Gentleness Theorem

(1) M is α-gentle for α << 1/4 ⇒ M is O(α)-DP
(2) M is ε-DP on product states, and consists of a classical DP algorithm applied to the results of separate POVMs on each register ⇒ M is O(ε√n)-gentle on product states

Notes:
Both directions are asymptotically tight.
Restriction to product states is essential for part (2) (without it we only get O(εn)-gentleness).
Part (2) preserves efficiency, as long as the DP algorithm's output distribution can be efficiently QSampled.

Related Work

Dwork et al. (2014) made a striking connection between differential privacy and adaptive data analysis: can we safely reuse the same dataset for many scientific studies? They showed that the answer is yes—if we're careful to access the dataset using DP algorithms only! I.e., they connected DP to "classical Bayesian gentleness."

Of course, damage to a distribution D is purely internal and mental, whereas damage to a state ρ is often noticeable even by others measuring ρ…

Gentleness ⇒ DP

The easy direction! Has nothing to do with product states or even quantum mechanics.

Lemma: If M is α-gentle on all states, then it's O(α)-DP.

(Consider the contrapositive: if M accepts ρ and σ with very different probabilities, then it's going to damage an equal mixture of them.)

Now just apply the lemma separately to each register of our product state.

DP ⇒ Gentleness for Product States

The harder direction; known only for measurements that apply separate POVMs to the n registers.

First step: Let A be an ε-DP classical algorithm. Then for any product distribution D = D1 × … × Dn and output y of A, the posterior distribution of D conditioned on y is O(ε√n)-close to D.

Next step: Given the "QSampled" version of D, |ψ⟩ = |ψ1⟩ ⊗ … ⊗ |ψn⟩, and any output y of A, the post-measurement state is O(ε√n)-close to |ψ⟩ in trace distance.

Can then generalize to POVMs and mixed states.

Separating Examples

Measure the Hamming weight plus Laplace noise of magnitude ~n/10: 10/n-DP, yet not even 1/3-gentle on non-product states like the cat state (|0…0⟩ + |1…1⟩)/√2.

Measure Hamming weight + Laplace noise of magnitude ~√n: ~1/√n-DP, yet not o(1)-gentle on product states.

Theorem: There exist measurements that are 1/exp(n)-DP (indeed, nearly trivial) on product states, yet far from DP on entangled states.

0-DP (i.e., completely trivial) on product states does imply 0-DP on all states, but only because amplitudes are complex numbers!

Application: Shadow Tomography

The Task (A. 2016): Let ρ be an unknown D-dimensional mixed state. Let E1,…,EM be known 2-outcome POVMs. Estimate Pr[Ei accepts ρ] to within ±ε for all i∈[M]—the "shadows" that ρ casts on E1,…,EM—with high probability, by measuring as few copies of ρ as possible.

Clearly k = O(D²) copies suffice (do ordinary tomography).
Clearly k = O(M) suffice (apply each Ei to separate copies).

But what if we wanted to know, e.g., the behavior of an n-qubit state on all accept/reject circuits with n² gates? Could we do it with only poly(n) copies?
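For concreteness, the naive k = O(M) strategy above can be sketched in numpy (illustrative, not from the talk: random ρ and effects stand in for real instances, and Born-rule sampling stands in for measuring fresh copies).

```python
import numpy as np

rng = np.random.default_rng(1)
D, M = 4, 20

def random_state(d):
    """Random pure state as a density matrix (stand-in for rho)."""
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    v /= np.linalg.norm(v)
    return np.outer(v, v.conj())

def random_effect(d):
    """Random 2-outcome POVM effect E with 0 <= E <= I."""
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    H = A @ A.conj().T
    return H / (np.linalg.eigvalsh(H)[-1] + 1e-9)

rho = random_state(D)
effects = [random_effect(D) for _ in range(M)]

copies_per_effect = 2000            # O(log(M)/eps^2) fresh copies per E_i
estimates = []
for E in effects:
    p = float(np.real(np.trace(E @ rho)))        # Born-rule accept prob.
    accepts = rng.random(copies_per_effect) < p  # one copy per sample
    estimates.append(accepts.mean())

true_vals = [float(np.real(np.trace(E @ rho))) for E in effects]
max_err = max(abs(a - b) for a, b in zip(estimates, true_vals))
```

The copy count here scales linearly with M, which is exactly what shadow tomography improves to polylogarithmic in M.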

Theorem (A., STOC'2018): Shadow tomography is possible using only poly(log M, log D, 1/ε) copies.

My protocol combined:
The multiplicative weights update method (i.e., start with the "maximally stupid hypothesis," σ0 = I/D, and then repeatedly look for opportunities to update, via postselecting on Tr(Eiσt) differing noticeably from Tr(Eiρ) for some i)
The "Quantum OR Bound" (A. 2006, fixed by Harrow et al. 2017), which repeatedly picks out an informative measurement from E1,…,EM in a gentle way
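The classical shell of that multiplicative weights argument can be sketched as follows (illustrative only: the real protocol extracts Tr(Eiρ) by gently measuring copies of ρ, whereas here we read the true values off directly; all parameters are arbitrary demo choices).

```python
import numpy as np

rng = np.random.default_rng(2)
D, M, eps, eta = 4, 30, 0.15, 0.05

def herm_expm(H):
    """exp(H) for a Hermitian matrix via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return (V * np.exp(w)) @ V.conj().T

# Unknown pure state rho and known effects E_i with 0 <= E_i <= I
v = rng.normal(size=D) + 1j * rng.normal(size=D)
v /= np.linalg.norm(v)
rho = np.outer(v, v.conj())

def effect():
    A = rng.normal(size=(D, D)) + 1j * rng.normal(size=(D, D))
    H = A @ A.conj().T
    return H / (np.linalg.eigvalsh(H)[-1] + 1e-9)

effects = [effect() for _ in range(M)]

L = np.zeros((D, D), dtype=complex)   # accumulated loss matrix
sigma = np.eye(D) / D                 # "maximally stupid" hypothesis I/D
updates = 0
while updates < 500:                  # MW bounds updates by O(log D / eps^2)
    violated = False
    for E in effects:
        guess = float(np.real(np.trace(E @ sigma)))
        truth = float(np.real(np.trace(E @ rho)))
        if abs(guess - truth) > eps:
            # Push sigma's value on E toward the truth
            L -= eta * np.sign(guess - truth) * E
            W = herm_expm(L)
            sigma = W / np.trace(W)
            updates += 1
            violated = True
    if not violated:                  # clean pass: all estimates within eps
        break

max_err = max(abs(float(np.real(np.trace(E @ (sigma - rho)))))
              for E in effects)
```

The point of the analysis is that only O(log D / ε²) updates can ever happen, so only those rare steps cost any copies of ρ.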

New Result: We can do shadow tomography using poly(log M, log D, 1/ε) copies of ρ, via a procedure that's also online and gentle (and simpler than my previous one, and probably more amenable to experimental implementation).

How it works: we take a known procedure from DP, Private Multiplicative Weights (Hardt-Rothblum 2010), which decides whether to update our current hypothesis on each query Ei using a threshold measurement with Laplace noise added. We give a quantum analogue, QPMW. Since each iteration of QPMW is DP (and applies separate POVMs to each register), it's also gentle on product states, so we can safely apply all M of the iterations in sequence.
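A hypothetical classical sketch of the decision step described above, the noisy threshold comparison that decides whether to update (this is a simplified illustration, not the actual Hardt-Rothblum or QPMW procedure; the function name and parameters are invented for the demo).

```python
import numpy as np

rng = np.random.default_rng(3)

def noisy_threshold_decisions(errors, threshold, sigma):
    """For each query's error, decide 'update' iff the error plus
    Lap(sigma) noise exceeds the threshold plus Lap(sigma) noise.
    Privacy (and hence gentleness) is only 'spent' on triggers."""
    decisions = []
    for err in errors:
        noisy_t = threshold + rng.laplace(scale=sigma)
        noisy_e = err + rng.laplace(scale=sigma)
        decisions.append(bool(noisy_e > noisy_t))
    return decisions

# Queries whose hypothesis error is far below threshold almost never
# trigger, so long runs of "no update" answers cost almost nothing.
decisions = noisy_threshold_decisions([0.0] * 50 + [1.0] * 50, 0.5, 0.02)
```

In the quantum analogue, each such comparison becomes a two-outcome measurement with the Laplace noise folded in, which is what makes each iteration DP and, by the theorem, gentle on product states.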

Open Problems

Prove a fully general DP ⇒ gentleness theorem for product states? Or even near-triviality ⇒ gentleness for product states?

In shadow tomography, does the number of copies of ρ need to have any dependence on log D? Best lower bound we can show: k = Ω(ε⁻² log M). But for gentle shadow tomography, can use known lower bounds from DP to show that some dependence on log D is needed.

Composition of quantum DP algorithms?

Use quantum to say something new about classical DP?