Gentle Measurement of Quantum States and Differential Privacy

Gentle Measurement of Quantum States and Differential Privacy
Scott Aaronson (University of Texas at Austin)
SQuInT, Albuquerque, NM, February 12, 2019
Joint work with Guy Rothblum (on the arXiv soon). To appear in STOC'2019.

Gentle Measurement
Measurements in QM are famously destructive… but not always! Information/disturbance tradeoff: if the outcome is predictable given knowledge of ρ, then nobody needs to get hurt.
Given a quantum measurement M, let's call M α-gentle on a set of states S if one can implement it so that for every ρ∈S and possible outcome y of M, the post-measurement state ρ_y satisfies ‖ρ_y − ρ‖_tr ≤ α.
Typical choice for S: product states ρ = ρ1 ⊗ ⋯ ⊗ ρn on n registers.

Example: Measuring the total Hamming weight of n unentangled qubits could be extremely destructive. But measuring the Hamming weight plus noise of order ≫ √n is much safer! A good choice is Laplace noise, with density p(x) ∝ e^{−|x|/σ}.
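Not from the talk, but a quick stdlib-Python sanity check of this example. We put n qubits each in |+⟩, expand the state in the Hamming-weight basis (amplitude √(C(n,w)/2ⁿ) on weight w), and apply the least-disturbing (square-root Kraus) implementation of "Hamming weight plus Laplace noise": observing outcome y reweights each amplitude by √p(y|w). The function name and parameter choices are illustrative.

```python
import math

def fidelity_after_noisy_measurement(n, sigma, y):
    """Fidelity between the |+>^n state and its post-measurement state,
    after the measurement "Hamming weight + Laplace(sigma) noise"
    returns outcome y.  Computed entirely in the Hamming-weight basis."""
    probs = [math.comb(n, w) / 2**n for w in range(n + 1)]          # |a_w|^2
    like = [math.exp(-abs(y - w) / sigma) for w in range(n + 1)]    # p(y|w), unnormalized
    # post-state amplitudes are a_w * sqrt(p(y|w)) / norm, so:
    num = sum(p * math.sqrt(l) for p, l in zip(probs, like))        # <psi|psi_y> * norm
    den = sum(p * l for p, l in zip(probs, like))                   # norm^2
    return num * num / den                                          # |<psi|psi_y>|^2

n = 100
gentle = fidelity_after_noisy_measurement(n, sigma=10 * math.sqrt(n), y=n // 2)
violent = fidelity_after_noisy_measurement(n, sigma=1.0, y=n // 2)
print(gentle, violent)  # noise >> sqrt(n) barely disturbs; noise ~1 partially collapses
```

With noise scale well above √n (the spread of the Hamming-weight distribution), the outcome is nearly uninformative about w, so the superposition over weights survives almost intact; with scale 1, the outcome pins w down to a narrow window and the fidelity drops noticeably.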

Differential Privacy
A recent subfield of classical CS that wants to protect you. Given an algorithm A that queries a database X=(x1,…,xn), we call A ε-DP if for every two databases X and X′ that differ in only a single xi, and every possible output y of A,
Pr[A(X) outputs y] ≤ e^ε Pr[A(X′) outputs y].
Bad: "How many of these patients have prostate cancer?"
Better: Return the number of patients with prostate cancer, plus Laplace noise of average magnitude σ. (Actually used now by Apple and Google…)
Why it's 1/σ-DP: changing one patient shifts the true count by at most 1, which changes the probability of any noisy output by a factor of at most e^{1/σ}.
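The noisy-count mechanism above can be sketched in a few lines of stdlib Python. The helper names are illustrative; the density-ratio check at the end is exactly the "why it's 1/σ-DP" argument: neighboring databases have counts differing by 1, and the Laplace density changes by at most e^{1/σ} under a shift of 1.

```python
import math
import random

def laplace_noise(scale):
    """Sample Laplace(0, scale): the difference of two iid exponentials."""
    return random.expovariate(1 / scale) - random.expovariate(1 / scale)

def noisy_count(database, scale):
    """The Laplace mechanism for a counting query: (1/scale)-DP."""
    return sum(database) + laplace_noise(scale)

def laplace_pdf(x, scale):
    return math.exp(-abs(x) / scale) / (2 * scale)

# Neighboring databases: one record flipped, so true counts are 7 vs 8.
# The output densities differ by at most a factor e^(1/scale) everywhere.
scale = 2.0  # epsilon = 1/scale = 0.5
for y in [-3.0, 0.0, 4.5, 10.0]:
    ratio = laplace_pdf(y - 7, scale) / laplace_pdf(y - 8, scale)
    assert ratio <= math.exp(1 / scale) + 1e-12

print(noisy_count([1, 0, 1, 1, 0, 1, 1, 1, 1, 0], scale))  # 7 plus Laplace noise
```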

Quantum Differential Privacy: "Protecting the Privacy Rights of Quantum States"
Given a quantum measurement M on n registers, let's call M ε-DP on a set of states S if for every ρ, ρ′ ∈ S that differ by a channel acting on only 1 register, and every possible outcome y of M,
Pr[M(ρ) outputs y] ≤ e^ε Pr[M(ρ′) outputs y].
Typical choice for S: product states on n registers.
Example: once again, measuring the total Hamming weight of n unentangled qubits plus a Laplace noise term. Hmmmm….

Our DPGentleness Theorem (1) M is -gentle for small   M is O()-DP (2) M is -DP on product states, and consists of a classical DP algorithm applied to the results of separate POVMs on each register  M is O(n)-gentle on product states Notes: Both directions are asymptotically tight Restriction to product states is essential for part (2) (without it we only get O(n)-gentleness) Part (2) preserves efficiency, as long as the DP algorithm’s output distribution can be efficiently QSampled

Related Work
Dwork et al. (2014) made a striking connection between differential privacy and adaptive data analysis: can we safely reuse the same dataset for many scientific studies? They showed that the answer is yes, provided we're careful to access the dataset using DP algorithms only! I.e., they connected DP to "classical Bayesian gentleness."
Of course, damage to a distribution D is purely internal and mental, whereas damage to a quantum state ρ is often noticeable even by others measuring ρ…

GentlenessDP The easy direction! Has nothing to do with product states or even quantum mechanics Lemma: If M is -gentle on all states, then it’s For consider the converse: if Pr[M outputs y] is very different on  and , then conditioning on M outputting y will badly “damage” (+)/2, by Bayes’ Theorem! Now just apply the lemma separately to each register of our product state

DPGentleness for Product States The harder direction; known only for measurements that apply separate POVMs to the n registers First step: Let A be an -DP classical algorithm. Then for any product distribution D=D1Dn and output y of A, Next step: Given the “QSampled” version of D, |=|1|n, and any output y of A, Can then generalize to POVMs and mixed states

Application: Shadow Tomography
The Task (A. 2016): Let ρ be an unknown D-dimensional mixed state. Let E1,…,EM be known 2-outcome POVMs. Estimate Pr[Ei accepts ρ] to within ±ε for all i∈[M], the "shadows" that ρ casts on E1,…,EM, with high probability, by measuring as few copies of ρ as possible.
Clearly k = O(D²) copies suffice (do ordinary tomography). Clearly k = O(M) suffice (apply each Ei to separate copies). But what if we wanted to know, e.g., the behavior of an n-qubit state on all accept/reject circuits with n² gates? Could we do it with a number of copies polynomial in n?

Theorem (A., STOC'2018): Shadow tomography is possible using only k = Õ((log M)⁴ · (log D)/ε⁴) copies of ρ.
My protocol combined:
The multiplicative weights update method (i.e., start with the "maximally stupid hypothesis," σ0 = I/D, and then repeatedly look for opportunities to update, via postselecting on Tr(Ei σt) differing noticeably from Tr(Ei ρ) for some i)
The "Quantum OR Bound" (A. 2006, corrected by Harrow et al. 2017), which repeatedly picks out an informative measurement from E1,…,EM in a gentle way.
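A classical caricature of the multiplicative-weights part (not the quantum protocol itself; all names and parameters are illustrative): the unknown state becomes an unknown distribution p, the POVMs become 0/1 query vectors, and we update the uniform "maximally stupid" hypothesis only when some query exposes an error larger than ε. MW theory bounds the number of updates by O(log(d)/ε²), independent of the number of queries — the analogue of why few copies of ρ suffice.

```python
import math
import random

def mw_shadow_estimates(p, queries, eps):
    """Maintain a hypothesis distribution q, updating multiplicatively only
    when some query's expectation under q is off by more than eps from its
    expectation under the unknown p.  Returns (hypothesis, #updates)."""
    d, eta = len(p), eps / 2
    q = [1.0 / d] * d                         # the maximally stupid hypothesis
    updates, changed = 0, True
    while changed and updates < 2000:         # MW guarantees O(log(d)/eps^2) updates
        changed = False
        for E in queries:                     # E is a 0/1 indicator vector
            true_val = sum(pi * e for pi, e in zip(p, E))
            est = sum(qi * e for qi, e in zip(q, E))
            if abs(est - true_val) > eps:     # informative query found: update
                sign = 1.0 if true_val > est else -1.0
                q = [qi * math.exp(sign * eta * e) for qi, e in zip(q, E)]
                s = sum(q)
                q = [qi / s for qi in q]
                updates += 1
                changed = True
    return q, updates

random.seed(0)
d, M, eps = 32, 100, 0.25
p = [random.random() ** 4 for _ in range(d)]  # a skewed unknown distribution
s = sum(p); p = [x / s for x in p]
queries = [[random.randint(0, 1) for _ in range(d)] for _ in range(M)]
q, nup = mw_shadow_estimates(p, queries, eps)
errs = [abs(sum(qi * e for qi, e in zip(q, E)) - sum(pi * e for pi, e in zip(p, E)))
        for E in queries]
print(nup, max(errs))  # few updates, and every estimate ends up within eps
```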

New Result: We can do shadow tomography using only k = Õ((log M)² · (log D)²/ε⁸) copies of ρ, via a procedure that's also online and gentle (and simpler than my previous one, and probably more amenable to experimental implementation).
How it works: we take a known procedure from DP, Private Multiplicative Weights (Hardt-Rothblum 2010), which decides whether to update our current hypothesis on each query Ei using a threshold measurement with Laplace noise added. We give a quantum analogue, QPMW. Since each iteration of QPMW is DP (and applies separate POVMs to each register), it's also gentle on product states, so we can safely apply all M of the iterations in sequence.
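The classical procedure being quantized can be sketched as follows (a toy, stdlib-only illustration; names are invented, and the noise scale is an arbitrary demo value rather than the calibration to the privacy budget and database size that the real Hardt-Rothblum algorithm uses). The key feature QPMW inherits is the Laplace-noised threshold test that decides, per query, whether the hypothesis must be updated:

```python
import math
import random

def laplace(scale):
    # a Laplace sample as the difference of two iid exponentials
    return random.expovariate(1 / scale) - random.expovariate(1 / scale)

def private_mw(hist, queries, acc=0.2, noise=0.005):
    """Toy sketch of Private Multiplicative Weights: hist is the true
    normalized histogram, each query a 0/1 vector.  A noisy threshold
    test decides whether to update; most queries are answered straight
    from the hypothesis, costing no privacy on the data."""
    d = len(hist)
    q = [1.0 / d] * d                      # hypothesis: uniform histogram
    answers = []
    for E in queries:
        true_val = sum(x * e for x, e in zip(hist, E))
        est = sum(qi * e for qi, e in zip(q, E))
        if abs(true_val - est) + laplace(noise) > acc / 2:
            # hypothesis looks wrong here: answer noisily and update it
            noisy = true_val + laplace(noise)
            sign = 1.0 if noisy > est else -1.0
            eta = acc / 4
            q = [qi * math.exp(sign * eta * e) for qi, e in zip(q, E)]
            s = sum(q); q = [qi / s for qi in q]
            answers.append(noisy)
        else:
            answers.append(est)            # answer from the hypothesis alone
    return answers

random.seed(1)
d, M = 16, 50
hist = [random.random() ** 3 for _ in range(d)]
s = sum(hist); hist = [x / s for x in hist]
queries = [[random.randint(0, 1) for _ in range(d)] for _ in range(M)]
ans = private_mw(hist, queries)
errs = [abs(a - sum(x * e for x, e in zip(hist, E))) for a, E in zip(ans, queries)]
print(max(errs))  # every answer accurate to within ~acc
```

In the quantum analogue each noisy threshold test becomes a measurement on one batch of copies of ρ; because each iteration is DP and acts by separate POVMs, the DP ⟹ gentleness theorem lets all M iterations run in sequence on the same copies.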

Open Problems
Prove a fully general DP ⟹ gentleness theorem for product states? Or even a near-triviality ⟹ gentleness theorem for product states?
In shadow tomography, does the number of copies of ρ need to have any dependence on log D? Best lower bound we can show: k = Ω(ε⁻² min{D, log M}). But for gentle or online shadow tomography, we can use known lower bounds from DP to show that k = Ω(√(log D)) is needed.
Composition of quantum DP algorithms?
Use quantum to say something new about classical DP?