Quantum Spectrum Testing Ryan O’Donnell John Wright (CMU)

An experimental apparatus produces an unknown mixed state ρ. You suspect that: ρ equals some particular fixed state, or ρ has low von Neumann entropy, or ρ is low rank. Q: How to check your prediction? A: Property testing of mixed states.

Property testing of mixed states. Proposed by [Montanaro and de Wolf 2013]. Given: the ability to generate independent copies of ρ. Want to know: does ρ satisfy property P? Goal: minimize the number of copies used (ignoring computational efficiency).

This paper: properties of the spectrum of ρ. Spectral properties: ρ is the maximally mixed state ✔; ρ has low von Neumann entropy ✔; ρ is low rank ✔. Not spectral properties: ρ equals some particular fixed state ✗; ρ is diagonal ✗.

Spectral decomposition: ρ = Σᵢ pᵢ |vᵢ⟩⟨vᵢ|, where the pᵢ are nonnegative and sum to 1. The spectrum (p₁, …, p_d) thus gives a probability distribution over the eigenvectors |vᵢ⟩. Def: the distance between two spectra is measured as a distance between the corresponding probability distributions. Q: Suppose ρ has spectrum (p₁, …, p_d). How close is its spectrum to the uniform spectrum (1/d, …, 1/d)?
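As a toy way to quantify the closeness question above, one can take the distance between a spectrum and the uniform spectrum (1/d, …, 1/d) to be total-variation distance. This choice of distance is an assumption for illustration, not necessarily the talk's exact definition. A minimal sketch:

```python
def tv_to_uniform(spectrum):
    """Total-variation distance between a spectrum (list of eigenvalues,
    summing to 1) and the uniform spectrum (1/d, ..., 1/d) of the
    maximally mixed state."""
    d = len(spectrum)
    return 0.5 * sum(abs(p - 1.0 / d) for p in spectrum)
```

A pure state's spectrum (1, 0, …, 0) is far from uniform, while the maximally mixed spectrum is at distance 0.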

Quantum spectrum testing. A tester for a property P of spectra is a quantum algorithm which, given ρ⊗n, distinguishes between: (i) the spectrum of ρ has property P; (ii) the spectrum of ρ is ε-far from every spectrum which has property P. Goal: minimize n.

Quantum spectrum testing. A tester for a property P of spectra is a quantum algorithm which, given ρ⊗n, distinguishes between: (i) the spectrum of ρ has property P; (ii) ρ is ε-far from every state σ whose spectrum has property P. This formulation is equivalent to the previous one (not obvious).

Link to probability distributions. Given ρ, suppose you knew ρ's eigenbasis. Measuring each copy in this basis: receive outcome i with probability pᵢ. So testing properties of the spectrum becomes testing properties of a probability distribution, given samples from it.

Property testing of probability distributions. Let p be a probability distribution over [d]. The empirical distribution is ε-close to p after O(d/ε²) samples. Can test uniformity with O(√d/ε²) samples. [Pan] (Testing equality to any known distribution is possible with O(√d/ε²) samples [VV].) Similar results for entropy, support size, etc.
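For intuition, here is a sketch of a uniformity tester in this spirit. It uses the collision statistic, a standard approach, though the tester of [Pan] and its analysis differ; the acceptance threshold below is an illustrative assumption, not a tuned constant.

```python
import itertools

def collision_rate(samples):
    """Fraction of pairs of samples that collide: an unbiased estimate of the
    collision probability sum_i p_i^2, which is minimized (= 1/d) by the
    uniform distribution."""
    n = len(samples)
    collisions = sum(1 for a, b in itertools.combinations(samples, 2) if a == b)
    return collisions / (n * (n - 1) / 2)

def looks_uniform(samples, d, eps):
    """Accept when the collision rate is close to the uniform value 1/d.
    The threshold here is a hypothetical stand-in for the analyzed constant."""
    return collision_rate(samples) <= (1 + eps * eps) / d
```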

Prior work & our results

Some useful algorithms. Tomography: estimate the full state ρ up to ε-accuracy; uses poly(d)/ε² copies. EYD algorithm: estimate ρ's spectrum up to ε-accuracy; uses O(d²/ε²) copies [ARS][KW][HM][CM]. Weak Schur sampling: samples a "shifted histogram" from ρ's spectrum.

EYD algorithm: estimate ρ's spectrum up to ε-accuracy; uses O(d²/ε²) copies [ARS][KW][HM][CM]. Our thm: 1. "New" proof of the O(d²/ε²) upper bound. 2. The EYD algorithm requires Ω(d²/ε²) copies. So spectrum testing is easy when n is quadratic in d. What about subquadratic algorithms?
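Given a shifted histogram λ obtained from n copies, the EYD estimate of the spectrum is simply the row lengths of λ divided by n; a sketch (padding with zeros to d entries is assumed):

```python
def eyd_estimate(shape, d):
    """EYD spectrum estimate: row lengths of the shifted histogram
    (a partition of n, rows sorted decreasingly), padded with zeros to
    d entries and normalized by n, the number of copies measured."""
    n = sum(shape)
    rows = list(shape) + [0] * (d - len(shape))
    return [r / n for r in rows]
```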

A subquadratic algorithm. Q-Bday: distinguish between: ρ is maximally mixed (i.e. has spectrum (1/d, …, 1/d)); ρ is maximally mixed on a subspace of dim d/2 (i.e. has spectrum (2/d, …, 2/d, 0, …, 0)). Thm [CHW]: Θ(d) copies are necessary & sufficient to solve Q-Bday. This gives linear lower bounds for testing whether ρ is maximally mixed, is low entropy, is low rank, etc.

A subquadratic algorithm. Q-Bday: distinguish between: ρ is maximally mixed; ρ is maximally mixed on a subspace of dim d/2. Thm [CHW]: Θ(d) copies are necessary & sufficient to solve Q-Bday. Q-Bday′: a variant of this problem. Our Thm: we determine the number of copies necessary & sufficient to solve Q-Bday′ (and interpolate between Q-Bday and Q-Bday′).

Property testing results. Thm: O(d/ε²) copies suffice to test whether ρ is maximally mixed (i.e. ρ = I/d). Thm: O(r²/ε) copies suffice to test whether ρ has rank r (with one-sided error).

Weak Schur sampling

Weak Schur sampling: samples a "shifted histogram" λ from ρ's spectrum. Canonical algorithm: given ρ⊗n, 1) measure using weak Schur sampling, obtaining λ; 2) say YES or NO based on λ. [CHW]: the canonical algorithm is optimal for spectrum testing.

Shifted histograms. Given samples from a probability distribution over [d]. Histogram: for each sample i, place a block in column i. Shifted histogram: for each sample, sometimes "mistake" it for a different value. E.g.: given a sample, the resulting shifted histogram can differ from the true histogram.

Shifted histograms. The precise pattern of mistakes is given by the RSK algorithm (a well-known combinatorial algorithm). The more samples, the fewer mistakes are made: shifted histograms look like normal histograms when given many samples.

Weak Schur sampling. Given ρ with eigenvalues p₁, …, p_d, the WSS outcome is distributed as follows: 1) set p = (p₁, …, p_d), a probability distribution on [d]; 2) sample a word w of n i.i.d. draws from p; 3) output λ, the shifted histogram of w. Def: the Schur-Weyl distribution is the resulting output distribution of λ.
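Since the three steps above are entirely classical, the WSS outcome distribution can be simulated classically whenever the spectrum is known: draw n i.i.d. samples and compute their RSK insertion shape. A sketch (the RSK row insertion below tracks only the shape, i.e. the shifted histogram):

```python
import bisect
import random

def shifted_histogram(word):
    """Shape of the RSK insertion tableau of `word` (row lengths, sorted
    decreasingly): the 'shifted histogram' of the sample sequence."""
    rows = []  # rows of the insertion tableau, each kept sorted
    for x in word:
        for row in rows:
            i = bisect.bisect_right(row, x)  # first entry strictly greater than x
            if i == len(row):
                row.append(x)  # x fits at the end of this row
                break
            row[i], x = x, row[i]  # bump the larger entry down to the next row
        else:
            rows.append([x])  # bumped all the way out: start a new row
    return [len(r) for r in rows]

def weak_schur_sample(spectrum, n, rng=random):
    """Classically simulate the WSS outcome for a state with this spectrum."""
    word = rng.choices(range(len(spectrum)), weights=spectrum, k=n)
    return shifted_histogram(word)
```

For a point-mass spectrum the word is constant, so every block lands in one row; for a flat spectrum the shape spreads over up to d rows.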

Weak Schur sampling, e.g. Case 1: ρ's spectrum is … (Figure: a sample with its histogram and shifted histogram.)

Weak Schur sampling, e.g. Case 2: ρ's spectrum is … (Figure: a sample with its histogram and shifted histogram.)

Summary (so far). The canonical algorithm (WSS) outputs a (random) shifted histogram λ. The distribution of the shifted histogram has a combinatorial description. Strategy: try to carry over intuition from histograms to shifted histograms.

Techniques

Testing mixedness. Distinguish: 1) ρ = I/d (λ is usually flat); 2) ρ is ε-far from I/d (λ is usually not flat). Notation: cᵢ = # of blocks in column i. Idea: a histogram drawn from the uniform distribution is "flat"; maybe the shifted histogram is also flat? Def: λ is flat if a suitable statistic of its column counts cᵢ is small.

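As a toy illustration of the flatness idea: the statistic below, the spread of the diagram's first d row lengths, is a hypothetical stand-in for the actual statistic used in the analysis. The flat diagram of the uniform case has spread 0, while a single long row (a very skewed spectrum) has spread n.

```python
def column_heights(shape):
    """c_i = number of blocks in column i of the diagram with row lengths `shape`."""
    if not shape:
        return []
    return [sum(1 for row in shape if row >= i) for i in range(1, shape[0] + 1)]

def flatness_spread(shape, d):
    """Hypothetical flatness statistic: max minus min of the d row lengths,
    with the shape padded by empty rows up to d."""
    rows = list(shape) + [0] * (d - len(shape))
    return max(rows) - min(rows)
```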

Taking expectations. Goal: show that the expected value of the statistic of λ is different in the two cases. Problem: there are no explicit formulas for these expectations! For one of our lower bounds, we need to compute such an expectation exactly. How to take expectations?

Kerov's algebra of observables. The statistics we care about are "polynomial functions" of λ's parameters. There are several other families of polynomial functions of λ (and more!): one family gives "moments" of λ, one carries "geometric" information about λ, and one carries representation-theoretic information about λ. Various conversion formulas relate these families, and with them we can compute expectations: convert the polynomial of interest into a family whose expectations are known. Polynomials → expectations!

Conclusion. We import techniques from mathematics to compute "Schur-Weyl expectations", with applications to property testing. Lots of interesting open problems remain.