1
Quantum Spectrum Testing
Ryan O’Donnell, John Wright
Carnegie Mellon
2
A qudit. (Picture by Jorge Cham.)
3
one of: |1⟩, |2⟩, …, |d⟩, unknown orthonormal vectors in ℂ^d
4
Measuring the qudit… yields probabilistic info about which of |1⟩, …, |d⟩ it is.
5
Actual output: |1⟩ with prob. p_1, |2⟩ with prob. p_2, …, |d⟩ with prob. p_d.
6
Actual output: |1⟩ with prob. p_1, |2⟩ with prob. p_2, …, |d⟩ with prob. p_d. An unknown probability distribution over an unknown set of d orthonormal vectors. (Can represent it by the “density matrix” ρ = p_1 |1⟩⟨1| + p_2 |2⟩⟨2| + · · · + p_d |d⟩⟨d|, a PSD matrix with spectrum {p_1, p_2, …, p_d}. But we won’t emphasize this notation.)
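For concreteness, a minimal NumPy sketch of this setup (an illustration, not code from the talk; the random basis and Dirichlet-random probabilities are arbitrary choices): build ρ from unknown orthonormal vectors and probabilities, and check that its spectrum is the multiset {p_1, …, p_d}.

```python
import numpy as np

d = 3
rng = np.random.default_rng(0)

# Unknown orthonormal vectors |1>, ..., |d>: columns of a random unitary
# (QR decomposition of a complex Gaussian matrix).
Z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
Q, _ = np.linalg.qr(Z)

# Unknown probability distribution p_1, ..., p_d.
p = rng.dirichlet(np.ones(d))

# Density matrix rho = sum_i p_i |i><i|.
rho = sum(p[i] * np.outer(Q[:, i], Q[:, i].conj()) for i in range(d))

# The eigenvalues of rho are exactly the multiset {p_1, ..., p_d}.
print(np.sort(np.linalg.eigvalsh(rho)))
print(np.sort(p))                        # same numbers
```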
7
Actual output: |1⟩ with prob. p_1, |2⟩ with prob. p_2, …, |d⟩ with prob. p_d. An unknown probability distribution over an unknown set of d orthonormal vectors. It’s expensive, but you can hit the button n times. d = 3, n = 7 example: |1⟩|3⟩|2⟩|2⟩|1⟩|1⟩|3⟩ ∈ (ℂ^3)^{⊗7}, with prob. p_1 p_3 p_2 p_2 p_1 p_1 p_3.
8
Quantum Questions
#1: Quantum Tomography. Learn |1⟩, …, |d⟩ and p_1, …, p_d. (More precisely, ρ.) (Approximately, up to some ε, w.h.p.)
9
Quantum Questions
#1: Quantum Tomography. Learn |1⟩, …, |d⟩ and p_1, …, p_d.
#2: Spectrum Learning. Learn the multiset {p_1, …, p_d}. (Approximately, up to some ε, w.h.p.)
10
Quantum Questions
#1: Quantum Tomography. Learn |1⟩, …, |d⟩ and p_1, …, p_d.
#2: Spectrum Learning. Learn the multiset {p_1, …, p_d}.
#3: Spectrum Testing. Determine if {p_1, …, p_d} satisfies a certain property. E.g., is it {1/d, 1/d, …, 1/d}? (In the Property Testing model.)
11
Quantum Questions
#1: Quantum Tomography. Learn |1⟩, …, |d⟩ and p_1, …, p_d.
#2: Spectrum Learning. Learn the multiset {p_1, …, p_d}.
#3: Spectrum Testing. Determine if {p_1, …, p_d} satisfies a certain property. E.g., is it {1/d, 1/d, …, 1/d}? This one is called the “maximally mixed state”.
12
Quantum Questions
#1: Quantum Tomography. Learn |1⟩, …, |d⟩ and p_1, …, p_d.
#2: Spectrum Learning. Learn the multiset {p_1, …, p_d}.
#3: Spectrum Testing. Determine if {p_1, …, p_d} satisfies a certain property. E.g., does {p_1, …, p_d} have ≤ r nonzeros? This one is called “having rank ≤ r”.
13
Actual output: |1⟩ with prob. p_1, |2⟩ with prob. p_2, …, |d⟩ with prob. p_d. An unknown probability distribution. An unknown set of d orthonormal vectors.
14
Actual output: |1⟩ with prob. p_1, |2⟩ with prob. p_2, …, |d⟩ with prob. p_d. An unknown probability distribution. If the vectors |1⟩, |2⟩, …, |d⟩ are known, you can measure the outcomes exactly. Setup becomes equivalent to: Classical Distribution Learning/Testing.
15
Classical Distribution Questions
#1: Learn p_1, …, p_d (approximately, w.h.p.). [Folklore]: n = Θ(d) necessary & sufficient.
#2: Learn the multiset {p_1, …, p_d}. (If you only care about “symmetric” properties.) [RRSS08, Valiants]: n = Ω(d/log d) still necessary.
#3: Determine if {p_1, …, p_d} satisfies a certain property. E.g., is it (1/d, 1/d, …, 1/d)? [Paninski08]: n = Θ(√d/ε²) nec. & sufficient.
16
Quantum Questions & Answers
#1: Quantum Tomography. Learn |1⟩, …, |d⟩ and p_1, …, p_d. [Folklore]: n = Ω(d²) necessary. [FGLE12]: n = poly(d) sufficient. There have been claims that d² suffices… Stay tuned…
17
Quantum Questions & Answers
#2: Spectrum Learning. Learn the multiset {p_1, …, p_d}. [ARS88, KW01]: Suggested a natural algorithm. [HM02, CM06]: Showed it works with roughly O(d²/ε²) copies (up to a log factor). [O.-Wright’15]: Simpler analysis, gets n = O(d²/ε²). Shows the algorithm fails if n is asymptotically smaller. (But maybe another algorithm could work?)
18
Quantum Questions & Answers
#3: Spectrum Testing. (Main focus of our work.)
[CHW06]: Distinguishing Unif(d) := (1/d, 1/d, …, 1/d) from Unif(d/2): n = Θ(d) nec. & suff.
[Us]: Distinguishing Unif(d) from Unif(d−Δ): matching necessary & sufficient bounds (for 1 ≤ Δ ≤ d/2).
[Us]: (Analogue of Paninski’s theorem.) Distinguishing Unif(d) from ε-far-from-Unif(d): n = Θ(d/ε²) nec. & suff.
19
Plan for rest of the talk: Not going to prove any of the theorems. Just going to try to explain the setup. Will start by explaining quantum measurement.
20
Quantum Measurement in ℂ^d: The measurer selects an orthogonal decomposition of ℂ^d into subspaces, ℂ^d = S_1 ⊕ S_2 ⊕ · · · ⊕ S_m. If the unknown unit state vector is |v⟩, the measurer observes “S_j” with probability ||Proj_{S_j}(|v⟩)||².
21
Picture in ℝ² (really ℂ²): say this is the unknown |1⟩, and say we measure with the coordinate-axis decomposition S_1 ⊕ S_2. Observe S_1 with prob. 1/4, S_2 with prob. 3/4.
22
Same picture in ℝ², now with the unknown |1⟩ (p_1 = 1/3) and |2⟩ (p_2 = 2/3), measured with the coordinate-axis decomposition S_1 ⊕ S_2. Observe S_1 with prob. (1/3)·(1/4) + (2/3)·(3/4).
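A small NumPy sketch of this measurement rule (illustration only; the 60° angle for |1⟩ is an assumed choice that reproduces the 1/4 and 3/4 in the picture):

```python
import numpy as np

theta = np.pi / 3                               # assumed angle for the unknown |1>
v1 = np.array([np.cos(theta), np.sin(theta)])   # unknown |1>
v2 = np.array([-np.sin(theta), np.cos(theta)])  # unknown |2>, orthogonal to |1>
p = [1/3, 2/3]                                  # p_1, p_2

# Coordinate-axis decomposition R^2 = S_1 (+) S_2: projectors onto e_1 and e_2.
projectors = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]

def outcome_probs(v):
    # Pr[observe S_j] = || Proj_{S_j} |v> ||^2
    return [np.linalg.norm(P @ v) ** 2 for P in projectors]

print(outcome_probs(v1))   # ≈ [0.25, 0.75]
print(outcome_probs(v2))   # ≈ [0.75, 0.25]

# Mixed source: Pr[S_1] = p_1 * (1/4) + p_2 * (3/4) = 7/12.
print(sum(pi * outcome_probs(v)[0] for pi, v in zip(p, [v1, v2])))
```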
23
Note that if |1⟩, |2⟩ are known, we’d just measure with that decomposition: S_1 = span{|1⟩}, S_2 = span{|2⟩}. Get S_1 with prob. p_1, S_2 with prob. p_2. Exactly the classical setup.
24
Perhaps d = 3, n = 7, and the result of the experiment is |1⟩|3⟩|2⟩|2⟩|1⟩|1⟩|3⟩ ∈ (ℂ^3)^{⊗7}.
25
Perhaps d = 3, n = 7, and the result of the experiment is |1⟩|3⟩|2⟩|2⟩|1⟩|1⟩|3⟩ = |1322113⟩ ∈ (ℂ^3)^{⊗7}.
26
Perhaps d = 3, n = 7, and the result of the experiment is |1322113⟩ ∈ (ℂ^3)^{⊗7}. If you want, you can measure each qudit separately. However, it’s more effective to do one giant measurement on (ℂ^3)^{⊗7}.
27
n = 2. Fix also d = 3, for concreteness. Unknown state lies in (ℂ^3)^{⊗2}, a 9-dim. space: span(|11⟩, |12⟩, |13⟩, |21⟩, |22⟩, |23⟩, |31⟩, |32⟩, |33⟩). There’s a certain subspace of (ℂ^3)^{⊗2} called Sym^2(ℂ^3): the symmetric subspace. One definition: all vectors invariant under S_2 (snob’s notation for the permutations of {1,2}).
28
n = 2. Sym^2(ℂ^3): the symmetric subspace. An orthonormal basis for it: |11⟩, |22⟩, |33⟩, (|12⟩+|21⟩)/√2, (|13⟩+|31⟩)/√2, (|23⟩+|32⟩)/√2. So dim Sym^2(ℂ^3) = 6.
29
n = 2 idea: Measure w.r.t. the decomposition (ℂ^3)^{⊗2} = Sym^2(ℂ^3) ⊕ Sym^2(ℂ^3)^⊥. Problem (?): The basis |1⟩, |2⟩, |3⟩ is unknown, so how does the measurer specify it? No problem: Sym^2(ℂ^3) has a basis-free definition; it’s span{v ⊗ v : v ∈ ℂ^3}. (This measurement is also called the “SWAP test”.)
30
n = 2 idea: Measure w.r.t. the decomposition (ℂ^3)^{⊗2} = Sym^2(ℂ^3) ⊕ Sym^2(ℂ^3)^⊥. With prob. p_1², the state is |11⟩, and ||Proj_{Sym^2(ℂ^3)}(|11⟩)||² = 1.
31
n = 2 idea: Measure w.r.t. the decomposition (ℂ^3)^{⊗2} = Sym^2(ℂ^3) ⊕ Sym^2(ℂ^3)^⊥. With prob. p_1², the state is |11⟩, and ||Proj_{Sym^2(ℂ^3)}(|11⟩)||² = 1. With prob. p_1 p_2, the state is |12⟩, and ||Proj_{Sym^2(ℂ^3)}(|12⟩)||² = 1/2. Etc. Add it all up…
32
n = 2 idea: Measure w.r.t. the decomposition (ℂ^3)^{⊗2} = Sym^2(ℂ^3) ⊕ Sym^2(ℂ^3)^⊥. Pr[observing Sym^2(ℂ^3)] = 1/2 + (1/2)(p_1² + p_2² + p_3²). Minimized iff {p_1, p_2, p_3} = {1/3, 1/3, 1/3}. (Using this, can ε-test Unif(d) with O(d²/ε⁴) copies.)
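A quick NumPy check of this probability (an illustration; it uses the standard fact that (I + SWAP)/2 is the projector onto Sym^2(ℂ^d)):

```python
import numpy as np

d = 3
rng = np.random.default_rng(1)
p = rng.dirichlet(np.ones(d))            # some spectrum p_1, ..., p_d

# SWAP on (C^d)^{ox2}: SWAP |i>|j> = |j>|i>.
SWAP = np.zeros((d * d, d * d))
for i in range(d):
    for j in range(d):
        SWAP[j * d + i, i * d + j] = 1.0

P_sym = (np.eye(d * d) + SWAP) / 2       # projector onto Sym^2(C^d)

# Pr[observe Sym^2] = sum_{i,j} p_i p_j * || P_sym |i>|j> ||^2
pr = 0.0
for i in range(d):
    for j in range(d):
        e = np.zeros(d * d)
        e[i * d + j] = 1.0               # the product state |i>|j>
        pr += p[i] * p[j] * np.linalg.norm(P_sym @ e) ** 2

print(pr, 0.5 + 0.5 * np.sum(p ** 2))    # the two values agree
```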
33
n = 2 idea: Measure w.r.t. the decomposition (ℂ^3)^{⊗2} = Sym^2(ℂ^3) ⊕ Sym^2(ℂ^3)^⊥. Why just decompose into 6-dim + 3-dim? It can’t hurt to further decompose into nine 1-dimensional subspaces. But actually… it’s without loss of generality! To explain why, let’s detour to the classical case.
34
Detour: When testing/learning an S_d-invariant property of a classical probability distribution p_1, …, p_d, i.e., one depending only on {p_1, …, p_d}, you can without loss of generality throw away a lot of the information you see, and just remember the “orbit” under S_n × S_d.
35
Say we’re trying to test if the distribution p_1, …, p_d is Unif(d). (Or any property invariant under relabeling [d]; e.g., “support ≤ r”.) Typical sample when n = 20, d = 5 might be… 54423131423144554251. Intuition 1: Permuting the n positions doesn’t matter. Hence may as well only retain the histogram.
36
Say we’re trying to test if the distribution p_1, …, p_d is Unif(d). (Or any property invariant under relabeling [d]; e.g., “support ≤ r”.) Typical sample when n = 20, d = 5 might be… 54423131423144554251. [Histogram of the sample over symbols 1–5: counts 4, 3, 3, 6, 4.]
37
Say we’re trying to test if the distribution p_1, …, p_d is Unif(d). (Or any property invariant under relabeling [d]; e.g., “support ≤ r”.) Same n = 20, d = 5 sample and histogram as above. Intuition 2: Since the property is symmetric, permuting the d symbols doesn’t matter. Hence may as well sort the histogram.
38
Say we’re trying to test if the distribution p_1, …, p_d is Unif(d). (Or any property invariant under relabeling [d]; e.g., “support ≤ r”.) Same sample as above. [Sorted histogram of the sample: counts 6, 4, 4, 3, 3.]
39
Say we’re trying to test if the distribution p_1, …, p_d is Unif(d). (Or any property invariant under relabeling [d]; e.g., “support ≤ r”.) For the sample above: 1st most freq: 6. 2nd most freq: 4. 3rd most freq: 4. 4th most freq: 3. 5th most freq: 3.
40
Say we’re trying to test if the distribution p_1, …, p_d is Unif(d). (Or any property invariant under relabeling [d]; e.g., “support ≤ r”.) For the sample above: λ_1 := 1st most freq = 6; λ_2 := 2nd most freq = 4; λ_3 := 3rd most freq = 4; λ_4 := 4th most freq = 3; λ_5 := 5th most freq = 3. (A sorted histogram is AKA a Young diagram.)
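A two-line sanity check of this sorted histogram, for the sample shown on the slide (illustration):

```python
from collections import Counter

w = "54423131423144554251"                        # the n = 20, d = 5 sample
lam = sorted(Counter(w).values(), reverse=True)   # sorted histogram / Young diagram
print(lam)                                        # [6, 4, 4, 3, 3]
```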
41
Claim: An n-sample algorithm for testing an “S_d-invariant” property of p_1, …, p_d (one depending only on {p_1, …, p_d}) may ignore the full sample and just retain the sorted histogram (Young diagram). Formal proof: If the algorithm succeeds with high probability, one can check it also succeeds with equally high probability if it first blindly applies a random perm from S_n to the positions and a random perm from S_d to the symbols. But then, conditioned on the sorted histogram of the sample it sees, the sample is uniformly random among those with that sorted histogram. So an algorithm that only sees the sorted histogram can also succeed with equally high probability.
42
By the way, for a given sorted histogram / Young diagram λ = (λ_1, λ_2, …, λ_d), Pr[observing λ] is a nice symmetric polynomial in p_1, …, p_d depending on λ.
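For concreteness, one explicit form of this polynomial (a standard counting identity, not displayed on the slide): Pr[λ] = (n! / ∏_i λ_i!) · m_λ(p_1, …, p_d), where m_λ is the monomial symmetric polynomial. A small Monte Carlo check, with an arbitrary choice of n, d, p, and λ:

```python
import math
from collections import Counter
from itertools import permutations
import numpy as np

n, d = 6, 3
p = np.array([0.5, 0.3, 0.2])
target = (3, 2, 1)                       # a Young diagram with n boxes, <= d rows

# Formula: Pr[lambda] = (n! / prod_i lambda_i!) * m_lambda(p), where m_lambda
# sums p_1^{mu_1} ... p_d^{mu_d} over the distinct rearrangements mu of lambda.
coeff = math.factorial(n) / math.prod(math.factorial(k) for k in target)
m_lam = sum(math.prod(pi ** mi for pi, mi in zip(p, mu))
            for mu in set(permutations(target)))
print(coeff * m_lam)

# Monte Carlo estimate: sample words of length n and record the sorted histogram.
rng = np.random.default_rng(0)
T, hits = 100_000, 0
for _ in range(T):
    w = rng.choice(d, size=n, p=p)
    lam = tuple(sorted(Counter(w).values(), reverse=True))
    hits += (lam + (0,) * (d - len(lam)) == target)
print(hits / T)                          # approximately the same value
```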
43
End detour; back to n = 2 tests for the qudit’s spectrum.
44
n = 2 idea: Measure w.r.t. the decomposition (ℂ^3)^{⊗2} = Sym^2(ℂ^3) ⊕ Sym^2(ℂ^3)^⊥. Why is this without loss of generality? Permutations of the 2 copies shouldn’t matter. Arbitrary unitary transformations on ℂ^3 shouldn’t matter, because the algorithm should work for any unknown orthonormal |1⟩, |2⟩, |3⟩.
45
n = 2 idea: Measure w.r.t. the decomposition (ℂ^3)^{⊗2} = Sym^2(ℂ^3) ⊕ Sym^2(ℂ^3)^⊥. An algorithm may as well apply a random permutation π ∈ S_2 to the 2 copies, and a random unitary U ∈ U(3) to each qudit. Easy exercise: If the original state |v⟩|w⟩ had length ℓ in Sym^2(ℂ^3), the randomized state’s component in Sym^2(ℂ^3) will be a totally (Haar-)random vector of length ℓ.
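If you want to simulate this randomization, here is one standard recipe for a Haar-random unitary (an illustration: QR of a complex Gaussian matrix, with the phases of R’s diagonal fixed up):

```python
import numpy as np

def haar_unitary(d, rng):
    # QR of a complex Gaussian matrix; rescaling columns by the phases of
    # R's diagonal makes the distribution exactly Haar on U(d).
    Z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    Q, R = np.linalg.qr(Z)
    return Q * (np.diag(R) / np.abs(np.diag(R)))

rng = np.random.default_rng(2)
U = haar_unitary(3, rng)
print(np.allclose(U.conj().T @ U, np.eye(3)))   # True: U is unitary

# Applying U to each of the 2 copies acts as U (x) U on (C^3)^{ox2};
# this commutes with the Sym^2 / (Sym^2)-perp decomposition.
UU = np.kron(U, U)
```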
46
n = 2 idea: Measure w.r.t. the decomposition (ℂ^3)^{⊗2} = Sym^2(ℂ^3) ⊕ Sym^2(ℂ^3)^⊥. ∴ WLOG we needn’t further decompose Sym^2(ℂ^3). We actually need the same result for Sym^2(ℂ^3)^⊥, AKA Alt^2(ℂ^3) = {v ∈ (ℂ^3)^{⊗2} : πv = sgn(π)v ∀ π ∈ S_2}. It’s an equally easy exercise.
47
n = 3? Measure w.r.t. the decomposition (ℂ^3)^{⊗3} = Sym^3(ℂ^3) ⊕ Alt^3(ℂ^3)? Unfortunately, that’s not a decomposition: the dimensions are 27 vs. 10 + 1. What’s missing: a 16-dim. space called S^{(2,1)}(ℂ^3).
48
n = 3: Measure w.r.t. the decomposition (ℂ^3)^{⊗3} = Sym^3(ℂ^3) ⊕ Alt^3(ℂ^3) ⊕ S^{(2,1)}(ℂ^3). One can show that this is WLOG.
49
General n, d: Measure w.r.t. the decomposition (ℂ^d)^{⊗n} = Sym^n(ℂ^d) ⊕ Alt^n(ℂ^d) ⊕ ···stuff···. Yada yada yada about the groups S_n and U(d), yada yada yada about representation theory, yada yada yada about “Schur–Weyl duality”; I will give you the tl;dr.
50
General n, d: There’s an explicit subspace decomposition such that it’s WLOG to measure w.r.t. it. Interestingly, it’s indexed by Young diagrams: λ ∈ {diagrams with n boxes and at most d rows}. All of this was observed in [CHW06].
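A tiny generator for this index set (illustration; for n = 3, d = 3 it reproduces the three pieces from the previous slide):

```python
def young_diagrams(n, d, max_part=None):
    """Yield partitions of n with at most d rows, as tuples lambda_1 >= lambda_2 >= ..."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    if d == 0:
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in young_diagrams(n - first, d - 1, first):
            yield (first,) + rest

print(list(young_diagrams(3, 3)))
# [(3,), (2, 1), (1, 1, 1)]  <->  Sym^3, S^{(2,1)}, Alt^3
```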
51
Quantum spectrum testing: You’re testing properties of {p_1, …, p_d}. For n copies, WLOG you observe a random Young diagram with n boxes and at most d rows. For each such λ, Pr[observing λ] is an explicit symmetric polynomial in p_1, …, p_d. We can also describe this distribution on λ in a simple quantum-free way…
52
The distribution on λ: Pick w ~ [d]^n with its n symbols drawn i.i.d. according to p_1, …, p_d. Classical testing: you just get to see w. E.g., n = 20, d = 5: w = 54423131423144554251.
53
The distribution on λ: Pick w ~ [d]^n with its n symbols drawn i.i.d. according to p_1, …, p_d. Quantum testing: you see λ defined by…
λ_1 = longest incr. subseq of w
λ_1 + λ_2 = longest union of 2 incr. subseqs
λ_1 + λ_2 + λ_3 = longest union of 3 incr. subseqs
· · ·
λ_1 + λ_2 + λ_3 + · · · + λ_d = n
E.g., n = 20, d = 5: w = 54423131423144554251.
54
The distribution on λ: Pick w ~ [d]^n with its n symbols drawn i.i.d. according to p_1, …, p_d. Quantum testing: you see λ, given by the increasing-subsequence statistics above. E.g., n = 20, d = 5: w = 54423131423144554251. (Alternatively, you see “RSK”(w).)
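A sketch of how one could compute this λ for a given word (illustration): RSK row insertion, whose shape has the property (Greene’s theorem) that λ_1 + · · · + λ_k equals the longest union of k increasing subsequences. Since letters repeat, “increasing” here means weakly increasing.

```python
import bisect

def rsk_shape(word):
    """Row lengths of the RSK insertion tableau of `word` (rows kept weakly increasing)."""
    rows = []
    for x in word:
        for row in rows:
            i = bisect.bisect_right(row, x)   # leftmost entry strictly greater than x
            if i == len(row):
                row.append(x)                 # x fits at the end of this row
                break
            row[i], x = x, row[i]             # bump that entry down to the next row
        else:
            rows.append([x])                  # bumped all the way out: start a new row
    return [len(row) for row in rows]

w = [int(c) for c in "54423131423144554251"]
lam = rsk_shape(w)
print(lam, sum(lam))   # lambda_1 = longest weakly increasing subsequence; total = n = 20
```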
55
Quantum spectrum testing: You’re testing properties of {p_1, …, p_d}. For n copies, WLOG you observe a random Young diagram λ with n boxes and at most d rows. λ encodes the L.I.S. information of a random word. You now know everything you need to know. Go forth and prove testing upper/lower bounds!
56
Our subsequent techniques:
Kerov’s algebra of polynomials on Young diagrams
Robinson–Schensted–Knuth correspondence
Modified Frobenius coordinates and Maya notation
Fourier analysis over S_n
Gaussian Unitary Ensemble
Littlewood–Richardson structure constants
Okounkov–Olshanski Binomial Formula
Partition 2-quotients
Cyclic sieving phenomenon
I kind of line-by-lined all our proofs at some point. For more details, you might prefer to ask John… You know the setup; maybe you can simplify it all.
57
Thanks!