Introduction to Quantum Information Processing
QIC 710 / CS 667 / PH 767 / CO 681 / AM 871
Richard Cleve, DC 2117, cleve@cs.uwaterloo.ca
Lecture 16 (2011)
Distinguishing between two arbitrary quantum states
Holevo-Helstrom Theorem (1)

Theorem: for any two quantum states $\rho$ and $\sigma$, the optimal measurement procedure for distinguishing between them succeeds with probability $\tfrac{1}{2} + \tfrac{1}{4}\|\rho - \sigma\|_{\mathrm{tr}}$ (equal prior probabilities).

Proof* (the attainability part):
Since $\rho - \sigma$ is Hermitian, its eigenvalues are real.
Let $\Pi_+$ be the projector onto the positive eigenspaces.
Let $\Pi_-$ be the projector onto the non-positive eigenspaces.
Take the POVM measurement specified by $\Pi_+$ and $\Pi_-$, with the associations $\Pi_+ \mapsto \rho$ and $\Pi_- \mapsto \sigma$.

* The other direction of the theorem (optimality) is omitted here.
Holevo-Helstrom Theorem (2)

Claim: this succeeds with probability $\tfrac{1}{2} + \tfrac{1}{4}\|\rho - \sigma\|_{\mathrm{tr}}$.

The success probability is $p_s = \tfrac{1}{2}\mathrm{Tr}(\Pi_+\rho) + \tfrac{1}{2}\mathrm{Tr}(\Pi_-\sigma)$ and the failure probability is $p_f = \tfrac{1}{2}\mathrm{Tr}(\Pi_+\sigma) + \tfrac{1}{2}\mathrm{Tr}(\Pi_-\rho)$.

A key observation is $\mathrm{Tr}\big((\Pi_+ - \Pi_-)(\rho - \sigma)\big) = \|\rho - \sigma\|_{\mathrm{tr}}$.

Proof of Claim: therefore
$p_s - p_f = \tfrac{1}{2}\mathrm{Tr}\big((\Pi_+ - \Pi_-)(\rho - \sigma)\big) = \tfrac{1}{2}\|\rho - \sigma\|_{\mathrm{tr}}$.
From this, the result follows (since $p_s + p_f = 1$).
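The attainability argument above translates directly into a short numerical check. The following is a minimal sketch (assuming numpy; the function and variable names are illustrative, not from the lecture): it builds the projector onto the positive eigenspaces of $\rho-\sigma$, measures with the POVM $\{\Pi_+,\Pi_-\}$, and confirms that the success probability equals $\tfrac{1}{2}+\tfrac{1}{4}\|\rho-\sigma\|_{\mathrm{tr}}$.

```python
import numpy as np

def helstrom_success_probability(rho, sigma):
    """Optimal success probability for distinguishing rho from sigma (equal priors)."""
    delta = rho - sigma                          # Hermitian, so real eigenvalues
    evals, evecs = np.linalg.eigh(delta)
    pos = evecs[:, evals > 0]                    # eigenvectors with positive eigenvalue
    pi_plus = pos @ pos.conj().T                 # projector onto the positive eigenspaces
    pi_minus = np.eye(delta.shape[0]) - pi_plus  # projector onto the non-positive eigenspaces
    # Guess "rho" on outcome pi_plus and "sigma" on outcome pi_minus
    p_success = 0.5 * np.trace(pi_plus @ rho).real + 0.5 * np.trace(pi_minus @ sigma).real
    trace_norm = np.sum(np.abs(evals))           # ||rho - sigma||_tr = Tr|rho - sigma|
    assert np.isclose(p_success, 0.5 + 0.25 * trace_norm)
    return p_success

# Example: |0><0| versus the maximally mixed qubit state gives success probability 0.75
rho = np.array([[1, 0], [0, 0]], dtype=complex)
sigma = np.eye(2, dtype=complex) / 2
print(helstrom_success_probability(rho, sigma))
```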
Purifications & Uhlmann's Theorem

Any density matrix $\rho$ can be obtained by tracing out part of some larger pure state $|\psi\rangle$: such a $|\psi\rangle$ is a purification of $\rho$.

Uhlmann's Theorem*: the fidelity between $\rho$ and $\sigma$ is the maximum of $|\langle\varphi|\psi\rangle|$ taken over all purifications $|\psi\rangle$ of $\rho$ and $|\varphi\rangle$ of $\sigma$.

Recall our previous definition of fidelity as $F(\rho,\sigma) = \mathrm{Tr}\sqrt{\rho^{1/2}\sigma\rho^{1/2}} = \|\rho^{1/2}\sigma^{1/2}\|_{\mathrm{tr}}$.

* See [Nielsen & Chuang, pp. 410-411] for a proof of this.
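As an illustration of purifications, the following sketch (numpy assumed; names illustrative, not from the lecture) builds one standard purification $|\psi\rangle=\sum_i \sqrt{\lambda_i}\,|v_i\rangle|\bar v_i\rangle$ from the spectral decomposition of $\rho$ and checks that tracing out the second register recovers $\rho$.

```python
import numpy as np

def purify(rho):
    """Return a purification |psi> in C^d (x) C^d of the d-dimensional state rho."""
    evals, evecs = np.linalg.eigh(rho)
    d = rho.shape[0]
    psi = np.zeros(d * d, dtype=complex)
    for lam, v in zip(evals, evecs.T):
        if lam > 0:                            # skip zero (or numerically negative) eigenvalues
            psi += np.sqrt(lam) * np.kron(v, v.conj())
    return psi

def partial_trace_B(psi, d):
    """Trace out the second d-dimensional register of the pure state |psi>."""
    m = psi.reshape(d, d)                      # amplitudes indexed by (register A, register B)
    return m @ m.conj().T                      # rho_A = Tr_B |psi><psi|

rho = np.array([[0.75, 0.25], [0.25, 0.25]], dtype=complex)
psi = purify(rho)
assert np.allclose(partial_trace_B(psi, 2), rho)
```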
Relationships between fidelity and trace distance

$1 - F(\rho,\sigma) \;\le\; \tfrac{1}{2}\|\rho - \sigma\|_{\mathrm{tr}} \;\le\; \sqrt{1 - F(\rho,\sigma)^2}$

See [Nielsen & Chuang, pp. 415-416] for more details.
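The following rough numerical check (numpy and scipy assumed; the random-state construction and names are illustrative) evaluates the fidelity via $F(\rho,\sigma)=\mathrm{Tr}\sqrt{\rho^{1/2}\sigma\rho^{1/2}}$ and verifies the sandwich inequality above on random pairs of density matrices.

```python
import numpy as np
from scipy.linalg import sqrtm

def random_density_matrix(d, rng):
    a = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = a @ a.conj().T                       # positive semidefinite by construction
    return rho / np.trace(rho)                 # normalize to unit trace

def fidelity(rho, sigma):
    s = sqrtm(rho)
    return np.trace(sqrtm(s @ sigma @ s)).real

def trace_distance(rho, sigma):
    evals = np.linalg.eigvalsh(rho - sigma)
    return 0.5 * np.sum(np.abs(evals))         # (1/2) ||rho - sigma||_tr

rng = np.random.default_rng(0)
for _ in range(100):
    rho, sigma = random_density_matrix(3, rng), random_density_matrix(3, rng)
    F, D = fidelity(rho, sigma), trace_distance(rho, sigma)
    assert 1 - F <= D + 1e-9 and D <= np.sqrt(1 - F**2) + 1e-9
```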
Entropy and compression
Shannon Entropy

Let $p = (p_1,\ldots,p_d)$ be a probability distribution on a set $\{1,\ldots,d\}$.

Then the (Shannon) entropy of $p$ is $H(p_1,\ldots,p_d) = -\sum_{j=1}^{d} p_j \log p_j$.

Intuitively, this turns out to be a good measure of "how random" the distribution $p$ is: for the uniform distribution $H(p) = \log d$, whereas for a deterministic (point) distribution $H(p) = 0$.

Operationally, $H(p)$ is the number of bits needed to store the outcome (in a sense that will be made formal shortly).
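A minimal illustration (numpy assumed; names illustrative) of the entropy formula on the two extreme cases mentioned above:

```python
import numpy as np

def shannon_entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                     # the convention 0 log 0 = 0
    return -np.sum(p * np.log2(p))   # entropy in bits

d = 8
print(shannon_entropy(np.ones(d) / d))           # uniform distribution: log2(d) = 3 bits
print(shannon_entropy([1.0] + [0.0] * (d - 1)))  # deterministic distribution: 0 bits
```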
Von Neumann Entropy

For a density matrix $\rho$, it turns out that $S(\rho) = -\mathrm{Tr}(\rho \log \rho)$ is a good quantum analog of entropy.

Note: $S(\rho) = H(p_1,\ldots,p_d)$, where $p_1,\ldots,p_d$ are the eigenvalues of $\rho$ (with multiplicity).

Operationally, $S(\rho)$ is the number of qubits needed to store $\rho$ (in a sense that will be made formal later on).

Both the classical and quantum compression results pertain to the case of large blocks of $n$ independent instances of data: probability distribution $p^{n}$ in the classical case, and quantum state $\rho^{\otimes n}$ in the quantum case.
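A short sketch (numpy assumed; names illustrative) computing $S(\rho)$ from the eigenvalues, which makes the relation $S(\rho) = H(\text{eigenvalues of }\rho)$ explicit:

```python
import numpy as np

def von_neumann_entropy(rho):
    evals = np.linalg.eigvalsh(rho)          # eigenvalues of the density matrix
    evals = evals[evals > 1e-12]             # drop zero eigenvalues (0 log 0 = 0)
    return -np.sum(evals * np.log2(evals))   # = Shannon entropy of the eigenvalues

print(von_neumann_entropy(np.eye(2) / 2))               # maximally mixed qubit: 1.0
print(von_neumann_entropy(np.array([[1, 0], [0, 0]])))  # pure state: 0.0
```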
Classical compression (1)

Let $p = (p_1,\ldots,p_d)$ be a probability distribution on a set $\{1,\ldots,d\}$ from which $n$ independent instances are sampled: $(j_1,\ldots,j_n) \in \{1,\ldots,d\}^n$ ($d^n$ possibilities, $n\log d$ bits to specify one).

Theorem*: for all $\varepsilon > 0$, for sufficiently large $n$, there is a scheme that compresses the specification to $n(H(p)+\varepsilon)$ bits while introducing an error with probability at most $\varepsilon$.

A nice way to prove the theorem is based on two cleverly defined random variables...

Intuitively, there is a subset of $\{1,\ldots,d\}^n$, called the "typical sequences", that has size $2^{n(H(p)+\varepsilon)}$ and probability $1-\varepsilon$.

* "Plain vanilla" version that ignores, for example, the tradeoffs between $n$ and $\varepsilon$.
Classical compression (2)

Define the random variable $f:\{1,\ldots,d\}\to\mathbb{R}$ as $f(j) = -\log p_j$.

Note that $E[f] = \sum_{j=1}^{d} p_j(-\log p_j) = H(p)$.

Define $g:\{1,\ldots,d\}^n\to\mathbb{R}$ as $g(j_1,\ldots,j_n) = \frac{1}{n}\sum_{i=1}^{n} f(j_i)$.

Also, $g(j_1,\ldots,j_n) = -\frac{1}{n}\log\big(p_{j_1}p_{j_2}\cdots p_{j_n}\big) = -\frac{1}{n}\log \Pr[(j_1,\ldots,j_n)]$.

Thus $E[g] = H(p)$.
Classical compression (3)

By standard results in statistics, as $n\to\infty$, the observed value of $g(j_1,\ldots,j_n)$ approaches its expected value, $H(p)$.

More formally, call $(j_1,\ldots,j_n)\in\{1,\ldots,d\}^n$ $\varepsilon$-typical if $|g(j_1,\ldots,j_n) - H(p)| \le \varepsilon$.

Then the result is that, for all $\varepsilon > 0$, for sufficiently large $n$, $\Pr[(j_1,\ldots,j_n) \text{ is } \varepsilon\text{-typical}] \ge 1-\varepsilon$.

We can also bound the number of these $\varepsilon$-typical sequences: by definition, each such sequence has probability at least $2^{-n(H(p)+\varepsilon)}$, therefore there can be at most $2^{n(H(p)+\varepsilon)}$ such sequences.
Classical compression (4)

In summary, the compression procedure is as follows.

The input data is $(j_1,\ldots,j_n)\in\{1,\ldots,d\}^n$, each component independently sampled according to the probability distribution $p=(p_1,\ldots,p_d)$.

The compression procedure is to leave $(j_1,\ldots,j_n)$ intact if it is $\varepsilon$-typical and otherwise change it to some fixed $\varepsilon$-typical sequence, say, $(j,\ldots,j)$ (which will result in an error).

Since there are at most $2^{n(H(p)+\varepsilon)}$ $\varepsilon$-typical sequences, the data can then be converted into $n(H(p)+\varepsilon)$ bits.

The error probability is at most $\varepsilon$, the probability of an atypical input arising.
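A toy simulation of this procedure (numpy assumed; the distribution, block length, and $\varepsilon$ are illustrative choices, not from the lecture) estimates the error probability, i.e., the chance that an atypical block arises:

```python
import numpy as np

rng = np.random.default_rng(1)
p = np.array([0.7, 0.2, 0.1])                   # distribution on {0, 1, 2}
H = -np.sum(p * np.log2(p))                     # Shannon entropy in bits
n, eps = 2000, 0.05                             # block length and epsilon

def is_typical(seq):
    g = -np.mean(np.log2(p[seq]))               # g(j_1,...,j_n) = -(1/n) log p_{j_1}...p_{j_n}
    return abs(g - H) <= eps                    # the epsilon-typical condition

# Empirical error probability: the scheme fails exactly when the input block is atypical
trials = 200
errors = sum(not is_typical(rng.choice(len(p), size=n, p=p)) for _ in range(trials))
print("H(p) =", H, " compressed length ~", n * (H + eps), "bits (raw:", n * np.log2(len(p)), ")")
print("empirical error rate:", errors / trials)  # small for large n
```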
Quantum compression (1)

The scenario: $n$ independent instances of a $d$-dimensional state are randomly generated according to some distribution:
$|\varphi_1\rangle$ with probability $p_1$, ..., $|\varphi_r\rangle$ with probability $p_r$.

Example: $|0\rangle$ with probability $\tfrac{1}{2}$, $|{+}\rangle$ with probability $\tfrac{1}{2}$.

Goal: to "compress" this into as few qubits as possible so that the original state can be reconstructed with small error in the following sense...

$\varepsilon$-good: no procedure can distinguish between these two states
(a) compressing and then uncompressing the data,
(b) the original data left as is,
with probability more than $\tfrac{1}{2} + \tfrac{1}{4}\varepsilon$.
Quantum compression (2)

Express $\rho$ in its eigenbasis as $\rho = q_1|\psi_1\rangle\langle\psi_1| + \cdots + q_d|\psi_d\rangle\langle\psi_d|$, and define $q = (q_1,\ldots,q_d)$.

Theorem: for all $\varepsilon > 0$, for sufficiently large $n$, there is a scheme that compresses the data to $n(S(\rho)+\varepsilon)$ qubits and is $2\sqrt{\varepsilon}$-good.

For the aforementioned example, $0.6\,n$ qubits suffice.

The compression method: with respect to this basis, we will define an $\varepsilon$-typical subspace of dimension $2^{n(S(\rho)+\varepsilon)} = 2^{n(H(q)+\varepsilon)}$.
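A quick check (numpy assumed; names illustrative) that the example source $\{|0\rangle$ w.p. $\tfrac{1}{2}$, $|{+}\rangle$ w.p. $\tfrac{1}{2}\}$ has $S(\rho)\approx 0.60$, matching the remark that about $0.6\,n$ qubits suffice:

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
ketplus = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho = 0.5 * np.outer(ket0, ket0.conj()) + 0.5 * np.outer(ketplus, ketplus.conj())

q = np.linalg.eigvalsh(rho)           # eigenvalues (q_1, ..., q_d)
S = -np.sum(q * np.log2(q))           # S(rho) = H(q)
print(q, S)                           # eigenvalues ~ (0.146, 0.854), S ~ 0.60
```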
Quantum compression (3)

The $\varepsilon$-typical subspace is the one spanned by $|\psi_{j_1}\rangle\otimes\cdots\otimes|\psi_{j_n}\rangle$, where $(j_1,\ldots,j_n)$ is $\varepsilon$-typical with respect to $(q_1,\ldots,q_d)$.

Define $\Pi_{\mathrm{typ}}$ as the projector onto the $\varepsilon$-typical subspace.

By the same argument as in the classical case, the subspace has dimension at most $2^{n(S(\rho)+\varepsilon)}$ and $\mathrm{Tr}(\Pi_{\mathrm{typ}}\,\rho^{\otimes n}) \ge 1-\varepsilon$.

This is because $\rho^{\otimes n}$ is the density matrix of $n$ independent samples from the mixture $|\psi_1\rangle$ with probability $q_1$, ..., $|\psi_d\rangle$ with probability $q_d$.
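Because $\rho^{\otimes n}$ is diagonal in the tensored eigenbasis, $\mathrm{Tr}(\Pi_{\mathrm{typ}}\,\rho^{\otimes n})$ is just the total $q$-probability of the $\varepsilon$-typical sequences, so it can be evaluated classically. The sketch below (numpy assumed; the block length and $\varepsilon$ are illustrative choices and the eigenvalues of the running example are rounded) does this for small $n$:

```python
import numpy as np
from itertools import product

q = np.array([0.8536, 0.1464])                 # eigenvalues of the example rho (approx.)
S = -np.sum(q * np.log2(q))                    # S(rho) = H(q) ~ 0.60
n, eps = 12, 0.3                               # small n, generous eps, for an exhaustive count

weight, dim = 0.0, 0
for seq in product(range(len(q)), repeat=n):   # enumerate all d^n eigenbasis sequences
    prob = np.prod(q[list(seq)])               # q_{j_1} * ... * q_{j_n}
    if abs(-np.log2(prob) / n - S) <= eps:     # the epsilon-typical condition
        weight += prob
        dim += 1

print("Tr(Pi_typ rho^(x)n) ~", weight)         # >= 1 - eps once n is large enough
print("typical dimension:", dim, "<= 2^(n(S+eps)) ~", round(2 ** (n * (S + eps))))
```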
Quantum compression (4)

Proof that the scheme is $2\sqrt{\varepsilon}$-good: (This is to be inserted)