
Slide 1: Introduction to Quantum Information Processing
QIC 710 / CS 768 / PH 767 / CO 681 / AM 871
Richard Cleve, QNC 3129, cleve@uwaterloo.ca
Lecture 18 (2014)

Slide 2: Entropy and compression

Slide 3: Shannon entropy

Let $p = (p_1, \ldots, p_d)$ be a probability distribution on a set $\{1, \ldots, d\}$. Then the (Shannon) entropy of $p$ is
\[ H(p_1, \ldots, p_d) = \sum_{j=1}^{d} p_j \log\!\left(\frac{1}{p_j}\right). \]
Intuitively, this turns out to be a good measure of "how random" the distribution $p$ is: for the uniform distribution $H(p) = \log d$, whereas for a point distribution $H(p) = 0$.

Operationally, $H(p)$ is the number of bits needed to store the outcome (in a sense that will be made formal shortly).
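A quick numerical check of the two extreme cases above (a minimal illustrative sketch):

```python
import numpy as np

def shannon_entropy(p):
    """H(p) in bits, with the convention 0 * log(1/0) = 0."""
    p = np.asarray(p, dtype=float)
    nz = p[p > 0]                      # drop zero-probability outcomes
    return float(-np.sum(nz * np.log2(nz)))

print(shannon_entropy([0.25] * 4))     # uniform on d = 4 outcomes: log2(4) = 2.0
print(shannon_entropy([1, 0, 0, 0]))   # point distribution: 0.0
```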

Slide 4: Von Neumann entropy

For a density matrix $\rho$, it turns out that $S(\rho) = -\mathrm{Tr}(\rho \log \rho)$ is a good quantum analogue of entropy.

Note: $S(\rho) = H(p_1, \ldots, p_d)$, where $p_1, \ldots, p_d$ are the eigenvalues of $\rho$ (with multiplicity).

Operationally, $S(\rho)$ is the number of qubits needed to store $\rho$ (in a sense that will be made formal later on).

Both the classical and quantum compression results pertain to the case of large blocks of $n$ independent instances of data: the probability distribution $p^{\otimes n}$ in the classical case, and the quantum state $\rho^{\otimes n}$ in the quantum case.
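The note above gives a direct way to compute $S(\rho)$: diagonalize and take the Shannon entropy of the spectrum. A minimal sketch (the example matrix is an arbitrary illustrative choice):

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho log rho), via the eigenvalues of rho."""
    eigs = np.linalg.eigvalsh(rho)     # rho is Hermitian
    nz = eigs[eigs > 1e-12]            # 0 * log(1/0) = 0 convention
    return float(-np.sum(nz * np.log2(nz)))

rho = np.array([[0.7, 0.2],
                [0.2, 0.3]])           # a valid one-qubit density matrix
print(von_neumann_entropy(rho))        # = H of rho's eigenvalues (~0.755)
```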

Slide 5: Classical compression (1)

Let $p = (p_1, \ldots, p_d)$ be a probability distribution on a set $\{1, \ldots, d\}$ from which $n$ independent instances are sampled: $(j_1, \ldots, j_n) \in \{1, \ldots, d\}^n$ ($d^n$ possibilities, $n \log d$ bits to specify one).

Intuitively, there is a subset of $\{1, \ldots, d\}^n$, called the "typical sequences", that has size $2^{n(H(p)+\epsilon)}$ and probability $1 - \epsilon$.

Theorem*: for all $\epsilon > 0$, for sufficiently large $n$, there is a scheme that compresses the specification to $n(H(p) + \epsilon)$ bits while introducing an error with probability at most $\epsilon$.

(* A "plain vanilla" version that ignores, for example, the tradeoffs between $n$ and $\epsilon$.)

For example, an $n$-bit binary string with each bit distributed as $\Pr[0] = 0.9$ and $\Pr[1] = 0.1$ can be compressed to $\approx 0.47n$ bits.
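The $\approx 0.47n$ figure is just $n \cdot H(0.9, 0.1)$; a one-line check:

```python
import numpy as np

p = np.array([0.9, 0.1])
print(-np.sum(p * np.log2(p)))   # H(0.9, 0.1) ~ 0.469 bits per symbol
```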

Slide 6: Classical compression (2)

A nice way to prove the theorem is based on two cleverly defined random variables.

Define the random variable $f : \{1, \ldots, d\} \to \mathbf{R}$ as $f(j) = -\log p_j$. Note that
\[ \mathrm{E}[f] = \sum_{j=1}^{d} p_j(-\log p_j) = H(p). \]

Define $g : \{1, \ldots, d\}^n \to \mathbf{R}$ as
\[ g(j_1, \ldots, j_n) = \frac{1}{n}\sum_{i=1}^{n} f(j_i). \]
Thus $\mathrm{E}[g] = H(p)$.

Also,
\[ g(j_1, \ldots, j_n) = -\frac{1}{n}\log(p_{j_1} p_{j_2} \cdots p_{j_n}), \]
so
\[ p_{j_1} p_{j_2} \cdots p_{j_n} = 2^{-n\, g(j_1, \ldots, j_n)}. \]
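Since $g$ is an average of $n$ i.i.d. copies of $f$, its observed value concentrates around $H(p)$. A small simulation illustrating this (a sketch, reusing the two-outcome $p$ from the example):

```python
import numpy as np

rng = np.random.default_rng(0)
p = np.array([0.9, 0.1])
H = -np.sum(p * np.log2(p))                  # ~0.469

def g(sample):
    """g(j_1,...,j_n) = -(1/n) sum_i log2 p_{j_i}."""
    return -np.mean(np.log2(p[sample]))

for n in (10, 100, 10_000):
    sample = rng.choice(len(p), size=n, p=p)
    print(n, g(sample))                      # approaches H(p) as n grows
```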

Slide 7: Classical compression (3)

By standard results in statistics, as $n \to \infty$, the observed value of $g(j_1, \ldots, j_n)$ approaches its expected value, $H(p)$.

More formally, call $(j_1, \ldots, j_n) \in \{1, \ldots, d\}^n$ $\epsilon$-typical if
\[ |g(j_1, \ldots, j_n) - H(p)| \le \epsilon, \]
that is, if $2^{-n(H(p)+\epsilon)} \le p_{j_1} \cdots p_{j_n} \le 2^{-n(H(p)-\epsilon)}$.

Then the result is that, for all $\epsilon > 0$, for sufficiently large $n$,
\[ \Pr[(j_1, \ldots, j_n) \text{ is } \epsilon\text{-typical}] \ge 1 - \epsilon. \]

We can also bound the number of these $\epsilon$-typical sequences: by definition, each such sequence has probability $\ge 2^{-n(H(p)+\epsilon)}$, therefore there can be at most $2^{n(H(p)+\epsilon)}$ such sequences.
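For small $n$ the typical set can be enumerated by brute force; the sketch below (illustrative parameters, continuing the $p = (0.9, 0.1)$ example) checks the counting bound. Note that $n = 20$ is far from "sufficiently large", so the $1 - \epsilon$ probability bound is only approached at this size:

```python
import itertools
import numpy as np

p = np.array([0.9, 0.1])
H = -np.sum(p * np.log2(p))
eps, n = 0.3, 20

def is_typical(seq):
    g = -np.mean(np.log2(p[list(seq)]))
    return abs(g - H) <= eps

typical = [s for s in itertools.product(range(2), repeat=n) if is_typical(s)]
prob = sum(np.prod(p[list(s)]) for s in typical)
print(len(typical), 2 ** (n * (H + eps)))  # 1350 <= ~42600: count bound holds
print(prob)                                # ~0.75 probability of typical set
```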

Slide 8: Classical compression (4)

In summary, the compression procedure is as follows.

The input data is $(j_1, \ldots, j_n) \in \{1, \ldots, d\}^n$, each coordinate independently sampled according to the probability distribution $p = (p_1, \ldots, p_d)$.

The compression procedure is to leave $(j_1, \ldots, j_n)$ intact if it is $\epsilon$-typical and otherwise change it to some fixed $\epsilon$-typical sequence, say $(j_k, \ldots, j_k)$ (which will result in an error).

Since there are at most $2^{n(H(p)+\epsilon)}$ $\epsilon$-typical sequences, the data can then be converted into $n(H(p) + \epsilon)$ bits.

The error probability is at most $\epsilon$: the probability that an input arises that is not $\epsilon$-typical.
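A toy encoder/decoder in this spirit, reusing the `typical` list from the previous sketch (each typical sequence is encoded by its index, so $\lceil \log_2 1350 \rceil = 11$ bits instead of 20):

```python
# Encode each typical sequence by its position in the enumerated list;
# atypical inputs are silently mapped to typical[0], which is exactly
# where the (at most eps, for large n) error probability comes from.
index = {s: i for i, s in enumerate(typical)}

def compress(seq):
    return index.get(tuple(seq), 0)

def decompress(i):
    return typical[i]
```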

Slide 9: Quantum compression (1)

The scenario: $n$ independent instances of a $d$-dimensional state are randomly generated according to some distribution:
\[ |\varphi_1\rangle \text{ with probability } p_1,\; \ldots,\; |\varphi_r\rangle \text{ with probability } p_r. \]

Example: $|0\rangle$ with probability $\tfrac{1}{2}$, and $|+\rangle$ with probability $\tfrac{1}{2}$.

Goal: to "compress" this into as few qubits as possible so that the original state can be reconstructed "with small error".

A formal definition of the notion of error is in terms of being $\epsilon$-good: no procedure can distinguish between (a) compressing and then uncompressing the data and (b) the original data left as is, with probability more than $\tfrac{1}{2} + \tfrac{1}{4}\epsilon$.

Slide 10: Quantum compression (2)

Theorem: for all $\epsilon > 0$, for sufficiently large $n$, there is a scheme that compresses the data to $n(S(\rho) + \epsilon)$ qubits and is $\sqrt{2\epsilon}$-good, where
\[ \rho = \sum_{i=1}^{r} p_i\, |\varphi_i\rangle\langle\varphi_i|. \]
For the aforementioned example, $\approx 0.6n$ qubits suffice.

The compression method: express $\rho$ in its eigenbasis as
\[ \rho = \sum_{j=1}^{d} q_j\, |\psi_j\rangle\langle\psi_j|. \]
With respect to this basis, we will define an $\epsilon$-typical subspace of dimension $2^{n(S(\rho)+\epsilon)} = 2^{n(H(q)+\epsilon)}$.
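Where the $\approx 0.6n$ comes from: for the example, $\rho = \tfrac{1}{2}|0\rangle\langle 0| + \tfrac{1}{2}|+\rangle\langle +|$ has eigenvalues $\cos^2(\pi/8)$ and $\sin^2(\pi/8)$, giving $S(\rho) \approx 0.6009$. A quick check:

```python
import numpy as np

ket0 = np.array([1.0, 0.0])
ketp = np.array([1.0, 1.0]) / np.sqrt(2)
rho = 0.5 * np.outer(ket0, ket0) + 0.5 * np.outer(ketp, ketp)

q = np.linalg.eigvalsh(rho)        # ~[0.1464, 0.8536] = sin^2, cos^2 of pi/8
print(q, -np.sum(q * np.log2(q)))  # S(rho) ~ 0.6009
```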

Slide 11: Quantum compression (3)

The $\epsilon$-typical subspace is the one spanned by the states
\[ |\psi_{j_1}\rangle \otimes \cdots \otimes |\psi_{j_n}\rangle, \]
where $(j_1, \ldots, j_n)$ is $\epsilon$-typical with respect to $(q_1, \ldots, q_d)$.

Define $\Pi_{\mathrm{typ}}$ as the projector onto the $\epsilon$-typical subspace.

By the same argument as in the classical case, the subspace has dimension $\le 2^{n(S(\rho)+\epsilon)}$ and $\mathrm{Tr}(\Pi_{\mathrm{typ}}\, \rho^{\otimes n}) \ge 1 - \epsilon$.

Why? Because $\rho$ is also the density matrix of the mixture $|\psi_1\rangle$ with probability $q_1, \ldots, |\psi_d\rangle$ with probability $q_d$, and $\mathrm{Tr}(\Pi_{\mathrm{typ}}\, \rho^{\otimes n})$ is the probability that a sample from $q^{\otimes n}$ is $\epsilon$-typical.
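A brute-force construction of $\Pi_{\mathrm{typ}}$ for small $n$, continuing from the `rho` defined above (illustrative parameters; with $n = 8$ and $\epsilon = 0.4$ the trace already comes out around 0.9):

```python
import itertools
from functools import reduce
import numpy as np

q, psi = np.linalg.eigh(rho)           # eigenvalues q_j, eigenvectors |psi_j>
S = -np.sum(q * np.log2(q))
eps, n = 0.4, 8

Pi = np.zeros((2 ** n, 2 ** n))
for J in itertools.product(range(2), repeat=n):
    g = -np.mean(np.log2(q[list(J)]))
    if abs(g - S) <= eps:                          # (j_1,...,j_n) eps-typical
        v = reduce(np.kron, [psi[:, j] for j in J])
        Pi += np.outer(v, v)                       # add |psi_J><psi_J|

rho_n = reduce(np.kron, [rho] * n)
print(np.trace(Pi @ rho_n))            # ~0.90, and -> 1 as n grows
```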

Slide 12: Quantum compression (4)

Calculation of the "expected fidelity" for our actual mixture, with the abbreviations $p_I = p_{i_1} \cdots p_{i_n}$ and $|\varphi_I\rangle = |\varphi_{i_1}\rangle \otimes \cdots \otimes |\varphi_{i_n}\rangle$ for $I = (i_1, \ldots, i_n)$:
\[ \sum_I p_I \langle\varphi_I|\,\Pi_{\mathrm{typ}}\,|\varphi_I\rangle = \mathrm{Tr}\!\left(\Pi_{\mathrm{typ}} \sum_I p_I\, |\varphi_I\rangle\langle\varphi_I|\right) = \mathrm{Tr}(\Pi_{\mathrm{typ}}\, \rho^{\otimes n}) \ge 1 - \epsilon. \]

Does this mean that the scheme is $\epsilon'$-good for some $\epsilon'$?

Slide 13: Quantum compression (5)

The true data is of the form $|\varphi_I\rangle\langle\varphi_I|$, where the index $I$ is generated with probability $p_I$.

The approximate data is of the form
\[ \frac{\Pi_{\mathrm{typ}}\,|\varphi_I\rangle\langle\varphi_I|\,\Pi_{\mathrm{typ}}}{\langle\varphi_I|\,\Pi_{\mathrm{typ}}\,|\varphi_I\rangle}, \]
where $I$ is generated with probability $p_I$ and $\langle\varphi_I|\,\Pi_{\mathrm{typ}}\,|\varphi_I\rangle$ is the normalization factor.

The above two states are at least as hard to distinguish as these two, which also reveal $I$:
\[ \sum_I p_I\, |I\rangle\langle I| \otimes |\varphi_I\rangle\langle\varphi_I| \quad \text{vs.} \quad \sum_I p_I\, |I\rangle\langle I| \otimes \frac{\Pi_{\mathrm{typ}}|\varphi_I\rangle\langle\varphi_I|\Pi_{\mathrm{typ}}}{\langle\varphi_I|\Pi_{\mathrm{typ}}|\varphi_I\rangle}. \]

Fidelity: $F \ge \sum_I p_I \sqrt{\langle\varphi_I|\,\Pi_{\mathrm{typ}}\,|\varphi_I\rangle} \ge \sum_I p_I \langle\varphi_I|\,\Pi_{\mathrm{typ}}\,|\varphi_I\rangle \ge 1 - \epsilon$.

Trace distance: $\le \sqrt{1 - F^2} \le \sqrt{1 - (1-\epsilon)^2} \le \sqrt{2\epsilon}$.

Therefore the scheme is $\sqrt{2\epsilon}$-good.
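As a final sanity check, continuing the $n$-qubit sketch above and comparing only the average states (not the full ensembles): for $\sigma = \Pi_{\mathrm{typ}}\rho^{\otimes n}\Pi_{\mathrm{typ}} / \mathrm{Tr}(\Pi_{\mathrm{typ}}\rho^{\otimes n})$ one can show $F(\rho^{\otimes n}, \sigma) = \sqrt{\mathrm{Tr}(\Pi_{\mathrm{typ}}\rho^{\otimes n})}$, so the trace distance obeys the same kind of bound:

```python
# Continuing from Pi and rho_n above: sigma is rho_n projected onto the
# typical subspace and renormalized; F(rho_n, sigma) = sqrt(t) here.
t = np.trace(Pi @ rho_n)
sigma = Pi @ rho_n @ Pi / t
trace_dist = 0.5 * np.sum(np.abs(np.linalg.eigvalsh(rho_n - sigma)))
print(trace_dist, np.sqrt(1 - t), np.sqrt(2 * (1 - t)))
# trace_dist <= sqrt(1 - F^2) = sqrt(1 - t) <= sqrt(2 * (1 - t))
```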

