
1 Gentle tomography and efficient universal data compression
Charlie Bennett, Aram Harrow, Seth Lloyd
Caltech IQI, Jan 14, 2003

2 Classical data compression
Asymptotically optimal: n copies of {p_i} are compressed to n(H({p_i}) + δ_n) bits with error ε_n, where δ_n, ε_n → 0 as n → ∞.
Computationally efficient: running time is polynomial in n.
Universal: the algorithms work for any i.i.d. source.

3 Example: Huffman coding
Letter (i)   Probability (p_i)   Codeword
A            1/2                 0
B            1/4                 10
C            1/8                 110
D            1/8                 111
Map letter i to a prefix-free codeword of length -log p_i. The expected number of bits per symbol is -Σ_i p_i log p_i = H({p_i}), and the standard deviation of the total length over n symbols is O(√n).
[Figure: the binary code tree, with leaves A, B, C, D reached by the bit choices 0/1.]
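To make the construction concrete, here is a minimal Python sketch of Huffman coding for the distribution above; the heap-based merge and the helper name huffman_code are illustrative, not from the slides.

```python
import heapq
from math import log2

def huffman_code(probs):
    """Build a prefix-free code for a dict {symbol: probability}."""
    # Each heap entry: (probability, tiebreak counter, {symbol: partial codeword}).
    heap = [(p, i, {sym: ""}) for i, (sym, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        p0, _, code0 = heapq.heappop(heap)   # two least probable groups
        p1, _, code1 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in code0.items()}
        merged.update({s: "1" + c for s, c in code1.items()})
        heapq.heappush(heap, (p0 + p1, counter, merged))
        counter += 1
    return heap[0][2]

probs = {"A": 1/2, "B": 1/4, "C": 1/8, "D": 1/8}
code = huffman_code(probs)
expected_length = sum(p * len(code[s]) for s, p in probs.items())
entropy = -sum(p * log2(p) for p in probs.values())
print(code)                      # {'A': '0', 'B': '10', 'C': '110', 'D': '111'}
print(expected_length, entropy)  # both 1.75 bits for this dyadic distribution
```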

4 Quantum Data Compression
A quantum source ρ = Σ_j q_j |ψ_j⟩⟨ψ_j| can be diagonalized as ρ = Σ_i p_i |i⟩⟨i| and treated as a classical source.
The main difference between quantum and classical Huffman coding is that measuring the length of the output damages the state.
Also, Schumacher compression assumes we know the basis in which ρ is diagonal. Therefore it is optimal and efficient, but not universal.
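A short numpy sketch (illustrative, not from the slides) of diagonalizing a mixed qubit ensemble so its eigenvalues can be treated as a classical source:

```python
import numpy as np

# An example ensemble: |0> with probability 0.7 and |+> with probability 0.3.
ket0 = np.array([1.0, 0.0])
ketplus = np.array([1.0, 1.0]) / np.sqrt(2)
rho = 0.7 * np.outer(ket0, ket0) + 0.3 * np.outer(ketplus, ketplus)

# Diagonalize: rho = sum_i p_i |i><i|; the eigenvalues p_i form the classical source.
p, eigvecs = np.linalg.eigh(rho)
entropy = -sum(x * np.log2(x) for x in p if x > 0)
print(p)        # eigenvalue distribution {p_i}
print(entropy)  # von Neumann entropy S(rho) = H({p_i})
```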

5 Universal quantum data compression
JHHH showed that for any R there exists a subspace H_{n,R} of dimension 2^{nR + o(n)} such that if S(ρ) < R, then most of the support of ρ^⊗n lies in H_{n,R}. (quant-ph/9805017)
HM showed that you don't need to know the entropy. However, they present no efficient implementation. (quant-ph/0209124)
JP give an efficient, universal algorithm, but we like ours better. (quant-ph/0210196)

6 Goal: Efficient, universal quantum data compression
Modeled after classical Huffman coding.
Pass 1: read the data, determine the type class (empirical distribution), and decide on a code.
Pass 2: encode.

7 Step 1: Gentle tomography
Problem: quantum state tomography damages the states it measures.
Solution: use weak measurements. For example, in NMR the average value of σ_x can be approximately measured for 10^20 spins without causing decoherence.

8 Gentle tomography algorithm
Reduce tomography to estimating d^2 observables of the form tr(ρσ_k). For each estimate:
– Randomly divide the interval 0, …, n into n^{1/4} different bins.
– Measure only which bin n·tr(ρσ_k) falls into.
[Figure: the interval from 0 to n, with n·tr(ρσ_k) marked on it; n^{1/4} random bin boundaries give bin width O(n^{3/4}), while the intrinsic uncertainty of the value is O(n^{1/2}).]
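The binning step can be sketched classically (illustrative code, not from the slides): instead of reading the count of n·tr(ρσ_k) to its full O(n^{1/2}) resolution, we only learn which of ~n^{1/4} randomly placed bins it falls in, giving an estimate accurate to the bin width O(n^{3/4}).

```python
import numpy as np

rng = np.random.default_rng(0)

def binned_estimate(theta, n):
    """Classical stand-in for the binned estimate of tr(rho sigma_k).

    The exact count (binomially distributed, stddev ~ n^{1/2}) is never read out;
    we report only which of ~n^{1/4} randomly placed bins it falls in, so the
    estimate is accurate only to the bin width ~ n^{3/4}, but the measurement
    reveals far less about the state.
    """
    count = rng.binomial(n, theta)                  # stand-in for the measurement statistics
    num_bins = max(1, int(round(n ** 0.25)))
    boundaries = np.sort(rng.uniform(0, n, size=num_bins - 1))
    edges = np.concatenate(([0.0], boundaries, [float(n)]))
    b = min(np.searchsorted(edges, count, side="right") - 1, num_bins - 1)
    midpoint = 0.5 * (edges[b] + edges[b + 1])
    return midpoint / n                             # estimate of tr(rho sigma_k)

n, theta = 10**6, 0.3
print(theta, binned_estimate(theta, n))             # error is roughly n^{-1/4}
```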

9 Gentle tomography lemma
If a projective measurement {M_j} has a high probability of yielding a particular outcome k, then obtaining k leaves the state nearly unchanged. (Ahlswede, Winter)
Thus, if we are very likely to estimate the state correctly, then this estimation does not cause much damage.
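The lemma can be sanity-checked numerically (illustrative sketch, not from the talk; the bound ‖ρ − ΠρΠ/tr(Πρ)‖_1 ≤ 2√ε with ε = 1 − tr(Πρ) is the standard gentle-measurement form of the statement).

```python
import numpy as np

rng = np.random.default_rng(1)

def random_density_matrix(d):
    """Random mixed state from a complex Ginibre matrix."""
    g = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = g @ g.conj().T
    return rho / np.trace(rho).real

def trace_norm_diff(a, b):
    """Trace norm ||a - b||_1 of the difference of two Hermitian matrices."""
    return np.abs(np.linalg.eigvalsh(a - b)).sum()

d = 8
rho = random_density_matrix(d)

# Projector onto the eigenvectors carrying almost all of rho's weight
# (eigh returns eigenvalues in ascending order; drop the two smallest).
vals, vecs = np.linalg.eigh(rho)
keep = vecs[:, 2:]
proj = keep @ keep.conj().T

p_good = np.trace(proj @ rho).real        # probability of the likely outcome
eps = 1 - p_good
post = proj @ rho @ proj / p_good         # state conditioned on that outcome
print(trace_norm_diff(rho, post), "<=", 2 * np.sqrt(eps))
```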

10 Implementation
[Circuit diagram: the input ρ^⊗n and an ancilla register |0⟩^⊗(log n + 1) are coupled, and the ancilla is weakly measured.]
Example of a circuit to gently estimate ⟨1|ρ|1⟩.

11 Result
An information-disturbance trade-off curve: error · disturbance > n^{-1/2} (up to logarithmic factors). (Can we prove this is optimal?)
In particular, we can set both error and damage equal to n^{-1/4} log n.

12 Step 2: Compression
Given an estimate σ with ‖σ − ρ‖ < ε, how do we compress ρ^⊗n?

13 Huffman coding with an imperfect state estimate
Suppose we encode ρ^⊗n according to σ = Σ_i q_i |i⟩⟨i|. Codeword i has length -log q_i and occurs with probability ⟨i|ρ|i⟩.
Thus the expected length is Σ_i (-log q_i)⟨i|ρ|i⟩ = -tr(ρ log σ) = S(ρ) + S(ρ‖σ).
Unfortunately, S(ρ‖σ) is not a continuous function of σ.
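A small numpy check (with illustrative example states, not from the slides) that encoding in σ's eigenbasis costs −tr(ρ log σ) = S(ρ) + S(ρ‖σ) bits per signal:

```python
import numpy as np

def matrix_log2(m):
    """log2 of a positive-definite Hermitian matrix via its eigendecomposition."""
    vals, vecs = np.linalg.eigh(m)
    return vecs @ np.diag(np.log2(vals)) @ vecs.conj().T

# Illustrative true state rho and a non-commuting imperfect estimate sigma.
rho = np.array([[0.7, 0.1], [0.1, 0.3]])
sigma = np.array([[0.6, 0.05], [0.05, 0.4]])

S_rho = -np.trace(rho @ matrix_log2(rho)).real                      # S(rho)
D = np.trace(rho @ (matrix_log2(rho) - matrix_log2(sigma))).real    # S(rho||sigma)

# Encode in sigma's eigenbasis: codeword i has length -log2 q_i, probability <i|rho|i>.
q, vecs = np.linalg.eigh(sigma)
probs = np.array([(vecs[:, i].conj() @ rho @ vecs[:, i]).real for i in range(len(q))])
expected_length = -np.sum(probs * np.log2(q))

print(expected_length, S_rho + D)   # the two expressions agree
```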

14 Dealing with imperfect estimates
Replace σ with (1 − ε)σ + ε·I/d. This makes the rate no worse than S(ρ) + ε log(1/ε).
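A toy diagonal example (illustrative, not from the slides) showing why the smoothing helps: an estimate that is close in trace distance but has a nearly vanishing eigenvalue inflates the rate, while the smoothed estimate does not.

```python
import numpy as np

def rate(p, q):
    """Qubits per signal when a diagonal state p is encoded using a diagonal estimate q."""
    return -np.sum(p * np.log2(q))

d = 2
rho   = np.array([0.999, 0.001])       # true eigenvalue distribution (illustrative)
sigma = np.array([1 - 1e-30, 1e-30])   # estimate: close in trace distance, tiny second eigenvalue

eps = 0.01
sigma_smoothed = (1 - eps) * sigma + eps * np.ones(d) / d

print(rate(rho, rho))              # S(rho), about 0.011
print(rate(rho, sigma))            # about 0.10: the 1e-30 eigenvalue inflates the rate
print(rate(rho, sigma_smoothed))   # about 0.015: the smoothed estimate stays close to S(rho)
```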

15 Result
A polynomial-time algorithm that compresses ρ^⊗n into n(S(ρ) + O(n^{-s} log^2 n)) qubits with error O(n^{s - 1/2} log n).

16 Other methods
Schumacher coding and the JHHH method both achieve S(ρ) + O(k·n^{-1/2}) qubits/signal with exp(-k^2) error.
The HM method has roughly the same rate-disturbance trade-off curve, though their overflow probability is lower.
The JP method achieves error n^{-e} and rate S(ρ) + n^{-r} where e = 1/2 − r(1 + d^2). For example, compressing qubits with constant error gives rate S(ρ) + O(n^{-1/10}).

17 Future directions
Stationary ergodic sources. Likewise, a visible quantum coding theory exists, but the only algorithm is classical Lempel-Ziv.
Proving converses:
– No exponentially vanishing error.
– No on-the-fly compression.
– Information/disturbance and rate/disturbance trade-offs.

18 A Quantum Clebsch-Gordan transform (Bacon & Chuang)
H^⊗n = ⊕_{λ⊢n} A_λ ⊗ B_λ, where λ is a partition of n into at most d parts, A_λ is an irreducible representation of SU(d), and B_λ is an irrep of S_n.
Wanted: an efficient quantum circuit to map |i_1 … i_n⟩ → Σ_λ |λ⟩|a_λ⟩|b_λ⟩.
Useful for universal compression, state estimation, hypothesis testing, and entanglement concentration/dilution/distillation.
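As a sanity check on the decomposition (an illustrative sketch, not from the slides), the following Python code enumerates partitions λ of n with at most d parts and verifies that Σ_λ dim(A_λ)·dim(B_λ) = d^n, using the hook-length formula for dim(B_λ) and the Weyl dimension formula for dim(A_λ).

```python
from math import factorial

def partitions(n, d, max_part=None):
    """Yield partitions of n into at most d parts, as non-increasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    if d == 0:
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, d - 1, first):
            yield (first,) + rest

def dim_Sn(lam, n):
    """Dimension of the S_n irrep B_lam, via the hook length formula."""
    hook_product = 1
    for i, row in enumerate(lam):
        for j in range(row):
            arm = row - j - 1
            leg = sum(1 for k in range(i + 1, len(lam)) if lam[k] > j)
            hook_product *= arm + leg + 1
    return factorial(n) // hook_product

def dim_SUd(lam, d):
    """Dimension of the SU(d) irrep A_lam, via the Weyl dimension formula."""
    lam = list(lam) + [0] * (d - len(lam))
    num = den = 1
    for i in range(d):
        for j in range(i + 1, d):
            num *= lam[i] - lam[j] + j - i
            den *= j - i
    return num // den

n, d = 6, 2   # e.g. six qubits
total = sum(dim_SUd(lam, d) * dim_Sn(lam, n) for lam in partitions(n, d))
print(total, d ** n)   # both 64, as Schur-Weyl duality requires
```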

