Entropy Coding of Video Encoded by Compressive Sensing Yen-Ming Mark Lai, University of Maryland, College Park, MD


1 Entropy Coding of Video Encoded by Compressive Sensing. Yen-Ming Mark Lai, University of Maryland, College Park, MD (ylai@amsc.umd.edu); Razi Haimi-Cohen, Alcatel-Lucent Bell Labs, Murray Hill, NJ (razi.haimi-cohen@alcatel-lucent.com). August 11, 2011

2 Why is entropy coding important?

3 Transmission is digital: a bitstream (101001001) is sent over the channel.

4 The same channel can carry a much shorter bitstream (1101) when the data is entropy coded.

5 System overview. Encoder: break video into blocks → take compressed sensed measurements → quantize measurements → arithmetic encode → channel. Decoder: arithmetic decode → L1 minimization → deblock. Input video goes in; output video comes out.

6 Statistics of Compressed Sensed blocks

7 Example: a CS measurement is a signed (+/−) combination of input pixels. The inputs are integers between 0 and 255; the resulting CS measurement is an integer between −1275 and 1275.

8 Since CS measurements contain noise inherited from pixel quantization, there is no point quantizing them more finely than the standard deviation of that noise (which grows with N, the total number of pixels in the video block).

9 We call this minimal quantization step the “normalized” quantization step
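The "normalized" step can be sketched in Python. This is an illustrative reconstruction, not the authors' code; the function name is hypothetical, and the reasoning in the comment follows slide 64 (quantize by the square root of the total number of pixels).

```python
import math

def normalized_step(n_pixels: int) -> float:
    """'Normalized' quantization step for a block of n_pixels pixels.

    Each pixel carries quantization noise, and a +/-1 Walsh-Hadamard
    measurement sums contributions from all pixels, so the noise in a
    measurement grows like sqrt(N). The slides take the step itself to
    be sqrt(N) (slide 64: quantize by the square root of the total
    number of pixels in the video block).
    """
    return math.sqrt(n_pixels)

# The 64 x 64 x 4 block used in the simulations:
print(normalized_step(16384))  # 128.0
```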

10 What to do with values outside the quantizer range? Large values occur rarely but fall outside it.

11 CS measurements are “democratic”**: each measurement carries the same amount of information, regardless of its magnitude. ** “Democracy in Action: Quantization, Saturation, and Compressive Sensing,” Jason N. Laska, Petros T. Boufounos, Mark A. Davenport, and Richard G. Baraniuk (Rice University, August 2009)

12 What to do with values outside the quantizer range? Discard them; the PSNR loss is small since such values occur rarely.
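A minimal Python sketch of this discard step (illustrative only; the function name and the returned keep-mask are assumptions, since the slides do not say how the decoder learns which measurements were dropped):

```python
def discard_out_of_range(measurements, std, num_std=3.0):
    """Keep only measurements inside the quantizer range [-num_std*std, +num_std*std].

    Out-of-range values are rare (Gaussian tails), so dropping them costs
    little PSNR. The mask recording which values were kept is an assumed
    side channel, not something the slides specify.
    """
    limit = num_std * std
    mask = [abs(m) <= limit for m in measurements]
    kept = [m for m, keep in zip(measurements, mask) if keep]
    return kept, mask

kept, mask = discard_out_of_range([0.5, -2.9, 3.4, 1.1, -3.2], std=1.0)
print(kept)  # [0.5, -2.9, 1.1]
```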

13 Simulations

14 A 2-second video (288 × 352 pixels, 60 frames) is broken into 8 blocks.

15 Simulation parameters swept (PSNR and bit rate recorded for each):
–Ratio of CS measurements to number of input values: 0.15, 0.25, 0.35, 0.45
–“Normalized” quantization step multiplier: 1, 10, 100, 200, 500
–Range of quantizer (std devs of the measured Gaussian distribution): 1.0, 1.5, 2.0, 3.0

16 Processing Time: 6 cores, 100 GB RAM. 80 simulations (4 ratios × 5 step multipliers × 4 ranges), 22 hours total –17 minutes per simulation –8.25 minutes per second of video

17 Results

18 Fraction of CS measurements outside the quantizer range (2.7 million CS measurements)

19 Fraction of CS measurements outside quantizer range

20 How often do large values occur theoretically? For a Gaussian, 34.13% of values fall within one std dev of the mean on each side, 13.59% between one and two std devs (each side), 2.14% between two and three, and 0.135% beyond three (each tail).
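The percentages on this slide are the standard normal tail masses; they can be checked with a few lines of Python using only the standard library (an illustrative sketch, not part of the original system):

```python
import math

def tail_beyond(k: float) -> float:
    """P(Z > k) for a standard normal Z, via the error function."""
    return 0.5 * (1.0 - math.erf(k / math.sqrt(2.0)))

print(round(100 * (tail_beyond(0.0) - tail_beyond(1.0)), 2))  # 34.13 (% between 0 and 1 std dev)
print(round(100 * (tail_beyond(1.0) - tail_beyond(2.0)), 2))  # 13.59 (% between 1 and 2)
print(round(100 * (tail_beyond(2.0) - tail_beyond(3.0)), 2))  # 2.14  (% between 2 and 3)
print(round(100 * tail_beyond(3.0), 3))                       # 0.135 (% beyond 3, one tail)
```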

21 How often do large values occur in practice? Out of 2.7 million CS measurements, 0.037% fell outside the quantizer range, versus 0.135% theoretically.

22 What to do with large values outside the quantizer range? Discard them; the PSNR loss is small since such values occur rarely.

23 Discarding values comes at a bit rate cost: the encoded bitstream still carries the measurements that are later discarded.

24 Bits Per Measurement, Bits Per Used Measurement

25 9.4 bits

26 The best compression (entropy) of a quantized Gaussian variable X shows that arithmetic coding is a viable option!
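The entropy estimate behind this claim can be reproduced with the approximation H(X_Δ) ≈ h(X) − log2 Δ from the supplemental slides. The sketch below (illustrative Python, hypothetical function name) recovers the ≈14.5 bits/measurement figure from the numbers on slide 56:

```python
import math

def quantized_gaussian_entropy(sigma: float, step: float) -> float:
    """Approximate entropy in bits of a Gaussian quantized with a uniform step.

    H(X_step) ~= h(X) - log2(step), where the differential entropy of a
    Gaussian with standard deviation sigma is h(X) = 0.5*log2(2*pi*e*sigma^2).
    """
    h = 0.5 * math.log2(2.0 * math.pi * math.e * sigma * sigma)
    return h - math.log2(step)

sigma = 5667.0  # average std dev of the "News" video blocks (slide 56)
print(round(quantized_gaussian_entropy(sigma, 1.0), 1))    # 14.5 bits, integer quantization
print(round(quantized_gaussian_entropy(sigma, 128.0), 1))  # 7.5 bits, "normalized" step sqrt(16384)
```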

27 Fix the quantization step and vary the standard deviation: faster arithmetic encoding, fewer measurements.

28 PSNR versus Bit Rate (10 x step size)

29 At a fixed bit rate, what should we choose: 18.5 minutes with 121 bins, or 2.1 minutes with 78 bins?

30 Fix the standard deviation and vary the quantization step: better arithmetic coding efficiency, more error.

31 PSNR versus Bit Rate (2 std dev)

32 Fixed PSNR, which to choose? Bachelor’s Master’s PhD

33 Demo

34 Future Work
–Tune decoder to take quantization noise into account and to make use of out-of-range measurements
–Improve computational efficiency of the arithmetic coder

35 Questions?

36 Supplemental Slides (Overview of system)

37 [System diagram, as on slide 5] For each block, the encoder sends over the channel: 1) the output of the arithmetic encoder, 2) the mean and variance, 3) the DC value, 4) a sensing matrix identifier.

38 Supplemental Slides (Statistics of CS Measurements)

39 “News” Test Video Input
–Block specifications: 64 width, 64 height, 4 frames (16,384 pixels)
–Input: 288 width, 352 height, 4 frames (30 blocks)
–Sampling matrix: Walsh-Hadamard
–Compressed sensed measurements: 10% of total pixels = 1638 measurements
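As a rough illustration of the sampling step (not the authors' implementation: the Sylvester construction and random row selection are assumptions; the slides only state that the matrix is Walsh-Hadamard), a CS measurement in Python:

```python
import random

def hadamard(n):
    """n x n Walsh-Hadamard matrix via the Sylvester construction (n a power of 2)."""
    if n == 1:
        return [[1]]
    half = hadamard(n // 2)
    return ([row + row for row in half] +
            [row + [-x for x in row] for row in half])

def cs_measure(pixels, ratio, seed=0):
    """Take ratio * N compressed-sensing measurements of a pixel block.

    Rows are chosen at random (an assumption). Row 0, the all-ones row,
    is skipped since the DC value is transmitted separately (slide 37).
    """
    n = len(pixels)
    m = max(1, int(ratio * n))
    rows = random.Random(seed).sample(range(1, n), m)
    h = hadamard(n)
    return [sum(hi * pi for hi, pi in zip(h[r], pixels)) for r in rows]

y = cs_measure([10, 20, 30, 40, 50, 60, 70, 80], ratio=0.25)
print(len(y))  # 2 measurements for a ratio of 0.25
```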

40 Histograms of Compressed Sensed Samples (blocks 1-5)

41 Histograms of Compressed Sensed Samples (blocks 6-10)

42 Histograms of Compressed Sensed Samples (blocks 11-15)

43 Histograms of Compressed Sensed Samples (blocks 21-25)

44 Histograms of Compressed Sensed Samples (blocks 26-30)

45 Histograms of Compressed Sensed Samples (blocks 16-20)

46 Histograms of Standard Deviation and Mean (all blocks)

47 Supplemental Slides (How to Quantize)

48 Given a discrete random variable X, the fewest bits (entropy) needed on average to encode X is H(X) = −Σ_x p(x) log2 p(x). For a continuous random variable X with density f, the differential entropy is h(X) = −∫ f(x) log2 f(x) dx.
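The discrete formula can be checked numerically; a small Python helper (illustrative, standard library only):

```python
import math

def entropy_bits(pmf):
    """H(X) = -sum p(x) log2 p(x): the fewest bits per symbol, on average."""
    return -sum(p * math.log2(p) for p in pmf if p > 0)

print(entropy_bits([0.5, 0.5]))                 # 1.0 bit  (fair coin)
print(entropy_bits([0.25, 0.25, 0.25, 0.25]))   # 2.0 bits (four equally likely symbols)
print(round(entropy_bits([0.9, 0.1]), 3))       # 0.469 bits (a biased coin compresses well)
```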

49 The differential entropy of a Gaussian is h(X) = ½ log2(2πeσ²), a function of the variance σ² only. The Gaussian maximizes entropy for fixed variance, i.e. h(X′) ≤ h(X) for all X′ with the same variance.

50 Approximate the quantization noise as i.i.d. with a uniform distribution on (−w/2, w/2), where w is the width of the quantization interval. The variance from the initial quantization noise is then w²/12.
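The w²/12 variance can be checked by simulation (a sketch; the function name, seed, and sample count are arbitrary choices, not from the slides):

```python
import random

def empirical_quantization_noise_variance(w, trials=200_000, seed=1):
    """Empirical variance of noise uniform on (-w/2, w/2).

    Rounding to a grid with step w leaves an error that is roughly
    uniform on (-w/2, w/2); its theoretical variance is w**2 / 12.
    """
    rng = random.Random(seed)
    errs = [rng.uniform(-w / 2, w / 2) for _ in range(trials)]
    mean = sum(errs) / trials
    return sum((e - mean) ** 2 for e in errs) / trials

w = 1.0
print(empirical_quantization_noise_variance(w))  # close to w*w/12 = 0.0833...
```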

51 How much should we quantize? The input pixels are integers and the measurement matrix is Walsh-Hadamard.

52 Supplemental Slides (Entropy of arithmetic encoder)

53 [System diagram, as on slide 5] What compression can we expect from arithmetic coding?

54 Entropy of a quantized random variable: X is a continuous random variable and X_Δ is its uniformly quantized (discrete) version.

55 Entropy of a quantized random variable: H(X_Δ) ≈ h(X) − log2 Δ, where h(X) is the differential (continuous) entropy of X and Δ is the quantization step size.

56 Entropy of a quantized random variable: with σ² = 5667² (the average variance of the video blocks) and N = 16,384 pixels per video block, the “normalized” step Δ = √N = 128 yields a saving of log2 128 = 7 bits per measurement from quantization.

57 Supplemental Slides (Penalty for wrong distribution)

58 [System diagram, as on slide 5] What is the penalty for using the wrong probability distribution?

59 Let p and q be normal distributions with means μ_p, μ_q and standard deviations σ_p, σ_q. Then D(p‖q) = ln(σ_q/σ_p) + (σ_p² + (μ_p − μ_q)²)/(2σ_q²) − 1/2 (in nats).

60 Assume random variable X has probability distribution p but we encode with distribution q. The expected length of the (wrong) codeword is then the entropy of the random variable plus the Kullback-Leibler divergence (the penalty): E[length] = H(X) + D(p‖q).
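The penalty for two normal distributions has a closed form; a Python sketch (the function name is hypothetical):

```python
import math

def kl_gauss_bits(mu_p, sig_p, mu_q, sig_q):
    """D(p || q) in bits for normal p = N(mu_p, sig_p^2), q = N(mu_q, sig_q^2).

    This is the extra code length per symbol when data from p is
    arithmetic-encoded with a code designed for q:
    E[length] = H(X) + D(p || q).
    """
    nats = (math.log(sig_q / sig_p)
            + (sig_p ** 2 + (mu_p - mu_q) ** 2) / (2.0 * sig_q ** 2)
            - 0.5)
    return nats / math.log(2.0)

print(kl_gauss_bits(0, 1, 0, 1))            # 0.0: the right distribution costs nothing extra
print(round(kl_gauss_bits(0, 1, 0, 2), 3))  # 0.459 extra bits when the assumed std is twice too large
```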

61 Worst case scenario for video blocks of “News”

62 Summary

63 [System diagram, as on slide 5] What are the statistics of the measurements? Gaussian, with different means and variances per block.

64 [System diagram, as on slide 5] How much should we quantize? By the square root of the total number of pixels in the video block.

65 [System diagram, as on slide 5] What compression can we expect from arithmetic coding? 14.5 bits/measurement with integer quantization, 7 bits/measurement with the “normalized” quantization step.

66 [System diagram, as on slide 5] What is the penalty for using the wrong probability distribution? For “News”, an extra 2 bits/measurement must be sent.

