
1 Quantization and error
Doug Young Suh, suh@khu.ac.kr
Last updated on June 15, 2010

2 Entropy and compression
- Amount of information = degree of surprise
- Entropy and the average code length
- Information source and coding; a memoryless source has no correlation between symbols
- Example: the symbol sequence "red, blue, yellow, yellow, red, black, red, ..." is encoded as the bitstream 00011010001100...
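A small Python sketch of entropy versus average code length; the color probabilities and the prefix code are assumed for illustration and are not from the slide.

```python
import math

# Assumed symbol probabilities for a memoryless color source (illustrative only).
probs = {"red": 0.5, "blue": 0.25, "yellow": 0.125, "black": 0.125}

# Entropy H(X) = sum p * log2(1/p): the lower bound on the average code length.
entropy = sum(p * math.log2(1.0 / p) for p in probs.values())

# A matching prefix code (red=0, blue=10, yellow=110, black=111)
# achieves exactly this average length for these probabilities.
code_lengths = {"red": 1, "blue": 2, "yellow": 3, "black": 3}
avg_len = sum(probs[s] * code_lengths[s] for s in probs)

print(f"H(X) = {entropy:.3f} bits/symbol, average code length = {avg_len:.3f} bits/symbol")
```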

3 Fine-to-coarse quantization
- Dice vs. coin: a fair die has six outcomes with probability 1/6 each; mapping {1,2,3} → head and {4,5,6} → tail gives two outcomes with probability 1/2 each.
- Example: the die sequence 3 5 2 1 5 4 ... becomes the coin sequence H T H H T T ... after quantization.
- Effects of quantization: data compression, and information loss (but not all information is lost).
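A sketch of the die-to-coin mapping: quantizing six equally likely outcomes onto two halves compresses the data but loses information, dropping the entropy from log2 6 ≈ 2.585 bits to 1 bit per symbol.

```python
import math

# Fair die: six outcomes, probability 1/6 each.
h_die = math.log2(6)                       # ~2.585 bits per roll

# Fine-to-coarse quantization: {1,2,3} -> 'H', {4,5,6} -> 'T'.
def to_coin(roll):
    return "H" if roll <= 3 else "T"

rolls = [3, 5, 2, 1, 5, 4]                 # the example sequence from the slide
print("".join(to_coin(r) for r in rolls))  # -> HTHHTT

# Fair coin: two outcomes, probability 1/2 each.
h_coin = math.log2(2)                      # 1 bit per toss
print(f"entropy: die {h_die:.3f} bits -> coin {h_coin:.3f} bits")
```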

4 Quantization
- Analog-to-digital quantization (ADC): needed to process signals in binary computers (digital TV, digital communications, digital control, ...); it maps infinitely many analog values to a finite set of numbers.
- Fine-to-coarse digital quantization.

5 Quantization
- Digital means selectable accuracy: compare a scale for weighing a person with a scale for weighing gold [dynamic range, required accuracy, pdf].
- Open questions:
  1) Soldiers' weights range from 50 kg to 100 kg, while a newborn baby's weight is less than 5 kg.
  2) Mobile-phone voice is quantized with 8 bits per sample, while CD-quality audio uses 16 bits. Why are 8 bits enough for voice?

6 Quantization/de-quantization
- Representing values and quantization error (within -5 kg ~ +5 kg): the samples x1 = 50.341 kg, x2 = 67.271 kg, x3 = 45.503 kg, x4 = 27.91 kg, ... are mapped to the 3-bit codes 000, 010, 001, 111, ..., and the de-quantizer maps each code back to the representing value of its interval.
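A minimal quantize/de-quantize sketch in Python for the weight example; the dynamic range (0 to 80 kg), the 10 kg step that yields the stated ±5 kg error, and the midpoint representing values are illustrative assumptions, so the 3-bit patterns below differ from the codes in the slide's figure.

```python
import numpy as np

# Assumptions (not from the slide): range 0-80 kg, step 10 kg, 8 levels = 3 bits,
# representing value = interval midpoint, so the error stays within +/- 5 kg.
STEP = 10.0
NUM_LEVELS = 8

def quantize(x):
    """Map a weight to a 3-bit index (0..7)."""
    idx = int(np.floor(x / STEP))
    return min(max(idx, 0), NUM_LEVELS - 1)

def dequantize(idx):
    """Map a 3-bit index back to the representing value (interval midpoint)."""
    return (idx + 0.5) * STEP

weights = [50.341, 67.271, 45.503, 27.91]
for x in weights:
    code = quantize(x)
    x_hat = dequantize(code)
    print(f"x = {x:6.3f} kg -> code {code:03b} -> x_hat = {x_hat:5.1f} kg (error {x - x_hat:+.3f} kg)")
```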

7 Quantization noise
- Dynamic range R, quantized with B bits: step size Δ = R / 2^B
- Quantization noise power E[e²] = Δ²/12, modeling the error e as uniformly distributed over [-Δ/2, Δ/2] with density 1/Δ
- Noise in dB (10·log10 2 ≈ 3.01 dB): each additional bit improves the SNR by about 6.02 dB
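The Δ²/12 noise power and the 6.02 dB-per-bit rule follow from the uniform error model above; a standard derivation (a sketch, not taken verbatim from the slide):

```latex
E[e^2] = \int_{-\Delta/2}^{\Delta/2} e^2 \,\frac{1}{\Delta}\, de = \frac{\Delta^2}{12},
\qquad \Delta = \frac{R}{2^B}
\;\Rightarrow\;
E[e^2] = \frac{R^2}{12}\, 2^{-2B}
% in decibels: each extra bit subtracts 2 * 10*log10(2) ~ 6.02 dB
\qquad
10\log_{10} E[e^2] = 10\log_{10}\frac{R^2}{12} - 6.02\, B \ \text{[dB]}
```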

8 Effect of quantization in an image
[Figure: an image passed through DCT → IDCT is reconstructed perfectly (PSNR = ∞), while DCT → Q → Q⁻¹ → IDCT yields a degraded image with PSNR ≈ 25 dB.]
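A minimal PSNR sketch in Python for 8-bit images (MAX = 255); the random test image and the coarse pixel quantization step are illustrative assumptions, not the slide's DCT pipeline.

```python
import numpy as np

def psnr(original, reconstructed, max_val=255.0):
    """PSNR = 10*log10(MAX^2 / MSE); infinite if the images are identical."""
    mse = np.mean((original.astype(np.float64) - reconstructed.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

# Illustrative example: quantizing pixel values coarsely lowers the PSNR.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
step = 32
img_q = (np.round(img / step) * step).clip(0, 255).astype(np.uint8)

print("PSNR (identical):", psnr(img, img))      # inf
print("PSNR (quantized):", psnr(img, img_q))    # finite, lower
```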

9 Media signal: pdf and quantization error
- The narrower the pdf (probability density function), the fewer bits are needed for the same error.
- The narrower the pdf, the smaller the error for the same number of bits.

10 Media signal: non-uniform pdf
- Variable step size → less error
- Fixed step size → more error

11 Media signal: error for fixed step size
- With a fixed step size, the representing values of the four intervals are -0.75, -0.25, 0.25, and 0.75, respectively; the mean square error of each interval then follows by integrating (x − p)² times the pdf over that interval.
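The slide's numerical result depends on the pdf drawn in its figure, which the transcript does not contain; the sketch below estimates the mean square error of this fixed-step quantizer under an assumed triangular pdf on [-1, 1], purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustration only: assume a symmetric triangular pdf on [-1, 1], peaked at 0
# (the slide's actual pdf is shown only in its figure).
x = rng.triangular(-1.0, 0.0, 1.0, size=1_000_000)

# Fixed step size: four intervals of width 0.5 with midpoint representing values.
reps = np.array([-0.75, -0.25, 0.25, 0.75])
edges = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
idx = np.clip(np.digitize(x, edges) - 1, 0, 3)   # interval index 0..3
x_hat = reps[idx]

mse = np.mean((x - x_hat) ** 2)
print(f"MSE with fixed step size: {mse:.5f}")    # ~0.0208 (= 0.5^2 / 12)
```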

12 Media signal: error for variable step size
- What representing value minimizes the mean square error in each interval? For example, in the interval labeled 00, the mean square error as a function of the representing value p is differentiated with respect to p and set to zero to find the minimum.
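The slide's equation itself is not in the transcript; the standard derivation of the optimal representing value p for an interval [a, b) under pdf f(x) is the centroid condition:

```latex
\frac{d}{dp}\int_{a}^{b}(x-p)^2 f(x)\,dx = -2\int_{a}^{b}(x-p)\,f(x)\,dx = 0
\;\Longrightarrow\;
p^{\ast} = \frac{\int_{a}^{b} x\, f(x)\,dx}{\int_{a}^{b} f(x)\,dx} = E[\,X \mid a \le X < b\,]
```

That is, the best representing value is the conditional mean of X over the interval; for the interval labeled 00, a and b are its two decision boundaries.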

13 Correlation in text
- Memoryless vs. memory sources: I(x) = log2(1/p_x) = "degree of surprise"
- In English text, patterns such as qu-, re-, th-, and -tion make the next letter less uncertain. Of course, there are exceptions: Qatar, Qantas.
- Conditional probability: p(u|q) >> p(u), so I(u|q) << I(u); likewise I(n|tio) << I(n).
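A tiny numerical illustration of the point, with assumed (not measured) probabilities for the letter 'u':

```python
import math

# Rough, assumed probabilities (illustrative only, not measured from a corpus):
# 'u' is fairly rare in English, but almost certain right after 'q'.
p_u = 0.028          # assumed unconditional probability of 'u'
p_u_given_q = 0.99   # assumed probability of 'u' given the previous letter is 'q'

def self_info(p):
    """I(x) = log2(1/p), the 'degree of surprise' in bits."""
    return math.log2(1.0 / p)

print(f"I(u)   = {self_info(p_u):.2f} bits")         # ~5.2 bits
print(f"I(u|q) = {self_info(p_u_given_q):.2f} bits") # ~0.01 bits
```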

14 Media signal: Differential Pulse-Code Modulation (DPCM)
- Quantize not x[n] but the prediction difference d[n].
- Principle: the pdf of d[n] is narrower than that of x[n], so there is less error at the same number of bits, and less data at the same error.
[Figure: block diagram with Prediction and Quantize blocks.]
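A minimal DPCM encoder/decoder sketch in Python, assuming a one-tap predictor and a uniform quantizer of the difference; the coefficient, step size, and test signal are illustrative, not from the slides.

```python
import numpy as np

def uniform_quantize(d, step):
    """Mid-tread uniform quantizer of the prediction difference."""
    return step * np.round(d / step)

def dpcm_encode_decode(x, a=0.95, step=0.05):
    """DPCM loop: quantize d[n] = x[n] - a*x_rec[n-1].

    The predictor uses the *reconstructed* past sample so that the encoder and
    decoder stay in sync (the decoder only has the quantized differences).
    """
    x_rec = np.zeros_like(x)          # reconstruction (same at encoder and decoder)
    d_q = np.zeros_like(x)            # quantized differences actually transmitted
    prev = 0.0
    for n in range(len(x)):
        pred = a * prev               # one-tap prediction
        d_q[n] = uniform_quantize(x[n] - pred, step)
        x_rec[n] = pred + d_q[n]      # reconstructed sample
        prev = x_rec[n]
    return d_q, x_rec

# Illustrative test: a slowly varying (highly correlated) signal.
n = np.arange(2000)
x = np.sin(2 * np.pi * n / 400)
d_q, x_rec = dpcm_encode_decode(x)

print("variance of x[n]:", np.var(x))       # wide pdf
print("variance of d[n]:", np.var(d_q))     # much narrower pdf
print("reconstruction MSE:", np.mean((x - x_rec) ** 2))
```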

15 Effects of DPCM
[Figure: histograms (Prob. vs. value) of x[n] and of the prediction difference d[n] for a simple image and a complex image; the d[n] histograms are concentrated around 0, and H(D1) < H(D2).]

16 Media signal: Differential Pulse-Code Modulation (DPCM)
- One-tap prediction vs. N-tap prediction (standard predictor forms are sketched below).
[Figure: block diagram with Prediction and Quantize blocks.]
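The predictor equations appear only as images on the slide; the standard one-tap and N-tap forms, with x̃ denoting reconstructed past samples, are:

```latex
\hat{x}[n] = a\,\tilde{x}[n-1] \quad \text{(one-tap)},
\qquad
\hat{x}[n] = \sum_{i=1}^{N} a_i\,\tilde{x}[n-i] \quad \text{(N-tap)},
\qquad
d[n] = x[n] - \hat{x}[n]
```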

17 DPCM
- Determine the coefficient a which minimizes the mean square prediction error E[(x[n] − a·x[n−1])²]; for a zero-mean signal the optimum is a = R(1)/R(0), where R(k) is the autocorrelation at lag k.
[Figure: example signals over time for a ≈ 0, a > 0, and a << 0.]
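A short derivation of the optimal one-tap coefficient, consistent with the slide's use of R(1), for a zero-mean stationary signal with R(k) = E[x[n] x[n−k]]:

```latex
J(a) = E\big[(x[n] - a\,x[n-1])^2\big],
\qquad
\frac{dJ}{da} = -2\,E\big[x[n]\,x[n-1]\big] + 2a\,E\big[x[n-1]^2\big] = 0
\;\Longrightarrow\;
a^{\ast} = \frac{R(1)}{R(0)}
```

So a ≈ 0 for an uncorrelated signal, a > 0 for positively correlated neighboring samples, and a < 0 for negatively correlated ones, matching the waveforms in the figure.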

18 Media signal: Adaptive DPCM
- Prediction filter coefficients are estimated periodically and sent as side information, e.g. in CDMA IS-95, CELP, and EVRC (update interval 50 or 100 ms); this is LPC (linear predictive coding). A sketch of the periodic re-estimation follows this slide.
- Drawbacks:
  1. The correlation must be known and stationary.
  2. Error propagation: the prediction loop needs periodic refresh.
- Open questions:
  1. Why is the quantized difference used for prediction?
  2. Will quantization noise accumulate?
  3. How often do we have to refresh?
  4. What about the non-stationary case?
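A minimal sketch of forward adaptation on top of the one-tap DPCM loop above: the coefficient a is re-estimated per block from the input and (conceptually) sent as side information; the block length, quantizer step, and test signal are illustrative assumptions.

```python
import numpy as np

def estimate_a(block):
    """Forward adaptation: a = R(1)/R(0) estimated from one block of input samples."""
    b = block - np.mean(block)                   # work with a zero-mean block
    r0 = np.dot(b, b)
    r1 = np.dot(b[1:], b[:-1])
    return r1 / r0 if r0 > 0 else 0.0

def adaptive_dpcm(x, block_len=400, step=0.05):
    """Adaptive DPCM sketch: re-estimate 'a' per block and keep it as side info."""
    x_rec = np.zeros_like(x)
    side_info = []                               # per-block coefficients
    prev = 0.0
    for start in range(0, len(x), block_len):
        block = x[start:start + block_len]
        a = estimate_a(block)                    # coefficient for this block
        side_info.append(a)
        for n, xn in enumerate(block, start=start):
            pred = a * prev
            d_q = step * np.round((xn - pred) / step)   # quantized difference
            x_rec[n] = pred + d_q
            prev = x_rec[n]
    return x_rec, side_info

# Illustrative non-stationary signal: the correlation changes halfway through.
n = np.arange(4000)
x = np.concatenate([np.sin(2 * np.pi * n[:2000] / 400),
                    np.sin(2 * np.pi * n[2000:] / 40)])
x_rec, side_info = adaptive_dpcm(x)
print("per-block a:", np.round(side_info, 3))
print("reconstruction MSE:", np.mean((x - x_rec) ** 2))
```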

19 Summary
- Trade-off between bit rate and quality [dynamic range, accuracy, pdf]
- A narrower pdf is preferred with respect to H(X)
- Prediction yields a narrower pdf and is widely used in audio-video codecs
- Adaptation gives better prediction
- Error propagation must be managed

