ELE 488 Fall 2006, Image Processing and Transmission (10-24-06): Image Compression and Quantization (presentation transcript)

1 ELE 488 F06 ELE 488 Fall 2006, Image Processing and Transmission (10-24-06)
Image Compression: Quantization
– independent samples: uniform and optimum
– correlated samples: vector quantization
– JPEG: block-based transform coding ...

2 ELE 488 F06 Image Compression Review
– Sampling
– Entropy coding
– Run-length coding
– Quantization: uniform step size, optimum (Lloyd-Max), correlated samples
JPEG – piecing together
– Lossy encoding: transform coding (DCT)
– Lossless encoding
(Figure: image capture, sampler + quantizer, encoder)

3 ELE 488 F06 Quantization. Quantizer (Jain, Fig. 4.16; x ↔ u notation correction). Decision thresholds t_k = ? Reconstruction values r_k = ?

4 ELE 488 F06 Optimum quantizer: choose the thresholds t_k and reconstruction values r_k to minimize the mean squared error. Optimum solution: …
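The conditions themselves are not reproduced in the transcript. The standard Lloyd-Max necessary conditions, presumably the equations referred to as (1) and (2) on the next slide, are

r_k = ∫_{t_k}^{t_{k+1}} x p(x) dx / ∫_{t_k}^{t_{k+1}} p(x) dx      (1)
t_k = (r_{k–1} + r_k) / 2                                          (2)

i.e. each reconstruction value is the centroid (conditional mean) of its decision interval under the source density p(x), and each interior threshold is the midpoint of the two adjacent reconstruction values.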

5 ELE 488 F06 Numerical Solution of the LM Equations
Solve by iteration:
a. Pick initial values for the thresholds t (e.g. a uniform grid)
b. Find the reconstruction values r using (1) (may need numerical integration)
c. Find new threshold values t using (2)
d. Go back to step b until the t and r values converge
(A code sketch of this iteration follows.)
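A minimal Python sketch of the iteration above, assuming a zero-mean, unit-variance Gaussian source; the function name, the truncation range for the numerical integrals, and the fixed iteration count are illustrative choices, not from the slides.

import numpy as np
from scipy import integrate
from scipy.stats import norm

def lloyd_max(pdf, lo, hi, levels, iters=100):
    """Iterate the Lloyd-Max conditions (1) and (2) for a given source pdf."""
    # a. start from a uniform grid of thresholds on [lo, hi]
    t = np.linspace(lo, hi, levels + 1)
    for _ in range(iters):
        # b. each reconstruction value is the centroid of its interval (eq. 1)
        r = np.array([
            integrate.quad(lambda x: x * pdf(x), t[k], t[k + 1])[0] /
            integrate.quad(pdf, t[k], t[k + 1])[0]
            for k in range(levels)
        ])
        # c. each interior threshold is the midpoint of adjacent r values (eq. 2)
        t[1:-1] = 0.5 * (r[:-1] + r[1:])
    return t, r

# Example: 2-bit (4-level) quantizer for a unit-variance Gaussian,
# truncated to +/- 5 standard deviations for the numerical integration.
t, r = lloyd_max(norm.pdf, -5.0, 5.0, levels=4)
print("thresholds:", np.round(t, 3))
print("levels:    ", np.round(r, 3))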

6 ELE 488 F06 Optimum Quantizers (Jain, Tables 4.1, 4.2)
Gaussian, zero mean, unit variance:
  # bits    1      2      3      4      5      6      7
  MSE      .3634  .1175  .0345  .0095  .0025  .0006  .0002
  SNR(dB)  4.40   9.30   14.92  20.22  26.01  31.90  37.86
Laplacian, zero mean, unit variance:
  # bits    1      2      3      4      5      6      7
  MSE      .5     .1762  .0545  .0154  .0041  .0011  .0003
  SNR(dB)  3.01   7.54   12.64  18.13  23.87  29.74  35.69

7 ELE 488 F06 Example: Triangle. Start with a uniform quantizer and run the iterative algorithm. Shown: optimum thresholds (red), reconstruction values (blue). Difference: 1.1 dB

8 ELE 488 F06 Example: Gaussian. Start with a uniform quantizer and run the iterative algorithm. Shown: optimum thresholds (red) and reconstruction values (blue). Difference: 3.4 dB

9 ELE 488 F06 Example: Triangle 2. Start with a uniform quantizer and run the iterative algorithm. Shown: optimum thresholds (red) and reconstruction values (blue). Difference: ? dB

10 ELE 488 F06 Example: Gaussian Mixture. Start with a uniform quantizer and run the iterative algorithm. Shown: optimum thresholds (red) and reconstruction values (blue).

11 ELE 488 F06 Piecewise Constant Approximation: approximation of the optimum quantizer for large L (Jain, Fig. 4.17; Rao and Huang, Sec. 3.2). High-rate result: MSE ~ (see below)
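The MSE expression itself appears only as a figure on the slide. The standard high-rate (Panter-Dite) approximation for an L-level quantizer of a source with density p(x) is

MSE ≈ (1 / (12 L²)) · [ ∫ p(x)^(1/3) dx ]³

which falls as 1/L², i.e. roughly 6 dB per additional bit.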

12 ELE 488 F06 Are Gaussian or Laplacian densities good models for images?

13 ELE 488 F06

14 Differential Pulse Code Modulation (DPCM)
Dynamic range of the signal: (–A, A). For a uniform B-bit quantizer, the quantization step is A·2^(–B+1).
Peak signal to peak noise ratio = 2^B, about 6B dB, so B must be large enough for good SNR.
If neighboring samples are well correlated, the difference has a smaller dynamic range, so it requires a smaller B for the same SNR.
Encoding successive differences: DPCM.
(From the block diagram: d[n] = x[n] – x[n–1]; d_Q[n] = x_Q[n] – a·x_Q[n–1], with a < 1 for stability.)

15 ELE 488 F06 DPCM Encoder and Decoder (a special case of predictive coding)
Encoder: a predictor forms x̂_Q(n) from past reconstructed samples; the prediction error d(n) = x(n) – x̂_Q(n) is quantized to d_Q(n) and transmitted.
Decoder: the same predictor reconstructs x_Q(n) = x̂_Q(n) + d_Q(n).
Distortion introduced: d(n) – d_Q(n).
(The slide shows the encoder and decoder block diagrams; the one-tap predictor is a·z^(–1). A code sketch of this loop follows.)
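A minimal sketch of a first-order closed-loop DPCM encoder and decoder, assuming a uniform quantizer on the prediction error; the function names, step size, and AR(1) test signal are illustrative, not from the slides.

import numpy as np

def uniform_quantize(d, step):
    """Round the prediction error to the nearest multiple of the step size."""
    return step * np.round(d / step)

def dpcm_encode(x, a=0.95, step=0.05):
    """Closed-loop DPCM: predict from the reconstructed previous sample."""
    d_q = np.zeros_like(x)
    x_q_prev = 0.0
    for n in range(len(x)):
        d = x[n] - a * x_q_prev             # prediction error d(n)
        d_q[n] = uniform_quantize(d, step)  # quantized error d_Q(n)
        x_q_prev = a * x_q_prev + d_q[n]    # reconstructed sample x_Q(n)
    return d_q

def dpcm_decode(d_q, a=0.95):
    """The decoder mirrors the encoder's prediction loop."""
    x_q = np.zeros_like(d_q)
    x_q_prev = 0.0
    for n in range(len(d_q)):
        x_q[n] = a * x_q_prev + d_q[n]
        x_q_prev = x_q[n]
    return x_q

# Example with a correlated AR(1) test signal (a = 0.95, unit variance).
rng = np.random.default_rng(0)
x = np.zeros(1000)
for n in range(1, len(x)):
    x[n] = 0.95 * x[n - 1] + rng.normal(scale=np.sqrt(1 - 0.95**2))
x_rec = dpcm_decode(dpcm_encode(x))
print("mean squared reconstruction error:", np.mean((x - x_rec) ** 2))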

16 ELE 488 F06 More on the Predictor
Causality – need to receive previous samples to decode the current sample.
Can use the previous p samples to predict the current sample:
– p-th order auto-regressive (AR) model
– How to determine the coefficients to model x(n)?
For images:
– Line-by-line DPCM: predict from past samples in the same line
– 2-D DPCM: predict from past samples in the same line and from previous lines
UMCP ENEE631 Slides (created by M. Wu © 2001)

17 ELE 488 F06 Comparison (Jain, Table 4.1 and Fig. 11.12; UMCP ENEE631 Slides, created by M. Wu © 2001). Annotations on the figure: 10 dB? 6 dB/bit?

18 ELE 488 F06 Predictive (Differential) Coding (explanation of Fig. 11.12)
PCM: x(n) Gaussian, zero mean, variance σ² = 1.
Line-by-line DPCM: Gauss-Markov model x(n) = a·x(n–1) + e(n), with e(n) iid Gaussian, so E{x(n) x(n–1)} = a·σ².
Quantize and code d(n) = x(n) – a·x(n–1).
E{d²(n)} = E{[x(n) – a·x(n–1)]²} = (1 – a²)·σ², which is small when a ≈ 1.
2-D DPCM: Gauss-Markov x(m,n) = a₁·x(m–1,n) + a₂·x(m,n–1) + a₃·x(m–1,n–1) + e(m,n)
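Expanding the expectation, using E{x(n)²} = E{x(n–1)²} = σ² and E{x(n) x(n–1)} = a·σ²:

E{d²(n)} = E{x(n)²} – 2a·E{x(n) x(n–1)} + a²·E{x(n–1)²}
         = σ² – 2a·(a·σ²) + a²·σ²
         = (1 – a²)·σ²

(For comparison, the plain difference x(n) – x(n–1) has variance 2(1 – a)·σ².)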

19 ELE 488 F06 JPEG Still Image Coding: putting the pieces together. Note: lossy, block-based, transform coding.

20 ELE 488 F06 JPEG Compression Standard
JPEG – Joint Photographic Experts Group
– Compression standard for continuous-tone still images
– Became an international standard in 1992
Allows for lossy and lossless encoding of still images
– Lossy compression: DCT-based, average compression ratio about 15:1
– Lossless compression: predictive-based
Sequential, Progressive, Hierarchical modes
– Sequential: encoded in a single left-to-right, top-to-bottom scan
– Progressive: encoded in multiple scans, to first produce a quick, rough decoded image when the transmission time is long
– Hierarchical: encoded at multiple resolutions, to allow access to a low-resolution version without full decompression
UMCP ENEE408G Slides (created by M. Wu & R. Liu © 2002)

21 ELE 488 F06 Transform Coding
Use a transform to pack the energy into only a few coefficients.
How to allocate bits to each coefficient?
– More bits for coefficients with larger variance
– Also incorporate perceptual importance
(From Jain, Fig. 11.15; UMCP ENEE408G Slides, created by M. Wu & R. Liu © 2002.)
A common variance-based allocation rule is given below.
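One standard variance-based rule from rate-distortion analysis (the slide only states the principle): with N coefficients, an average budget of b̄ bits per coefficient, and coefficient variances σ_k²,

b_k = b̄ + (1/2) log₂( σ_k² / (Π_j σ_j²)^(1/N) )

so coefficients whose variance is above the geometric mean get more than the average number of bits; perceptual importance can be folded in by weighting the variances.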

22 ELE 488 F06 Block-based Transform Coding
Why transform? Why block-based?
– Captures local information better than a global transform
– Computational complexity: for an "N log N" transform and an image of m×m blocks, each of n×n pixels, the full-image transform costs (mn)² log (mn)² versus m²·n² log n² for the blockwise transform
– Dynamic range and bit allocation
How to choose the block size?
(UMCP ENEE408G Slides, created by M. Wu & R. Liu © 2002)

23 ELE 488 F06 Block-based Transform Coding
Encoder:
– Step 1: Divide the image into m x m blocks and take the DCT of each block
– Step 2: Design the quantizer and quantize the coefficients (lossy!)
– Step 3: Determine the bit allocation for the transform coefficients
– Step 4: Encode the quantized coefficients
Decoder: reverse the steps.
(From Wallace's JPEG tutorial, 1993; UMCP ENEE408G Slides, created by M. Wu & R. Liu © 2002.)
Steps 1 and 2 are sketched in code below.
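A minimal sketch of Steps 1 and 2 for 8×8 blocks, using SciPy's 2-D DCT and a single scalar step size as a placeholder for the per-coefficient quantization table used later; the names and parameter values are illustrative.

import numpy as np
from scipy.fft import dctn, idctn

def block_dct_quantize(img, block=8, step=16.0):
    """Step 1: blockwise 2-D DCT; Step 2: uniform quantization of the coefficients."""
    h, w = img.shape
    coeffs = np.zeros_like(img, dtype=float)
    for i in range(0, h, block):
        for j in range(0, w, block):
            c = dctn(img[i:i+block, j:j+block], norm='ortho')
            coeffs[i:i+block, j:j+block] = np.round(c / step)  # lossy!
    return coeffs

def block_dequantize_idct(coeffs, block=8, step=16.0):
    """Decoder side: rescale each block and take the inverse DCT."""
    h, w = coeffs.shape
    img = np.zeros_like(coeffs, dtype=float)
    for i in range(0, h, block):
        for j in range(0, w, block):
            img[i:i+block, j:j+block] = idctn(coeffs[i:i+block, j:j+block] * step,
                                              norm='ortho')
    return img

# Example on a random "image" whose dimensions are multiples of the block size.
rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(64, 64)).astype(float)
rec = block_dequantize_idct(block_dct_quantize(img))
print("mean squared error:", np.mean((img - rec) ** 2))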

24 ELE 488 F06 Discrete Cosine Transform (DCT): f(i,j) → F(u,v)
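The definition itself appears only as a figure on the slide. For an N×N block the standard 2-D DCT (the form used by JPEG with N = 8) is

F(u,v) = α(u)·α(v) · Σ_{i=0..N–1} Σ_{j=0..N–1} f(i,j) · cos[(2i+1)uπ / 2N] · cos[(2j+1)vπ / 2N]

with α(0) = √(1/N) and α(u) = √(2/N) for u > 0.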

25 ELE 488 F06 flower

26 ELE 488 F06 DCT of Flower

27 ELE 488 F06 Histogram of DCT

28 ELE 488 F06 Blockwise DCT

29 ELE 488 F06 Blockwise DCT, enlarged

30 ELE 488 F06 Blockwise DCT with DC values removed

31 ELE 488 F06 Blockwise DCT with DC values removed - enlarged

32 ELE 488 F06 Image as pixels and in the transform (DCT) domain
– Blockwise DCT reflects local information better than a DCT of the entire image
– DC values are special
– Most of the information lies at low frequencies
– Many high-frequency coefficients are small or zero
Use these observations for efficient encoding of images in JPEG.

33 ELE 488 F06 JPEG Still Image Coding: putting the pieces together. Note: lossy, block-based, transform coding.

34 ELE 488 F06 cosine functions

35 ELE 488 F06 DCT and Zig-Zag Ordering (from low frequency to high)
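A minimal sketch of the zig-zag scan for an 8×8 block, using the JPEG convention for the scan direction; the helper names are illustrative.

import numpy as np

def zigzag_indices(n=8):
    """Return the (row, col) pairs of an n x n block in zig-zag order."""
    # Coefficients are visited anti-diagonal by anti-diagonal (low to high
    # frequency); odd diagonals are walked with the row index increasing,
    # even diagonals with the row index decreasing.
    return sorted(((i, j) for i in range(n) for j in range(n)),
                  key=lambda ij: (ij[0] + ij[1],
                                  ij[0] if (ij[0] + ij[1]) % 2 else ij[1]))

def zigzag_scan(block):
    """Flatten a square block of DCT coefficients in zig-zag order."""
    return np.array([block[i, j] for i, j in zigzag_indices(block.shape[0])])

# Example: scanning a block whose entries are row + column index
# shows the low-to-high frequency ordering.
block = np.add.outer(np.arange(8), np.arange(8))
print(zigzag_scan(block)[:10])   # [0 1 1 2 2 2 3 3 3 3]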

36 ELE 488 F06 "Standard" Quantization Table (step sizes)
----------------------------------
16  11  10  16  24  40  51  61
12  12  14  19  26  58  60  55
14  13  16  24  40  57  69  56
14  17  22  29  51  87  80  62
18  22  37  56  68 109 103  77
24  35  55  64  81 104 113  92
49  64  78  87 103 121 120 101
72  92  95  98 112 100 103  99
----------------------------------
Often an integer multiple of this table is used as the step size.
Quantization controls the number of bits (compression ratio) and the quality: a larger step leads to poorer quality but higher compression.
A code sketch of quantizing a block with this table follows below.
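A minimal sketch of quantizing one 8×8 block of DCT coefficients with the table above, scaled by an integer factor; the 128 level shift follows JPEG practice, and the names and test block are illustrative.

import numpy as np
from scipy.fft import dctn

# "Standard" luminance quantization table from the slide.
Q = np.array([
    [16, 11, 10, 16, 24, 40, 51, 61],
    [12, 12, 14, 19, 26, 58, 60, 55],
    [14, 13, 16, 24, 40, 57, 69, 56],
    [14, 17, 22, 29, 51, 87, 80, 62],
    [18, 22, 37, 56, 68, 109, 103, 77],
    [24, 35, 55, 64, 81, 104, 113, 92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103, 99],
])

def quantize_block(block, scale=1):
    """DCT an 8x8 pixel block and quantize each coefficient with step scale * Q."""
    coeffs = dctn(block - 128.0, norm='ortho')   # level shift, then 2-D DCT
    return np.round(coeffs / (scale * Q)).astype(int)

# Larger scale -> larger steps -> more zero coefficients -> higher compression,
# lower quality.
rng = np.random.default_rng(2)
block = rng.integers(0, 256, size=(8, 8)).astype(float)
for s in (1, 2, 4):
    q = quantize_block(block, scale=s)
    print(f"scale {s}: {np.count_nonzero(q == 0)} of 64 coefficients are zero")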

