EE465: Introduction to Digital Image Processing, Lossless Image Compression (presentation transcript)

Slide 1: Lossless Image Compression
• Recall: run-length coding of binary and graphic images
  – Why does it not work for gray-scale images?
  – Image modeling revisited
• Predictive coding of gray-scale images
  – 1D predictive coding: DPCM
  – 2D fixed and adaptive prediction
  – Applications: Lossless JPEG and JPEG-LS

Slide 2: Lossless Image Compression
• No information loss, i.e., the decoded image is mathematically identical to the original image
  – For some sensitive data, such as document or medical images, information loss is simply unacceptable
  – For others, such as photographic images, we only care about the subjective quality of the decoded images (not the fidelity to the original)

Slide 3: Data Compression Paradigm
discrete source X → [source modeling] → Y → [entropy coding] → binary bit stream
(probability estimation supplies P(Y) to the entropy coder)
The art of data compression is the art of source modeling. Probabilities can be estimated by counting relative frequencies, either online or offline.
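A minimal Python/NumPy sketch of offline relative-frequency estimation (the function name and the toy sequence are our own, for illustration only):

```python
import numpy as np

def estimate_probabilities(symbols):
    """Estimate P(Y) by offline relative-frequency counting."""
    values, counts = np.unique(np.asarray(symbols), return_counts=True)
    return dict(zip(values.tolist(), (counts / counts.sum()).tolist()))

# Toy source: P(0) = 1/2, P(1) = 1/3, P(2) = 1/6
print(estimate_probabilities([0, 0, 1, 2, 0, 1]))
```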

Slide 4: Recall: Run-Length Coding
discrete source X → [transformation by run-length counting] → Y → [Huffman coding] → binary bit stream
(probability estimation supplies P(Y) to the Huffman coder)
Y is the sequence of run-lengths, from which X can be recovered losslessly.
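A minimal sketch of run-length coding of a binary sequence (our own toy implementation, not the JPEG/fax code tables): Y is the run-length sequence, plus the value of the first bit so that X can be reconstructed exactly.

```python
def rle_encode(bits):
    """Emit the first bit value, then the lengths of successive runs."""
    runs, count = [], 1
    for prev, cur in zip(bits, bits[1:]):
        if cur == prev:
            count += 1
        else:
            runs.append(count)
            count = 1
    runs.append(count)
    return bits[0], runs

def rle_decode(first_bit, runs):
    """Invert rle_encode: alternate the bit value over the run lengths."""
    out, bit = [], first_bit
    for r in runs:
        out.extend([bit] * r)
        bit = 1 - bit
    return out

bits = [0, 0, 0, 1, 1, 0, 1, 1, 1, 1]
first, runs = rle_encode(bits)       # first = 0, runs = [3, 2, 1, 4]
assert rle_decode(first, runs) == bits
```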

Slide 5: Image Example
An 8x8 block of gray-scale pixel values (rows by columns):

  156 159 158 155 158 156 159 158
  160 154 157 158 157 159 158 158
  156 159 158 155 158 156 159 158
  160 154 157 158 157 159 158 158
  156 153 155 159 159 155 156 155
  155 155 155 157 156 159 152 158
  156 153 157 156 153 155 154 155
  159 159 156 158 156 159 157 161

The maximum run length (counting runs in raster-scan order) is only Run_max = 4.

Slide 6: Why Short Runs?

Slide 7: Why is RLC Bad for Gray-Scale Images?
• Gray-scale (also called "photographic") images have hundreds of different gray levels
• Since gray-scale images are acquired from the real world, noise contamination is inevitable
You simply cannot get a long RUN in a gray-scale image.

Slide 8: Source Modeling Techniques
• Prediction: predict the future based on the causal past
• Transformation: transform the source into an equivalent yet more convenient representation
• Pattern matching: identify and represent repeated patterns

Slide 9: The Idea of Prediction
• Remarkably simple: just follow the trend
  – Example I: X is a normal person's temperature variation through the day
  – Example II: X is the intensity values of the first row of the cameraman image
• Markovian school (short memory)
  – Prediction does not rely on data from long ago but on the most recent samples (e.g., your temperature in the evening is more correlated with that in the afternoon than with that in the morning)

Slide 10: 1D Predictive Coding (1st-order Linear Prediction)
Original samples x_1, x_2, …, x_N are mapped to prediction residues y_1, y_2, …, y_N.

Encoding:  y_1 = x_1 (initialization);  y_n = x_n - x_{n-1},  n = 2, …, N
Decoding:  x_1 = y_1 (initialization);  x_n = y_n + x_{n-1},  n = 2, …, N
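A minimal NumPy sketch of this encoder/decoder pair (function names are our own):

```python
import numpy as np

def dpcm_encode(x):
    """1st-order prediction: y[0] = x[0]; y[n] = x[n] - x[n-1]."""
    y = x.copy()
    y[1:] = x[1:] - x[:-1]
    return y

def dpcm_decode(y):
    """A running sum inverts the encoder: x[n] = y[n] + x[n-1]."""
    return np.cumsum(y)

x = np.array([90, 92, 91, 93, 95])       # the example on the next slide
y = dpcm_encode(x)                       # [90, 2, -1, 2, 2]
assert np.array_equal(dpcm_decode(y), x)
```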

Slide 11: Numerical Example
original samples:     90  92  91  93  95   (encoder computes a - b)
prediction residues:  90   2  -1   2   2   (decoder computes a + b)
decoded samples:      90  92  91  93  95

Slide 12: Image Example (take one row)
[Figure: original row signal x (left) and its histogram (right)]
H(X) = 6.56 bpp

Slide 13: Source Entropy Revisited
• How do we calculate the "entropy" of a given sequence (or image)?
  – Obtain the histogram by relative-frequency counting
  – Normalize the histogram to obtain probabilities P_k = Prob(X = k), k = 0, …, 255
  – Plug the probabilities into the entropy formula H(X) = -Σ_k P_k log2 P_k
(You will be asked to implement this in the assignment; a sketch follows.)
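A minimal NumPy sketch of this three-step recipe (the assignment asks for your own implementation; the function name here is our own):

```python
import numpy as np

def empirical_entropy(img):
    """i.i.d. entropy estimate in bits/pixel:
    histogram -> probabilities -> -sum(p * log2(p))."""
    _, counts = np.unique(np.asarray(img).ravel(), return_counts=True)
    p = counts / counts.sum()               # normalized histogram
    return float(-np.sum(p * np.log2(p)))   # zero-count bins never appear

# A uniformly random 8-bit image should come out close to 8 bpp.
img = np.random.randint(0, 256, size=(64, 64))
print(empirical_entropy(img))
```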

Slide 14: Cautious Notes
• The entropy value calculated on the previous slide should be understood as the result of choosing to model the image as an independent, identically distributed (i.i.d.) random variable.
• It does not take the spatially correlated and spatially varying characteristics of the image into account
  – The true entropy is smaller!

Slide 15: Image Example (con't)
[Figure: prediction residue signal y (left) and its histogram (right)]
H(Y) = 4.80 bpp

Slide 16: Interpretation
• H(Y) < H(X) justifies the role of prediction (intuitively, it decorrelates the signal)
• Similarly, H(Y) is the result of choosing to model the residue image as an i.i.d. random variable
  – It is an improved model compared with X, thanks to the prediction
  – The true entropy is still smaller!

Slide 17: High-order 1D Predictive Coding (k-th order Linear Prediction)
Original samples x_1, …, x_N are mapped to residues y_1, …, y_N; the first k samples are passed through as initialization.

Encoding:  y_i = x_i, i = 1, …, k;   y_n = x_n - Σ_{i=1}^{k} a_i x_{n-i},  n = k+1, …, N
Decoding:  x_i = y_i, i = 1, …, k;   x_n = y_n + Σ_{i=1}^{k} a_i x_{n-i},  n = k+1, …, N
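A sketch of the k-th order scheme with fixed coefficients a_1, …, a_k; the slides do not specify the coefficients, so the values below are illustrative assumptions (a = [1] recovers the 1st-order DPCM above, a = [2, -1] extrapolates a linear trend):

```python
import numpy as np

def lp_encode(x, a):
    """k-th order prediction: the first k samples pass through,
    then y[n] = x[n] - sum_i a[i] * x[n-1-i]."""
    k = len(a)
    y = x.astype(float)
    for n in range(k, len(x)):
        y[n] = x[n] - np.dot(a, x[n-k:n][::-1])
    return y

def lp_decode(y, a):
    """Mirror of the encoder, run on already-decoded samples."""
    k = len(a)
    x = y.astype(float)
    for n in range(k, len(y)):
        x[n] = y[n] + np.dot(a, x[n-k:n][::-1])
    return x

x = np.array([90.0, 92, 91, 93, 95])
a = np.array([2.0, -1.0])            # assumed 2nd-order trend predictor
assert np.allclose(lp_decode(lp_encode(x, a), a), x)
```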

Slide 18: Why High-order?
• By looking at more past samples, we can make a better prediction of the current one
  – Compare guessing the next letter after c_, ic_, dic_, and predic_
• It is a tradeoff between performance and complexity
  – The performance gain quickly diminishes as the order increases
  – The optimal order is often signal-dependent

Slide 19: 1D Predictive Coding Summary
discrete source X → [linear prediction] → Y → [entropy coding] → binary bit stream
(probability estimation supplies P(Y) to the entropy coder)
The prediction residue sequence Y usually contains less uncertainty (entropy) than the original sequence X.

Slide 20: From 1D to 2D
[Figure: in 1D, the causal past of X(n) is everything before position n and the future is everything after it; in 2D, a scanning order (raster scanning or zigzag scanning) must be imposed to define the causal past]

Slide 21: 2D Predictive Coding
[Figure: in raster-scanning order, the causal half-plane of pixel X_{m,n} consists of all pixels already visited: the rows above, plus the pixels to the left in the current row]

Slide 22: Ordering Causal Neighbors
X_k: the k nearest causal neighbors of X_{m,n}, ordered by Euclidean distance (neighbors 1-6 in the figure are the pixels closest to X_{m,n} in the causal half-plane). The prediction residue is

  Y_{m,n} = X_{m,n} - X̂_{m,n},  where the predicted value X̂_{m,n} = Σ_{i=1}^{k} a_i X_i is a linear combination of the causal neighbors.

Slide 23: Lossless JPEG
Causal neighbors of the current pixel x: w (west), n (north), nw (northwest).

  #   Predictor        Notes
  1   w                horizontal
  2   n                vertical
  3   nw               diagonal
  4   n + w - nw       3rd-order
  5   w + (n - nw)/2   3rd-order
  6   n + (w - nw)/2   3rd-order
  7   (n + w)/2        2nd-order
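A sketch of the predictor table as a function; the function name is our own, and integer division stands in for the standard's integer arithmetic:

```python
def ljpeg_predict(w, n, nw, mode):
    """The seven Lossless JPEG predictors; w, n, nw are the west,
    north, and northwest causal neighbors of the current pixel."""
    table = {
        1: w,                    # horizontal
        2: n,                    # vertical
        3: nw,                   # diagonal
        4: n + w - nw,           # 3rd-order (plane fit)
        5: w + (n - nw) // 2,    # 3rd-order
        6: n + (w - nw) // 2,    # 3rd-order
        7: (n + w) // 2,         # 2nd-order (average)
    }
    return table[mode]

print(ljpeg_predict(w=100, n=50, nw=100, mode=4))   # n + w - nw = 50
```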

Slide 24: Numerical Examples
1D: X = [156 159 158 155] → (horizontal predictor, a - b) → Y = [156 3 -1 -3]

2D (horizontal predictor, applied to each row; initialization: no prediction is applied to the first column):

  X = 156 159 158 155        Y = 156   3  -1  -3
      160 154 157 158            160  -6   3   1
      156 159 158 155            156   3  -1  -3
      160 154 157 158            160  -6   3   1

Note: 2D horizontal prediction can be viewed as the vector (row-by-row) case of 1D prediction.

Slide 25: Numerical Examples (con't)
2D (vertical predictor, applied to each column; initialization: no prediction is applied to the first row):

  X = 156 159 158 155        Y = 156 159 158 155
      160 154 157 158              4  -5  -1   3
      156 159 158 155             -4   5   1  -3
      160 154 157 158              4  -5  -1   3

Note: 2D vertical prediction can be viewed as the vector (column-by-column) case of 1D prediction.
Q: Given a function that implements horizontal prediction, can you use it to implement vertical prediction?
A: Apply horizontal prediction to the transpose of the image, then transpose the prediction residue back.
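A sketch of both predictors; the vertical one is implemented exactly by the transpose trick from the slide (function names are our own):

```python
import numpy as np

def predict_horizontal(X):
    """Row-wise prediction: keep the first column, subtract west neighbors."""
    Y = X.astype(int)
    Y[:, 1:] = X[:, 1:] - X[:, :-1]
    return Y

def predict_vertical(X):
    """Transpose, predict horizontally, transpose the residue back."""
    return predict_horizontal(X.T).T

X = np.array([[156, 159, 158, 155],
              [160, 154, 157, 158],
              [156, 159, 158, 155],
              [160, 154, 157, 158]])
print(predict_horizontal(X))   # first row:  [156, 3, -1, -3]
print(predict_vertical(X))     # second row: [4, -5, -1, 3]
```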

Slide 26: Image Examples
[Figure: comparison of residue images generated by different predictors: horizontal predictor, H(Y) = 5.05 bpp (left); vertical predictor, H(Y) = 4.67 bpp (right)]
Q: Why does the vertical predictor outperform the horizontal predictor?

Slide 27: Analysis with a Simplified Edge Model
[Figure: two edge models built from gray levels 100 and 50: V_edge, a vertical edge (each row is [100 50]), and H_edge, a horizontal edge (a row of 100s above a row of 50s). The mismatched predictor (H_predictor across V_edge, or V_predictor across H_edge) gives residues Y = ±50; the matched predictor gives Y = 0.]
Conclusion: when the direction of the predictor matches the direction of the edges, the prediction residues are small.

Slide 28: Horizontal vs. Vertical
Do we see more vertical edges than horizontal edges in natural images? Maybe yes, but why?

Slide 29: Importance of Adaptation
• Wouldn't it be nice if we could switch the direction of the predictor to locally match the edge direction?
• The concept of adaptation was conceived several thousand years ago, in an ancient Chinese story about how to win a horse race:

    horse    emperor    general
    good       90    >    80
    fair       70    >    60
    poor       50    >    40

  How to win? (Race the general's poor horse against the emperor's good one, his good against the emperor's fair, and his fair against the emperor's poor: the general wins two races out of three.)

Slide 30: Median Edge Detection (MED) Prediction
Key: with causal neighbors w (west), n (north), and nw (northwest) of the current pixel x, MED uses the median operator to adaptively select one of three candidates (predictors #1, #2, #4 of the Lossless JPEG table on slide 23) as the predicted value:

  x̂ = median(n, w, n + w - nw)

Slide 31: Another Way of Implementation
  If nw ≥ max(n, w):       x̂ = min(n, w)
  else if nw ≤ min(n, w):  x̂ = max(n, w)
  else:                    x̂ = n + w - nw

Q: Which implementation is faster? You need to find out using MATLAB yourself.
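The two implementations side by side, as a sketch (function names are our own); timing them against each other, e.g. with MATLAB's tic/toc or Python's timeit, answers the speed question empirically:

```python
def med_median(n, w, nw):
    """MED as the median of the three candidates n, w, n + w - nw."""
    return sorted((n, w, n + w - nw))[1]

def med_branch(n, w, nw):
    """Equivalent branch form (the variant used in LOCO-I/JPEG-LS)."""
    if nw >= max(n, w):
        return min(n, w)
    if nw <= min(n, w):
        return max(n, w)
    return n + w - nw
```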

Slide 32: Proof by Enumeration
Case 1: nw > max(n, w)
  – If nw > n > w, then n - nw < 0 and w - nw < 0, so n + w - nw < w < n and median(n, w, n+w-nw) = min(n, w) = w
  – If nw > w > n, then n - nw < 0 and w - nw < 0, so n + w - nw < n < w and median(n, w, n+w-nw) = min(n, w) = n
Case 2: nw < min(n, w)
  – If nw < n < w, then n - nw > 0 and w - nw > 0, so n + w - nw > w > n and median(n, w, n+w-nw) = max(n, w) = w
  – If nw < w < n, then n - nw > 0 and w - nw > 0, so n + w - nw > n > w and median(n, w, n+w-nw) = max(n, w) = n
Case 3: n < nw < w or w < nw < n
  – n + w - nw also lies between n and w, so median(n, w, n+w-nw) = n + w - nw
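The case analysis can also be checked by brute force; this snippet reuses med_median and med_branch from the sketch above and enumerates a small range of neighbor values:

```python
# Exhaustive check: the median form and the branch form agree everywhere.
assert all(med_median(n, w, nw) == med_branch(n, w, nw)
           for n in range(16) for w in range(16) for nw in range(16))
print("MED implementations agree on all 16^3 neighbor combinations")
```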

Slide 33: Numerical Examples
Using the edge models from slide 27 (gray levels 100 and 50):
• V_edge: n = 50, w = 100, nw = 100 → nw ≥ max(n, w), so x̂ = min(n, w) = 50 (and n + w - nw = 50)
• H_edge: n = 100, w = 50, nw = 100 → nw ≥ max(n, w), so x̂ = min(n, w) = 50 (and n + w - nw = 50)
Note how we get zero prediction residues regardless of the edge direction.

Slide 34: Image Example
[Figure: residue images: fixed vertical predictor, H = 4.67 bpp (left); adaptive (MED) predictor, H = 4.55 bpp (right)]

Slide 35: JPEG-LS (the new standard for lossless image compression)

Slide 36: Summary of Lossless Image Compression
• Importance of modeling the image source
  – Different classes of images need to be handled by different modeling techniques, e.g., RLC for binary/graphic images and prediction for photographic images
• Importance of geometry
  – Images are two-dimensional signals
  – In the 2D world, issues such as scanning order and orientation are critical to modeling

