
1 Introduction
- Lossless compression of grey-scale images
- TMW achieves the world's best lossless image compression
  - 3.85 bpp on Lenna
- Reasons for this performance are unclear
  - No intermediate results are given, only final, optimal values

2 Image Compression
- Compression = Modelling + Coding
  - Model: assign probabilities to symbols
  - Coding: transmit symbols; code length = -log2(P)
- Pixels are encoded in raster order
  - Conventional reading order: left-to-right, top-to-bottom
  - Previously encoded pixels are known to both encoder and decoder
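As a minimal sketch of the coding side, the ideal code length for a symbol of probability P is -log2(P) bits; the Python snippet below (purely illustrative) just evaluates that relation for a few probabilities:

```python
import math

def ideal_code_length(p):
    """Ideal code length in bits for a symbol assigned probability p."""
    return -math.log2(p)

# A well-predicted pixel (high probability) costs few bits,
# a poorly predicted one costs many.
for p in (0.5, 0.1, 0.01):
    print(f"P = {p:<4} -> {ideal_code_length(p):.2f} bits")
```

This is why good modelling matters: the better the model's probability for the symbol that actually occurs, the shorter the code.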

3 Prediction
- Causal neighbours can be used to predict the value of the current pixel
  - The standard predictor is a linear combination of neighbour values
[Diagram: the current pixel and its causal neighbours]
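A minimal sketch of such a linear predictor, assuming a hypothetical three-neighbour layout (left, above, above-left) and illustrative weights rather than any particular scheme's actual coefficients:

```python
import numpy as np

def predict_pixel(img, y, x, coeffs=(0.5, 0.25, 0.25)):
    """Predict img[y, x] as a linear combination of three causal
    neighbours: left (W), above (N), above-left (NW).
    The neighbour set and weights are illustrative, not TMW's."""
    w, n, nw = img[y, x - 1], img[y - 1, x], img[y - 1, x - 1]
    a, b, c = coeffs
    return a * w + b * n + c * nw

img = np.array([[100, 102, 104],
                [101, 103, 120]], dtype=float)
print(predict_pixel(img, 1, 2))   # prediction for the bottom-right pixel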

4 Prediction
- CALIC and JPEG-LS use DPCM
  - The prediction error is coded instead of the pixel value
  - Errors tend to follow a Laplacian distribution
  - Errors have lower entropy than raw pixel values
[Plot: sample error distribution, peaked at the mean μ]
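To illustrate why coding errors is cheaper, the sketch below compares the empirical entropy of raw pixel values with that of simple left-neighbour DPCM errors; the synthetic ramp image and the one-neighbour predictor are assumptions chosen only to make the point:

```python
import numpy as np

def entropy_bits(values, bins, range_):
    """Empirical entropy (bits/symbol) of a histogram of values."""
    hist, _ = np.histogram(values, bins=bins, range=range_)
    p = hist / hist.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

# Illustrative: a smooth synthetic image and its left-neighbour DPCM errors.
img = np.add.outer(np.arange(64), np.arange(64))            # smooth ramp
errors = img[:, 1:].astype(int) - img[:, :-1].astype(int)   # DPCM errors

print("raw pixels :", entropy_bits(img.ravel(), 256, (0, 256)), "bits/symbol")
print("DPCM errors:", entropy_bits(errors.ravel(), 511, (-255, 256)), "bits/symbol")
```

The error histogram is far more concentrated than the raw-value histogram, so its entropy (and hence the achievable code length) is lower.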

5 TMW
- World's best lossless image compression program
  - Blending of prediction error distributions
  - Use of information from the local region
  - Use of a large number of causal neighbours
  - Optimisation of parameters with respect to message length
- We examine the first three points

6 Blending
- Test predictors on a local window of pixels
  - Calculate what the error would have been at the causal neighbours
  - The spread of errors gives a measure of how effective the predictor has been in the region
[Diagram: the current pixel and its local window]
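A minimal sketch of this step, assuming a square causal window and a predictor with the same call signature as the earlier example; the mean absolute error over the window stands in for the "spread" here:

```python
import numpy as np

def local_error_spread(img, y, x, predictor, window=6):
    """Mean absolute error that `predictor` would have made on the
    already-coded pixels in a small window above/left of (y, x).
    The square window shape and MAE measure are illustrative choices."""
    errs = []
    for yy in range(max(1, y - window), y + 1):
        for xx in range(max(1, x - window), x + 1):
            if yy == y and xx >= x:      # skip the (not yet coded) current pixel
                continue
            errs.append(abs(float(img[yy, xx]) - predictor(img, yy, xx)))
    return float(np.mean(errs)) if errs else float("inf")
```

A predictor with a small spread in the window is doing well locally and would deserve a larger blending weight; one with a large spread would be down-weighted.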

7 Blending
- Generate Laplacian distributions
  - The mean is the predicted value plus the mean error in the window
  - The variance is calculated from the spread of errors in the window
- Blend the probability distributions to give the final probability distribution
[Diagram: two distributions blended with weights 0.7 and 0.3]
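A sketch of the blending step, assuming discretised Laplacians over the 0-255 pixel range; the means and widths are made-up numbers, and the 0.7/0.3 weights are taken from the slide's example:

```python
import numpy as np

PIXEL_VALUES = np.arange(256)

def laplacian_pmf(mean, b):
    """Discretised Laplacian over pixel values 0..255, renormalised to sum to 1."""
    p = np.exp(-np.abs(PIXEL_VALUES - mean) / b)
    return p / p.sum()

# Each predictor contributes a distribution centred on its (bias-corrected)
# prediction, with width b set from the spread of its window errors.
p1 = laplacian_pmf(mean=118.0, b=3.0)    # illustrative numbers
p2 = laplacian_pmf(mean=131.0, b=9.0)

blended = 0.7 * p1 + 0.3 * p2            # weights sum to 1, so this is still a PMF
actual_value = 121
print(-np.log2(blended[actual_value]), "bits to code the actual pixel value")
```

The blended distribution is what the arithmetic coder would use: a predictor that is confident and right makes the coded value cheap, while the other component keeps the cost bounded when it is wrong.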

8 Window Size
- Window size involves a trade-off
  - A larger window means using more pixels from the current segment
  - But it also increases the likelihood of using pixels from neighbouring segments

9 Window Size
- Two windows are used, for calculating:
  - Blending weights
  - Mean/spread of the distribution
[Plots: code length (bpp) vs. window size — about 4.05 bpp at a blending window of 36 pixels and at a distribution window of 78 pixels]

10 Locally Trained Predictor
- The blending scheme rewards predictors with a low error norm in the local window
- Find the predictor that minimises this norm
  - The local window contains N pixels and N corresponding causal-neighbour vectors
  - Perform multiple linear regression to find the optimal coefficients
  - Can minimise |e|^2 or |e|
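A minimal sketch of the |e|^2 case using ordinary least squares; minimising |e| instead would need an iterative scheme (e.g. iteratively reweighted least squares), which is not shown. The neighbour matrix and values below are made up purely for illustration:

```python
import numpy as np

def fit_local_predictor(neighbour_matrix, actual_values):
    """Least-squares fit of predictor coefficients over the local window.
    `neighbour_matrix` is (N window pixels) x (number of causal neighbours);
    minimises |e|^2 where e = actual_values - neighbour_matrix @ coeffs."""
    coeffs, *_ = np.linalg.lstsq(neighbour_matrix, actual_values, rcond=None)
    return coeffs

# Illustrative usage: 5 window pixels, 3 causal neighbours each.
N = np.array([[100, 101,  99],
              [102, 103, 101],
              [104, 105, 103],
              [101, 102, 100],
              [103, 104, 102]], dtype=float)
actual = np.array([101, 103, 105, 102, 104], dtype=float)

coeffs = fit_local_predictor(N, actual)
prediction = np.array([103, 104, 102], dtype=float) @ coeffs  # current pixel's neighbours
```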

11 Neighbourhood Size
- Noise plays a large part in the code length
  - Consider p = p' + N(μ, σ)
- More neighbours tend to reduce the impact of noise
  - More neighbours means more noise terms in the predicted value
  - The noise terms have smaller coefficients, and tend to cancel out more
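A small simulation of this effect, under the simplifying assumption that every neighbour sees the true value plus independent Gaussian noise and that the predictor weights all k neighbours equally (1/k each); the noise in the prediction then shrinks roughly as 1/sqrt(k):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, trials = 4.0, 100_000
true_value = 100.0

for k in (2, 4, 8, 16):                       # number of causal neighbours
    # Each neighbour observes the true value plus independent noise;
    # the predictor averages them with equal weights 1/k.
    neighbours = true_value + rng.normal(0, sigma, size=(trials, k))
    preds = neighbours.mean(axis=1)
    print(f"{k:2d} neighbours -> prediction std {preds.std():.2f} "
          f"(theory {sigma / np.sqrt(k):.2f})")
```

Real predictor coefficients are not equal, but the same cancellation argument applies: spreading weight over more noise terms lowers the noise variance of the prediction.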

12 Future Work
- Global optimisation
  - A full investigation of its benefits has yet to take place
- Texture modelling
- Segmentation
  - Transmit predictors and segment maps before the pixel values
  - Send the most important information first!

13 Conclusions
- Blending PDFs produces a slight benefit
- Local information in prediction is much more valuable than global information
- Global optimisation and large neighbourhood sizes are also important to TMW's success

