
Introduction
- Lossless compression of grey-scale images
- TMW achieves the world's best lossless image compression: 3.85 bpp on Lenna
- Reasons for its performance are unclear: no intermediate results are given, only final, optimal values

Image Compression
- Compression = Modelling + Coding
  - Model: assign probabilities to symbols
  - Coding: transmit symbols with code length = -log2(P) bits
- Pixels are encoded in raster order (conventional reading order: left-to-right, top-to-bottom)
- Previously encoded pixels are known to both encoder and decoder
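The code-length relation above can be checked directly: an ideal entropy coder spends -log2(P) bits on a symbol the model assigns probability P. A minimal sketch:

```python
import math

def code_length(p: float) -> float:
    # Ideal code length in bits for a symbol with model probability p.
    # A well-predicted (high-probability) symbol is cheap to transmit;
    # a surprising one is expensive.
    return -math.log2(p)

print(code_length(0.5))    # 1.0 bit
print(code_length(0.125))  # 3.0 bits
```

This is why better modelling translates straight into fewer bits: raising the probability assigned to the pixel that actually occurs shortens its code.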

Prediction
- Causal neighbours can be used to predict the value of the current pixel
- The standard predictor is a linear combination of neighbour values
[Diagram: the current pixel and its causal neighbours]
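As a concrete instance of such a linear combination, the classic plane predictor from lossless JPEG estimates the current pixel as W + N - NW (these coefficients are illustrative of the idea, not TMW's):

```python
def predict(west: int, north: int, northwest: int) -> int:
    # Plane predictor: a linear combination of three causal
    # neighbours with coefficients (+1, +1, -1).
    return west + north - northwest

# In a smoothly varying region the prediction lands close
# to the true value.
print(predict(100, 102, 101))  # 101
```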

Prediction
- CALIC and JPEG-LS use DPCM: the prediction error is coded instead of the pixel value
- Errors tend to follow a Laplacian distribution and have lower entropy than raw pixel values
[Plot: sample error distribution, peaked around the mean μ]
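The entropy advantage of coding errors is easy to demonstrate on synthetic data: on a smooth scanline the raw values spread over many symbols, while the DPCM residuals cluster near zero.

```python
import math
from collections import Counter

def entropy(values) -> float:
    # Zeroth-order entropy in bits per symbol.
    counts = Counter(values)
    n = len(values)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# Slowly rising ramp: 20 distinct raw values, but the
# pixel-to-pixel differences are only ever 0 or 1.
row = [i // 3 for i in range(60)]
errors = [row[i] - row[i - 1] for i in range(1, len(row))]

print(entropy(row) > entropy(errors))  # True
```

Real images behave the same way in smooth regions, which is what gives the peaked, Laplacian-like error histogram its low entropy.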

TMW
- World's best lossless image compression program. Its performance rests on:
  - Blending of prediction error distributions
  - Use of information from the local region
  - Use of a large number of causal neighbours
  - Optimisation of parameters with respect to message length
- We examine the first three points

Blending
- Test each predictor on a local window of pixels: calculate what its error would have been on the causal neighbours
- The spread of these errors measures how effective the predictor has been in the region
[Diagram: the current pixel and its local window]

Blending
- Generate a Laplacian distribution for each predictor
  - Mean is the predicted value plus the mean error in the window
  - Variance is calculated from the spread of errors in the window
- Blend the probability distributions to give the final probability distribution
[Diagram: weighted sum of per-predictor distributions (e.g. weight 0.7 on one) giving the final blended distribution]
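A minimal sketch of the blend: each predictor contributes a Laplacian centred on its bias-corrected prediction, with scale set by its recent errors, and the final density is a weighted sum. The weights and parameters below are illustrative, not values from TMW.

```python
import math

def laplacian_pdf(x: float, mu: float, b: float) -> float:
    # Laplacian density with mean mu and scale b.
    return math.exp(-abs(x - mu) / b) / (2 * b)

def blended_pdf(x: float, components) -> float:
    # components: list of (weight, mu, b) triples; weights sum to 1,
    # so the blend is itself a valid probability density.
    return sum(w * laplacian_pdf(x, mu, b) for w, mu, b in components)

# Three hypothetical predictors with blending weights 0.7 / 0.2 / 0.1.
mix = [(0.7, 100.0, 2.0), (0.2, 104.0, 5.0), (0.1, 96.0, 8.0)]
p = blended_pdf(100.0, mix)
```

A predictor that has been accurate in the window gets a high weight and a tight (low-scale) Laplacian, so it dominates the blend near its prediction.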

Window Size
- Window size involves a trade-off: a larger window uses more pixels from the current segment, but also increases the likelihood of including pixels from neighbouring segments

Window Size
- Two windows are used, one for calculating the blending weights and one for the mean/spread of each distribution
[Plots: bpp against blending window size (36 pixels, 4.05 bpp) and against distribution window size (78 pixels, 4.05 bpp)]

Locally Trained Predictor
- The blending scheme rewards predictors with a low error norm in the local window, so find the predictor that minimises this norm
  - The local window contains N pixels and N corresponding causal-neighbour vectors
  - Perform multiple linear regression to find the optimal coefficients
  - Can minimise |e|² or |e|
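For the |e|² case, the regression above is an ordinary least-squares fit. A minimal sketch, with toy data generated from known coefficients (minimising |e| instead would need something like iteratively reweighted least squares, not shown here):

```python
import numpy as np

def train_local_predictor(neighbour_vectors, targets):
    # neighbour_vectors: N x k design matrix, one row of causal
    # neighbour values per window pixel.
    # targets: the N window pixel values themselves.
    A = np.asarray(neighbour_vectors, dtype=float)
    y = np.asarray(targets, dtype=float)
    # Least-squares solution minimises |A @ coeffs - y|^2.
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coeffs

# Hypothetical window data built from coefficients (0.5, 0.5),
# so the regression should recover them exactly.
A = [[100, 102], [110, 108], [90, 94], [120, 118]]
y = [0.5 * w + 0.5 * n for w, n in A]
coeffs = train_local_predictor(A, y)
```

The fitted coefficients are then applied to the current pixel's own causal-neighbour vector to produce its prediction.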

Neighbourhood Size
- Noise plays a large part in code length: consider p = p′ + N(μ, σ)
- More neighbours tend to reduce the impact of noise: the predicted value contains more noise terms, but each has a smaller coefficient, so they tend to cancel out more
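The cancellation effect can be simulated: if each neighbour observes the true value plus independent noise, a prediction averaging k neighbours (equal small coefficients) has noise variance σ²/k, so its error spread shrinks as k grows. The values below are synthetic, not measurements from TMW.

```python
import random
import statistics

random.seed(0)
SIGMA = 4.0  # noise standard deviation per neighbour

def prediction_error_spread(k: int, trials: int = 5000) -> float:
    # Standard deviation of the prediction error when averaging
    # k noisy observations of the same true value.
    errs = []
    for _ in range(trials):
        neighbours = [100 + random.gauss(0, SIGMA) for _ in range(k)]
        prediction = sum(neighbours) / k  # k coefficients of 1/k each
        errs.append(prediction - 100)
    return statistics.pstdev(errs)

print(prediction_error_spread(2) > prediction_error_spread(8))  # True
```

A tighter error spread means a tighter Laplacian and hence shorter codes, which is why TMW's large neighbourhoods pay off.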

Future Work
- Global optimisation: a full investigation of its benefits has yet to take place
- Texture modelling
- Segmentation: transmit predictors and segment maps before the pixel values, sending the most important information first!

Conclusions
- Blending PDFs produces a slight benefit
- Local information in prediction is much more valuable than global information
- Global optimisation and large neighbourhood sizes are also important to TMW's success