EE465: Introduction to Digital Image Processing
Lossless Image Compression


Lossless Image Compression
• Recall: run-length coding of binary and graphic images
 – Why does it not work for gray-scale images?
 – Image modeling revisited
• Predictive coding of gray-scale images
 – 1D predictive coding: DPCM
 – 2D fixed and adaptive prediction
 – Applications: lossless JPEG and JPEG-LS

Lossless Image Compression
• No information loss: the decoded image is mathematically identical to the original image
 – For sensitive data such as document or medical images, information loss is simply unacceptable
 – For others, such as photographic images, we only care about the subjective quality of the decoded image (not the fidelity to the original)

Data Compression Paradigm
discrete source X → [source modeling] → Y → [entropy coding] → binary bit stream, with a probability estimation stage supplying P(Y) to the entropy coder
• The art of data compression is the art of source modeling
• Probabilities can be estimated by counting relative frequencies, either online or offline
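As a concrete illustration (not from the original slides), here is a minimal Python sketch of offline probability estimation by relative-frequency counting; the symbol sequence is hypothetical.

    from collections import Counter

    def estimate_probabilities(symbols):
        """Estimate P(Y) by relative-frequency counting."""
        counts = Counter(symbols)
        total = len(symbols)
        return {sym: c / total for sym, c in counts.items()}

    # Hypothetical symbol sequence Y produced by some source model
    y = ["a", "b", "a", "a", "c", "b", "a", "a"]
    print(estimate_probabilities(y))  # {'a': 0.625, 'b': 0.25, 'c': 0.125}

An online estimator would instead update the counts as each symbol arrives, so that encoder and decoder can track the same evolving P(Y) without it ever being transmitted.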

Recall: Run-Length Coding
discrete source X → [transformation by run-length counting] → Y → [Huffman coding] → binary bit stream, with a probability estimation stage supplying P(Y)
• Y is the sequence of run-lengths, from which X can be recovered losslessly
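The run-length transformation itself is simple; the following Python sketch (an illustration, not the course's reference code) converts a binary sequence X into its run-length sequence Y.

    from itertools import groupby

    def run_lengths(bits):
        """Map a binary sequence X to runs (symbol, length); since the
        symbols of a binary source alternate, X is recoverable from the
        first symbol plus the run lengths alone."""
        return [(sym, len(list(group))) for sym, group in groupby(bits)]

    x = [0, 0, 0, 1, 1, 0, 1, 1, 1, 1]
    print(run_lengths(x))  # [(0, 3), (1, 2), (0, 1), (1, 4)]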

Image Example
[Figure: a binary image with run-length counting along rows and columns; the maximum run length is 4]

Why Short Runs?

Why Is RLC Bad for Gray-Scale Images?
• Gray-scale (also called “photographic”) images have hundreds of different gray levels
• Since gray-scale images are acquired from the real world, noise contamination is inevitable
You simply cannot get long RUNs in a gray-scale image

Source Modeling Techniques
• Prediction
 – Predict the future based on the causal past
• Transformation
 – Transform the source into an equivalent yet more convenient representation
• Pattern matching
 – Identify and represent repeated patterns

The Idea of Prediction
• Remarkably simple: just follow the trend
 – Example I: X is a person’s temperature variation through the day
 – Example II: X is the intensity values of the first row of the cameraman image
• Markovian school (short memory)
 – Prediction relies not on data from long ago but on the most recent samples (e.g., your temperature in the evening is more correlated with the afternoon’s than with the morning’s)

1D Predictive Coding
First-order linear prediction maps the original samples x_1, x_2, ..., x_N to the prediction residues y_1, y_2, ..., y_N:
• Encoding: y_1 = x_1 (initialization); y_n = x_n - x_{n-1}, n = 2, ..., N
• Decoding: x_1 = y_1 (initialization); x_n = y_n + x_{n-1}, n = 2, ..., N
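A minimal Python sketch of the encoder and decoder above (the sample values are hypothetical; only the leading 90 echoes the numerical example that follows):

    def dpcm_encode(x):
        """First-order DPCM: y[0] = x[0]; y[n] = x[n] - x[n-1]."""
        return [x[0]] + [x[n] - x[n - 1] for n in range(1, len(x))]

    def dpcm_decode(y):
        """Inverse: x[0] = y[0]; x[n] = y[n] + x[n-1]."""
        x = [y[0]]
        for n in range(1, len(y)):
            x.append(y[n] + x[n - 1])
        return x

    x = [90, 92, 91, 93, 93, 95]      # hypothetical samples
    y = dpcm_encode(x)
    print(y)                          # [90, 2, -1, 2, 0, 2]
    assert dpcm_decode(y) == x        # lossless round trip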

Numerical Example
[Figure: original samples starting at 90, their prediction residues, and the decoded samples; the encoder replaces each sample a by a - b, where b is the previous sample, and the decoder replaces each residue a by a + b, where b is the previous decoded sample]

Image Example (take one row)
[Figure: original row signal x (left) and its histogram (right); H(X) = 6.56 bpp]

Source Entropy Revisited
• How do we calculate the “entropy” of a given sequence (or image)?
 – Obtain the histogram by relative frequency counting
 – Normalize the histogram to obtain probabilities P_k = Prob(X = k), k = 0, ..., 255
 – Plug the probabilities into the entropy formula: H(X) = -Σ_k P_k log2 P_k
You will be asked to implement this in the assignment
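A possible implementation of these three steps (a sketch only, assuming 8-bit pixel values; the test image here is random noise, not course data):

    import numpy as np

    def entropy_bpp(image):
        """Empirical entropy in bits/pixel under an i.i.d. model:
        histogram -> normalize to probabilities -> H = -sum(P_k log2 P_k)."""
        hist = np.bincount(np.asarray(image, dtype=np.uint8).ravel(),
                           minlength=256)
        p = hist / hist.sum()
        p = p[p > 0]                  # empty bins contribute 0 * log 0 := 0
        return float(-np.sum(p * np.log2(p)))

    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, size=(64, 64))   # hypothetical test image
    print(f"{entropy_bpp(img):.2f} bpp")        # near 8 bpp for uniform noise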

Cautionary Notes
• The entropy value calculated on the previous slide must be understood as the result of modeling the image as an independent, identically distributed (i.i.d.) random variable
• It does not take spatial correlation or spatially varying characteristics into account
 – The true entropy is smaller!

Image Example (cont’d)
[Figure: prediction residue signal y (left) and its histogram (right); H(Y) = 4.80 bpp]

Interpretation
• H(Y) < H(X) justifies the role of prediction (intuitively, it decorrelates the signal)
• Similarly, H(Y) is the result of modeling the residue image as an independent, identically distributed (i.i.d.) random variable
 – It is an improved model compared with X, thanks to the prediction
 – The true entropy is smaller still!

High-Order 1D Predictive Coding
k-th order linear prediction of the original samples x_1, x_2, ..., x_N with coefficients a_1, ..., a_k:
• Encoding: y_1 = x_1, y_2 = x_2, ..., y_k = x_k (initialization); y_n = x_n - Σ_{i=1}^{k} a_i x_{n-i}, n = k+1, ..., N
• Decoding: x_1 = y_1, x_2 = y_2, ..., x_k = y_k (initialization); x_n = y_n + Σ_{i=1}^{k} a_i x_{n-i}, n = k+1, ..., N
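A sketch of the k-th order scheme in Python (the coefficients a_i are hypothetical; rounding the prediction keeps the residues integer, and decoding stays lossless because the decoder applies identical rounding to the same reconstructed samples):

    def lp_encode(x, a):
        """y[n] = x[n] - round(sum a[i] * x[n-1-i]) for n >= k;
        the first k samples are passed through unchanged."""
        k = len(a)
        y = list(x[:k])
        for n in range(k, len(x)):
            pred = round(sum(a[i] * x[n - 1 - i] for i in range(k)))
            y.append(x[n] - pred)
        return y

    def lp_decode(y, a):
        k = len(a)
        x = list(y[:k])
        for n in range(k, len(y)):
            pred = round(sum(a[i] * x[n - 1 - i] for i in range(k)))
            x.append(y[n] + pred)
        return x

    x = [90, 92, 91, 93, 93, 95, 96]
    a = [1.5, -0.5]                   # hypothetical 2nd-order coefficients
    assert lp_decode(lp_encode(x, a), a) == x   # lossless round trip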

Why High-Order?
• By looking at more past samples, we can better predict the current one
 – Compare guessing the next letter in “c_”, “ic_”, “dic_”, and “predic_”: the longer the context, the easier the guess
• It is a tradeoff between performance and complexity
 – The performance gain quickly diminishes as the order increases
 – The optimal order is often signal-dependent

1D Predictive Coding Summary
discrete source X → [linear prediction] → Y → [entropy coding] → binary bit stream, with a probability estimation stage supplying P(Y)
• The prediction residue sequence Y usually contains less uncertainty (entropy) than the original sequence X

From 1D to 2D
[Figure: in 1D, the causal past of X(n) is everything before time n and the future is everything after; in 2D, a scanning order such as raster scanning or zigzag scanning must be imposed to define the causal past]

2D Predictive Coding
[Figure: under raster scanning order, the causal past of pixel X_{m,n} is the causal half-plane above and to the left of it]

Ordering Causal Neighbors
X_k: the k-th nearest causal neighbor of X_{m,n} in terms of Euclidean distance. The prediction is a weighted sum of the K nearest causal neighbors, and the prediction residue is Y_{m,n} = X_{m,n} - Σ_{k=1}^{K} a_k X_k.

Lossless JPEG
[Figure: causal template with w = left, n = above, nw = above-left of the current pixel x]
#  Predictor        Notes
1  w                horizontal
2  n                vertical
3  nw               diagonal
4  n + w - nw       3rd-order
5  w + (n - nw)/2   3rd-order
6  n + (w - nw)/2   3rd-order
7  (n + w)/2        2nd-order
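The seven modes are easy to express in code. A Python sketch (integer floor division stands in here for the standard's truncating integer arithmetic):

    def ljpeg_predict(w, n, nw, mode):
        """Lossless JPEG predictors; w = left, n = above, nw = above-left."""
        table = {
            1: w,                     # horizontal
            2: n,                     # vertical
            3: nw,                    # diagonal
            4: n + w - nw,            # 3rd-order
            5: w + (n - nw) // 2,     # 3rd-order
            6: n + (w - nw) // 2,     # 3rd-order
            7: (n + w) // 2,          # 2nd-order
        }
        return table[mode]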

Numerical Examples
[Figure: a 1D example X = [...] → Y = [...] and a 2D example X → Y under the horizontal predictor; each sample a is replaced by a - b, where b is its left neighbor]
Initialization: no prediction is applied to the first sample of each row.
Note: 2D horizontal prediction can be viewed as the vector case of 1D prediction applied to each row.

Numerical Examples (cont’d)
[Figure: a 2D example X → Y under the vertical predictor]
Note: 2D vertical prediction can be viewed as the vector case of 1D prediction applied to each column.
Q: Given a function that performs horizontal prediction, can you use it to implement vertical prediction?
A: Apply horizontal prediction to the transpose of the image, then transpose the prediction residue back.
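The transpose trick in Python (a sketch; predict_horizontal keeps the first column unchanged, matching the initialization above):

    import numpy as np

    def predict_horizontal(x):
        """Residue of 2D horizontal prediction: first column kept as-is,
        every other pixel minus its left neighbor."""
        y = x.astype(np.int32).copy()
        y[:, 1:] -= x[:, :-1].astype(np.int32)
        return y

    def predict_vertical(x):
        """Vertical prediction by transposing, predicting horizontally,
        and transposing the residue back."""
        return predict_horizontal(x.T).T

    img = np.array([[100, 100, 50],
                    [100, 100, 50],
                    [100, 100, 50]])
    print(predict_vertical(img))   # rows below the first are all zeros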

Image Examples
[Figure: comparison of residue images generated by different predictors; horizontal predictor, H(Y) = 5.05 bpp; vertical predictor, H(Y) = 4.67 bpp]
Q: Why does the vertical predictor outperform the horizontal predictor?

Analysis with a Simplified Edge Model
[Figure: a vertical edge and a horizontal edge in the causal template (current pixel x with neighbors w, n, nw); when a predictor crosses the edge the residue is Y = ±50, and when it runs along the edge the residue is Y = 0]
Conclusion: when the direction of the predictor matches the direction of the edges, prediction residues are small.

Horizontal vs. Vertical
Do we see more vertical edges than horizontal edges in natural images? Maybe yes, but why?

Importance of Adaptation
• Wouldn’t it be nice if we could switch the direction of the predictor to locally match the edge direction?
• The concept of adaptation was conceived thousands of years ago, in an ancient Chinese story about how to win a horse race: the emperor’s good, fair, and poor horses each beat the general’s horse of the same class. How to win? The general races his poor horse against the emperor’s good one, his good against the emperor’s fair, and his fair against the emperor’s poor, winning two races out of three.

Median Edge Detection (MED) Prediction
[Figure: causal template with w = left, n = above, nw = above-left of the current pixel x]
Predicted value: median(n, w, n + w - nw)
Key: MED uses the median operator to adaptively select one of three candidates (predictors #1, #2, and #4 from the lossless JPEG table) as the predicted value.

Another Way of Implementation
If nw ≥ max(n, w), predict min(n, w); else if nw ≤ min(n, w), predict max(n, w); else predict n + w - nw.
Q: Which implementation is faster? You need to find out using MATLAB yourself.
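Both forms in Python, as a sketch for experimentation (the assertion checks that they agree, which is exactly what the enumeration proof below establishes; the test triples are hypothetical):

    def med_median(w, n, nw):
        """MED via the median operator."""
        return sorted([n, w, n + w - nw])[1]

    def med_branch(w, n, nw):
        """MED via explicit branches."""
        if nw >= max(n, w):
            return min(n, w)
        if nw <= min(n, w):
            return max(n, w)
        return n + w - nw

    for w, n, nw in [(100, 50, 100), (50, 100, 100), (90, 95, 80)]:
        assert med_median(w, n, nw) == med_branch(w, n, nw)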

Proof by Enumeration
Case 1: nw > max(n, w). If nw > n > w, then n - nw < 0 and w - nw < 0, so n + w - nw < w < n and median(n, w, n+w-nw) = min(n, w) = w. If nw > w > n, the same argument gives n + w - nw < n < w and median(n, w, n+w-nw) = min(n, w) = n.
Case 2: nw < min(n, w). If nw < n < w, then n - nw > 0 and w - nw > 0, so n + w - nw > w > n and median(n, w, n+w-nw) = max(n, w) = w. If nw < w < n, the same argument gives n + w - nw > n > w and median(n, w, n+w-nw) = max(n, w) = n.
Case 3: n < nw < w or w < nw < n. Then n + w - nw also lies between n and w, so median(n, w, n+w-nw) = n + w - nw.
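The enumeration can also be checked mechanically. This small script (an illustration over a hypothetical range of pixel values) confirms the identity exhaustively:

    from itertools import product

    for n, w, nw in product(range(32), repeat=3):
        med = sorted([n, w, n + w - nw])[1]
        if nw >= max(n, w):
            expected = min(n, w)
        elif nw <= min(n, w):
            expected = max(n, w)
        else:
            expected = n + w - nw
        assert med == expected
    print("median identity verified on all 32**3 triples")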

Numerical Examples
Vertical edge: n = 50, w = 100, nw = 100, so n + w - nw = 50 and the MED prediction is median(50, 100, 50) = 50.
Horizontal edge: n = 100, w = 50, nw = 100, so n + w - nw = 50 and the MED prediction is median(100, 50, 50) = 50.
Note how we get zero prediction residues regardless of the edge direction.

Image Example
[Figure: residue images for the fixed vertical predictor, H = 4.67 bpp, and the adaptive (MED) predictor, H = 4.55 bpp]

JPEG-LS (the new standard for lossless image compression)
[Figure: overview of JPEG-LS, whose fixed prediction stage is the MED predictor described above]

Summary of Lossless Image Compression
• Importance of modeling the image source
 – Different classes of images call for different modeling techniques, e.g., RLC for binary/graphic images and prediction for photographic images
• Importance of geometry
 – Images are two-dimensional signals
 – In the 2D world, issues such as scanning order and orientation are critical to modeling