Image Restoration and Compression


Degraded Images

What is Image Restoration? The purpose of image restoration is to restore a degraded/distorted image to its original content and quality. Distinction from image enhancement: image restoration assumes a degradation model that is known or can be estimated, and "original content and quality" does not mean "good looking".

Image Enhancement vs. Image Restoration Image restoration assumes a degradation model that is known or can be estimated. "Original content and quality" does not mean good-looking appearance. Image enhancement is a subjective process, whereas image restoration is an objective one. Image restoration tries to recover the original image from the degraded one using prior knowledge of the degradation process. Restoration involves modeling the degradation and applying the inverse process in order to recover the original image. Although the restored image is not the original image, it is an approximation of the actual image.

Degradation Model Objective: to restore a degraded/distorted image to its original content and quality. Spatial domain: g(x,y) = h(x,y) * f(x,y) + η(x,y). Frequency domain: G(u,v) = H(u,v)F(u,v) + N(u,v). Matrix form: g = Hf + η.
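The degradation model can be simulated directly in the frequency domain. A minimal numpy sketch (the 3 × 3 box-blur PSF and the noise level are illustrative assumptions, not from the slides):

```python
import numpy as np

def degrade(f, h, noise_sigma, rng):
    """Simulate g = h * f + eta via G = H F + N (circular convolution)."""
    F = np.fft.fft2(f)
    H = np.fft.fft2(h)
    g = np.real(np.fft.ifft2(H * F))               # blurred image h * f
    eta = rng.normal(0.0, noise_sigma, f.shape)    # additive Gaussian noise
    return g + eta

rng = np.random.default_rng(0)
f = rng.uniform(0, 255, (64, 64))                  # stand-in "original" image
h = np.zeros((64, 64)); h[:3, :3] = 1.0 / 9.0      # assumed 3x3 box-blur PSF
g = degrade(f, h, noise_sigma=2.0, rng=rng)        # degraded observation
```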

Noise Models Most types of noise are modeled as known probability density functions. The noise model is chosen based on an understanding of the physics of the noise sources. Gaussian: poor illumination. Rayleigh: range imaging. Gamma, exponential: laser imaging. Impulse: faulty switch during imaging. Uniform noise is the least used. Parameters can be estimated from the histogram of a small flat area of an image.
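These noise models can all be sampled with numpy's random generators; a small sketch (every parameter value below is an arbitrary illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
shape = (128, 128)

gaussian = rng.normal(10, 5, shape)                      # poor illumination
rayleigh = rng.rayleigh(scale=5, size=shape)             # range imaging
gamma    = rng.gamma(shape=2.0, scale=3.0, size=shape)   # laser imaging
uniform  = rng.uniform(-5, 5, shape)                     # least used

# Impulse (salt & pepper) noise on a mid-gray image
img = np.full(shape, 128.0)
mask = rng.uniform(size=shape)
img[mask < 0.05] = 0      # pepper
img[mask > 0.95] = 255    # salt
```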

Noise PDFs (figures): Gaussian, Rayleigh, Gamma

Noise PDFs (figures): Exponential, Uniform, Salt & Pepper

Noise Removal Restoration Methods Mean filters: arithmetic mean filter, geometric mean filter, harmonic mean filter, contra-harmonic mean filter. Order-statistics filters: median filter, max and min filters, mid-point filter, alpha-trimmed filters. Adaptive filters: adaptive local noise reduction filter, adaptive median filter.
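As an illustration of the two filter families, here are brute-force sketches of the arithmetic-mean filter and the median (order-statistics) filter; edge handling by replication is an implementation choice, not from the slides:

```python
import numpy as np

def arithmetic_mean(img, k=3):
    """Replace each pixel by the mean of its k x k neighborhood."""
    p = k // 2
    padded = np.pad(img, p, mode='edge')
    out = np.empty_like(img, dtype=float)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            out[y, x] = padded[y:y + k, x:x + k].mean()
    return out

def median_filter(img, k=3):
    """Order-statistics filter: effective against salt & pepper noise."""
    p = k // 2
    padded = np.pad(img, p, mode='edge')
    out = np.empty_like(img, dtype=float)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            out[y, x] = np.median(padded[y:y + k, x:x + k])
    return out
```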

Inverse Filter Recall the degradation model G(u,v) = H(u,v)F(u,v) + N(u,v). Given H(u,v), one may directly estimate the original image by F̂(u,v) = G(u,v)/H(u,v) = F(u,v) + N(u,v)/H(u,v). At frequencies (u,v) where H(u,v) ≈ 0, the noise term N(u,v)/H(u,v) will be amplified! Invfildemo.m
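A sketch of the inverse filter with the usual guard against near-zero H(u,v); the threshold eps is an assumed tuning parameter:

```python
import numpy as np

def inverse_filter(g, h, eps=1e-3):
    """F_hat = G / H, applied only where |H| > eps to avoid noise blow-up."""
    G = np.fft.fft2(g)
    H = np.fft.fft2(h)
    mask = np.abs(H) > eps            # keep frequencies where H is not ~0
    H_safe = np.where(mask, H, 1.0)   # dummy 1s avoid division warnings
    F_hat = np.where(mask, G / H_safe, 0.0)
    return np.real(np.fft.ifft2(F_hat))
```

In the noise-free case with an H that has no near-zero frequencies, this recovers the original image almost exactly; with noise, the masked-out frequencies are the price paid for stability.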

Wiener Filtering Minimum mean-square error filter. Assume f and η are both 2D random sequences, uncorrelated with each other. Goal: to minimize the mean-square error E{(f − f̂)²}. Solution: F̂(u,v) = [ H*(u,v)Sf(u,v) / ( |H(u,v)|²Sf(u,v) + Sη(u,v) ) ] G(u,v) — a frequency-selective scaling of the inverse-filter solution! For white noise with unknown Sf(u,v), replace Sη(u,v)/Sf(u,v) by a constant K.
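The white-noise form of the Wiener filter with a constant K can be sketched as follows (K is the assumed stand-in for the unknown ratio Sη/Sf):

```python
import numpy as np

def wiener_filter(g, h, K):
    """F_hat = [ conj(H) / (|H|^2 + K) ] G  -- constant-K Wiener filter."""
    G = np.fft.fft2(g)
    H = np.fft.fft2(h)
    F_hat = np.conj(H) / (np.abs(H) ** 2 + K) * G
    return np.real(np.fft.ifft2(F_hat))
```

As K → 0 this approaches the inverse filter; larger K suppresses frequencies where H is small, trading sharpness for noise robustness.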

Constrained Least Squares (CLS) Filter For each pixel, assume the noise η has a Gaussian distribution. This leads to a likelihood function L(f). A constraint representing the prior distribution of f is imposed: the exponential form of the pdf of f is known as the Gibbs distribution. Since L(f) ∝ p(g|f), by Bayes' rule, to maximize the posterior probability given g, one should minimize ||g − h*f̂||² + γ||q*f̂||², where q is an operator based on prior knowledge about f. For example, it may be the Laplacian operator!

Intuitive Interpretation of CLS Prior knowledge: most images are smooth, so ||q*f̂||² should be minimized. However, the restored image f̂, after going through the same degradation process h, should be close to the given degraded image g. The difference between g and h*f̂ is bounded by the amount of additive noise: ||g − h*f̂||² ≈ ||η||². In practice, ||η||² is unknown and needs to be estimated from the variance of the noise.

Solution and Iterative Algorithm To minimize C_CLS, set ∂C_CLS/∂F̂ = 0. This yields F̂(u,v) = [ H*(u,v) / ( |H(u,v)|² + γ|Q(u,v)|² ) ] G(u,v). The value of γ, however, has to be determined iteratively: it should be chosen such that ||R||² ≈ ||η||², where R(u,v) = G(u,v) − H(u,v)F̂(u,v) is the residual. Iterative algorithm (Hunt): 1. Set an initial value of γ. 2. Find F̂(u,v) and compute R(u,v). 3. If ||R||² − ||η||² < −a, set B_L = γ and increase γ; else if ||R||² − ||η||² > a, set B_U = γ and decrease γ; else stop the iteration. 4. γ_new = (B_U + B_L)/2; go to step 2.

CLS Demonstration

Image Compression

Image Compression Every day an enormous amount of information is stored, processed, and transmitted: financial data, reports, inventory, cable TV, online ordering and tracking.

Image Compression Because much of this information is graphical or pictorial in nature, the storage and communication requirements are immense. Image compression addresses the problem of reducing the amount of data required to represent a digital image. Image compression is an enabling technology for HDTV. It also plays an important role in video conferencing, remote sensing, satellite TV, FAX, and document and medical imaging.

Image Compression We want to remove redundancy from the data: a mathematical transformation maps the 2D array of pixels to a statistically uncorrelated data set.

Image Compression Outline: Fundamentals (coding redundancy, interpixel redundancy, psychovisual redundancy, fidelity criteria), Error-Free Compression (variable-length coding, LZW coding, predictive coding), Lossy Compression (transform coding, wavelet coding), and Image Compression Standards.

Fundamentals The term data compression refers to the process of reducing the amount of data required to represent a given quantity of information. Data is not the same thing as information: various amounts of data can be used to represent the same information, and data might contain elements that provide no relevant information — data redundancy. Data redundancy is a central issue in image compression. It is not an abstract concept but a mathematically quantifiable entity.

Data Redundancy Let n1 and n2 denote the number of information-carrying units in two data sets that represent the same information. The relative redundancy RD is defined as RD = 1 − 1/CR, where CR, commonly called the compression ratio, is CR = n1/n2.

Data Redundancy If n1 = n2, then CR = 1 and RD = 0: no redundancy. If n1 >> n2, then CR → ∞ and RD → 1: high redundancy. If n1 << n2, then CR → 0 and RD → −∞: undesirable (data expansion). A compression ratio of 10 (10:1) means that the first data set has 10 information-carrying units (say, bits) for every 1 unit in the second (compressed) data set. The corresponding redundancy of 0.9 says that 90% (RD = 0.9) of the data in the first data set is redundant. In image compression, three basic redundancies can be identified: coding redundancy, interpixel redundancy, and psychovisual redundancy.

Coding Redundancy Recall from histogram calculations that p_r(r_k) = n_k/n, k = 0, 1, …, L−1, where p_r(r_k) is the probability that a pixel has value r_k. If the number of bits used to represent r_k is l(r_k), then the average number of bits required per pixel is L_avg = Σ_k l(r_k) p_r(r_k).
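A tiny worked example of L_avg, CR, and RD for a hypothetical 4-level image (the probabilities and code lengths below are made up for illustration):

```python
# Hypothetical 4-level image: probabilities and two candidate codes.
probs  = [0.5, 0.25, 0.125, 0.125]    # p_r(r_k) from the histogram
fixed  = [2, 2, 2, 2]                 # natural 2-bit code: l(r_k) = 2
varlen = [1, 2, 3, 3]                 # variable-length code lengths

L_fixed  = sum(p * l for p, l in zip(probs, fixed))    # bits/pixel, fixed
L_varlen = sum(p * l for p, l in zip(probs, varlen))   # bits/pixel, variable

CR = L_fixed / L_varlen   # compression ratio n1/n2
RD = 1 - 1 / CR           # relative redundancy
```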

Coding Redundancy Example:

Coding Redundancy Variable-Length Coding

Inter-pixel Redundancy Here the two pictures have approximately the same histogram. We must exploit pixel dependencies: each pixel can be estimated from its neighbors.

Interpixel Redundancy The intensity at a pixel may correlate strongly with the intensities of its neighbors. Because the value of any given pixel can be reasonably predicted from the values of its neighbors, much of the visual contribution of a single pixel to an image is redundant; it could have been guessed on the basis of its neighbors' values. We can remove this redundancy by representing changes in intensity rather than absolute intensity values. For example, the differences between adjacent pixels can be used to represent an image. Transformations of this type are referred to as mappings. They are called reversible if the original image elements can be reconstructed from the transformed data set. For example, the sequence (50, 50, …, 50) becomes the pair (50, 4).

Run-Length Encoding Example of Inter-pixel Redundancy removal
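A minimal run-length encoder/decoder illustrating the (value, run-length) mapping described above:

```python
def rle_encode(row):
    """Encode a sequence as a list of (value, run-length) pairs."""
    pairs = []
    for v in row:
        if pairs and pairs[-1][0] == v:
            pairs[-1][1] += 1          # extend the current run
        else:
            pairs.append([v, 1])       # start a new run
    return [tuple(p) for p in pairs]

def rle_decode(pairs):
    """Expand (value, run-length) pairs back into the original sequence."""
    return [v for v, n in pairs for _ in range(n)]
```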

Psycho-visual Redundancy The human visual system is more sensitive to edges than to fine intensity differences. Middle picture: uniform quantization from 256 to 16 gray levels, CR = 2. Right picture: improved gray-scale (IGS) quantization.

Fidelity Criteria Removal of irrelevant visual information involves a loss of real or quantitative image information. Since information is lost, a means of quantifying the nature of the loss is needed: objective fidelity criteria and subjective fidelity criteria. Objective fidelity criteria apply when the information loss can be expressed as a mathematical function of the input and output of a compression process, e.g. the RMS error between two images.

Fidelity Criteria Subjective fidelity criteria: a decompressed image is presented to a cross-section of viewers and their evaluations are averaged. This can be done using an absolute rating scale or by means of side-by-side comparisons of f(x, y) and f'(x, y). Side-by-side comparison can use a scale such as {−3, −2, −1, 0, 1, 2, 3} to represent the subjective evaluations {much worse, worse, slightly worse, the same, slightly better, better, much better}, respectively.

Fidelity Criteria: Objective Fidelity Criteria, Example (RMS) The error between the two images at a pixel is e(x, y) = f'(x, y) − f(x, y). The total error between the two M × N images is Σ_{x=0}^{M−1} Σ_{y=0}^{N−1} [f'(x, y) − f(x, y)]. The root-mean-square error averaged over the whole image is e_rms = [ (1/MN) Σ_x Σ_y (f'(x, y) − f(x, y))² ]^{1/2}.

Fidelity Criteria A closely related objective fidelity criterion is the mean-square signal-to-noise ratio of the compressed-decompressed image: SNR_ms = Σ_x Σ_y f'(x, y)² / Σ_x Σ_y [f'(x, y) − f(x, y)]².
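Both objective criteria translate directly into code; a numpy sketch:

```python
import numpy as np

def rms_error(f, f_hat):
    """e_rms = sqrt( (1/MN) * sum of squared pixel differences )."""
    return np.sqrt(np.mean((f_hat.astype(float) - f.astype(float)) ** 2))

def ms_snr(f, f_hat):
    """Mean-square SNR: energy of the decompressed image over error energy."""
    err = (f_hat.astype(float) - f.astype(float)) ** 2
    return (f_hat.astype(float) ** 2).sum() / err.sum()
```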

Fidelity Criteria

Compression Model The source encoder is responsible for removing redundancy (coding, inter-pixel, psycho-visual) The channel encoder ensures robustness against channel noise.

The Source Encoder and Decoder The source encoder reduces/eliminates any coding, interpixel, or psychovisual redundancies. It contains three processes: Mapper: transforms the image into an array of coefficients, reducing interpixel redundancies. This is a reversible process; it is not lossy. Quantizer: reduces the accuracy of the mapper's output and hence the psychovisual redundancies of a given image. This process is irreversible and therefore lossy.

Symbol Encoder: the source-coding step, where a fixed- or variable-length code is used to represent the mapped and quantized data sets. This is a reversible (not lossy) process that removes coding redundancy by assigning the shortest codes to the most frequently occurring output values. In summary: interpixel redundancies are removed by the mapper, psychovisual redundancies by the quantizer, and coding redundancies by the symbol encoder. The source decoder contains two components. Symbol Decoder: the inverse of the symbol encoder; the variable-length coding is reversed. Inverse Mapper: the inverse of the interpixel-redundancy removal. The only lossy element is the quantizer, which removes psychovisual redundancies and causes irreversible loss. Every lossy compression method contains the quantizer module; if error-free compression is desired, the quantizer module is removed.

Compression Types Compression divides into lossy compression and error-free (lossless) compression.

Error-Free Compression Some applications require no error in compression (medical images, business documents, etc.). CR = 2 to 10 can be expected. These methods make use of coding redundancy and inter-pixel redundancy. Examples: Huffman codes, LZW, arithmetic coding, 1D and 2D run-length encoding, lossless predictive coding, and bit-plane coding.

Huffman Coding The most popular technique for removing coding redundancy is due to Huffman (1952). Huffman coding yields the smallest possible number of code symbols per source symbol; the resulting code is optimal.
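A compact heap-based construction of a Huffman code table — a standard sketch, not the slides' worked example:

```python
import heapq

def huffman_code(freqs):
    """Build a Huffman code table {symbol: bitstring} from {symbol: prob}."""
    # Each heap entry: [probability, tie-breaker, partial code table].
    heap = [[p, i, {s: ""}] for i, (s, p) in enumerate(freqs.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)   # two least probable subtrees
        p2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, [p1 + p2, count, merged])
        count += 1
    return heap[0][2]
```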

Huffman Codes

Huffman Codes

Fixed-Length Codes: LZW Coding An error-free compression technique that removes inter-pixel redundancy. It requires no a priori knowledge of the probability distribution of pixels and assigns fixed-length code words to variable-length sequences. Included in the GIF, TIFF, and PDF file formats.

LZW Coding Technique A codebook (dictionary) has to be constructed. For an 8-bit monochrome image, the first 256 entries are assigned to the gray levels 0, 1, 2, …, 255. As the encoder examines image pixels, gray-level sequences that are not in the dictionary are assigned to a new entry. For instance, the sequence 255-255 can be assigned to entry 256, the address following the locations reserved for gray levels 0 to 255.

LZW Coding Example Consider the following 4 × 4, 8-bit image, in which every row is 39 39 126 126. Initial dictionary: locations 0–255 hold the entries 0–255; locations 256–511 are initially empty.

LZW Coding For the row 39 39 126 126: Is 39 in the dictionary? Yes. What about 39-39? No. Then add 39-39 at entry 256 and output the last recognized symbol: 39.

Example Code the 4 × 4 image above (every row is 39 39 126 126) using LZW codes. How can we decode the compressed sequence to obtain the original image?
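A sketch of the LZW encoder applied to that 4 × 4 example; it reproduces the dictionary growth described above:

```python
def lzw_encode(pixels):
    """LZW for 8-bit data: the dictionary starts with entries 0..255."""
    dictionary = {(i,): i for i in range(256)}
    next_code = 256
    w = ()          # currently recognized sequence
    out = []
    for p in pixels:
        wp = w + (p,)
        if wp in dictionary:
            w = wp                        # keep extending the sequence
        else:
            out.append(dictionary[w])     # emit code for recognized sequence
            dictionary[wp] = next_code    # add the new sequence
            next_code += 1
            w = (p,)
    if w:
        out.append(dictionary[w])
    return out

image = [39, 39, 126, 126] * 4   # the 4 x 4 image, row by row
codes = lzw_encode(image)
```

The 16 pixels compress to 10 codes, with entries 256–264 created along the way.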

LZW Coding

Arithmetic Coding It generates non-block codes: a one-to-one correspondence between source symbols and code words does not exist. Instead, an entire sequence of source symbols is assigned a single arithmetic code word. The code word defines an interval of real numbers between 0 and 1. As the number of symbols in the message increases, the interval used to represent it becomes smaller, and the number of bits needed to represent the interval grows. Each symbol of the message reduces the size of the interval in accordance with its probability of occurrence.

Basic arithmetic coding process: a 5-symbol message, a1 a2 a3 a3 a4, from a 4-symbol source is coded. Source symbol / probability / initial subinterval: a1, 0.2, [0.0, 0.2); a2, 0.2, [0.2, 0.4); a3, 0.4, [0.4, 0.8); a4, 0.2, [0.8, 1.0).
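The interval-narrowing process for this message can be reproduced exactly with rational arithmetic; a sketch using the table above:

```python
from fractions import Fraction

# Source model from the table (exact fractions avoid rounding drift).
intervals = {
    'a1': (Fraction(0),    Fraction(1, 5)),
    'a2': (Fraction(1, 5), Fraction(2, 5)),
    'a3': (Fraction(2, 5), Fraction(4, 5)),
    'a4': (Fraction(4, 5), Fraction(1)),
}

def encode_interval(message):
    """Narrow [0, 1) symbol by symbol; any number in the result encodes it."""
    low, high = Fraction(0), Fraction(1)
    for sym in message:
        s_low, s_high = intervals[sym]
        width = high - low
        low, high = low + width * s_low, low + width * s_high
    return low, high

low, high = encode_interval(['a1', 'a2', 'a3', 'a3', 'a4'])
# Final interval: [0.06752, 0.0688)
```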

Arithmetic coding of the sequence a1 a2 a3 a3 a4 (figure): the current interval narrows from [0, 1) to [0, 0.2), then [0.04, 0.08), then [0.056, 0.072), then [0.0624, 0.0688), and finally [0.06752, 0.0688).

Bit-Plane Coding An effective technique to reduce inter-pixel redundancy is to process each bit plane individually. The image is decomposed into a series of binary images, and each binary image is compressed using one of the well-known binary compression techniques.
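Bit-plane decomposition is a few lines of numpy; a sketch for 8-bit images:

```python
import numpy as np

def bit_planes(img):
    """Decompose an 8-bit image into 8 binary images (index 7 = MSB plane)."""
    return [(img >> b) & 1 for b in range(8)]

def recombine(planes):
    """Rebuild the 8-bit image from its bit planes."""
    out = np.zeros_like(planes[0], dtype=int)
    for b, plane in enumerate(planes):
        out += plane.astype(int) << b
    return out.astype(np.uint8)
```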

Bit-Plane Decomposition

Bit-Plane Encoding Constant area coding, one-dimensional run-length coding, two-dimensional run-length coding. Example run-length code of a binary row: 1b 2w 1b 3w 4b 1w 12w.

Loss-less Predictive Encoding

Loss-less Predictive Encoding

Lossy Compression

Lossy Compression

DPCM (Differential Pulse Code Modulation)

DPCM

Predictive Coding Revisited Encoding: the samples x1 x2 … xN are mapped to residues y1 y2 … yN with y1 = x1 and yn = xn − xn−1 for n = 2, …, N. Decoding: x1 = y1 and xn = yn + xn−1 for n = 2, …, N. In the block diagrams, the encoder subtracts the delayed sample xn−1 (delay element D) from xn, and the decoder adds it back in the DPCM loop.
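The lossless encode/decode pair above is a few lines of code:

```python
def predictive_encode(x):
    """y1 = x1, yn = xn - x(n-1): first sample plus differences."""
    return [x[0]] + [x[i] - x[i - 1] for i in range(1, len(x))]

def predictive_decode(y):
    """x1 = y1, xn = yn + x(n-1): running sum recovers the samples."""
    x = [y[0]]
    for d in y[1:]:
        x.append(x[-1] + d)
    return x
```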

Open-Loop DPCM Notes: prediction is based on the past unquantized sample, and the quantizer Q is located outside the DPCM loop.

Catastrophic Error Propagation with Open-Loop DPCM Original samples: 90 92 91 93 93 95 … Prediction residues (a − b): 90 2 −1 2 0 2 … If the residue −1 is lost to quantization (quantized to 0), the decoder, which computes a + b, reconstructs 90 92 92 94 94 96 …: a single quantization error propagates to every subsequent sample.

Closed-Loop DPCM xn, yn: unquantized samples and prediction residues. x̂n, ŷn: decoded samples and quantized prediction residues. Notes: prediction is based on the past decoded sample, and the quantizer Q is located inside the DPCM loop.

Numerical Example xn: 90 92 91 93 93 95 … yn = xn − x̂n−1: 90 2 −2 3 0 2 … ŷn = Q(yn): 90 3 −3 3 0 3 … x̂n = x̂n−1 + ŷn: 90 93 90 93 93 96 …
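The numerical example can be reproduced with a toy quantizer that rounds each residue to the nearest multiple of 3 — an assumption chosen here because it matches the slide's numbers:

```python
def q3(v):
    """Toy quantizer: round to the nearest multiple of 3 (illustrative)."""
    return 3 * round(v / 3)

def dpcm_encode(x):
    """Closed-loop DPCM: residues are computed against DECODED samples."""
    codes, prev = [], 0
    for i, v in enumerate(x):
        y = v if i == 0 else v - prev      # prediction residue
        y_hat = q3(y)                      # quantize inside the loop
        codes.append(y_hat)
        prev = y_hat if i == 0 else prev + y_hat   # track decoded sample
    return codes

def dpcm_decode(codes):
    x_hat, prev = [], 0
    for i, y_hat in enumerate(codes):
        v = y_hat if i == 0 else prev + y_hat
        x_hat.append(v)
        prev = v
    return x_hat
```

Because the encoder predicts from decoded samples, the quantization error never accumulates, unlike the open-loop case.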

Closed-Loop DPCM Analysis At the marked points A and B of the loop, the distortion introduced to the prediction residue yn is identical to the distortion introduced to the original sample xn.

DPCM Summary Open-loop DPCM: prediction is based on original, unquantized samples, so it suffers catastrophic error propagation (the decoder has access only to quantized, not original, samples for its prediction). Closed-loop DPCM: both encoder and decoder employ decoded samples for prediction, so quantization noise affects only the accuracy of the prediction. DPCM is only suitable for lossy image coding at high bit rates (small quantization noise).

Transform Coding A reversible linear transform (such as the Fourier transform) is used to map the image into a set of transform coefficients, which are then quantized and coded. The goal of transform coding is to de-correlate the pixels and pack as much information as possible into a small number of transform coefficients. Compression is achieved during quantization, not during the transform step.

Transform Coding

2D Transforms: Energy Packing 2D transforms pack most of the energy into a small number of coefficients located at the upper-left corner of the 2D array.

2D Transforms Consider an image f(x,y) of size N × N. Forward transform: T(u,v) = Σ_{x=0}^{N−1} Σ_{y=0}^{N−1} f(x,y) g(x,y,u,v), where g(x,y,u,v) is the forward transformation kernel (basis functions).

2D Transforms Inverse transform: f(x,y) = Σ_{u=0}^{N−1} Σ_{v=0}^{N−1} T(u,v) h(x,y,u,v), where h(x,y,u,v) is the inverse transformation kernel (basis functions).

Discrete Cosine Transform One of the most frequently used transformations for image compression is the DCT: C(u) = α(u) Σ_{x=0}^{N−1} f(x) cos[ (2x+1)uπ / 2N ], with α(u) = √(1/N) for u = 0 and α(u) = √(2/N) for u = 1, 2, …, N−1.
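A 1-D DCT-II built directly from the formula above; because the basis is orthonormal, the inverse is simply the transpose:

```python
import numpy as np

def dct_matrix(N):
    """DCT-II basis: C[u, x] = alpha(u) * cos((2x+1) u pi / (2N))."""
    x = np.arange(N)
    C = np.cos((2 * x[None, :] + 1) * np.arange(N)[:, None] * np.pi / (2 * N))
    C *= np.sqrt(2.0 / N)        # alpha(u) = sqrt(2/N) for u >= 1
    C[0, :] = np.sqrt(1.0 / N)   # alpha(0) = sqrt(1/N)
    return C

N = 8
C = dct_matrix(N)
f = np.arange(N, dtype=float)
coeffs = C @ f               # forward 1-D DCT
restored = C.T @ coeffs      # inverse via the orthonormal transpose
```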

Discrete Cosine Transform

2D Transforms

Effect of Window Size

Quantization

Quantization Each transformed coefficient is quantized

Quantization

Bit Allocation and Zig-Zag Ordering

DCT and Quantization

Wavelet Coding

Wavelet Transform Label the four pixels of each 2 × 2 block 1, 2, 3, 4 and put one pixel in each quadrant: no size change.

Wavelet Transform Now let a = (x1 + x2 + x3 + x4)/4, b = (x1 + x2 − x3 − x4)/4, c = (x1 + x3 − x2 − x4)/4, d = (x1 + x4 − x2 − x3)/4, and store a, b, c, d one per quadrant.
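These four quadrant values form an invertible transform of each 2 × 2 block; a sketch with its inverse:

```python
def quad_transform(x1, x2, x3, x4):
    """Forward: average a plus three difference values b, c, d."""
    a = (x1 + x2 + x3 + x4) / 4
    b = (x1 + x2 - x3 - x4) / 4
    c = (x1 + x3 - x2 - x4) / 4
    d = (x1 + x4 - x2 - x3) / 4
    return a, b, c, d

def quad_inverse(a, b, c, d):
    """Inverse: each pixel is a signed sum of the four quadrant values."""
    x1 = a + b + c + d
    x2 = a + b - c - d
    x3 = a - b + c - d
    x4 = a - b - c + d
    return x1, x2, x3, x4
```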

Wavelet Transform

Wavelet Transform

Wavelet Transform

Wavelet Coding High-frequency coefficients tend to be very small (close to 0), so they can be quantized very effectively without distorting the results.

Wavelet Transform DCT Wavelet

Wavelet Transform

Image Compression Standards Binary Compression Standards CCITT G3 -> 1D Run Length Encoding CCITT G4 -> 2D Run Length encoding JBIG1 -> Lossless adaptive binary compression JBIG2 -> Lossy/Lossless adaptive binary compression

JBIG/JBIG2

Image Compression Standards Continuous Tone Still Image Compression Standards JPEG JPEG 2000 Mixed Raster Content (MRC)

MRC

Video Compression