
Image Restoration AND Compression



Presentation on theme: "Image Restoration AND Compression"— Presentation transcript:

1 Image Restoration AND Compression

2 Degraded Images

3

4 What is Image Restoration?
The purpose of image restoration is to restore a degraded/distorted image to its original content and quality. Distinction from image enhancement: image restoration assumes a degradation model that is known or can be estimated, and "original content and quality" ≠ "good looking".

5

6 Image Enhancement vs. Image Restoration
Image restoration assumes a degradation model that is known or can be estimated. •"Original content and quality" does not mean a good-looking appearance. •Image enhancement is subjective, whereas image restoration is an objective process. •Image restoration tries to recover the original image from the degraded one using prior knowledge of the degradation process. •Restoration involves modeling the degradation and applying the inverse process in order to recover the original image. •Although the restored image is not the original image, it is an approximation of the actual image.

7 Degradation Model Objective: To restore a degraded/distorted image to its original content and quality. Spatial domain: g(x,y) = h(x,y)*f(x,y) + η(x,y) Frequency domain: G(u,v) = H(u,v)F(u,v) + N(u,v) Matrix form: g = Hf + η
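The spatial-domain model g = h*f + η can be sketched in a few lines of Python. This is a toy illustration, not from the slides: the 3 x 3 box-blur PSF, the single-spike test image, and the zero treatment outside the border are all illustrative choices.

```python
import random

def degrade(f, h, noise_sigma):
    """Apply g = h*f + eta: convolve image f with PSF h, then add Gaussian noise."""
    H, W = len(f), len(f[0])
    kh, kw = len(h), len(h[0])
    cy, cx = kh // 2, kw // 2
    g = [[0.0] * W for _ in range(H)]
    for y in range(H):
        for x in range(W):
            acc = 0.0
            for i in range(kh):
                for j in range(kw):
                    yy, xx = y + i - cy, x + j - cx
                    if 0 <= yy < H and 0 <= xx < W:   # pixels outside count as 0
                        acc += h[i][j] * f[yy][xx]
            g[y][x] = acc + random.gauss(0, noise_sigma)
    return g

f = [[0, 0, 0], [0, 9, 0], [0, 0, 0]]     # a single bright pixel
h = [[1 / 9] * 3 for _ in range(3)]       # 3x3 box-blur PSF
g = degrade(f, h, noise_sigma=0.0)        # noise-free case: pure blur
```

With zero noise the bright pixel is spread evenly over its neighborhood, which is exactly the h*f term of the model.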

8 Noise Models Most types of noise are modeled by known probability density functions; the noise model is chosen based on an understanding of the physics of the noise source. Gaussian: poor illumination. Rayleigh: range imaging. Gamma, exponential: laser imaging. Impulse: faulty switching during imaging. Uniform: the least used in practice. The model parameters can be estimated from the histogram of a small flat area of the image.

9

10 Gaussian Rayleigh Gamma

11 Exponential Uniform Salt & Pepper

12 Noise Removal Restoration Methods
Mean filters Arithmetic mean filter Geometric mean filter Harmonic mean filter Contra-harmonic mean filter Order statistics filters Median filter Max and min filters Mid-point filter alpha-trimmed filters Adaptive filters Adaptive local noise reduction filter Adaptive median filter
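Of the order-statistics filters listed above, the median filter is the workhorse for impulse noise. A minimal sketch (pure Python, edge pixels handled by clamping to the border, which is one of several reasonable conventions):

```python
def median_filter(img, k=3):
    """Order-statistics filter: replace each pixel by the median of its k x k window."""
    H, W, r = len(img), len(img[0]), k // 2
    out = [[0] * W for _ in range(H)]
    for y in range(H):
        for x in range(W):
            # gather the window, clamping indices at the image border
            win = [img[min(max(y + i, 0), H - 1)][min(max(x + j, 0), W - 1)]
                   for i in range(-r, r + 1) for j in range(-r, r + 1)]
            win.sort()
            out[y][x] = win[len(win) // 2]
    return out

noisy = [[10, 10, 10], [10, 255, 10], [10, 10, 10]]   # one "salt" pixel
clean = median_filter(noisy)                           # the outlier is removed
```

The isolated 255 never survives: it is always outvoted by the surrounding 10s, which is why median filtering excels at salt-and-pepper noise while barely blurring edges.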

13 Inverse Filter Recall the degradation model: G(u,v) = H(u,v)F(u,v) + N(u,v)
Given H(u,v), one may directly estimate the original image by F̂(u,v) = G(u,v)/H(u,v) = F(u,v) + N(u,v)/H(u,v). At frequencies (u,v) where H(u,v) ≈ 0, the noise term N(u,v)/H(u,v) will be amplified! Invfildemo.m
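A toy 1D sketch of the inverse filter, with a naive DFT and a threshold `eps` standing in for "leave F̂ = 0 where H(u,v) ≈ 0". Everything here (the spike signal, the blur kernel, the threshold value) is an illustrative assumption, not from the slides:

```python
import cmath

def dft(x):
    """Naive O(N^2) DFT, enough for a toy example."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def inverse_filter(G, H, eps=1e-3):
    """F_hat = G/H, except F_hat = 0 where |H| < eps, to avoid noise blow-up."""
    return [g / h if abs(h) > eps else 0.0 for g, h in zip(G, H)]

f = [0.0, 0.0, 4.0, 0.0]                 # 1D "image": a single spike
h = [0.5, 0.25, 0.0, 0.25]               # blur kernel whose H(2) = 0 exactly
F, Hf = dft(f), dft(h)
G = [a * b for a, b in zip(F, Hf)]       # degrade in frequency: G = H F (no noise)
f_hat = [c.real for c in idft(inverse_filter(G, Hf))]
```

The zeroed frequency cannot be recovered, so the restored spike is attenuated, but without the threshold that division would be 0/0 (or, with noise, a huge amplified term).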

14 Wiener Filtering Minimum mean-square error filter
Assume f and η are both 2D random sequences, uncorrelated with each other. Goal: to minimize the mean-square error e² = E{(f − f̂)²}. Solution: F̂(u,v) = [ H*(u,v)Sf(u,v) / ( |H(u,v)|²Sf(u,v) + Sη(u,v) ) ] G(u,v), a frequency-selective scaling of the inverse-filter solution! For white noise with unknown Sf(u,v): F̂(u,v) = [ H*(u,v) / ( |H(u,v)|² + K ) ] G(u,v).
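The white-noise form of the filter is a one-liner per frequency. A minimal sketch on hand-picked 1D spectra (the values of H, F, and the constant K are illustrative assumptions; the point is that the zero of H at u = 2 no longer causes a blow-up):

```python
def wiener_filter(G, H, K=0.01):
    """Wiener deconvolution, white-noise form:
    F_hat(u) = [ conj(H(u)) / (|H(u)|^2 + K) ] * G(u),
    where K approximates the noise-to-signal spectral ratio."""
    return [(h.conjugate() / (abs(h) ** 2 + K)) * g for g, h in zip(G, H)]

# Toy 1D spectra: H has a zero at u = 2, where an inverse filter would divide by 0.
H = [1.0 + 0j, 0.5 + 0j, 0.0 + 0j, 0.5 + 0j]
F = [4.0 + 0j, -4.0 + 0j, 4.0 + 0j, -4.0 + 0j]
G = [h * f for h, f in zip(H, F)]        # noise-free degradation G = H F
F_hat = wiener_filter(G, H)
```

Where H is strong the filter approaches 1/H; where H vanishes the K term dominates and the output is gracefully driven to 0 instead of exploding.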

15 Constrained Least Squares (CLS) Filter
For each pixel, assume the noise η has a Gaussian distribution. This leads to a likelihood function p(g|f) ∝ exp( −||g − h*f||² / 2σ² ). A constraint representing the prior distribution of f is imposed: the exponential form of the pdf of f is known as the Gibbs distribution. Since L(f) ∝ p(g|f) and g is given, by Bayes' rule, to maximize the posterior probability one should minimize ||g − h*f||² + γ||q*f||², where q is an operator based on prior knowledge about f. For example, it may be the Laplacian operator!

16 Intuitive Interpretation of CLS
Prior knowledge: most images are smooth, so ||q*f̂||² should be minimized. However, the restored image f̂, after going through the same degradation process h, should be close to the given degraded image g. The difference between g and h*f̂ is bounded by the amount of the additive noise: ||g − h*f̂||² = ||η||². In practice, ||η||² is unknown and needs to be estimated from the variance of the noise.

17 Solution and Iterative Algorithm
To minimize CCLS, set ∂CCLS/∂F̂ = 0. This yields F̂(u,v) = [ H*(u,v) / ( |H(u,v)|² + γ|Q(u,v)|² ) ] G(u,v). The value of γ, however, has to be determined iteratively! It should be chosen such that the residual R = G − HF̂ satisfies ||R||² ≈ ||N||². Iterative algorithm (Hunt): 1. Set an initial value of γ. 2. Find F̂ and compute R(u,v). 3. If ||R||² − ||N||² < −a, set BL = γ and increase γ; else if ||R||² − ||N||² > a, set BU = γ and decrease γ; else stop the iteration. 4. γnew = (BU + BL)/2; go to step 2.

18 CLS Demonstration

19 Image Compression

20 Image Compression Every day an enormous amount of information is stored, processed, and transmitted: financial data, reports, inventory, cable TV, online ordering and tracking.

21 Image Compression Because much of this information is graphical or pictorial in nature, the storage and communications requirements are immense. Image compression addresses the problem of reducing the amount of data required to represent a digital image. Image compression is an enabling technology: HDTV. It also plays an important role in video conferencing, remote sensing, satellite TV, FAX, document imaging, and medical imaging.

22 Image Compression We want to remove redundancy from the data
Mathematically: find a transformation that maps the 2D array of pixels into a statistically uncorrelated data set.

23 Image Compression Outline: Fundamentals Error-Free Compression
Coding Redundancy Interpixel Redundancy Psychovisual Redundancy Fidelity Criteria Error-Free Compression Variable-length Coding LZW Coding Predictive Coding Lossy Compression Transform Coding Wavelet Coding Image Compression Standards

24 Fundamentals The term data compression refers to the process of reducing the amount of data required to represent a given quantity of information. Data ≠ Information: various amounts of data can be used to represent the same information. Data might contain elements that provide no relevant information: data redundancy. Data redundancy is a central issue in image compression. It is not an abstract concept but a mathematically quantifiable entity.

25

26 Data Redundancy Let n1 and n2 denote the number of information-carrying units in two data sets that represent the same information. The relative redundancy RD is defined as RD = 1 − 1/CR, where CR, commonly called the compression ratio, is CR = n1/n2.

27 Data Redundancy If n1 = n2, then CR = 1 and RD = 0: no redundancy.
If n1 >> n2, then CR → ∞ and RD → 1: high redundancy. If n1 << n2, then CR → 0 and RD → −∞: undesirable (data expansion). A compression ratio of 10 (10:1) means that the first data set has 10 information-carrying units (say, bits) for every 1 unit in the second (compressed) data set. The corresponding redundancy of 0.9 says that 90% (RD = 0.9) of the data in the first data set is redundant. In image compression, 3 basic redundancies can be identified: Coding Redundancy, Interpixel Redundancy, Psychovisual Redundancy.
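The 10:1 example above is easy to check numerically; the specific unit counts below are just the slide's example restated:

```python
def compression_stats(n1, n2):
    """Compression ratio CR = n1/n2 and relative redundancy RD = 1 - 1/CR."""
    cr = n1 / n2
    rd = 1 - 1 / cr
    return cr, rd

# 1000 bits before compression, 100 bits after: the slide's 10:1 case
cr, rd = compression_stats(n1=1000, n2=100)
```

This yields CR = 10 and RD = 0.9, i.e. 90% of the original data is redundant.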

28 Coding Redundancy Recall from the histogram calculations
p(rk) = nk/n, where p(rk) is the probability that a pixel has the value rk. If the number of bits used to represent rk is l(rk), then the average number of bits per pixel is Lavg = Σk l(rk) p(rk).

29 Coding Redundancy Example:

30 Coding Redundancy Variable-Length Coding

31 Inter-pixel Redundancy
Here the two pictures have approximately the same histogram. We must exploit pixel dependencies: each pixel can be estimated from its neighbors.

32 Interpixel Redundancy
The intensity of a pixel may correlate strongly with the intensities of its neighbors. Because the value of any given pixel can be reasonably predicted from the values of its neighbors, much of the visual contribution of a single pixel to an image is redundant; it could have been guessed on the basis of its neighbors' values. We can remove this redundancy by representing changes in intensity rather than absolute intensity values. For example, the differences between adjacent pixels can be used to represent an image. Transformations of this type are referred to as mappings. They are called reversible if the original image elements can be reconstructed from the transformed data set. For example, the sequence (50, 50, ..., 50) becomes (50, 4).

33 Run-Length Encoding Example of Inter-pixel Redundancy removal
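A minimal run-length codec illustrating the idea (the pixel row below is an invented example; real RLE schemes differ in how they pack the pairs into bits):

```python
def rle_encode(pixels):
    """Run-length encode a sequence as (value, run_length) pairs."""
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1           # extend the current run
        else:
            runs.append([p, 1])        # start a new run
    return [tuple(r) for r in runs]

def rle_decode(runs):
    """Reversible mapping: expand the pairs back to the pixel sequence."""
    return [v for v, n in runs for _ in range(n)]

row = [255] * 6 + [0] * 3 + [255] * 2   # 11 pixels
codes = rle_encode(row)                 # 3 (value, length) pairs
```

Eleven pixels collapse to three pairs, and the decode step recovers the row exactly, so the mapping is reversible (error-free).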

34 Psycho-visual Redundancy
The human visual system is more sensitive to edges. Middle picture: uniform quantization from 256 to 16 gray levels (CR = 2). Right picture: improved gray-scale (IGS) quantization.

35 Fidelity Criteria The removal of irrelevant visual information involves a loss of real or quantitative image information. Since information is lost, a means of quantifying the nature of the loss is needed. There are two types: objective fidelity criteria and subjective fidelity criteria. Objective fidelity criteria: the information loss can be expressed as a mathematical function of the input and output of a compression process, e.g. the RMS error between 2 images.

36 Fidelity Criteria Subjective fidelity criteria:
A decompressed image is presented to a cross-section of viewers and their evaluations are averaged. This can be done using an absolute rating scale or by means of side-by-side comparisons of f(x, y) and f'(x, y). Side-by-side comparison can be done with a scale such as {-3, -2, -1, 0, 1, 2, 3} to represent the subjective evaluations {much worse, worse, slightly worse, the same, slightly better, better, much better} respectively.

37 Fidelity Criteria: Objective fidelity criteria, example (RMS)
The error between the two images at a pixel is given by e(x, y) = f̂(x, y) − f(x, y). So the total error between the two M x N images is Σx Σy [f̂(x, y) − f(x, y)]. The root-mean-square error averaged over the whole image is erms = sqrt( (1/MN) Σx Σy [f̂(x, y) − f(x, y)]² ).
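The RMS formula translates directly into code; the 2 x 2 images below are invented toy values:

```python
import math

def rms_error(f, f_hat):
    """Root-mean-square error between an M x N image and its reconstruction."""
    M, N = len(f), len(f[0])
    total = sum((f_hat[x][y] - f[x][y]) ** 2 for x in range(M) for y in range(N))
    return math.sqrt(total / (M * N))

f = [[10, 20], [30, 40]]
f_hat = [[12, 20], [30, 40]]     # one pixel off by 2
err = rms_error(f, f_hat)        # sqrt(4 / 4) = 1.0
```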

38 Fidelity Criteria A closely related objective fidelity criterion is the mean-square signal-to-noise ratio of the compressed-decompressed image: SNRms = Σx Σy f̂(x, y)² / Σx Σy [f̂(x, y) − f(x, y)]².

39 Fidelity Criteria

40 Compression Model The source encoder is responsible for removing redundancy (coding, inter-pixel, psycho-visual) The channel encoder ensures robustness against channel noise.

41 The Source Encoder and Decoder
The Source Encoder reduces/eliminates any coding, interpixel, or psychovisual redundancies. The Source Encoder contains 3 processes: Mapper: transforms the image into an array of coefficients, reducing interpixel redundancies. This is a reversible (not lossy) process. Quantizer: reduces the accuracy of the mapper's output and hence the psychovisual redundancies of a given image. This process is irreversible and therefore lossy.

42 Symbol Encoder: This is the source-encoding step, where a fixed- or variable-length code is used to represent the mapped and quantized data sets. This is a reversible (not lossy) process that removes coding redundancy by assigning the shortest codes to the most frequently occurring output values. In summary: interpixel redundancies: Mapper; psychovisual redundancies: Quantizer; coding redundancies: Symbol Encoder. The Source Decoder contains two components. Symbol Decoder: the inverse of the symbol encoder; the variable-length coding is reversed. Inverse Mapper: the inverse of the mapping that removed the interpixel redundancy. The only lossy element is the Quantizer, which removes psychovisual redundancies, causing irreversible loss. Every lossy compression method contains the quantizer module. If error-free compression is desired, the quantizer module is removed.

43 Error-Free Compression
Compression types: Lossy Compression and Error-Free (Lossless) Compression.

44 Error-Free Compression
Some applications require compression with no error (medical, business documents, etc.). CR = 2 to 10 can be expected. These methods make use of coding redundancy and inter-pixel redundancy. Ex: Huffman codes, LZW, arithmetic coding, 1D and 2D run-length encoding, lossless predictive coding, and bit-plane coding.

45 Huffman Coding The most popular technique for removing coding redundancy is due to Huffman (1952) Huffman Coding yields the smallest number of code symbols per source symbol The resulting code is optimal
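A compact Huffman construction using a priority queue. This is a generic sketch (the symbol probabilities below are illustrative, and real coders also need the decode table), repeatedly merging the two least-probable nodes:

```python
import heapq
import itertools

def huffman_codes(probs):
    """Build a Huffman code for {symbol: probability}; returns {symbol: bitstring}."""
    counter = itertools.count()   # tie-breaker so the heap never compares dicts
    heap = [[p, next(counter), {s: ""}] for s, p in sorted(probs.items())]
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)   # two least-probable subtrees
        p2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + b for s, b in c1.items()}      # prefix one subtree with 0
        merged.update({s: "1" + b for s, b in c2.items()})  # the other with 1
        heapq.heappush(heap, [p1 + p2, next(counter), merged])
    return heap[0][2]

codes = huffman_codes({"a": 0.4, "b": 0.3, "c": 0.2, "d": 0.1})
```

The most probable symbol gets a 1-bit word and the two rarest get 3 bits, for an average of 1.9 bits/symbol versus 2 bits for a fixed-length code.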

46 Huffman Codes

47 Huffman Codes

48 Fixed-Length: LZW Coding
Error-free compression technique. Removes inter-pixel redundancy. Requires no a priori knowledge of the probability distribution of pixels. Assigns fixed-length code words to variable-length pixel sequences. Included in the GIF, TIFF, and PDF file formats.

49 LZW Coding Technique
A codebook (dictionary) has to be constructed. For an 8-bit monochrome image, the first 256 entries are assigned to the gray levels 0, 1, 2, ..., 255. As the encoder examines the image pixels, gray-level sequences that are not yet in the dictionary are assigned to new entries. For instance, a two-pixel sequence can be assigned to entry 256, the address following the locations reserved for gray levels 0 to 255.

50 LZW Coding Example
Consider the following 4 x 4, 8-bit image. Initial dictionary: location 0 holds entry 0, location 1 holds entry 1, and so on through location 255.

51 LZW Coding
Is 39 in the dictionary? Yes. What about 39-39? No. Then add 39-39 at entry 256 and output the last recognized symbol: 39.

52 Example Code the following 4 x 4 image, each of whose rows is 39 39 126 126, using LZW codes.
* How can we decode the compressed sequence to obtain the original image ?
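The encoding steps above can be sketched as follows; the test image matches the slides' 4 x 4 example with rows 39 39 126 126:

```python
def lzw_encode(pixels):
    """LZW: fixed-length codes for variable-length pixel sequences.
    The first 256 dictionary entries are the gray levels 0..255."""
    dictionary = {(i,): i for i in range(256)}
    next_code, out, s = 256, [], ()
    for p in pixels:
        if s + (p,) in dictionary:
            s = s + (p,)                       # grow the currently recognized sequence
        else:
            out.append(dictionary[s])          # emit code of last recognized sequence
            dictionary[s + (p,)] = next_code   # add the new sequence to the dictionary
            next_code += 1
            s = (p,)
    out.append(dictionary[s])
    return out

image = [39, 39, 126, 126, 39, 39, 126, 126,
         39, 39, 126, 126, 39, 39, 126, 126]
codes = lzw_encode(image)
```

Tracing the dictionary growth by hand gives the code stream 39 39 126 126 256 258 260 259 257 126: sixteen 8-bit pixels become ten 9-bit codes. Decoding rebuilds the same dictionary on the fly from the code stream itself, which answers the question above.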

53 LZW Coding

54 Arithmetic Coding
Arithmetic coding generates non-block codes: a one-to-one correspondence between source symbols and code words does not exist. Instead, an entire sequence of source symbols is assigned a single arithmetic code word, which defines an interval of real numbers between 0 and 1. As the number of symbols in the message grows, the interval used to represent it becomes smaller, and the number of bits needed to represent the interval grows. Each symbol of the message reduces the size of the interval in accordance with its probability of occurrence.

55 Basic arithmetic coding process: a 5-symbol message, a1 a2 a3 a3 a4, from a 4-symbol source is coded.
Source Symbol | Probability | Initial Subinterval
a1 | 0.2 | [0.0, 0.2)
a2 | 0.2 | [0.2, 0.4)
a3 | 0.4 | [0.4, 0.8)
a4 | 0.2 | [0.8, 1.0)
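The interval-narrowing process is a short loop. Using the source table above on the message a1 a2 a3 a3 a4:

```python
def arithmetic_encode(message, intervals):
    """Narrow [low, high) once per symbol; any number inside the final
    interval encodes the entire message."""
    low, high = 0.0, 1.0
    for s in message:
        lo, hi = intervals[s]            # the symbol's subinterval of [0, 1)
        rng = high - low
        low, high = low + rng * lo, low + rng * hi
    return low, high

intervals = {"a1": (0.0, 0.2), "a2": (0.2, 0.4),
             "a3": (0.4, 0.8), "a4": (0.8, 1.0)}
low, high = arithmetic_encode(["a1", "a2", "a3", "a3", "a4"], intervals)
```

The interval shrinks from [0, 1) through widths 0.2, 0.04, 0.016, 0.0064 down to [0.06752, 0.0688): any single number in that range, e.g. 0.068, encodes all five symbols.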

56 Arithmetic coding: encoding the sequence a1 a2 a3 a3 a4.
[Figure: the interval [0, 1) is successively subdivided, one stage per symbol; the interval widths shrink from 1 to 0.2, 0.08, 0.072, and so on, and each stage is split into subintervals labeled a1 through a4.]


62 Bit-Plane Coding An effective technique to reduce inter-pixel redundancy is to process each bit plane individually. The image is decomposed into a series of binary images, and each binary image is compressed using one of the well-known binary compression techniques.

63 Bit-Plane Decomposition

64 Bit-Plane Encoding
Constant Area Coding, One-Dimensional Run-Length Coding, Two-Dimensional Run-Length Coding.

65 Loss-less Predictive Encoding

66 Loss-less Predictive Encoding

67 Lossy Compression

68 Lossy Compression

69 DPCM(Differential Pulse Coded Modulation )

70 DPCM

71 Predictive Coding Revisited
Encoding: x1 x2 ... xN → y1 y2 ... yN, where y1 = x1 and yn = xn − xn−1 for n = 2, ..., N.
Decoding: y1 y2 ... yN → x1 x2 ... xN, where x1 = y1 and xn = yn + xn−1 for n = 2, ..., N.
[Block diagram: the encoder subtracts the delayed sample (delay element D) to form yn; the decoder's DPCM loop adds the residue back to the delayed reconstruction.]
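The encode/decode equations above, as a lossless round trip (sample values taken from the numerical example used later in this deck):

```python
def predictive_encode(x):
    """y1 = x1, yn = xn - x(n-1): transmit the first sample plus differences."""
    return [x[0]] + [x[n] - x[n - 1] for n in range(1, len(x))]

def predictive_decode(y):
    """x1 = y1, xn = yn + x(n-1): running sum undoes the differencing."""
    x = [y[0]]
    for n in range(1, len(y)):
        x.append(y[n] + x[n - 1])
    return x

samples = [90, 92, 91, 93, 93, 95]
residues = predictive_encode(samples)    # small numbers, cheap to entropy-code
```

The residues are small integers clustered near zero, which is exactly why this mapping removes interpixel redundancy; decoding recovers the samples exactly.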

72 Open-Loop DPCM
[Block diagram: the encoder forms yn = xn − xn−1 from unquantized samples and quantizes it to ŷn outside the loop; the decoder forms x̂n = ŷn + x̂n−1.]
Notes: prediction is based on the past unquantized sample; quantization is located outside the DPCM loop.

73 Catastrophic Error Propagation with Open-Loop DPCM
original samples (a): 90 92 91 93 93 95
prediction residues (a − b): 90 2 −1 2 0 2
quantized residues (a single −1 error caused by quantization): 90 2 0 2 0 2
decoded samples (a + b): 90 92 92 94 94 96
One quantization error in a single residue shifts every later decoded sample.

74 Closed-Loop DPCM
[Block diagram: the quantizer Q sits inside the prediction loop; both encoder and decoder predict from the past decoded sample x̂n−1.]
xn, yn: unquantized samples and prediction residues. x̂n, ŷn: decoded samples and quantized prediction residues.
Notes: prediction is based on the past decoded sample; quantization is located inside the DPCM loop.
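A closed-loop DPCM sketch. The coarse quantizer below (round to a multiple of 3) is an illustrative stand-in for the deck's unspecified Q; the sample values are the deck's running example:

```python
def dpcm_closed_loop(x, quantize):
    """Closed-loop DPCM: predict from the past *decoded* sample, so
    quantization error cannot accumulate at the decoder."""
    decoded, residues = [], []
    pred = 0
    for sample in x:
        y = sample - pred        # prediction residue
        yq = quantize(y)         # quantization happens inside the loop
        residues.append(yq)
        pred = pred + yq         # decoded sample = prediction + quantized residue
        decoded.append(pred)     # the decoder computes exactly this same value
    return residues, decoded

q3 = lambda y: 3 * round(y / 3)  # toy quantizer: round to the nearest multiple of 3
residues, decoded = dpcm_closed_loop([90, 92, 91, 93, 93, 95], q3)
```

Every decoded sample stays within one quantizer step of the original; unlike the open-loop case, the error never snowballs.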

75 Numerical Example
xn: 90 92 91 93 93 95
yn = xn − x̂n−1: 90 2 −2 3 0 2
ŷn = Q(yn): 90 3 −3 3 0 3
x̂n = x̂n−1 + ŷn: 90 93 90 93 93 96
The quantization error stays bounded; it does not accumulate.

76 Closed-Loop DPCM Analysis
[Block diagram with points A (before the quantizer, on yn) and B (after the loop, on x̂n) marked.] The distortion introduced to the prediction residue yn is identical to that introduced to the original sample xn.

77 DPCM Summary Open-loop DPCM: prediction is based on the original, unquantized samples; catastrophic error propagation results, because the decoder has access only to the quantized samples, not the originals, when forming its prediction. Closed-loop DPCM: both encoder and decoder employ decoded samples for prediction; quantization noise affects the accuracy of the prediction. DPCM is only suitable for lossy image coding at high bit rates (small quantization noise).

78 Transform Coding A reversible linear transform (such as the Fourier transform) is used to map the image into a set of transform coefficients. These coefficients are then quantized and coded. The goal of transform coding is to decorrelate the pixels and pack as much information as possible into a small number of transform coefficients. Compression is achieved during quantization, not during the transform step.

79 Transform Coding

80 2D Transforms: Energy Packing
2D transforms pack most of the energy into a small number of coefficients located at the upper-left corner of the 2D array.

81 2D Transforms Consider an image f(x, y) of size N x N. Forward transform: T(u, v) = Σx Σy f(x, y) g(x, y, u, v), where g(x, y, u, v) is the forward transformation kernel (the basis functions).

82 2D Transforms Inverse transform: f(x, y) = Σu Σv T(u, v) h(x, y, u, v), where h(x, y, u, v) is the inverse transformation kernel (the basis functions).

83 Discrete Cosine Transform
One of the most frequently used transformations for image compression is the DCT: C(u) = α(u) Σx f(x) cos[ (2x + 1)uπ / 2N ], where α(u) = sqrt(1/N) for u = 0 and α(u) = sqrt(2/N) for u = 1, 2, ..., N−1.
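The 1D DCT formula above, implemented directly (a naive O(N²) sketch; the constant input signal is an illustrative choice to show energy packing):

```python
import math

def dct_1d(f):
    """1D DCT-II with alpha(0) = sqrt(1/N), alpha(u) = sqrt(2/N) otherwise."""
    N = len(f)
    C = []
    for u in range(N):
        a = math.sqrt(1 / N) if u == 0 else math.sqrt(2 / N)
        C.append(a * sum(f[x] * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                         for x in range(N)))
    return C

coeffs = dct_1d([8, 8, 8, 8])    # a constant (maximally correlated) signal
```

All the energy lands in the DC coefficient C(0) and the remaining coefficients vanish: the energy-packing property that makes quantizing the high-frequency terms so cheap.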

84 Discrete Cosine Transform

85 2D Transforms

86 Effect of Window Size

87 Quantization Quantizer

88 Quantization Each transformed coefficient is quantized

89 Quantization

90 Bit allocation and Zig Zag Ordering

91 DCT and Quantization Right Column

92 Wavelet Coding

93 Wavelet Transform [Quadrants 1 2 3 4] Put a pixel in each quadrant: no size change.

94 Wavelet Transform Now let
a = (x1 + x2 + x3 + x4)/4
b = (x1 + x2 − x3 − x4)/4
c = (x1 + x3 − x2 − x4)/4
d = (x1 + x4 − x2 − x3)/4
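The slide's 2 x 2 transform and its inverse, verified on toy pixel values (the inverse formulas follow by solving the four equations above for x1..x4):

```python
def haar_2x2(x1, x2, x3, x4):
    """One 2x2 block -> average a plus three detail coefficients b, c, d,
    exactly as defined on the slide."""
    a = (x1 + x2 + x3 + x4) / 4
    b = (x1 + x2 - x3 - x4) / 4
    c = (x1 + x3 - x2 - x4) / 4
    d = (x1 + x4 - x2 - x3) / 4
    return a, b, c, d

def haar_2x2_inverse(a, b, c, d):
    """Recover the block: the transform is perfectly reversible."""
    x1 = a + b + c + d
    x2 = a + b - c - d
    x3 = a - b + c - d
    x4 = a - b - c + d
    return x1, x2, x3, x4

a, b, c, d = haar_2x2(10, 12, 14, 16)
```

For a smooth block the detail coefficients b, c, d come out small, which is what the next slides exploit when quantizing the high-frequency subbands.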

95 Wavelet Transform

96 Wavelet Transform

97 Wavelet Transform

98 Wavelet Coding High-frequency coefficients tend to be very small (near 0), so they can be quantized very effectively without distorting the results.

99 Wavelet Transform DCT Wavelet

100 Wavelet Transform

101 Image Compression Standards
Binary Compression Standards CCITT G3 -> 1D Run Length Encoding CCITT G4 -> 2D Run Length encoding JBIG1 -> Lossless adaptive binary compression JBIG2 -> Lossy/Lossless adaptive binary compression

102 JBIG/JBIG2

103 Image Compression Standards
Continuous Tone Still Image Compression Standards JPEG JPEG 2000 Mixed Raster Content (MRC)

104 MRC

105 Video Compression

