Image Compression (Chapter 8) CSC 446 Lecturer: Nada ALZaben
Outline: Introduction. Image Compression Model. Compression Types. Data Redundancy. Redundancy Types. Coding Redundancy. Lossless Compression.
Introduction Most data nowadays are available online, and because of limited storage space and communication requirements, methods of compressing data prior to storage and/or transmission have become an interesting field of study. Image compression addresses the problem of reducing the amount of data required to represent a digital image. The image is compressed prior to storage and/or transmission and later decompressed to reconstruct the original image.
Image Compression Model Source encoder: removes input redundancy. Channel encoder: increases the noise immunity of the source encoder's output. Channel: if the channel is noise free, the channel encoder and channel decoder are omitted.
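A minimal sketch of how these stages compose, assuming a noise-free channel so the channel encoder/decoder pair is omitted; the stage functions are placeholders of my own (zlib standing in for a generic lossless source encoder), not anything specified in the lecture:

    import zlib

    def source_encode(image_bytes):
        # Source encoder: removes redundancy from the input.
        return zlib.compress(image_bytes)

    def source_decode(data):
        # Source decoder: reconstructs the image from the compressed data.
        return zlib.decompress(data)

    def channel(data):
        # Noise-free channel, so no channel encoder/decoder is needed.
        return data

    def transmit(image_bytes):
        return source_decode(channel(source_encode(image_bytes)))

    row = bytes([0] * 100 + [255] * 100)   # toy image row
    assert transmit(row) == row            # lossless round trip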
Compression Types 1) Lossy image compression: useful in applications such as television broadcasting and video conferencing, where a certain amount of error is an acceptable trade-off for increased compression performance. 2) Lossless image compression: useful in image archiving, such as medical records, where the image must be compressed and decompressed without losing any information.
Data Redundancy
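The formulas on the data-redundancy slides are not included in this text, so the following is only a sketch of the standard definitions that usually accompany this topic, assuming the usual notation where n_1 and n_2 are the numbers of information-carrying units (e.g. bits) in two representations of the same information:

    C_R = n_1 / n_2          (compression ratio)
    R_D = 1 - 1 / C_R        (relative data redundancy)

For example, if the compressed representation needs one tenth of the original data, then C_R = 10 and R_D = 0.9, i.e. 90% of the data in the original representation is redundant.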
Redundancy Types 1) Coding redundancy. 2) Interpixel redundancy. 3) Psychovisual redundancy. Data compression is achieved when one or more of these redundancies is reduced or eliminated.
Coding Redundancy
Coding Redundancy [example]
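The body of the coding-redundancy slides is likewise not included here; as a sketch of the quantity usually defined at this point (assuming the standard notation), if gray level r_k occurs with probability p_r(r_k) = n_k / n and is assigned a codeword of l(r_k) bits, the average number of bits used per pixel is

    L_avg = \sum_{k=0}^{L-1} l(r_k) p_r(r_k)

With a natural m-bit fixed-length code, l(r_k) = m for every level and L_avg = m; coding redundancy is present whenever a variable-length code can reach a smaller L_avg.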
Lossless Compression Assigning fewer bits to the more probable gray levels than to the least probable ones achieves data compression; this is called “variable-length coding”. The Huffman code is a kind of variable-length coding.
Lossless Compression “Huffman code example”. The letters A, B, C, D, and E are to be encoded and have the following relative probabilities of occurrence: p(A)=0.16, p(B)=0.51, p(C)=0.09, p(D)=0.13, p(E)=0.11. The two characters with the lowest probabilities are combined first in a binary tree that has the characters as leaves: p(CE)=0.20. Each right branch is labelled 1 and each left branch 0.
“Huffman code example”.
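A small Python sketch of this construction, using the probabilities from the example above; the tie-breaking and the exact 0/1 labelling may differ from the tree drawn on the slide, but the resulting code lengths are equivalent:

    import heapq

    def huffman_codes(probs):
        # Each heap entry is (probability, tie-breaker, {symbol: partial codeword}).
        heap = [(p, i, {s: ""}) for i, (s, p) in enumerate(sorted(probs.items()))]
        heapq.heapify(heap)
        counter = len(heap)
        while len(heap) > 1:
            p1, _, codes1 = heapq.heappop(heap)   # lowest remaining probability
            p2, _, codes2 = heapq.heappop(heap)   # second lowest
            merged = {s: "0" + c for s, c in codes1.items()}        # left branch -> 0
            merged.update({s: "1" + c for s, c in codes2.items()})  # right branch -> 1
            heapq.heappush(heap, (p1 + p2, counter, merged))
            counter += 1
        return heap[0][2]

    probs = {"A": 0.16, "B": 0.51, "C": 0.09, "D": 0.13, "E": 0.11}
    codes = huffman_codes(probs)
    avg_len = sum(probs[s] * len(codes[s]) for s in probs)
    print(codes)     # B gets a 1-bit codeword, the other four letters 3-bit codewords
    print(avg_len)   # about 1.98 bits per symbol, versus 3 bits for a fixed-length code

The most probable letter, B, ends up with a 1-bit codeword and the other four letters with 3-bit codewords, giving an average length of 0.51*1 + 0.49*3 = 1.98 bits per symbol instead of the 3 bits a fixed-length code would need.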
Lossless Compression “Run-length encoding”. The idea of run-length encoding (RLE) is to replace long sequences (runs) of identical samples with a special code that indicates the value to be repeated and the number of times it is repeated, as in the slide's example RLE (3,1)(2,0)(1,1)(4,0).
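A short Python sketch of this scheme; the input bit sequence below is not shown in the extracted slide, so I use one (1 1 1 0 0 1 0 0 0 0) chosen to reproduce the (3,1)(2,0)(1,1)(4,0) pairs, read as (run length, value):

    from itertools import groupby

    def rle_encode(samples):
        # Replace each run of identical samples with a (run_length, value) pair.
        return [(len(list(group)), value) for value, group in groupby(samples)]

    def rle_decode(pairs):
        # Expand the (run_length, value) pairs back into the original samples.
        return [value for length, value in pairs for _ in range(length)]

    bits = [1, 1, 1, 0, 0, 1, 0, 0, 0, 0]    # assumed input matching the slide's pairs
    print(rle_encode(bits))                  # [(3, 1), (2, 0), (1, 1), (4, 0)]
    assert rle_decode(rle_encode(bits)) == bits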