1
Image Compression (Chapter 8) CSC 446 Lecturer: Nada ALZaben
2
Outline: Introduction. Image Compression Model. Compression Types. Data Redundancy. Redundancy Types. Coding Redundancy. Lossless Compression.
3
Introduction Most data nowadays is available online, and given limited storage space and communication requirements, methods of compressing data prior to storage and/or transmission have become an interesting field of study. Image compression addresses the problem of reducing the amount of data required to represent a digital image. An image is compressed prior to storage and/or transmission and then decompressed to reconstruct the original image.
4
Image Compression Model Source encoder: removes input redundancy. Channel encoder: increases the noise immunity of the source encoder's output. Channel: if the channel is noise-free, the channel encoder and channel decoder are omitted.
5
Compression Types 1) Lossy image compression: useful in applications such as broadcast television and video conferencing, where a certain amount of error is an acceptable trade-off for increased compression performance. 2) Lossless image compression: useful in image archiving, such as medical records, where the image must be compressed and decompressed without losing any information.
6
Data redundancy [1]
7
Data redundancy [2]
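The two "Data redundancy" slides above are image-only in the original deck. As a hedged sketch of the standard definitions from Chapter 8 (compression ratio C = n1/n2 and relative data redundancy R = 1 - 1/C, where n1 and n2 are the bit counts of the original and compressed representations; the example numbers below are illustrative, not from the slides):

```python
def compression_ratio(n1: float, n2: float) -> float:
    """Compression ratio C = n1 / n2 (original bits over compressed bits)."""
    return n1 / n2

def relative_redundancy(n1: float, n2: float) -> float:
    """Relative data redundancy R = 1 - 1/C."""
    return 1.0 - 1.0 / compression_ratio(n1, n2)

# Example: a 256x256 8-bit image (524288 bits) compressed to 65536 bits.
C = compression_ratio(524288, 65536)    # C = 8.0
R = relative_redundancy(524288, 65536)  # R = 0.875, i.e. 87.5% of the data is redundant
```

When n2 equals n1 (no compression), C = 1 and R = 0; as n2 shrinks, R approaches 1.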
8
Redundancy Types 1) Coding redundancy. 2) Interpixel redundancy. 3) Psychovisual redundancy. Data compression is achieved when one or more of these redundancies is reduced or eliminated.
9
Coding Redundancy
10
Coding Redundancy [example]
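The coding-redundancy example on this slide is an image in the original deck. As a hedged illustration (the gray-level probabilities and code lengths below are made up, not taken from the slide), coding redundancy can be shown by comparing the average length of a fixed 3-bit code with a variable-length code weighted by the level probabilities p(r_k):

```python
# Hypothetical gray-level probabilities and code lengths (illustrative only).
probs        = [0.4, 0.3, 0.1, 0.1, 0.06, 0.04]
fixed_len    = [3, 3, 3, 3, 3, 3]   # natural 3-bit binary code
variable_len = [1, 2, 3, 4, 5, 5]   # shorter codes for the likelier levels

def avg_length(lengths, probabilities):
    """L_avg = sum over all gray levels of l(r_k) * p(r_k)."""
    return sum(l * p for l, p in zip(lengths, probabilities))

L_fixed = avg_length(fixed_len, probs)     # 3.0 bits/pixel
L_var   = avg_length(variable_len, probs)  # 2.2 bits/pixel
C = L_fixed / L_var                        # compression ratio of the recoding, about 1.36
```

The fixed code spends 3 bits on every pixel regardless of likelihood; the variable-length code exploits the skewed probabilities, which is exactly the coding redundancy the slide names.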
12
Lossless Compression Assigning fewer bits to the more probable gray levels than to the least probable ones achieves data compression; this is called “variable-length coding”. The Huffman code is a kind of variable-length coding (lossless compression).
13
Lossless Compression “Huffman code example”. The letters A, B, C, D, and E are to be encoded and have the following relative probabilities of occurrence: p(A)=0.16, p(B)=0.51, p(C)=0.09, p(D)=0.13, p(E)=0.11. The two characters with the lowest probabilities are combined first in the binary tree, which has the characters as leaves: p(CE)=0.20. Each right branch is labeled 1 and each left branch 0.
14
“Huffman code example”.
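The tree construction above can be sketched in code. This is a minimal heap-based Huffman coder using the slide's probabilities (the slide builds the tree by hand; the heap and the 0/1 branch assignment here are implementation assumptions, so the exact bit patterns may differ from the slide, though the code lengths agree):

```python
import heapq
from itertools import count

def huffman_codes(probabilities):
    """Build Huffman codes from a {symbol: probability} dict."""
    tick = count()  # tie-breaker so the heap never compares dicts
    heap = [(p, next(tick), {s: ""}) for s, p in probabilities.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        # Repeatedly combine the two least probable subtrees.
        p1, _, codes1 = heapq.heappop(heap)
        p2, _, codes2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in codes1.items()}   # left branch -> 0
        merged.update({s: "1" + c for s, c in codes2.items()})  # right branch -> 1
        heapq.heappush(heap, (p1 + p2, next(tick), merged))
    return heap[0][2]

probs = {"A": 0.16, "B": 0.51, "C": 0.09, "D": 0.13, "E": 0.11}
codes = huffman_codes(probs)
# C and E merge first (p(CE)=0.20), then A and D, then CE with AD, then B.
# B gets a 1-bit code; A, C, D, E each get 3-bit codes, so the average
# length is 0.51*1 + 0.49*3 = 1.98 bits/symbol (vs. 3 bits for a fixed code).
```

Because the tree is built bottom-up from the least probable symbols, the resulting code is prefix-free: no codeword is a prefix of another, so the bit stream decodes unambiguously.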
15
Lossless Compression “Run-length encoding” Lossless compression. The idea of run-length encoding is replacing long sequences (runs) of identical samples with a special code that indicates the value to be repeated and the number of repetitions. 1110010000 → RLE: (3,1)(2,0)(1,1)(4,0)
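The run-length idea above can be sketched in a few lines; the (count, value) pair order follows the slide's example:

```python
from itertools import groupby

def rle_encode(samples: str):
    """Replace each run of identical samples with a (count, value) pair."""
    return [(len(list(run)), value) for value, run in groupby(samples)]

pairs = rle_encode("1110010000")
# -> [(3, '1'), (2, '0'), (1, '1'), (4, '0')],
# matching the slide's (3,1)(2,0)(1,1)(4,0)
```

Decoding simply repeats each value `count` times, so the original sequence is recovered exactly, which is why RLE is lossless.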