Image Compression
Still images and motion sequences can be compressed by lossless or lossy coding.
Principle of compression:
- reduce redundant information, e.g.,
  -- coding redundancy
  -- interpixel redundancy
  -- psychovisual redundancy
- knowledge-based description of image contents and structure modeling
Image Compression
Code length:
- fixed length
- variable length
-- The average code length is calculated as
   $L_{avg} = \sum_k l(r_k)\, p(r_k)$
   where $p(r_k)$ is the probability of occurrence of gray level $r_k$ and $l(r_k)$ is the length of the code assigned to $r_k$.
Image Compression
Entropy coding
- Use code words of different lengths to losslessly represent symbols with different probabilities.
- Why? Assigning few bits to frequent symbols and more bits to rare symbols reduces the overall length of the encoded data.
- How?
  -- Huffman coding
  -- arithmetic coding
Image Compression
Example of variable-length coding (8 gray levels; fixed 3-bit Code 1 vs. variable-length Code 2):

r_k         p(r_k)   Code 1   l_1(r_k)   Code 2    l_2(r_k)
r_0 = 0      0.19     000        3        11          2
r_1 = 1/7    0.25     001        3        01          2
r_2 = 2/7    0.21     010        3        10          2
r_3 = 3/7    0.16     011        3        001         3
r_4 = 4/7    0.08     100        3        0001        4
r_5 = 5/7    0.06     101        3        00001       5
r_6 = 6/7    0.03     110        3        000001      6
r_7 = 1      0.02     111        3        000000      6

L_avg = 2(0.19) + 2(0.25) + 2(0.21) + 3(0.16) + 4(0.08) + 5(0.06) + 6(0.03) + 6(0.02) = 2.7 bits
Redundancy: R_D = 1 - 2.7/3 = 0.1 = 10%
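As a quick check of the numbers above, a minimal Python sketch (the probabilities and code lengths are taken from the table; the helper code itself is ours for illustration):

```python
# Average code length and redundancy for the example above.
p  = [0.19, 0.25, 0.21, 0.16, 0.08, 0.06, 0.03, 0.02]  # p(r_k)
l2 = [2, 2, 2, 3, 4, 5, 6, 6]                           # l_2(r_k), Code 2

l_avg = sum(pk * lk for pk, lk in zip(p, l2))
redundancy = 1 - l_avg / 3                              # vs. the 3-bit fixed code
print(f"L_avg = {l_avg:.2f} bits, redundancy = {redundancy:.0%}")
# -> L_avg = 2.70 bits, redundancy = 10%
```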
Image Compression
Inter-pixel correlation
- Pixel values can be guessed (predicted) from the values of their neighbors.
- The following kinds of redundancy can be reduced:
  -- spatial redundancy
  -- geometric redundancy
  -- inter-frame redundancy
- Human-perception (psychovisual) redundancy
Encoder and Decoder
Example of image transmission:
f(x,y) -> source encoder -> channel encoder -> channel -> channel decoder -> source decoder -> f'(x,y)
- Source encoder: mapper -> quantizer -> symbol encoder
- Source decoder: symbol decoder -> inverse mapper
Information Theory
Information measurement:
An event E that occurs with probability P(E) contains $I(E) = -\log P(E)$ units of information.
- e.g., P(E) = 1 gives I(E) = 0: there is no uncertainty about E, since it always happens.
- If we take the base of the logarithm as 2, the unit of information is the "bit":
  e.g., flipping a fair coin, P(E) = 1/2, so I(E) = -log2(1/2) = 1 bit.
Information Theory (cont'd)
Entropy of the source (uncertainty):
$H(Z) = -\sum_i P(a_i)\, \log_2 P(a_i)$
- H(Z) is the average amount of information per source symbol; more uncertainty means more information.
- If the source symbols occur with equal probability, the entropy is maximum.
- Theoretically, H(Z) is the minimum average code length per symbol; this is Shannon's first theorem for noiseless coding.
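A minimal sketch tying the two formulas together (the probabilities are those of the earlier variable-length coding example; the helper functions are ours for illustration):

```python
import math

def self_information(p, base=2):
    """I(E) = -log_base P(E): information carried by one event."""
    return -math.log(p, base)

def entropy(probs, base=2):
    """H(Z) = -sum_i P(a_i) log P(a_i): average information per symbol."""
    return sum(p * self_information(p, base) for p in probs if p > 0)

print(self_information(0.5))    # fair coin flip: 1.0 bit
p = [0.19, 0.25, 0.21, 0.16, 0.08, 0.06, 0.03, 0.02]
print(round(entropy(p), 2))     # ~2.65 bits: no code can average below this,
                                # so Code 2's 2.7 bits/symbol is close to optimal
```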
Information Theory (cont'd)
Variable-length coding
-- Huffman coding generates the shortest possible average code length for a given set of symbol probabilities (an optimal prefix code).
-- How (see the sketch below):
   1. Source reduction: repeatedly combine the two least-probable symbols into a single composite symbol.
   2. Code assignment: walk back through the reductions, appending a 0 to one branch and a 1 to the other at each merge.
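A compact Huffman-coding sketch in Python (the heap-based construction and the symbol names r0..r7 are ours; the probabilities are again those of the earlier example):

```python
import heapq
from itertools import count

def huffman_code(probs):
    """Build a Huffman code by repeated source reduction:
    always merge the two least-probable nodes."""
    tick = count()  # tie-breaker so equal probabilities compare cleanly
    heap = [(p, next(tick), sym) for sym, p in probs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, _, left = heapq.heappop(heap)    # least probable
        p2, _, right = heapq.heappop(heap)   # second least probable
        heapq.heappush(heap, (p1 + p2, next(tick), (left, right)))
    codes = {}
    def assign(node, prefix):                # walk back, appending 0/1
        if isinstance(node, tuple):
            assign(node[0], prefix + "0")
            assign(node[1], prefix + "1")
        else:
            codes[node] = prefix or "0"
    assign(heap[0][2], "")
    return codes

probs = {"r0": 0.19, "r1": 0.25, "r2": 0.21, "r3": 0.16,
         "r4": 0.08, "r5": 0.06, "r6": 0.03, "r7": 0.02}
codes = huffman_code(probs)
l_avg = sum(probs[s] * len(c) for s, c in codes.items())
print(codes)
print(f"L_avg = {l_avg:.2f} bits")   # 2.70 bits, matching the earlier example
```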
Coding System Example
- Lossless predictive coding
- Lossy predictive coding
- Transform coding
-- Example of lossless predictive coding (a sketch follows below):
   Encode: image -> [prediction error e(n) = f(n) - f^(n), with f^(n) supplied by the predictor] -> symbol encoder -> compressed image
   Decode: compressed image -> symbol decoder -> [f(n) = e(n) + f^(n), with f^(n) supplied by the predictor] -> image
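A minimal sketch of lossless predictive coding with the simplest predictor, "previous pixel" (the NumPy helpers and the toy row are ours for illustration):

```python
import numpy as np

def predictive_encode(row):
    """Predict each pixel as its left neighbor and keep only the
    prediction error e(n) = f(n) - f(n-1); the predictor starts at 0,
    so the first error is the first pixel itself."""
    return np.diff(row.astype(np.int16), prepend=np.int16(0))

def predictive_decode(err):
    """Invert the encoder: f(n) = e(n) + f(n-1), i.e., a cumulative sum."""
    return np.cumsum(err).astype(np.uint8)

row = np.array([100, 102, 103, 103, 105, 110], dtype=np.uint8)
err = predictive_encode(row)
print(err)   # [100 2 1 0 2 5]: mostly small values, cheap to entropy-code
assert np.array_equal(predictive_decode(err), row)   # lossless round trip
```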
Coding System Example
Transform coding (as used in video coders such as MPEG):
- Encoder: image -> block DCT -> quantizer -> variable-length coding -> bitstream
- Feedback loop: inverse quantizer -> inverse DCT -> motion estimator (provides the prediction for the next frame)
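A minimal sketch of the lossy core of transform coding, DCT -> quantize -> dequantize -> inverse DCT, on one 8x8 block (scipy.fft and the uniform step size are our assumptions; real coders use a perceptual quantization matrix plus zig-zag scanning and variable-length coding of the quantized coefficients):

```python
import numpy as np
from scipy.fft import dctn, idctn

def transform_code_block(block, q_step=16):
    """Block DCT, uniform scalar quantization of the coefficients,
    then dequantization and inverse DCT (the decoder's view)."""
    coeffs = dctn(block.astype(float), norm="ortho")  # block DCT
    q = np.round(coeffs / q_step)                     # quantizer (the lossy step)
    recon = idctn(q * q_step, norm="ortho")           # inverse quantizer + inverse DCT
    return q, recon

block = np.add.outer(np.arange(8), np.arange(8)) * 8.0   # smooth 8x8 ramp
q, recon = transform_code_block(block)
print(np.count_nonzero(q), "of 64 coefficients are nonzero")  # sparse: few symbols to code
print(f"max reconstruction error: {np.abs(recon - block).max():.1f} gray levels")
```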
Image Compression
Quantization
-- Map a large number of input amplitude levels to a small number of output levels, with a non-recoverable loss of quality.
-- Why? Reducing the number of amplitude levels reduces the number of bits needed to represent each pixel.
-- How (a sketch of SQ follows below):
   - scalar quantization (SQ)
   - vector quantization (VQ)
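A minimal sketch of uniform scalar quantization (the level count and the mid-point reconstruction rule are illustrative choices):

```python
import numpy as np

def scalar_quantize(img, levels=8):
    """Uniform scalar quantization: map 256 gray levels to `levels`
    reconstruction values (8 bits/pixel -> log2(levels) bits/pixel)."""
    step = 256 / levels
    idx = np.floor(img / step)                        # quantizer index, 0..levels-1
    return (idx * step + step / 2).astype(np.uint8)   # mid-point reconstruction

img = np.arange(256, dtype=np.uint8).reshape(16, 16)  # toy "image" with all gray levels
q = scalar_quantize(img)
print(np.unique(q))   # only 8 distinct output levels remain
```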
Image Compression
(figure-only slides: example images; no text content)
Image Compression
MPEG-4
- MPEG-4 targets the convergence of TV/Film, Computer, and Telecommunication.
- Applications: interactive multimedia, GUI, virtual environments, HDTV, audio/graphics/video data.
- Delivery: wireless, internet, WWW, ISDN, cable, …
Image Compression
MPEG-4
-- Video object plane (VOP): each frame of a scene is segmented into separately coded objects (VOP1, VOP2, VOP3).
Image Compression
MPEG-4
-- Structure of the video object plane (VOP) encoder:
   input -> VOP definition -> VOP0 coding, VOP1 coding, ..., VOPn coding (one branch per object) -> MUX -> bitstream