Entropy vs. Average Code Length
An important application of Shannon's entropy measure is finding efficient (i.e., short average length) code words.
The measure H(S) gives the minimal average code-word length of any
– instantaneously or non-instantaneously decodable,
– uniquely decodable,
– nonsingular
binary block code that can be designed for the source S (without telling us how to find that code).
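To make the bound concrete, here is a minimal Python sketch (the probability vector is illustrative, not from the slides) that computes H(S); no uniquely decodable binary code for this source can have a shorter average code-word length:

```python
import math

def entropy(probs):
    """Shannon entropy H(S) in bits per symbol."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Illustrative 5-symbol source
probs = [0.4, 0.2, 0.2, 0.1, 0.1]
print(f"H(S) = {entropy(probs):.3f} bit/symbol")  # lower bound on L
```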
But How Do We Get That Code?
A number of construction recipes are known, usually named after their "inventor":
– Shannon code
– Fano code
– Gilbert-Moore code
– Arithmetic code
– Huffman code (used very often in compression, frequently in combination with a run-length code)
Huffman Binary Code Construction (1)
1. Rank the symbols in order of decreasing probability.
2. Join the two least probable symbols and add their probabilities to form a new "joined symbol".
3. Re-arrange the new set of probabilities in decreasing order.
4. Repeat steps 2 and 3 until two probabilities remain.
5. Assign a bit "0" to one of the two probabilities and a bit "1" to the other.
6. Go backwards, adding one bit at each place where two symbols were joined.
7. Create the code words by following the path to each symbol from right to left.
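A minimal Python sketch of this recipe, using a heap in place of the explicit re-sorting of step 3 (the symbol probabilities are illustrative and `huffman_code` is a hypothetical helper name). Prepending a bit at each merge implements steps 5–7, the backwards walk from the final join to each symbol:

```python
import heapq
import itertools

def huffman_code(probs):
    """Build a binary Huffman code for a {symbol: probability} dict."""
    counter = itertools.count()  # tie-breaker so the heap never compares dicts
    heap = [(p, next(counter), {s: ""}) for s, p in probs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)  # the two least probable "joined symbols"
        p2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c1.items()}  # prepend one bit per join
        merged.update({s: "1" + w for s, w in c2.items()})
        heapq.heappush(heap, (p1 + p2, next(counter), merged))
    return heap[0][2]

probs = {"a": 0.4, "b": 0.2, "c": 0.2, "d": 0.1, "e": 0.1}
code = huffman_code(probs)
L = sum(p * len(code[s]) for s, p in probs.items())
print(code, f"L = {L:.2f} bit/symbol")
```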
Huffman Coding
Huffman Coding
Simple binary coding would require 3 bit/symbol.
H(S) = 2.024 bit/symbol
L = 2.204 bit/symbol
Variable-Length Coding Methods
Arithmetic Coding
A one-to-one correspondence between source symbols and code words does not exist. Instead, an entire sequence of source symbols is assigned a single arithmetic code word.
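A toy sketch of the interval-narrowing idea, assuming a fixed symbol model (the model probabilities are illustrative; real coders use integer arithmetic with renormalization and an explicit termination rule). Any number inside the final interval identifies the whole message, so the code length tracks the probability of the entire sequence rather than per-symbol code words:

```python
def arithmetic_interval(message, model):
    """Narrow [low, high) over the whole message (toy float version)."""
    low, high = 0.0, 1.0
    for sym in message:
        span = high - low
        cum = 0.0
        for s, p in model.items():
            if s == sym:
                low, high = low + cum * span, low + (cum + p) * span
                break
            cum += p
    return low, high

model = {"a": 0.6, "b": 0.3, "c": 0.1}  # illustrative probabilities
print(arithmetic_interval("aab", model))  # -> (0.216, 0.324)
```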
Binary Image Compression
(figure: bit planes of the binary-coded vs. the Gray-coded image)
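The Gray code in question is the standard binary-reflected one; a small sketch (the test values are chosen to show the worst binary case) of why Gray-coded bit planes compress better:

```python
def to_gray(n: int) -> int:
    """Binary-reflected Gray code: adjacent intensities differ in one bit."""
    return n ^ (n >> 1)

# Gray-coded bit planes of an 8-bit image have fewer 0/1 transitions,
# which helps run-length and other binary coders.
for v in (127, 128):  # all 8 bits flip in binary, only 1 in Gray
    print(f"{v:3d}: bin={v:08b}  gray={to_gray(v):08b}")
```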
Lossless Predictive Coding
The approach is based on eliminating the inter-pixel redundancies of closely spaced pixels by extracting and coding only the new information in each pixel.
In 1-D linear predictive coding, the prediction is a rounded linear combination of the m previous pixels in the scan line: $\hat{f}(x,y)=\mathrm{round}\!\left[\sum_{i=1}^{m}\alpha_i\,f(x,\,y-i)\right]$.
In 2-D linear predictive coding, the prediction is a function of the previous pixels in a left-to-right, top-to-bottom scan of the image.
The resulting compression is limited to approximately 2:1.
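A minimal sketch of the 1-D case, assuming a first-order previous-pixel predictor (m = 1; `prediction_error` is a hypothetical name). Only the residual e = f − f̂ is coded; for smooth images it clusters near zero and has much lower entropy than the pixels themselves:

```python
import numpy as np

def prediction_error(img, alpha=1.0):
    """First-order 1-D linear predictor along each scan line:
    fhat(x, y) = round(alpha * f(x, y-1)); code only e = f - fhat."""
    img = img.astype(np.int32)
    pred = np.zeros_like(img)
    pred[:, 1:] = np.round(alpha * img[:, :-1]).astype(np.int32)
    return img - pred  # residuals cluster near zero for smooth images

rng = np.random.default_rng(0)
img = np.cumsum(rng.integers(-2, 3, size=(4, 12)), axis=1)  # smooth-ish test rows
print(prediction_error(img))
```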
Lossless Predictive Coding
Lossy Compression
Lossy predictive coding: if distortion can be tolerated, the increase in compression can be significant. Recognizable monochrome images can be reconstructed from data compressed by more than 100:1, and images virtually indistinguishable from the originals at 10:1 or 50:1.
Lossy Predictive Coding
Delta Modulation
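Delta modulation is the simplest lossy predictive coder: a first-order predictor plus a 1-bit quantizer. A minimal sketch with illustrative α and ζ values; note the slope overload at the jump in the test signal:

```python
import numpy as np

def delta_modulate(signal, alpha=1.0, zeta=4):
    """Delta modulation: fhat_n = alpha * fdot_(n-1); the prediction
    error is quantized to +/- zeta (a 1-bit quantizer)."""
    fdot, recon = 0.0, []
    for f in signal:
        e = f - alpha * fdot             # prediction error
        edot = zeta if e > 0 else -zeta  # 1-bit quantized error
        fdot = alpha * fdot + edot       # decoder's reconstruction
        recon.append(fdot)
    return np.array(recon)

sig = np.array([10, 12, 15, 30, 35, 36, 36, 35])
print(delta_modulate(sig))  # slope overload around the jump 15 -> 30
```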
Lossy Predictive Coding
Optimal Quantization
The Lloyd-Max Quantizer
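A minimal sketch of the Lloyd iteration on empirical samples, which alternates the two Lloyd-Max optimality conditions: decision boundaries at the midpoints between reconstruction levels, and each level at the centroid of its cell. The Laplacian test data stands in for a typical prediction-error distribution:

```python
import numpy as np

def lloyd_max(samples, levels=4, iters=50):
    """Alternate the Lloyd-Max conditions until the levels settle."""
    r = np.linspace(samples.min(), samples.max(), levels)  # reconstruction levels
    for _ in range(iters):
        d = (r[:-1] + r[1:]) / 2          # decision boundaries: midpoints
        idx = np.digitize(samples, d)     # assign samples to cells
        for i in range(levels):
            cell = samples[idx == i]
            if cell.size:
                r[i] = cell.mean()        # centroid condition
    return r, d

rng = np.random.default_rng(1)
x = rng.laplace(0, 10, 100_000)           # Laplacian-like prediction error
r, d = lloyd_max(x, levels=4)
print("levels:", np.round(r, 2), " boundaries:", np.round(d, 2))
```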
Optimal Quantization
Fixed vs. Adaptive Quantization
Transform Coding
A reversible, linear transform (such as the Fourier transform) is used to map the image into a set of transform coefficients, which are then quantized and coded. For most natural images, a significant number of the coefficients have small magnitudes and can be coarsely quantized (or discarded entirely) with little image distortion.
Principle of Transform Coding
Transform coding: an alternative approach
– Break the signal up into vectors
– Samples are decorrelated within each vector
– Quantize the transformed samples (transform coefficients) independently, with simple quantizers (scalar quantization)
Subband coding: decompose the signal into frequency bands (subband signals) that
– are uncorrelated within each subband ("flat" spectrum), so a simple quantizer can be applied
– are mutually uncorrelated, so the subbands can be quantized independently
– have different variances (exploited by bit allocation)
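A toy block transform coder along these lines, assuming square images whose sides divide evenly by the block size and using SciPy's DCT (`block_transform_code` and the keep-K rule are illustrative choices, not a standard):

```python
import numpy as np
from scipy.fft import dctn, idctn

def block_transform_code(img, block=8, keep=10):
    """Toy transform coder: 2-D DCT on each block, keep only the
    'keep' largest-magnitude coefficients, then inverse transform."""
    h, w = img.shape                      # assumes h, w divisible by block
    out = np.zeros_like(img, dtype=float)
    for y in range(0, h, block):
        for x in range(0, w, block):
            c = dctn(img[y:y+block, x:x+block], norm="ortho")
            thresh = np.sort(np.abs(c), axis=None)[-keep]  # keep-th largest
            c[np.abs(c) < thresh] = 0                      # discard the rest
            out[y:y+block, x:x+block] = idctn(c, norm="ortho")
    return out

img = np.add.outer(np.arange(32.0), np.arange(32.0))  # smooth test ramp
print(np.abs(img - block_transform_code(img, keep=6)).max())
```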
Walsh-Hadamard Transform
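A minimal sketch using SciPy's Hadamard matrix; note this produces the natural (Hadamard) ordering of the basis functions, not the sequency (Walsh) ordering often shown on slides:

```python
import numpy as np
from scipy.linalg import hadamard

def wht(x):
    """Orthonormal Walsh-Hadamard transform of a length-2^k vector."""
    n = len(x)
    H = hadamard(n)  # +/-1 entries: no multiplications needed in practice
    return H @ x / np.sqrt(n)

x = np.array([4., 4., 5., 5., 6., 6., 7., 7.])
print(np.round(wht(x), 3))  # energy concentrates in a few coefficients
```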
Discrete Cosine Transform
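For reference, the standard 1-D DCT-II definition (the 2-D transform applied to subimages is the separable row-column product):

$$C(u)=\alpha(u)\sum_{x=0}^{N-1} f(x)\,\cos\!\left[\frac{(2x+1)\,u\pi}{2N}\right],\qquad
\alpha(u)=\begin{cases}\sqrt{1/N}, & u=0\\ \sqrt{2/N}, & u\neq 0\end{cases}$$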
Approximation by Transform Coefficients
DFT vs. DCT: Blocking Artifacts
Subimage Block Size Selection
DCT: Threshold vs. Zonal Coding
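A sketch contrasting the two retained-coefficient masks (function names and keep-count are illustrative): zonal coding fixes the mask in advance, while threshold coding adapts it to each block:

```python
import numpy as np

def zonal_mask(n=8, keep=10):
    """Zonal coding: always keep the same low-frequency zone
    (here the 'keep' positions with smallest index sum u + v)."""
    u, v = np.indices((n, n))
    order = np.argsort((u + v).ravel(), kind="stable")
    mask = np.zeros(n * n, dtype=bool)
    mask[order[:keep]] = True
    return mask.reshape(n, n)

def threshold_mask(coeffs, keep=10):
    """Threshold coding: keep the 'keep' largest coefficients of THIS
    block, so the retained set varies from block to block."""
    flat = np.abs(coeffs).ravel()
    mask = np.zeros_like(flat, dtype=bool)
    mask[np.argsort(flat)[-keep:]] = True
    return mask.reshape(coeffs.shape)
```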
DCT: Threshold Coding with Normalization
(figure: reconstructions at 67:1 and 34:1)
Wavelet Coding
Chapter 7: Wavelets and Filterbanks
Filterbanks in 2-D
Subband Coding in 2-D
Haar Wavelets in 2-D
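A minimal one-level 2-D Haar decomposition, assuming even image dimensions (subband naming conventions vary between texts):

```python
import numpy as np

def haar2d_level(img):
    """One level of the separable 2-D Haar transform: orthonormal
    average/difference along rows, then columns -> four subbands."""
    img = img.astype(float)
    # horizontal (along rows): pairwise average and difference of columns
    lo = (img[:, 0::2] + img[:, 1::2]) / np.sqrt(2)
    hi = (img[:, 0::2] - img[:, 1::2]) / np.sqrt(2)
    # vertical (along columns): repeat on each half
    ll = (lo[0::2, :] + lo[1::2, :]) / np.sqrt(2)   # approximation
    lh = (lo[0::2, :] - lo[1::2, :]) / np.sqrt(2)   # horizontal detail
    hl = (hi[0::2, :] + hi[1::2, :]) / np.sqrt(2)   # vertical detail
    hh = (hi[0::2, :] - hi[1::2, :]) / np.sqrt(2)   # diagonal detail
    return ll, lh, hl, hh

img = np.add.outer(np.arange(8.0), np.arange(8.0))
ll, lh, hl, hh = haar2d_level(img)
print(ll.shape, float(np.abs(hh).max()))  # HH is zero for a pure ramp
```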
Multiresolution Analysis
Multiresolution Synthesis
Wavelet Transform in 2-D
Wavelet Coding
(figure: reconstructions at 67:1, 34:1, 167:1, and 108:1)