AN EFFICIENT IMAGE COMPRESSION ALGORITHM WITH GEOMETRIC WAVELETS & GEOMETRIC WAVELET PACKET USING PSO
Guide: Prof. Mrs. Nishat Kanvel, M.E.
ABSTRACT For image compression, the chosen transform should reduce the size of the resulting data compared with the original data set. In this project, a new lossless image compression method is proposed. For both continuous- and discrete-time cases, the wavelet transform and the wavelet packet transform have emerged as popular techniques. The geometric wavelet is a recent development in the field of multivariate nonlinear piecewise polynomial approximation. The performance of image compression using geometric wavelets is improved by using a packetization technique and the particle swarm optimization algorithm.
GOAL OF IMAGE COMPRESSION
Digital images require huge amounts of space for storage and large bandwidths for transmission. A 640 x 480 color image requires close to 1MB of space. The goal of image compression is to reduce the amount of data required to represent a digital image. Reduce storage requirements and increase transmission rates.
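The storage figure above can be checked with a throwaway Python snippet (our own sanity check, not from the slides):

```python
# Raw storage for a 640 x 480 image at 3 bytes (24-bit RGB) per pixel.
width, height, bytes_per_pixel = 640, 480, 3
raw_bytes = width * height * bytes_per_pixel
print(raw_bytes)                    # 921600 bytes
print(round(raw_bytes / 2**20, 2))  # 0.88 MB, i.e. "close to 1 MB"
```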
APPROACHES
Lossless: information preserving; low compression ratios.
Lossy: not information preserving; high compression ratios.
Trade-off: image quality vs. compression ratio.
LOSSLESS COMPRESSION The image can be reconstructed without distortion.
No quantizer is involved in the compression procedure. Compression ratios generally range from 2 to 10. There is a trade-off between compression ratio and computational complexity.
IMAGE COMPRESSION MODEL (contd.)
Mapper: transforms input data in a way that facilitates reduction of interpixel redundancies.
IMAGE COMPRESSION MODEL (contd.)
Quantizer: reduces the accuracy of the mapper’s output in accordance with a pre-established fidelity criterion.
IMAGE COMPRESSION MODEL (contd.)
Symbol encoder: assigns the shortest codes to the most frequently occurring output values.
IMAGE COMPRESSION MODEL (contd.)
At the decoder, the inverse operations are performed; quantization, however, is irreversible in general.
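To make the three stages concrete, here is a minimal toy pipeline (our own sketch, not the project's code): a difference mapper to reduce interpixel redundancy, a uniform quantizer, and the inverse mapper. The difference mapper and the step size of 4 are assumptions for illustration.

```python
import numpy as np

def mapper(row):
    # Differences between neighboring pixels reduce interpixel
    # redundancy: smooth regions map to near-zero residuals.
    return np.diff(row, prepend=0)

def inverse_mapper(residuals):
    # Cumulative sum exactly undoes the difference mapper.
    return np.cumsum(residuals)

def quantizer(residuals, step=4):
    # Irreversible: nearby residual values collapse to one level.
    return residuals // step

row = np.array([100, 101, 103, 103, 104, 110, 111])
residuals = mapper(row)                           # small, cheap to encode
exact = inverse_mapper(residuals)                 # lossless path round-trips
lossy = inverse_mapper(quantizer(residuals) * 4)  # quantized path does not
```

Skipping the quantizer gives a lossless scheme; including it trades exact reconstruction for a smaller symbol alphabet.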
WAVELET A small wave; ripple.
In the signal processing community, wavelets are used to break a complicated signal into simple components. Similarly, in this context, wavelets are used to break the dataset into simple components.
WAVELET TRANSFORMS Convert a signal into a series of wavelets
Provide a way to analyze waveforms bounded in both frequency and duration. Allow signals to be stored more efficiently than with the Fourier transform. Can better approximate real-world signals. Well suited to approximating data with sharp discontinuities.
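A single level of the Haar transform, the simplest wavelet, illustrates why sharp discontinuities are handled well (an illustrative sketch, not part of the project; the test signal is invented):

```python
import numpy as np

def haar_step(x):
    # One level of the orthonormal Haar DWT: scaled pairwise sums
    # (approximation) and differences (detail).
    x = np.asarray(x, dtype=float)
    avg = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    det = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return avg, det

def haar_inverse(avg, det):
    # Perfect reconstruction from the two subbands.
    x = np.empty(2 * len(avg))
    x[0::2] = (avg + det) / np.sqrt(2.0)
    x[1::2] = (avg - det) / np.sqrt(2.0)
    return x

signal = np.array([4.0, 4.0, 4.0, 4.0, 9.0, 9.0, 1.0, 1.0])
avg, det = haar_step(signal)
# Piecewise-constant runs give zero detail coefficients, so a sharp
# jump is represented by only a few nonzero numbers.
```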
PRINCIPLES OF WAVELET TRANSFORM
Split the signal into a set of signals that together represent the same signal but correspond to different frequency bands, providing only which frequency bands exist at which time intervals.
WAVELET FUNCTION The wavelet function ψ(x), together with its integer translates and binary scalings, spans the difference between any two adjacent scaling subspaces V_j and V_{j+1}, i.e. it spans the space W_j where V_{j+1} = V_j ⊕ W_j. The wavelet function can be expressed as a weighted sum of shifted, double-resolution scaling functions:
ψ(x) = Σ_n h_ψ(n) √2 φ(2x − n),
where the h_ψ(n) are called the wavelet function coefficients.
WAVELET TRANSFORMS IN TWO DIMENSION
A two-dimensional decomposition applies the one-dimensional transform along the rows and then the columns of the image, producing an approximation subband and horizontal (H), vertical (V), and diagonal (D) detail subbands.
WAVELET PACKETS A wavelet packet is a more flexible decomposition: the detail (high-pass) branches are decomposed further as well, not only the approximation branch.
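The difference can be sketched with assumed Haar filters (our own illustration, not the project's code): the ordinary DWT recurses only on the low-pass branch, while a wavelet packet recurses on both branches, giving a full tree of subbands to choose from.

```python
import numpy as np

def haar_step(x):
    # One orthonormal Haar analysis step: low-pass and high-pass halves.
    x = np.asarray(x, dtype=float)
    return (x[0::2] + x[1::2]) / np.sqrt(2.0), (x[0::2] - x[1::2]) / np.sqrt(2.0)

def packet_tree(x, depth):
    # Wavelet packet: recurse on BOTH the low- and high-pass branches,
    # producing 2**depth leaf subbands (a DWT would keep only one chain).
    if depth == 0 or len(x) < 2:
        return [np.asarray(x, dtype=float)]
    low, high = haar_step(x)
    return packet_tree(low, depth - 1) + packet_tree(high, depth - 1)

leaves = packet_tree(np.arange(8.0), depth=2)  # 4 subbands of length 2
```

Because each Haar step is orthonormal, the total energy of the leaves equals that of the input, whichever subset of the tree is kept.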
WAVELET APPLICATIONS Typical Application Fields
Typical application fields: astronomy, acoustics, nuclear engineering, sub-band coding, signal and image processing, neurophysiology, music, magnetic resonance imaging, speech discrimination, optics, fractals, turbulence, earthquake prediction, radar, human vision, and pure mathematics.
Sample applications: identifying pure frequencies, de-noising signals, detecting discontinuities and breakdown points, detecting self-similarity, and compressing images.
WAVELET APPLICATIONS(contd.)
Medical imaging: pictures less corrupted by patient motion than with Fourier methods.
Astrophysics: analyze the clumping of galaxies to study structure at various scales and the past and future of the universe; analyze fractals, chaos, and turbulence.
DIFFERENCE BETWEEN WAVELET AND GEOMETRIC WAVELET
Wavelet: it is a wave (ripple); image processing is more complex because of the large packet size; high pixel intensity values cannot be distinguished appreciably; comparing pixel intensity values is inconvenient.
Geometric wavelet: it is a wave of small size; image processing is less complex because of the partitioned wavelet packet; the best wavelet packet with appreciable pixel intensity can be distinguished easily; comparing pixel intensity values is possible because the region is partitioned by a straight line.
SIGNIFICANT STEPS The proposed algorithm consists of six steps, as given below: 1) input image, 2) sparse geometric representation extraction, 3) encoding, 4) packetization, 5) particle swarm optimization, 6) decoding.
GEOMETRIC WAVELET Take a function f(x) over a convex region S and approximate it by a polynomial of fixed degree, p(x). Split the region in two by a straight line so that the two resulting regions S1 and S2 are still convex. Over each subregion, solve by regression for the fixed-degree polynomials p1(x) and p2(x) approximating f(x). Optimize the split so that the approximation error made by p1 and p2 is minimal. The two new wavelets are then the polynomials p(x) − p1(x) and p(x) − p2(x), restricted to S1 and S2 respectively. These wavelets are not continuous, but if f(x) is itself a polynomial (up to the fixed degree), the wavelets vanish. Repeat the splitting as long as needed.
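As an illustration only (not the authors' implementation), the split-and-fit step can be sketched with degree-0 (constant) fits and a brute-force search over candidate lines; all function names, the search granularity, and the toy data below are assumptions:

```python
import numpy as np

def fit_err(vals):
    # A degree-0 least-squares fit over a region is just the mean;
    # return the residual sum of squares (0 for an empty region).
    return float(np.sum((vals - vals.mean()) ** 2)) if len(vals) else 0.0

def best_line_split(pts, vals, n_angles=18, n_offsets=20):
    # Search straight lines n . x = c; each line cuts the convex
    # region into two convex halves S1 and S2, as in the slide.
    best_err, best_line = np.inf, None
    for theta in np.linspace(0.0, np.pi, n_angles, endpoint=False):
        proj = pts @ np.array([np.cos(theta), np.sin(theta)])
        for c in np.linspace(proj.min(), proj.max(), n_offsets)[1:-1]:
            mask = proj <= c
            err = fit_err(vals[mask]) + fit_err(vals[~mask])
            if err < best_err:
                best_err, best_line = err, (theta, c)
    return best_err, best_line

# Toy data: two flat plateaus separated by the vertical line x = 0.5.
xs, ys = np.meshgrid(np.linspace(0, 1, 10), np.linspace(0, 1, 10))
pts = np.column_stack([xs.ravel(), ys.ravel()])
vals = np.where(pts[:, 0] < 0.5, 1.0, 5.0)
err, line = best_line_split(pts, vals)  # finds the separating line
```

With piecewise-constant data the optimizer recovers the true boundary exactly, mirroring the claim that the wavelets vanish when f(x) is itself a polynomial of the fitted degree.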
GEOMETRIC WAVELET SCHEMATIC
DESCRIPTION OF GEOMETRIC WAVELETS
The schematic proceeds in five steps, mapping the original surface to a geometry image: (1) geometry image, (2) 2-D wavelet transform, (3) extract sub-square, (4) sample geometry, (5) project points.
[Figure: the five-step pipeline shown slide by slide — the 2-D wavelet transform subbands H, V, and D; a zoom on the D (diagonal) subband; the extracted sub-square; the sampled geometry; and its projection to a 1-D signal.]
COMPARISON OF TRANSFORMATIONS
RUN-LENGTH ENCODING Run-length encoding is probably the simplest method of compression. It can be used to compress data made of any combination of symbols. It does not need to know the frequency of occurrence of symbols and can be very efficient if data is represented as 0s and 1s. The general idea behind this method is to replace consecutive repeating occurrences of a symbol by one occurrence of the symbol followed by the number of occurrences. The method can be even more efficient if the data uses only two symbols (for example 0 and 1) in its bit pattern and one symbol is more frequent than the other.
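A minimal sketch of the zero-run variant described next (our own toy functions; the names and the example bit string are assumptions):

```python
def rle_encode(bits):
    # Encode a 0/1 string as the lengths of zero-runs between 1s;
    # the final entry is the (possibly zero) trailing run of 0s.
    runs, count = [], 0
    for b in bits:
        if b == "0":
            count += 1
        else:
            runs.append(count)
            count = 0
    runs.append(count)
    return runs

def rle_decode(runs):
    # Re-insert a "1" between consecutive zero-runs.
    return "1".join("0" * r for r in runs)

encoded = rle_encode("0000000100010000001")  # [7, 3, 6, 0]
```

For sparse 1s, a handful of small counts replaces a long bit string, which is exactly the case the method exploits.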
RUN-LENGTH CODING How do we efficiently encode, e.g., a row in a binary document image?
Run-length coding (RLC): code the length of each run of 0s between successive 1s (the run-length is the number of 0s between 1s). This works well when the data has frequent long runs of 0s and sparse 1s; e.g., a row may encode to the run lengths (7) (0) (3) (1) (6) (0) (0) …
Assign a fixed-length codeword to each run length in a range (e.g. 0–7), or use a variable-length code such as Huffman to improve further. RLC is also applicable to any data sequence with many consecutive 0s (or long runs of other values).
PROBABILITY MODEL OF RUN-LENGTH CODING
Assume each "0" occurs independently with probability p (p close to 1). The possible run lengths are L = 0, 1, …, M:
P(L = l) = p^l (1 − p) for 0 ≤ l ≤ M − 1 (geometric distribution)
P(L = M) = p^M (when there are M or more 0s)
Average number of binary symbols per zero-run:
S_avg = Σ_{l=0}^{M−1} (l + 1) p^l (1 − p) + M p^M = (1 − p^M) / (1 − p)
Compression ratio:
C = S_avg / log2(M + 1) = (1 − p^M) / [(1 − p) log2(M + 1)]
Example: p = 0.9, M = 15 gives S_avg = 7.94 and C = 1.985. The average run-length coding rate is B_avg = 4 bits / 7.94 ≈ 0.50 bpp, and the coding efficiency is H / B_avg ≈ 91%.
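The closed form for S_avg and the example numbers can be checked numerically (a throwaway sketch, not from the slides):

```python
from math import log2

p, M = 0.9, 15

# Direct expectation: a run of length l < M costs l+1 symbols
# (l zeros plus the terminating 1); a capped run costs M symbols.
s_direct = sum((l + 1) * p**l * (1 - p) for l in range(M)) + M * p**M

# Closed form from the slide: S_avg = (1 - p^M) / (1 - p).
s_closed = (1 - p**M) / (1 - p)

# Fixed-length codewords of log2(M+1) = 4 bits cover run lengths 0..15.
C = s_closed / log2(M + 1)
```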
Example for Run-length Coding
COMPRESSION After quantization, the values are read from the table and redundant 0s are removed. To cluster the 0s together, the process reads the table diagonally in a zigzag fashion rather than row by row or column by column. The reason is that if the picture has no fine changes, the bottom-right corner of the table T is all 0s. JPEG usually applies run-length encoding in the compression phase to compress the bit pattern resulting from the zigzag linearization.
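A sketch of the zigzag scan (an assumed JPEG-style anti-diagonal order; the 4x4 quantized block is an invented example):

```python
def zigzag_order(n):
    # Visit an n x n table along anti-diagonals, alternating
    # direction, so trailing high-frequency zeros end up contiguous.
    order = []
    for s in range(2 * n - 1):
        diag = [(i, s - i) for i in range(n) if 0 <= s - i < n]
        order.extend(diag if s % 2 else diag[::-1])
    return order

# Linearize a quantized 4x4 block; the zero tail is now one long run.
block = [[5, 3, 0, 0],
         [2, 0, 0, 0],
         [1, 0, 0, 0],
         [0, 0, 0, 0]]
scan = [block[i][j] for i, j in zigzag_order(4)]
```

Row-by-row reading would interleave zeros with nonzero values; the zigzag scan leaves a single trailing run of zeros, which run-length encoding compresses well.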
Reading the table
PARTICLE SWARM OPTIMIZATION (PSO)
PSO is a robust stochastic optimization technique based on the movement and intelligence of swarms. PSO applies the concept of social interaction to problem solving. It was developed in 1995 by James Kennedy (social psychologist) and Russell Eberhart (electrical engineer). It uses a number of agents (particles) that constitute a swarm moving around the search space looking for the best solution. Each particle is treated as a point in an N-dimensional space that adjusts its “flying” according to its own flying experience as well as the flying experience of other particles.
PARTICLE SWARM OPTIMIZATION (PSO)
Each particle keeps track of the coordinates in the solution space associated with the best solution (fitness) it has achieved so far. This value is called the personal best, pbest. Another best value tracked by PSO is the best value obtained so far by any particle in the neighborhood of that particle; this value is called gbest. The basic concept of PSO lies in accelerating each particle toward its pbest and gbest locations, with a random weighted acceleration at each time step, as shown in the figure.
PARTICLE SWARM OPTIMIZATION (PSO)
Concept of modification of a searching point by PSO:
s^k: current searching point; s^{k+1}: modified searching point; v^k: current velocity; v^{k+1}: modified velocity; v_pbest: velocity based on pbest; v_gbest: velocity based on gbest.
PARTICLE SWARM OPTIMIZATION (PSO)
Flow chart of the general PSO algorithm:
1. Start: initialize particles with random position and velocity vectors.
2. For each particle's position p, evaluate the fitness.
3. If fitness(p) is better than fitness(pbest), set pbest = p; repeat until all particles are processed.
4. Set the best of the pbests as gbest.
5. Update each particle's velocity and position using the PSO update equations.
6. Loop to step 2 until the maximum number of iterations, then stop, returning gbest as the optimal solution.
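The flow chart above can be sketched as a minimal PSO (our own illustrative implementation, not the project's settings; the inertia weight, acceleration coefficients, bounds, and test function are all assumptions):

```python
import random

def pso(fitness, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    # Minimal PSO following the flow chart: evaluate fitness, update
    # pbest/gbest, then apply the usual velocity and position updates
    # v <- w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x); x <- x + v.
    random.seed(0)  # deterministic for this illustration
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [fitness(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            f = fitness(pos[i])
            if f < pbest_f[i]:          # update personal best
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:         # update global best
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f

# Minimize the sphere function f(x) = sum(x_i^2); optimum is the origin.
best, best_f = pso(lambda x: sum(v * v for v in x), dim=2)
```

In the compression context, the fitness would instead score a candidate packet/partition configuration (e.g. by rate-distortion cost); the swarm mechanics stay the same.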
DECODING It is the reverse of the encoding process.
Thus the original image is reconstructed with minimal loss of information.
CONCLUSION Image compression is performed with the geometric wavelet and its transform combined with PSO. The PSNR of the geometric wavelet and the geometric wavelet packet is compared for different compression ratios. The compressed image is then recovered by the inverse geometric wavelet transform.
THANK YOU