DEPARTMENT OF COMPUTER SCIENCE UNIVERSITY OF JOENSUU JOENSUU, FINLAND Image Compression Lecture 9 Optimal Scalar Quantization Alexander Kolesnikov.

Quantization error for discrete data

Partition the data X into M cells C_1, C_2, …, C_M. Quantization error for the data X with M cells:

E^2 = \sum_{j=1}^{M} \sum_{x_i \in C_j} p_i (x_i - y_j)^2

Cell centroids:

y_j = \frac{\sum_{x_i \in C_j} p_i x_i}{\sum_{x_i \in C_j} p_i}

Optimal scalar quantization: History

J. D. Bruce [1963]: O(MN^2), dynamic programming.
X. Wu [1991]: O(MN^2) → O(MN), DP + Monge property.
X, Y, Z [2001]: O(N^M) → O(N^{M-1}), exhaustive search, in the paper "A fast algorithm for multilevel thresholding".

Some researchers still believe that the complexity of the optimal algorithm is exponential, and some researchers are still re-inventing the optimal DP algorithm of O(MN^2) complexity.

Problem formulation

Let X = {x_1, x_2, …, x_N} be a finite ordered set of real numbers (intensity values).
Let P = {p_1, p_2, …, p_N} be the corresponding set of probabilities for the values in X (histogram).
Let {r_0, r_1, …, r_M} be an ordered set of integers that defines a partition of the set X into M parts:

r_0 = 0 < r_1 < … < r_j < r_{j+1} < … < r_M = N.

Sequence partition problem

Partition indices: r_0 = 0 < r_1 < … < r_j < … < r_M = N. We introduce r_0 = 0 for x_0 = -\infty.

Quantization error for one cell (part):

e^2(r_{j-1}, r_j] = \sum_{i=r_{j-1}+1}^{r_j} p_i (x_i - y_j)^2

Cell centroid y_j:

y_j = \frac{\sum_{i=r_{j-1}+1}^{r_j} p_i x_i}{\sum_{i=r_{j-1}+1}^{r_j} p_i}

The total quantization error:

E^2 = \sum_{j=1}^{M} e^2(r_{j-1}, r_j]
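As a minimal sketch of the two formulas above (assuming 0-based Python indexing, so the cell (x_j, x_n] corresponds to the slice x[j:n]; the function name is my own):

```python
def cell_centroid_and_error(x, p, j, n):
    """Centroid y and quantization error e2 of the cell (x_j, x_n],
    i.e. the samples x[j:n] with probabilities p[j:n] (0-based, half-open)."""
    w = sum(p[j:n])                                    # total cell probability
    y = sum(pi * xi for pi, xi in zip(p[j:n], x[j:n])) / w
    e2 = sum(pi * (xi - y) ** 2 for pi, xi in zip(p[j:n], x[j:n]))
    return y, e2
```

For example, x = [1, 2, 3] with uniform weights gives centroid 2 and error (1-2)^2 + (3-2)^2 = 2.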

Scheme of partition into cells

Data: x_0 = -\infty < x_1 < … < x_j < … < x_N
Partition indices: r_0 = 0 < r_1 < … < r_j < … < r_M = N.

Example with N = 15 samples and M = 3 cells, r_1 = 4, r_2 = 10:

Cell #1: (x_0, x_4],  cell #2: (x_4, x_10],  cell #3: (x_10, x_15].

Quantization error for cell #2 (j = 2):

e^2(r_1, r_2] = \sum_{i=5}^{10} p_i (x_i - y_2)^2

Optimization task

For given data X, probabilities P and number of cells M, find a partition {r_0, r_1, …, r_M} for which the total quantization error is minimal:

E^2 = \min_{\{r_j\}} \sum_{j=1}^{M} e^2(r_{j-1}, r_j]

where r_0 = 0 < r_1 < … < r_M = N and y_j is the centroid of cell j.
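To make the optimization task concrete, here is a hedged exhaustive-search baseline (my own function name; exponential in N, so usable only as a sanity check on tiny inputs, which is exactly why the DP algorithm below matters):

```python
from itertools import combinations

def brute_force_quantize(x, p, M):
    """Try every partition r_0=0 < r_1 < ... < r_M = N and return
    (minimal error, best partition). Exponential in N."""
    N = len(x)

    def cell_err(lo, hi):  # error of cell (x_lo, x_hi] about its centroid
        w = sum(p[lo:hi])
        y = sum(p[i] * x[i] for i in range(lo, hi)) / w
        return sum(p[i] * (x[i] - y) ** 2 for i in range(lo, hi))

    best_err, best_r = float("inf"), None
    for inner in combinations(range(1, N), M - 1):
        r = (0,) + inner + (N,)
        err = sum(cell_err(r[j], r[j + 1]) for j in range(M))
        if err < best_err:
            best_err, best_r = err, r
    return best_err, best_r
```

For x = [0, 1, 10, 11] with uniform weights and M = 2, the best split is between 1 and 10, with total error 1.0.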

Cost function D_M(0,N]

Let us introduce the cost function D_m(0,n], the minimum quantization error over quantizations of the data subset X_n = {x_1, x_2, …, x_n} with m cells:

D_m(0,n] = \min_{\{r_j\}} \sum_{j=1}^{m} e^2(r_{j-1}, r_j], \quad r_m = n.

Then D_M(0,N] gives the solution of the problem in question.

Dynamic programming approach

Let's rewrite the cost function:

D_m(0,n] = \min_{j} \left\{ D_{m-1}(0,j] + e^2(j,n] \right\}

In other words: the best quantization of x_1, …, x_n into m cells consists of the best quantization of some prefix x_1, …, x_j into m-1 cells, followed by one last cell (x_j, x_n].

Recurrent equations

Initialization:

D_1(0,n] = e^2(0,n], \quad n = 1, …, N.

Recursion:

D_m(0,n] = \min_{m-1 \le j \le n-1} \left\{ D_{m-1}(0,j] + e^2(j,n] \right\}, \quad m = 2, …, M.

Calculation of quantization error for a cell

Direct computation of e^2(j,n] = \sum_{i=j+1}^{n} p_i (x_i - y_j)^2 takes O(N) time per cell.

Can we calculate it faster?

Calculation of quantization error for a cell

Expanding the square:

e^2(j,n] = \left( S_2(n) - S_2(j) \right) - \frac{\left( S_1(n) - S_1(j) \right)^2}{S_0(n) - S_0(j)}

where the cumulants S_0(n), S_1(n), S_2(n) are defined as follows:

S_0(n) = \sum_{i=1}^{n} p_i, \quad S_1(n) = \sum_{i=1}^{n} p_i x_i, \quad S_2(n) = \sum_{i=1}^{n} p_i x_i^2

After an O(N) precomputation of the cumulants, the complexity of the quantization error calculation for one cell is O(1).
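A sketch of this O(1) cell-error computation via prefix sums (the names S0, S1, S2 follow the slide; 0-based Python indexing with S*(0) = 0 assumed):

```python
from itertools import accumulate

def build_cumulants(x, p):
    """Cumulants S0(n), S1(n), S2(n) for n = 0..N, computed once in O(N)."""
    S0 = [0.0] + list(accumulate(p))
    S1 = [0.0] + list(accumulate(pi * xi for pi, xi in zip(p, x)))
    S2 = [0.0] + list(accumulate(pi * xi * xi for pi, xi in zip(p, x)))
    return S0, S1, S2

def cell_error(S0, S1, S2, j, n):
    """O(1) quantization error of the cell (x_j, x_n] using the cumulants."""
    w = S0[n] - S0[j]          # cell probability mass
    s = S1[n] - S1[j]          # cell first moment
    return (S2[n] - S2[j]) - s * s / w
```

For x = [1, 2, 3] with unit weights, cell_error over (0, 3] gives 14 - 6^2/3 = 2, matching the direct computation.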

DP search in the state space Ω

The state space Ω is the grid of states (n, m) with 0 ≤ n ≤ N and 1 ≤ m ≤ M; the search starts from the origin and ends at the goal state (N, M) with cost C(N, M). Each state (n, m) is reached from a state (j, m-1) along an edge of weight e^2(j, n]:

C(n,m) = \min_{m-1 \le j \le n-1} \left\{ C(j, m-1) + e^2(j, n] \right\}

The minimizing index is stored for backtracking: A(n,m) = j_{opt}.

Scheme of the DP algorithm

// Initialization
FOR n = 1 TO N DO
    C(n,1) ← e^2(0,n]
// Minimum search
FOR m = 2 TO M DO
    FOR n = m TO N DO
        c_min ← ∞
        FOR j = m-1 TO n-1 DO
            c ← C(j, m-1) + e^2(j,n]
            IF (c < c_min) THEN
                c_min ← c; j_min ← j
            ENDIF
        ENDFOR
        C(n, m) ← c_min
        A(n, m) ← j_min
    ENDFOR
ENDFOR

Here C(n,m) = D_m(0,n]. Complexity: O(MN^2).
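The pseudocode above translates to Python roughly as follows (a sketch; the table names C and A follow the slides, with 0-based half-open cells and the O(1) cumulant-based cell error inlined so the snippet is self-contained):

```python
from itertools import accumulate

def dp_tables(x, p, M):
    """Fill the cost table C[n][m] = D_m(0,n] and the backpointer
    table A[n][m] in O(M N^2) time."""
    N = len(x)
    S0 = [0.0] + list(accumulate(p))
    S1 = [0.0] + list(accumulate(pi * xi for pi, xi in zip(p, x)))
    S2 = [0.0] + list(accumulate(pi * xi * xi for pi, xi in zip(p, x)))

    def e2(j, n):  # O(1) error of cell (x_j, x_n]
        w = S0[n] - S0[j]
        s = S1[n] - S1[j]
        return (S2[n] - S2[j]) - s * s / w

    INF = float("inf")
    C = [[INF] * (M + 1) for _ in range(N + 1)]
    A = [[0] * (M + 1) for _ in range(N + 1)]
    for n in range(1, N + 1):          # initialization: D_1(0,n] = e2(0,n]
        C[n][1] = e2(0, n)
    for m in range(2, M + 1):          # minimum search
        for n in range(m, N + 1):
            for j in range(m - 1, n):
                c = C[j][m - 1] + e2(j, n)
                if c < C[n][m]:
                    C[n][m], A[n][m] = c, j
    return C, A
```

For x = [0, 1, 10, 11], uniform weights and M = 2, the goal cost C[4][2] is 1.0 with the split stored as A[4][2] = 2.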

Backtrack in the state space Ω

S(M+1) ← N
FOR m = M+1 DOWNTO 2 DO
    S(m-1) ← A(S(m), m)
E^2 ← C(N,M)

Example for N = 22, M = 8: S = {22, 18, 14, 12, 9, 6, 4, 3, 1}, i.e. the cells

(x_0, x_3], (x_3, x_4], (x_4, x_6], (x_6, x_9], (x_9, x_12], (x_12, x_14], (x_14, x_18], (x_18, x_22]
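Combining the DP fill with this backtrack, a self-contained sketch that returns the minimal error and the partition indices r_0 … r_M (the table construction is repeated here, with my own function name, so the snippet runs on its own):

```python
from itertools import accumulate

def optimal_quantize(x, p, M):
    """Optimal M-cell scalar quantization by DP in O(M N^2) time.
    Returns (E2, r) with r = [r_0 = 0, r_1, ..., r_M = N]."""
    N = len(x)
    S0 = [0.0] + list(accumulate(p))
    S1 = [0.0] + list(accumulate(pi * xi for pi, xi in zip(p, x)))
    S2 = [0.0] + list(accumulate(pi * xi * xi for pi, xi in zip(p, x)))

    def e2(j, n):  # O(1) error of cell (x_j, x_n]
        w = S0[n] - S0[j]
        s = S1[n] - S1[j]
        return (S2[n] - S2[j]) - s * s / w

    INF = float("inf")
    C = [[INF] * (M + 1) for _ in range(N + 1)]
    A = [[0] * (M + 1) for _ in range(N + 1)]
    for n in range(1, N + 1):
        C[n][1] = e2(0, n)
    for m in range(2, M + 1):
        for n in range(m, N + 1):
            for j in range(m - 1, n):
                c = C[j][m - 1] + e2(j, n)
                if c < C[n][m]:
                    C[n][m], A[n][m] = c, j

    # Backtrack: r_M = N, then r_{m-1} = A(r_m, m) down to r_1; r_0 = 0.
    r = [0] * (M + 1)
    r[M] = N
    for m in range(M, 1, -1):
        r[m - 1] = A[r[m]][m]
    return C[N][M], r
```

On the toy input x = [0, 1, 10, 11] with uniform weights and M = 2, this recovers the partition [0, 2, 4] with error 1.0, agreeing with exhaustive search.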

Optimal scalar quantization: DP algorithm

a) Time complexity: O(MN^2)
b) Space complexity: O(MN)

Error balance property: e^2(r_{j-1}, r_j] ≈ Const.

The optimal scalar quantizer can also be viewed as a weighted k-link shortest path in a directed acyclic graph (DAG).

Can we do it faster? Wu [1991] reduced the time complexity of the optimal DP algorithm from O(MN^2) to O(MN) using the Monge property (monotonicity property) of the quantization error with the L_2 metric. Space complexity: O(N^2).

Example 1: M=3 Input image Uniform Optimal

Example 1: M=3 Uniform Optimal

Example 2: M=5 Optimal Uniform

Example 2: M=5 Uniform Optimal

Example: M=12

The centroid density is higher where the probability density is also higher.

Summary

1) Optimal scalar quantization can be solved exactly by dynamic programming in O(MN^2) time and O(MN) space.
2) Using the Monge property of the L_2 quantization error, the time complexity can be reduced to O(MN) (Wu, 1991).