Approximation algorithms for Generalized Maximum Coverage

Approximation algorithms for Generalized Maximum Coverage 丁文韬 121220305

Definition
E: a set of elements.
B: a set of bins.
Goal: put some of the elements into some of the bins.

Definition
L: a budget.
P(b, e), W(b, e): for each pair (b, e), a profit and a weight.
W′(b): for each bin b, a weight for using b.
Maximize ∑P under the condition ∑W ≤ L.

Definition
Instance: (L, B, E, P, W, W′): a budget L ∈ ℕ, a bin set B, an element set E, functions P and W defined on B × E with P(b, e) > 0 and W(b, e) ≥ 0 for all b ∈ B, e ∈ E, and a function W′ defined on B.
Objective: find S = (β, η, f) with β ⊆ B, η ⊆ E, and a function f: η → β, such that S satisfies
W(S) = ∑_{b∈β} W′(b) + ∑_{e∈η} W(f(e), e) ≤ L,
and maximizes
P(S) = ∑_{e∈η} P(f(e), e).
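As a sanity check on the definition, the weight and profit of a candidate solution can be evaluated directly. The following is a hypothetical sketch on a tiny made-up instance; encoding S by the assignment f alone (η and β are implied by it) is a choice made here, not part of the original formulation.

```python
# Hypothetical sketch: evaluate W(S) and P(S) for a candidate GMC solution.
# A solution S = (beta, eta, f) is encoded by the assignment f alone:
# eta = f.keys() (covered elements), beta = set(f.values()) (used bins).

def weight(f, W, Wp):
    """W(S) = sum of W'(b) over used bins + sum of W(f(e), e) over covered elements."""
    return sum(Wp[b] for b in set(f.values())) + sum(W[(f[e], e)] for e in f)

def profit(f, P):
    """P(S) = sum of P(f(e), e) over covered elements."""
    return sum(P[(f[e], e)] for e in f)

# A tiny made-up instance with two bins and two elements.
P  = {("b1", "e1"): 5, ("b1", "e2"): 3, ("b2", "e1"): 4, ("b2", "e2"): 6}
W  = {("b1", "e1"): 2, ("b1", "e2"): 2, ("b2", "e1"): 1, ("b2", "e2"): 3}
Wp = {"b1": 1, "b2": 2}

f = {"e1": "b1", "e2": "b2"}            # cover e1 with b1 and e2 with b2
print(weight(f, W, Wp), profit(f, P))   # weight 8 = 1+2+2+3, profit 11 = 5+6
```

With budget L = 8 this S is feasible, since W(S) = 8 ≤ L.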

Previous Results
Cohen R., Katzir L. The generalized maximum coverage problem. Information Processing Letters, 2008, 108(1): 15–22.
They give a (1 − 1/K)⁻¹ (1 − e^(−1/α))⁻¹-approximation, where K is the maximal size of the enumerated sets and α is the constant of the knapsack FPTAS. The ratio approaches e/(e − 1) + ε for big enough K and small enough α.

Previous Results

Problem | Restrictions          | Approx.
--------|-----------------------|------------
MC*     | W′(b) = 1, P(b,e) = 1 | e/(e−1)
BMC*    | P(b,e) = 1            | e/(e−1)
MCKP    | W′(b) = 0             | FPTAS
GMC     | (none)                | e/(e−1) + ε

(*: elements can be selected only for a subset of the bins)
GMC generalizes MC, KP, BMC, and MCKP.

Algorithm (1)
1  while (∃ e not in S such that adding e to S can increase P)
2      find (b, E′) with W(b, E′) + W(S) ≤ L maximizing P(b, E′)/W(b, E′), and add (b, E′) to S
3  while (∃ (b, E′) with W(b, e) ≤ W(f(e), e) and P(b, e) > 0 for every e ∈ E′)
4      find such a (b, E′) maximizing P(b, E′), and add (b, E′) to S
5  return S
Step (2) can be done with a DP algorithm for the knapsack problem.
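The flavor of the first phase can be illustrated with a much-simplified greedy of my own that adds one (bin, element) pair at a time by profit-to-marginal-weight ratio. This is only a hedged sketch: the algorithm above selects whole (b, E′) pairs via a knapsack DP and has a second reassignment phase, both omitted here.

```python
# Simplified hypothetical greedy for GMC: repeatedly add the single (bin, element)
# pair with the best profit / marginal-weight ratio that still fits the budget L.
# Opening a bin b for the first time also pays its weight W'(b).

def greedy_gmc(bins, elems, P, W, Wp, L):
    f, used, spent = {}, set(), 0
    while True:
        best = None  # (ratio, bin, element, marginal weight)
        for b in bins:
            open_cost = 0 if b in used else Wp[b]
            for e in elems:
                if e in f:
                    continue                  # each element is covered at most once
                dw = open_cost + W[(b, e)]    # marginal weight of adding (b, e)
                if spent + dw <= L and P[(b, e)] > 0:
                    ratio = P[(b, e)] / dw if dw > 0 else float("inf")
                    if best is None or ratio > best[0]:
                        best = (ratio, b, e, dw)
        if best is None:                      # nothing profitable fits: stop
            return f
        _, b, e, dw = best
        f[e], spent = b, spent + dw
        used.add(b)

P  = {("b1", "e1"): 5, ("b1", "e2"): 3, ("b2", "e1"): 4, ("b2", "e2"): 6}
W  = {("b1", "e1"): 2, ("b1", "e2"): 2, ("b2", "e1"): 1, ("b2", "e2"): 3}
Wp = {"b1": 1, "b2": 2}
f = greedy_gmc(["b1", "b2"], ["e1", "e2"], P, W, Wp, L=8)
print(f)   # both elements end up in b1 (profit 8, weight 5)
```

On this instance the greedy stops at profit 8, while assigning e2 to b2 instead reaches profit 11 within the same budget, which illustrates why the full algorithm needs the enumeration and reassignment steps.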

Algorithm (2)
Fix a bin set β ⊆ B. With the weights W̃ = {W(b, e)}, assigning elements to the fixed bins is a multiple-choice knapsack problem (MCKP), and MCKP admits an FPTAS.
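For intuition, here is a hedged sketch of the exact pseudo-polynomial DP that an MCKP FPTAS is built on (the profit scaling of the FPTAS is omitted): once the bin set is fixed, each element contributes one class of (profit, weight) options, one per bin, and at most one option per class may be taken.

```python
# Hypothetical sketch: exact DP for the multiple-choice knapsack problem (MCKP).
# classes[i] is the list of (profit, weight) options for element i (one per bin);
# at most one option per class may be chosen, total weight at most L.

def mckp_dp(classes, L):
    dp = [0] * (L + 1)                 # dp[w]: best profit with capacity w
    for options in classes:
        new = dp[:]                    # skipping this class is always allowed
        for p, wt in options:
            for w in range(wt, L + 1): # extend only the old table, so each
                new[w] = max(new[w], dp[w - wt] + p)  # class contributes once
        dp = new
    return dp[L]

# Element e1 can go to b1 (profit 5, weight 2) or b2 (4, 1);
# element e2 can go to b1 (3, 2) or b2 (6, 3).
classes = [[(5, 2), (4, 1)], [(3, 2), (6, 3)]]
print(mckp_dp(classes, 4))   # 10: pick (4, 1) for e1 and (6, 3) for e2
```

The running time is O(n·m·L) for n elements with m options each, which is why the budget is rescaled (L′ in the experiments below) before running the DP.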

Algorithm (combined)
Enumerate every small feasible starting solution S_E for Alg (1).
Enumerate every small bin set B_E for Alg (2).
Balance the approximation ratio against the complexity.
This yields a (1 − 1/K)⁻¹ (1 − e^(−1/α))⁻¹-approximation, where K is the maximal size of the enumerated sets and α is the constant of the knapsack FPTAS; the ratio approaches e/(e − 1) + ε for big enough K and small enough α.

Experiment results
Three implementations:
Prog1: Alg (2) (MCKP) with |B| ≤ 3, L′ ≤ |E|²; runs in O(|B|³ · |E|³)
Prog2: Alg (1) (greedy) with S_E = ∅, L′ ≤ |E|³; runs in O(|B|² · |E|⁴)
Prog3: Alg (1) (greedy) with |S.E| ≤ 1, |S.B| ≤ 1, L′ ≤ |E|³; runs in O(|B|³ · |E|⁵)

Experiment results
Small test cases (×10): 3 ≤ |B| ≤ 6, 10 ≤ |E| ≤ 30, |E|² ≤ L ≤ |E|⁴.
β(Prog3) < β(Prog2) < β(Prog1) for all cases.

Algorithm        | Approx. ratio β
-----------------|----------------
Prog1 (KP_B3_L2) | 1.05–1.2
Prog2 (GR_S0_L3) | 1.1–1.2
Prog3 (GR_S1_L2) | 1.02–1.1

Experiment results
Larger test cases (×15): 4 ≤ |B| ≤ 10, 50 ≤ |E| ≤ 100, |E|² ≤ L ≤ |E|⁴.
The outcome depends on |B|:
|B| ≥ 6 (2K): β(Prog1) > β(Prog2) for most cases.
|B| < 6: β(Prog1) < β(Prog2) for most cases.

Algorithm        | Approx. ratio β | Time
-----------------|-----------------|-------------------
Prog1 (KP_B3_L2) | 1.1–1.3         | several seconds
Prog2 (GR_S0_L3) |                 |
Prog3 (GR_S1_L2) |                 | too slow on my PC

Experiment results
The results are influenced by |B| and by the distribution of W, P, and W′.
The approximation ratio of the inner knapsack step is not so important.

Possible Improvements
Heuristic methods.
Pruning useless states in the knapsack DP.
Avoiding repeated enumeration.

Q & A Thanks for listening