Speaker: Yoni Rozenshein Instructor: Prof. Zeev Nutov.

Talk outline
- Problem description and known approximations
- Interpretation in graphs (the Independent Set Problem)
- The greedy local search method
- Approximation ratio
- Polynomial-time implementation

Weighted k-set packing
Given a collection of weighted sets, each of size at most k, find a maximum-weight collection of pairwise disjoint sets. Example with k = 3:

Set          Weight
{4, 5}       16
{2, 3, 5}    6
{3, 4, 5}    12
{1, 5}       5
{4}          14

For instance, {2, 3, 5} and {4} are disjoint, so together they form a set packing of weight 20.
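As a sanity check, the optimum of this small instance can be found by brute force. A minimal Python sketch (exponential in the number of sets, for intuition only; the instance is the one from the table above, and all names are mine):

```python
from itertools import combinations

# The example instance from the table above (k = 3).
SETS = [({4, 5}, 16), ({2, 3, 5}, 6), ({3, 4, 5}, 12), ({1, 5}, 5), ({4}, 14)]

def best_packing(sets):
    """Exhaustively find a maximum-weight collection of pairwise disjoint sets."""
    best, best_w = (), 0
    for r in range(1, len(sets) + 1):
        for combo in combinations(sets, r):
            members = [s for s, _ in combo]
            # Pairwise disjoint iff the sizes add up to the size of the union.
            if sum(map(len, members)) == len(set().union(*members)):
                w = sum(wt for _, wt in combo)
                if w > best_w:
                    best, best_w = combo, w
    return best, best_w

packing, weight = best_packing(SETS)
print(weight)  # 20, achieved by {2, 3, 5} together with {4}
```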

Hardness of the problem
For k = 2 we get the Maximum Weight Matching Problem, which admits a polynomial-time algorithm [Edmonds 1965].
The problem is NP-hard for k ≥ 3 [Karp 1972]: already for k = 3 it contains 3-Dimensional Matching as a special case.

Approximation algorithms
OPT(I) – the weight of the best solution on instance I
ALG(I) – the weight of a given algorithm's solution
The algorithm's approximation ratio on instance I: OPT(I) / ALG(I)
The algorithm's approximation ratio: the supremum of OPT(I) / ALG(I) over all instances I
We seek an algorithm that minimizes this ratio.

The greedy algorithm
Repeatedly choose a maximum-weight set S and delete from the family all sets that intersect S.
Very fast, but its performance ratio is k, as the following tight example shows:

Set                Weight
{1, 2, 3, …, k}    1 + ε
{1}                1
{2}                1
…                  …
{k}                1

Greedy picks {1, 2, …, k} (weight 1 + ε), which intersects every singleton, so nothing else can be added; the optimum takes the k singletons (weight k), and the ratio tends to k as ε → 0.
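The tight example can be checked directly. A small Python sketch of the greedy rule (helper names are mine, not from the talk):

```python
def greedy_packing(sets):
    """Repeatedly take a maximum-weight remaining set, then discard every
    remaining set that intersects it."""
    remaining = list(sets)
    chosen = []
    while remaining:
        s, w = max(remaining, key=lambda sw: sw[1])
        chosen.append((s, w))
        remaining = [(t, wt) for t, wt in remaining if not (s & t)]
    return chosen

# The bad instance: one set {1, ..., k} of weight 1 + eps against k singletons.
k, eps = 10, 0.01
sets = [(set(range(1, k + 1)), 1 + eps)] + [({i}, 1.0) for i in range(1, k + 1)]

greedy_w = sum(w for _, w in greedy_packing(sets))  # greedy takes only the big set
opt_w = float(k)                                    # the k disjoint singletons
print(opt_w / greedy_w)  # ~ k / (1 + eps): the ratio approaches k as eps -> 0
```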

Local search heuristic
Replace a subset of the solution with a better subset; repeat until the solution is "locally optimal".
How are the improvements chosen? Is the running time polynomial?
Performance ratio: k – 1 + ε [Arkin and Hassin 1997]
Can we improve on these ratios?

Interpretation in graphs
The sets' intersection graph: nodes correspond to sets; edges correspond to pairs of sets sharing an element.
Set packing is a special case of Maximum Weight Independent Set in intersection graphs.
What characterizes intersection graphs arising from k-set packing instances?

Set              Weight
A = {4, 5}       16
B = {2, 3, 5}    6
C = {3, 4, 5}    12
D = {1, 5}       5
E = {4}          14

(In the corresponding intersection graph on nodes A–E, two nodes are adjacent exactly when their sets intersect.)

The characterization
The intersection graph is (k + 1)-claw-free: no node has k + 1 pairwise non-adjacent neighbors.
Proof sketch (example with k = 3):
a. The parent node's set has at most k elements.
b. Each child node's set shares at least one element with the parent's set.
c. By the pigeonhole principle, two of any k + 1 children would share an element of the parent, and hence be adjacent.
From now on, we consider the Independent Set Problem in (k + 1)-claw-free graphs.
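The claw-free condition is easy to test on small graphs. A sketch checking the intersection graph of the earlier example (adjacency built from shared elements; all names are mine):

```python
from itertools import combinations

def has_claw(adj, size):
    """True iff some node has `size` pairwise non-adjacent neighbors (a size-claw)."""
    for center, nbrs in adj.items():
        for talons in combinations(sorted(nbrs), size):
            if all(v not in adj[u] for u, v in combinations(talons, 2)):
                return True
    return False

# Intersection graph of the k = 3 example: A={4,5}, B={2,3,5}, C={3,4,5}, D={1,5}, E={4}.
sets = {'A': {4, 5}, 'B': {2, 3, 5}, 'C': {3, 4, 5}, 'D': {1, 5}, 'E': {4}}
adj = {u: {v for v in sets if v != u and sets[u] & sets[v]} for u in sets}

print(has_claw(adj, 4))  # False: with k = 3, the graph has no (k + 1)-claw
```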

Approach: Greedy local search
The two previous algorithms combined. How are improvements chosen? Is the running time polynomial?

GREEDY-LOCAL-SEARCH(G)
  I ← GREEDY(G)
  while I is not locally optimal do
    I' ← local improvement of I
    I ← I'
  end while
  output I

Local improvement scheme
An improvement: pick some v ∈ I, add some of v's neighbors, and delete any interfering nodes.
The payoff factor of an improvement is the ratio between the weight it adds and the weight it deletes.
Example (on the graph A–E above): pick B, add D, and delete the interfering nodes A and B.
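To make the scheme concrete, here is a simplified local-search sketch on the example graph, assuming the payoff factor is the added-to-deleted weight ratio as described above. Restricting to at most two incoming nodes per step is my simplification for illustration, not the talk's exact scheme:

```python
from itertools import combinations

def find_improvement(w, adj, I, alpha=1.0):
    """Search for an improvement with payoff factor >= alpha: add up to two
    outside nodes A, delete the conflicting nodes R = N(A) intersected with I."""
    outside = sorted(set(adj) - I)
    for r in (1, 2):
        for A in combinations(outside, r):
            if any(v in adj[u] for u, v in combinations(A, 2)):
                continue  # the added nodes must themselves be independent
            R = {u for u in I if any(v in adj[u] for v in A)}
            added, removed = sum(w[v] for v in A), sum(w[u] for u in R)
            if added > removed and (removed == 0 or added / removed >= alpha):
                return (I - R) | set(A)
    return None

# Intersection graph of the running example, with the sets' weights.
w = {'A': 16, 'B': 6, 'C': 12, 'D': 5, 'E': 14}
adj = {'A': {'B', 'C', 'D', 'E'}, 'B': {'A', 'C', 'D'},
       'C': {'A', 'B', 'D', 'E'}, 'D': {'A', 'B', 'C'}, 'E': {'A', 'C'}}

I = {'A'}  # greedy's output: the heaviest node blocks all others
while (J := find_improvement(w, adj, I)) is not None:
    I = J
print(sorted(I), sum(w[v] for v in I))  # ['B', 'E'] 20
```

On this instance one improvement (add B and E, delete A, payoff 20/16) already reaches the optimum found by brute force earlier.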

First algorithm: ANYIMP
Is the running time polynomial? For now, we analyze only the approximation ratio.

ANYIMP(G, α)
  I ← GREEDY(G)
  while I is not locally optimal do
    I' ← any improvement of I with payoff factor ≥ α
    I ← I'
  end while
  output I

Analysis of the approximation ratio
Projection: a measure of "distance" between I and OPT.
proj(I) and w(I) maintain an equilibrium: the analysis tracks a potential of the form Φ(I) ≈ x ⋅ proj(I) + y ⋅ w(I) and lower-bounds its minimum.

Projection
(The slide illustrates the projection on the example graph with nodes A, B, C, D, E.)

Projection properties
Equilibrium property (holds for any intermediate I).
Local optimum property (holds for the final I).
Result: combining the two properties yields the bound on the approximation ratio.

Second algorithm: BESTIMP

BESTIMP(G)
  I ← GREEDY(G)
  while I is not locally optimal do
    I' ← an improvement of I with the highest payoff factor
    I ← I'
  end while
  output I

Potential function
In ANYIMP's potential, the payoff threshold d was a constant.
(I_i, d_i): the longest subsequence of the local improvements along which the payoff factors d_i are descending.
Example: for payoff factors 2, 5, 3, 2, 4, 1.5, 3, 2, 1.2, 4, 3, 1.1 the descending subsequence is d_1 = 2, d_2 = 1.5, d_3 = 1.2, d_4 = 1.1, marking the stages I_0 (greedy), I_1, I_2, I_3, I_4 (final).

Potential properties
Equilibrium property (holds for i = 1, 2, …).
Weight evaluation property (holds for the final I).
Result: combining the two properties yields the bound on the approximation ratio.

Running time analysis
Reminder: each individual step runs in polynomial time. How many improvements can there be?

GREEDY-LOCAL-SEARCH(G)
  I ← GREEDY(G)
  while I is not locally optimal do
    I' ← local improvement of I
    I ← I'
  end while
  output I

Polynomial-time approximation scheme (PTAS)
Given an algorithm with approximation ratio ρ, produce a polynomial-time algorithm with approximation ratio ρ + ε.
Well-known example: Knapsack.
The running time may depend strongly on ε (for example, polynomially on 1/ε).
Our greedy local search algorithm already runs in pseudo-polynomial time.

PTAS: Weight scaling
After scaling, each improvement increases the weight by at least 1, so the number of improvements is bounded by the total scaled weight, which is polynomial.
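The standard scaling trick can be sketched as follows (a generic version with illustrative parameters, not necessarily the talk's exact scheme): round the weights down so that the largest becomes about n/ε; then the total weight, and hence the number of weight-increasing improvements, is polynomial in n and 1/ε, while at most an ε fraction of OPT is lost to rounding.

```python
import math

def scale_weights(weights, eps):
    """Round weights down to integers on a grid of size eps * wmax / n, so the
    largest scaled weight is about n / eps. A packing contains at most n sets,
    each losing less than one grid step, so at most eps * wmax <= eps * OPT of
    weight is lost overall. (Generic sketch; parameters are illustrative.)"""
    n, wmax = len(weights), max(weights)
    factor = n / (eps * wmax)
    return [math.floor(w * factor) for w in weights]

print(scale_weights([16, 6, 12, 5, 14], eps=0.5))  # [10, 3, 7, 3, 8]
```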

Weight scaling – cont’d
Special case: if we get within a factor r of OPT′′ (the optimum of the scaled instance), we get within r(1 – ε) of OPT.

Questions?