Priority Models Sashka Davis University of California, San Diego June 1, 2003.

2 Goal
- Define priority models, a formal framework for greedy algorithms
- Develop a technique for proving lower bounds for priority algorithms

3 The big picture
Divide and Conquer, Greedy, Dynamic Programming, Hill-climbing: polynomial-time algorithm design paradigms.
- Can we build a formal model for each of these algorithmic design paradigms?
- Can we evaluate the limitations of each technique?
- Can we classify the kinds of problems on which the different heuristics perform well?
- Are the known algorithms optimal, or can they be improved?

4 Greedy heuristics
Priority algorithms are a formal model for greedy algorithms.
[Figure: the class of priority algorithms, with the FIXED class contained in the ADAPTIVE class; the ShortPath problem separates the two.]

5 Common structure of greedy algorithms
- Sort the items (edges, intervals, etc.)
- Consider each item once and either add it to the solution or throw it away
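This two-step structure can be sketched generically. A minimal illustration (the function and parameter names here are ours, not from the slides):

```python
def greedy(items, priority_key, can_add):
    """Generic fixed-order greedy: sort the items once by a priority
    function, then make one irrevocable accept/reject decision each."""
    solution = []
    for item in sorted(items, key=priority_key):
        if can_add(solution, item):    # accept only if the item still fits
            solution.append(item)      # irrevocable: never reconsidered
    return solution

# Toy use: keep the largest values whose running total stays within 15.
values = [7, 3, 5, 8, 2]
chosen = greedy(values, lambda x: -x, lambda sol, x: sum(sol) + x <= 15)
```

Every algorithm in this talk instantiates `priority_key` and `can_add` differently; the adaptive variants additionally recompute the ordering as the solution grows.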

6 Interval scheduling on a single machine
Instance: a set of intervals I = (i_1, i_2, ..., i_n), where i_j = [r_j, d_j]
Problem: schedule intervals on a single machine
Solution: S ⊆ I
Objective function: maximize Σ_{i_j ∈ S} (d_j − r_j)

7 A simple solution (LPT)
Longest Processing Time algorithm
Input: I = (i_1, i_2, ..., i_n)
1. Initialize S ← ∅
2. Sort the intervals in decreasing order of length (d_j − r_j)
3. while I is not empty:
   - let i_k be the next interval in the sorted order
   - if i_k can be scheduled, then S ← S ∪ {i_k}
   - I ← I \ {i_k}
4. Output S
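A runnable sketch of LPT, assuming intervals are pairs (r, d) and "can be scheduled" means disjoint from every interval already accepted:

```python
def lpt(intervals):
    """LPT for interval scheduling with proportional profit: consider
    intervals longest first; accept each one iff it is disjoint from
    every interval already scheduled."""
    schedule = []
    for r, d in sorted(intervals, key=lambda iv: iv[1] - iv[0], reverse=True):
        if all(d <= r2 or r >= d2 for r2, d2 in schedule):
            schedule.append((r, d))   # irrevocable accept
    return schedule

# Profit is proportional to length: the sum of (d - r) over the schedule.
instance = [(0, 6), (5, 7), (7, 9)]
result = lpt(instance)
profit = sum(d - r for r, d in result)
```

On this instance LPT takes the length-6 interval first, which blocks (5, 7) but not (7, 9).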

8 LPT is a 3-approximation
LPT sorts the intervals in decreasing order according to their length, and 3 · LPT ≥ OPT.
[Figure: an OPT schedule compared with the LPT schedule for an interval [r_i, d_i].]

9 The minimum cost spanning tree problem
Instance: an edge-weighted graph
Problem: find a tree of edges that spans V
Objective function: minimize the cost of the tree

10 A solution to the MST problem
Kruskal's algorithm
Input: G = (V, E), w: E → R
1. Initialize an empty solution T
2. L ← list of edges sorted in increasing order of weight
3. while L is not empty:
   - e ← next edge in L
   - add e to T as long as T remains a forest; remove e from L
4. Output T
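A compact implementation of the steps above; the union-find structure is the standard way to test "T remains a forest" (the names are illustrative):

```python
def kruskal(num_vertices, edges):
    """Kruskal's algorithm: scan edges in increasing weight order and
    keep an edge iff it joins two different components, so the partial
    solution T always remains a forest."""
    parent = list(range(num_vertices))

    def find(x):                       # union-find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    tree = []
    for w, u, v in sorted(edges):      # fixed priority: sorted once by weight
        ru, rv = find(u), find(v)
        if ru != rv:                   # adding (u, v) creates no cycle
            parent[ru] = rv
            tree.append((w, u, v))
    return tree

# 4-cycle 0-1-2-3-0 plus a diagonal; the MST has weight 1 + 2 + 3 = 6.
edges = [(1, 0, 1), (2, 1, 2), (3, 2, 3), (4, 3, 0), (5, 0, 2)]
mst = kruskal(4, edges)
```

Because the edge order is computed once before any decisions, Kruskal fits the fixed priority model.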

11 Another solution to the MST problem
Prim's algorithm
Input: G = (V, E), w: E → R
1. Initialize an empty tree T ← ∅; S ← ∅
2. Pick a vertex u; S ← {u}
3. for i = 1 to |V| − 1:
   - (u, v) ← the minimum weight edge in cut(S, V − S)
   - S ← S ∪ {v}; T ← T ∪ {(u, v)}
4. Output T
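Prim's choice of the minimum edge crossing the cut depends on the tree built so far, which is what makes it adaptive rather than fixed. A heap-based sketch (representation and names are ours):

```python
import heapq

def prim(graph, start):
    """Prim's algorithm: grow a single tree from `start`, repeatedly
    adding the minimum weight edge crossing the cut (S, V - S).
    `graph` maps each vertex to a list of (weight, neighbor) pairs."""
    in_tree = {start}
    tree = []
    frontier = list(graph[start])      # candidate edges leaving S
    heapq.heapify(frontier)
    while frontier:
        w, v = heapq.heappop(frontier) # current minimum over the cut
        if v in in_tree:
            continue                   # edge no longer crosses the cut
        in_tree.add(v)
        tree.append((w, v))
        for edge in graph[v]:          # the cut, and hence the priorities,
            heapq.heappush(frontier, edge)  # change after each decision
    return tree

# The same kind of instance as for Kruskal: a 4-cycle with one diagonal.
graph = {0: [(1, 1), (4, 3), (5, 2)], 1: [(1, 0), (2, 2)],
         2: [(2, 1), (3, 3), (5, 0)], 3: [(3, 2), (4, 0)]}
mst_weight = sum(w for w, _ in prim(graph, 0))
```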

12 Classification of the example algorithms
[Figure: Kruskal's algorithm and LPT are fixed priority algorithms; Prim's algorithm is adaptive.]

13 Talk outline
1. History of priority algorithms
2. Priority algorithm framework for scheduling problems
3. Priority algorithms for facility location
4. General framework of priority algorithms
5. Future research

14 Results [BNR02]
- Defined fixed and adaptive priority algorithms
- Proved that fixed priority algorithms are less powerful than adaptive priority algorithms
- Considered a variety of scheduling problems and proved many non-trivial upper and lower bounds

15 Results [AB02]
Proved tight bounds on the performance of:
- adaptive priority algorithms for facility location in arbitrary spaces
- fixed priority algorithms for uniform metric facility location
- adaptive priority algorithms for set cover

16 Results [DI02]
- Defined a general model of priority algorithms
- Proved a strong separation between the classes of fixed and adaptive priority algorithms
- Proved a separation between the class of memoryless adaptive priority algorithms and adaptive priority algorithms with memory
- Proved a tight bound on the performance of adaptive priority algorithms for the weighted vertex cover problem

17 Talk outline
1. History of priority algorithms
2. Priority algorithm framework for scheduling problems [BNR02]
3. Priority algorithms for facility location
4. General framework of priority algorithms
5. Future research

18 The defining characteristics of greedy algorithms [BNR02]
- The order in which data items are considered is determined by a priority function, which orders all possible data items
- The algorithm sees one input at a time
- The decision made for each data item is irrevocable

19 Priority models
Fixed priority algorithms and adaptive priority algorithms differ in how the next data item is chosen.

20 Fixed priority algorithms [BNR02]
Input: a set of jobs S
Ordering: determine, without looking at S, a total ordering of all possible jobs
while S is not empty:
- J_next ← next job in S according to the ordering above
- Decision: make an irrevocable decision for J_next
- S ← S \ {J_next}

21 Adaptive priority algorithms [BNR02]
Input: a set of jobs S
while S is not empty:
- Ordering: determine, without looking at the jobs not yet considered, a total ordering of all possible jobs (the ordering may depend on the jobs seen so far)
- J_next ← next job in S according to the current ordering
- Decision: make an irrevocable decision for J_next
- S ← S \ {J_next}

22 Separation between fixed and adaptive priority algorithms [BNR02]
Theorem: no fixed priority algorithm can achieve an approximation ratio better than 3 for the interval scheduling problem on a multiple-machine configuration.
Theorem: CHAIN-2 is an adaptive priority algorithm achieving an approximation ratio of 2 for interval scheduling on a two-machine configuration.

23 Online algorithms
1. Must service each request before the next request is received
2. Have several alternatives in servicing each request
3. The online cost is determined by the options selected

24 Connection between online and priority algorithms
Similarities:
- the instance is viewed one input at a time
- decisions are irrevocable
Difference:
- the order of the data items

25 Competitive analysis of online algorithms
t ← 1; I ← ∅
Round t:
- the Adversary picks a data item γ_t; I ← I ∪ {γ_t}
- the Algorithm makes a decision σ_t for γ_t: A ← A ∪ {(γ_t, σ_t)}
- the Adversary chooses whether to end the game; if not, the next round begins: t ← t + 1
The Adversary picks a solution B for I offline.
The Algorithm is awarded payoff value(A) / value(B).

26 Fixed priority game
The Adversary selects a finite set of data items S_0; I ← ∅; t ← 1.
The Algorithm picks a total order on S_0.
The Adversary restricts the remaining data items: S_1 ⊆ S_0.
Round t:
- let γ_t ∈ S_t be the next data item in the order
- the Algorithm makes a decision σ_t for γ_t: A ← A ∪ {(γ_t, σ_t)}
- the Adversary restricts the set: S_{t+1} ⊆ S_t − {γ_t}; I ← I ∪ {γ_t}
- the Adversary chooses whether to end the game; if not, the next round begins: t ← t + 1
The Adversary picks a solution B for I.
The Algorithm is awarded payoff value(A) / value(B).

27 Example lower bound [BNR02]
Theorem 1: no priority algorithm can achieve an approximation ratio better than 3 for the interval scheduling problem with proportional profit on a single machine configuration.

28 Proof of Theorem 1
Adversary's move: presents a set of intervals.
Algorithm's move: the Algorithm selects an ordering; let i be the interval with the highest priority.
[Figure: the adversary's intervals, with lengths parameterized by q and e.]

29 Adversary's strategy
If the Algorithm decides not to schedule i, then during the next round the Adversary removes all remaining intervals and schedules interval i itself.
[Figure: intervals i, j, k over positions 1, 2, 3.]

30 Adversary's strategy
If i = [the interval shown] and the Algorithm schedules i, then during the next round the Adversary restricts the sequence.
[Figure: intervals i, j, k over positions 1, 2, 3, and the restricted sequence.]

31 Adversary's strategy
If i = [the interval shown] and the Algorithm schedules i, then during the next round the Adversary restricts the sequence.
[Figure: intervals i, j, k, m over positions 1, 2, 3, and the restricted sequence.]

32 Conclusion
- The Adversary can pick (q, e) so that the advantage gained is arbitrarily close to 3
- No priority algorithm (fixed or adaptive) can achieve an approximation ratio better than 3
- LPT achieves an approximation ratio of 3
- Therefore LPT is optimal within the class of priority algorithms

33 Talk outline
1. History of priority algorithms
2. Priority algorithm framework for scheduling problems
3. Priority algorithms for facility location [AB02]
4. General framework of priority algorithms
5. Future research

34 [AB02] work on priority algorithms
[AB02] proved lower bounds on the performance of adaptive and fixed priority algorithms for the facility location problem in metric and arbitrary spaces, and for the set cover problem.

35 [AB02] result Theorem: No adaptive priority algorithm can achieve an approximation ratio better than log(n) for facility location in arbitrary spaces

36 Adaptive priority game
The Adversary selects a finite set of data items S_0; I ← ∅; t ← 1.
Round t:
- the Algorithm picks a data item γ_t ∈ S_t and a decision σ_t for γ_t: A ← A ∪ {(γ_t, σ_t)}
- the Adversary restricts the set: S_{t+1} ⊆ S_t − {γ_t}; I ← I ∪ {γ_t}
- the Adversary chooses whether to end the game; if not, the next round begins: t ← t + 1
The Adversary picks a solution B for I.
The Algorithm is awarded payoff value(A) / value(B).

37 Facility location problem
Instance: a set of cities and a set of facilities
- the set of cities is C = {1, 2, ..., n}
- each facility f_i has an opening cost cost(f_i) and connection costs for each city: {c_i1, c_i2, ..., c_in}
Problem: open a collection of facilities S such that each city is connected to at least one facility
Objective function: minimize the opening and connection costs:
min ( Σ_{f_i ∈ S} cost(f_i) + Σ_{j ∈ C} min_{f_i ∈ S} c_ij )

38 The Adversary presents the instance:
- Cities: C = {1, 2, ..., n}, where n = 2^k
- Facilities: each facility has opening cost n; city connection costs are 1 or ∞; each facility covers exactly n/2 cities, where cover(f_j) = {i | i ∈ C, c_ji = 1}
- C_u denotes the set of cities not yet covered by the Algorithm's solution

39 Adversary's strategy
At the beginning of each round t:
- the Adversary chooses S_t to consist of the facilities f such that f ∈ S_t iff |cover(f) ∩ C_u| = n/2^t
- the number of uncovered cities |C_u| is n/2^(t−1)
Two facilities are complementary if together they cover all cities in C. For every round t, S_t consists of complementary facilities.

40 The game
[Figure: the set of uncovered cities C_u shrinking as the game proceeds.]

41 End of the game
- Either the Algorithm has opened log(n) facilities or it has failed to produce a valid solution
- The cost of the Algorithm's solution is n·log(n) + n
- The Adversary opens two complementary facilities, incurring total cost 2n + n = 3n
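The counting behind this game can be checked with a small simulation. This is a sketch that follows the slides' accounting rather than modeling the facilities explicitly (the halving of C_u each round is what the adversary forces):

```python
def adversary_game(k):
    """Play out the facility location lower-bound game for n = 2**k
    cities: each round, the facility the Algorithm opens (opening cost
    n) covers half of the still-uncovered cities, so about log2(n)
    facilities are opened; every city then connects at cost 1.  The
    Adversary's offline solution uses two complementary facilities."""
    n = 2 ** k
    uncovered, rounds = n, 0
    while uncovered > 1:              # the Adversary halves C_u each round
        uncovered //= 2
        rounds += 1
    algorithm_cost = n * rounds + n   # n per opened facility + n connections
    adversary_cost = 2 * n + n        # two facilities + n unit connections
    return rounds, algorithm_cost, adversary_cost

rounds, alg_cost, adv_cost = adversary_game(4)   # n = 16 cities
```

The ratio alg_cost / adv_cost = (log(n) + 1)/3 grows without bound as k grows, matching the log(n) lower bound.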

42 Conclusion
- The Adversary has a winning strategy
- No adaptive priority algorithm can achieve an approximation ratio better than log(n)

43 Talk outline
1. History of priority algorithms
2. Priority algorithm framework for scheduling problems
3. Priority algorithms for facility location
4. General framework of priority algorithms [DI02]
5. Future research

44 Fixed priority algorithms [DI02]
Input: instance Γ = {γ_1, γ_2, ..., γ_n}; Output: a solution
1. Determine an ordering function π
2. Order Γ according to π
3. Repeat:
   - let γ be the next data item in the ordering
   - make a decision for γ
   - update the partial solution S
   until decisions have been made for all data items
4. Output S

45 Adaptive priority algorithms [DI02]
Input: instance Γ = {γ_1, γ_2, ..., γ_n}; Output: a solution
1. Initialization
2. Repeat:
   - determine an ordering function π_t, which may depend on the data items already seen and the decisions already made
   - pick the highest priority data item according to π_t
   - make an irrevocable decision for it
   - update the partial solution
   until decisions have been made for all data items
3. Output the solution

46 Strong separation between fixed and adaptive priority algorithms
Theorem: no fixed priority algorithm can achieve any constant approximation ratio for the ShortPath problem.
Dijkstra's algorithm for the single source shortest path problem solves the ShortPath problem exactly.

47 The ShortPath problem
Instance: an edge-weighted directed graph G = (V, E) and two nodes s and t
Problem: find a directed tree of edges rooted at s
Objective function: minimize the combined weight of the edges on the path from s to t

48 Adversary's strategy
[Figure: a directed graph on nodes s, a, b, t with edges u(k) and w(k) of weight k, and edges x(1), v(1), y(1), z(1) of weight 1.]

49 The Algorithm selects an order on S_0. Depending on that order, the Adversary presents:
[Figure: the graph on s, a, b, t with edges u(k), w(k), x(1), v(1), y(1), z(1).]

50 Now we play the game in this scenario:
[Figure: the restricted graph on s, a, b, t with edges u(k), x(1), y(1), z(1).]

51 Adversary's strategy
- Wait until the Algorithm considers edge y(1); y(1) will be considered before z(1)
- The Adversary can remove data items not yet considered

52 Case 1: y(1) is taken
The Algorithm constructs the path {u, y}, of weight k + 1.
The Adversary outputs the solution {x, z}, of weight 2.
[Figure: the graph with edges u(k), x(1), y(1), z(1).]

53 Case 2: y(1) is rejected
The Algorithm has failed to construct a path. The Adversary outputs the solution {u, y} and wins the game.
[Figure: the graph with edges u(k), x(1), y(1), z(1).]

54 The outcome of the game
Either the Algorithm fails to output a solution, or the Algorithm achieves an approximation ratio of (k + 1)/2.
The Adversary can set k arbitrarily large, and thus can force the Algorithm into an arbitrarily large approximation ratio.

55 Conclusion
- No fixed priority algorithm can achieve any constant approximation ratio for ShortPath
- Dijkstra's algorithm for the SSSP can be classified as an adaptive priority algorithm, and it solves the ShortPath problem exactly
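Dijkstra's adaptive character is visible in code: the priority of each candidate (its tentative distance) is recomputed from the decisions already made. A sketch, run on the adversary's graph from the slides with k = 5 (graph representation and names are ours):

```python
import heapq

def dijkstra(graph, s):
    """Dijkstra as an adaptive priority algorithm: each round the
    unsettled vertex with the smallest tentative distance, a priority
    that depends on the decisions already made, is settled irrevocably.
    `graph` maps a vertex to a list of (weight, neighbor) pairs."""
    dist = {}
    heap = [(0, s)]
    while heap:
        d, v = heapq.heappop(heap)    # highest-priority data item
        if v in dist:
            continue                  # decision for v was already final
        dist[v] = d                   # irrevocable decision
        for w, u in graph.get(v, []):
            if u not in dist:
                heapq.heappush(heap, (d + w, u))  # priorities adapt
    return dist

# The adversary's graph with k = 5: s->a has weight k (edge u),
# s->b weight 1 (x), a->t weight 1 (y), b->t weight 1 (z).
k = 5
graph = {'s': [(k, 'a'), (1, 'b')], 'a': [(1, 't')], 'b': [(1, 't')]}
dist = dijkstra(graph, 's')
```

Here the algorithm correctly reaches t at distance 2 via b, the route a fixed ordering can be tricked out of.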

56 Future work
- Improve upper and lower bounds for priority algorithms
- Define extended models of priority algorithms
- Go beyond greedy algorithms

57 Close the gaps
Metric Steiner Tree problem:
- the known 2-approximation belongs to the class of fixed priority algorithms
- the current lower bound for adaptive priority algorithms is 1.18, for a space with distances {1, 2}
Can we close the gap? Can we prove a lower bound for metric spaces with arbitrary distances?

58 Priority algorithms for other problems
- What kind of lower bounds can we prove for the weighted independent set problem?
- What kind of lower bounds can we prove for graph coloring?

59 Extended priority models
Global information: suppose the algorithm knows the length of the instance (|V|, |E|, the number of jobs, etc.)
- There are "greedy algorithms" that use this information
- What kinds of lower bounds can we prove for this model?

60 More extensions of the model
Local information: information encoded in a single data item.
- What if the algorithm is allowed to see the neighborhood of the current vertex? Or the two highest priority jobs, in the case of job scheduling problems?
- What kinds of lower bounds can we prove for this model?

61 Beyond greedy
- Define a similar framework for backtracking and dynamic programming algorithms
- What are the limits of these techniques?

62 Defining characteristics of backtracking algorithms
- Backtracking algorithms build a depth-first search tree, pruning branches as they go
- Leaves of the search tree are solutions
- Children of an internal node represent the choices for a given data item

63 Fixed backtracking algorithms
[Figure: a search tree of depth n; each level corresponds to one data item, and each internal node branches over the possible decisions for that item.]

64 Fixed backtracking algorithms
- The algorithm orders the universe of data items
- The decision is irrevocable: the algorithm commits to a set of options for each data item
- We want to relate the quality of the solution to the fraction of leaves inspected
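The model described above can be sketched as a small generator: levels of the search tree correspond to data items in a fixed order, children of a node are the options for that item, and infeasible branches are pruned (a minimal sketch; the names and the toy problem are ours):

```python
def backtrack(items, options, feasible, partial=()):
    """Depth-first search tree: level i branches over the possible
    decisions for items[i]; a branch is pruned as soon as the partial
    assignment becomes infeasible.  Leaves, with one decision per
    item, are complete solutions."""
    if len(partial) == len(items):
        yield partial                          # a leaf of the search tree
        return
    item = items[len(partial)]
    for choice in options(item):               # children = choices for item
        candidate = partial + ((item, choice),)
        if feasible(candidate):                # prune dead branches early
            yield from backtrack(items, options, feasible, candidate)

# Toy use: accept/reject each number so the accepted values never
# exceed 5; then keep the leaves whose accepted values sum to exactly 5.
items = [2, 3, 4, 5]
leaves = list(backtrack(
    items,
    lambda _: (True, False),                   # two options per data item
    lambda p: sum(v for v, take in p if take) <= 5,
))
exact = [s for s in leaves if sum(v for v, take in s if take) == 5]
```

Counting `leaves` against the full 2^4 assignments is exactly the kind of "fraction of leaves inspected" measure the slide mentions.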