Lower Bounds in Greedy Model Sashka Davis Advised by Russell Impagliazzo (Slides modified by Jeff) UC San Diego October 6, 2006.



Suppose you have to solve a problem Π… Is there a Greedy algorithm that solves Π? Is there a Backtracking algorithm that solves Π? Is there a Dynamic Programming algorithm that solves Π? Eureka! I have a DP algorithm! Does no Backtracking algorithm exist, or did I just not think of one? Does no Greedy algorithm exist, or did I just not think of one? Is my DP algorithm optimal, or does a better one exist?

Suppose we have a formal model of each algorithmic paradigm. Is there a Greedy algorithm that solves Π? No Greedy algorithm can solve Π exactly. Is there a Backtracking algorithm that solves Π? No Backtracking algorithm can solve Π exactly. Is there a Dynamic Programming algorithm that solves Π? DP helps! Is my algorithm optimal, or does a better DP algorithm exist? Yes, it is! Because NO DP algorithm can solve Π more efficiently.

The goal: To build a formal model of each of the basic algorithmic design paradigms, capturing the strengths of the paradigm. To develop, for each formal model, a lower bound technique that can prove negative results for all algorithms in the class.

Using the framework we can answer the following questions. 1. When solving problems exactly: which algorithmic design paradigm can help? Either we prove that no algorithm within a given formal model can solve the problem exactly, or we find an algorithm that fits a given formal model. 2. Is a given algorithm optimal? Prove a lower bound matching the upper bound for all algorithms in the class. 3. When solving problems approximately: which algorithmic paradigm can help? Is a given approximation scheme optimal within the formal model?

Some of our results. The models form a hierarchy: Online ⊆ Fixed Priority ⊆ Adaptive Priority (the greedy models); pBT captures backtracking and simple (tree) dynamic programming; pBP captures dynamic programming.

On-line algorithms. Γ is a set of data items; Σ is a set of options. Input: instance I = {γ1, γ2, …, γn}, I ⊆ Γ. Output: solution S = {(γi, σi) | i = 1, 2, …, d}, σi ∈ Σ. 1. Order: data items arrive in worst-case order chosen by an adversary. 2. Loop: considering γi in order, make an irrevocable decision σi ∈ Σ.

Fixed priority algorithms. Γ is a set of data items; Σ is a set of options. Input: instance I = {γ1, γ2, …, γn}, I ⊆ Γ. Output: solution S = {(γi, σi) | i = 1, 2, …, d}, σi ∈ Σ. 1. Order: the algorithm chooses a fixed priority function π : Γ → R+ without looking at I. 2. Loop: considering γi in order of priority, make an irrevocable decision σi ∈ Σ.

Adaptive priority algorithms. Γ is a set of data items; Σ is a set of options. Input: instance I = {γ1, γ2, …, γn}, I ⊆ Γ. Output: solution S = {(γi, σi) | i = 1, 2, …, d}, σi ∈ Σ. Loop: the algorithm reorders, choosing a new π : Γ → R+ without looking at the rest of I; considers the next γi in the current order; makes an irrevocable decision σi ∈ Σ.

Fixed priority "Back Tracking". Γ is a set of data items; Σ is a set of options. Input: instance I = {γ1, γ2, …, γn}, I ⊆ Γ. Output: solution S = {(γi, σi) | i = 1, 2, …, d}, σi ∈ Σ. 1. Order: the algorithm chooses π : Γ → R+ without looking at I. 2. Loop: considering γi in order, make a set of decisions σi ⊆ Σ (one of which will be the final decision).
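The priority models above share one template and differ only in how the ordering is chosen. A minimal sketch of the fixed priority template in Python; the helper names `pi` and `decide` are illustrative, not from the slides:

```python
# Hypothetical sketch of the fixed priority template: the priority
# function pi is chosen once, before any data item is seen, and each
# decision is irrevocable.
def fixed_priority(instance, pi, decide):
    solution = []
    state = []                       # decisions made so far
    for item in sorted(instance, key=pi, reverse=True):
        sigma = decide(item, state)  # irrevocable choice from the option set
        state.append((item, sigma))
        solution.append((item, sigma))
    return solution

# Example: "accept an item iff it is positive", ordered by magnitude.
result = fixed_priority([3, -1, 2], pi=abs, decide=lambda x, st: x > 0)
```

An adaptive priority algorithm would instead recompute the ordering inside the loop, based on the decisions made so far.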

Some of our results, placing problems and algorithms in the model hierarchy: shortest path in graphs with negative edges but no negative cycles — Bellman-Ford; shortest path in non-negative graphs — Dijkstra's; maximum matching in bipartite graphs — flow algorithms; minimum spanning tree — Prim's and Kruskal's.

Some of our results: for minimum spanning tree, Kruskal's algorithm is a fixed priority algorithm, while Prim's is adaptive priority; Dijkstra's algorithm (shortest path in non-negative graphs) is also adaptive priority.

Kruskal's algorithm for MST is a fixed priority algorithm. Input: (G=(V,E), ω: E → R). 1. Initialize an empty solution T. 2. L = list of edges sorted in non-decreasing order of weight. 3. While L is not empty: e = next edge in L; add e to T as long as T remains a forest; remove e from L. 4. Output T.
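A runnable sketch of the procedure above, using a union-find structure to test whether T remains a forest (the representation details are my own, not from the slides):

```python
def kruskal(n, edges):
    """edges: list of (weight, u, v); vertices are 0..n-1."""
    parent = list(range(n))

    def find(x):                      # union-find root with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    tree = []
    for w, u, v in sorted(edges):     # fixed order: non-decreasing weight
        ru, rv = find(u), find(v)
        if ru != rv:                  # accept only if T stays a forest
            parent[ru] = rv
            tree.append((u, v, w))
    return tree
```

The ordering is chosen once, before any edge is examined, which is exactly what makes Kruskal fixed priority.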

Prim's algorithm for MST is an adaptive priority algorithm. Input: G=(V,E), w: E → R. 1. Initialize an empty tree T ← ∅; S ← ∅. 2. Pick a vertex u; S = {u}. 3. For i = 1 to |V|−1: (u,v) = the edge minimizing w(u,v) over (u,v) ∈ cut(S, V−S); S ← S ∪ {v}; T ← T ∪ {(u,v)}. 4. Output T.
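A sketch of Prim's rule above, using a heap to pick the minimum-weight edge crossing the current cut; the adjacency-list representation is an assumption of mine:

```python
import heapq

def prim(adj, start=0):
    """adj: dict u -> list of (v, weight). Returns MST edge list."""
    in_tree = {start}
    frontier = [(w, start, v) for v, w in adj[start]]
    heapq.heapify(frontier)
    tree = []
    while frontier and len(in_tree) < len(adj):
        w, u, v = heapq.heappop(frontier)   # cheapest edge crossing the cut
        if v in in_tree:
            continue
        in_tree.add(v)
        tree.append((u, v, w))
        for x, wx in adj[v]:                # the cut changes: new candidate edges
            if x not in in_tree:
                heapq.heappush(frontier, (wx, v, x))
    return tree
```

The priority of an edge depends on the tree built so far, which is why Prim is adaptive rather than fixed priority.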

Dijkstra's shortest paths algorithm is an adaptive priority algorithm. Dijkstra(G=(V,E), s ∈ V): T ← ∅; S ← {s}. While S ≠ V: find e = (u,x) minimizing path(s,u) + ω(e) over e ∈ Cut(S, V−S); T ← T ∪ {e}; S ← S ∪ {x}.
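The same adaptive rule can be sketched with a heap keyed by path(s,u) + ω(e); the dictionary-based representation below is my own assumption:

```python
import heapq

def dijkstra(adj, s):
    """adj: dict u -> list of (v, weight >= 0). Returns shortest distances from s."""
    dist = {s: 0}
    heap = [(0, s)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float('inf')):
            continue                        # stale heap entry, skip
        for v, w in adj[u]:
            nd = d + w                      # path(s, u) + ω(e)
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist
```

As in Prim's algorithm, the priority of an edge depends on the partial solution, so the ordering is adaptive.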

Some of our results: for minimum spanning tree, Kruskal's algorithm is a fixed priority algorithm, while Prim's is adaptive priority; Dijkstra's algorithm (shortest path in non-negative graphs) is also adaptive priority.

Some of our results. ShortPath problem: given a graph G=(V,E), ω: E → R+, and s, t ∈ V, find a directed tree of edges rooted at s such that the weight of the path from s to t is minimal. Data items are edges of the graph; decision options = {accept, reject}. Theorem: No fixed priority algorithm can achieve any constant approximation ratio for the ShortPath problem.

Fixed priority game between Solver and Adversary. The Adversary presents an initial set of data items Γ0. The Solver orders Γ0; the Adversary may then remove items, yielding Γ1, Γ2, Γ3, …. In each round the Solver considers the highest-priority remaining item γ and commits to a decision σ, extending S_sol = {(γ_i2, σ_i2), (γ_i4, σ_i4), …}. End of game (Γ = ∅): the Adversary outputs its own solution S_adv = {(γ_i2, σ*_i2), (γ_i4, σ*_i4), …} and the Solver is awarded a payoff comparing S_sol to S_adv.

Adversary selects  0 t b s a u(k) w(k) x(1) v(1) y(1) z(1)

The Solver selects an order on Γ0. If (WLOG) y(1) precedes z(1) in this order, then the Adversary presents an instance drawn from the graph above.

Adversary's strategy: wait until the Solver considers edge y(1) (the Solver will consider y(1) before z(1)). Event 1: σ_y = accept. Event 2: σ_y = reject.

Event 1: the Solver accepts y(1). The Solver constructs the path {u, y}, of weight k+1; the Adversary outputs the solution {x, z}, of weight 2.

Event 2: the Solver rejects y(1). The Solver fails to construct a path; the Adversary outputs the solution {u, y}.

The outcome of the game: the Solver either fails to output a solution or achieves an approximation ratio of (k+1)/2. The Adversary can set k arbitrarily large, and thus can force the algorithm to incur an arbitrarily large approximation ratio.

Some of our results: Dijkstra's algorithm (shortest path in non-negative graphs) is an adaptive priority algorithm.

Some of our results: interval scheduling, where the value of an interval is its width — factor of 3, for both fixed and adaptive priority.

Interval scheduling on a single machine. Instance: a set of intervals I = (i_1, i_2, …, i_n), where i_j = [r_j, d_j]. Problem: schedule intervals on a single machine. Solution: S ⊆ I. Objective function: maximize Σ_{i_j ∈ S} (d_j − r_j).

A simple solution (LPT): the Longest Processing Time algorithm. Input: I = (i_1, i_2, …, i_n). 1. Initialize S ← ∅. 2. Sort the intervals in decreasing order of (d_j − r_j). 3. While I is not empty: let i_k be the next interval in sorted order; if i_k can be scheduled, then S ← S ∪ {i_k}; remove i_k from I. 4. Output S.
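The LPT loop above can be sketched directly; intervals are assumed to be pairs (r, d), an encoding choice of mine:

```python
def lpt(intervals):
    """intervals: list of (r, d). Greedily schedule by decreasing length."""
    chosen = []
    for r, d in sorted(intervals, key=lambda i: i[1] - i[0], reverse=True):
        # schedule i_k iff it overlaps nothing already chosen
        if all(d <= r2 or r >= d2 for r2, d2 in chosen):
            chosen.append((r, d))
    return chosen
```

Since the order is fixed up front by interval length, LPT is a fixed priority algorithm.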

LPT is a 3-approximation. LPT sorts the intervals in decreasing order of length, so every interval in OPT that LPT rejects overlaps some LPT interval at least as long as itself; the disjoint OPT intervals overlapping a single LPT interval of length ℓ each have length at most ℓ, so their total length is less than 3ℓ. Hence 3·LPT ≥ OPT.

Example lower bound [BNR02]. Theorem 1: No adaptive priority algorithm can achieve an approximation ratio better than 3 for the interval scheduling problem with proportional profit on a single machine.

Proof of Theorem 1. Adversary's move: present the set of candidate intervals. Algorithm's move: the Algorithm selects an ordering; let i be the interval with highest priority.

Adversary's strategy. If the Algorithm decides not to schedule i: during the next round the Adversary removes all remaining intervals and schedules interval i itself. Alg's value = 0; Adv's value = i.

Adversary's strategy. If i = … and the Algorithm schedules i: during the next round the Adversary restricts the sequence. Alg's value = i; Adv's value = (i−1) + 3(i/3) + (i+1) = 3i.

Adversary's strategy. If i = … and the Algorithm schedules i: during the next round the Adversary restricts the sequence. Alg's value = 1; Adv's value = 3(1/3) + 2 = 3.

Adversary's strategy. If i = … and the Algorithm schedules i: during the next round the Adversary restricts the sequence. Alg's value = q; Adv's value = (q−1) + 3(q/3) + (q−1) = 3q − 2. But q is large.

Adversary's strategy. If i = … and the Algorithm schedules i: during the next round the Adversary restricts the sequence. Alg's value = i; Adv's value = 3i.

Some of our results: interval scheduling (value is width) — factor of 3, even for adaptive priority. The algorithm was messed up before it got a chance to reorder things.

Some of our results: weighted vertex cover — factor of 2 for adaptive priority.

Weighted Vertex Cover. [Joh74] greedy 2-approximation for WVC. Input: an instance G with weights on the nodes. Output: a solution S ⊆ V that covers all edges and minimizes the total weight of the taken nodes. Repeat until all edges are covered: take the vertex v minimizing ω(v)/(number of uncovered adjacent edges).
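A sketch of the greedy rule above; representing edges as frozensets of endpoints is my own choice, not from the slides:

```python
def greedy_vertex_cover(weights, edges):
    """weights: dict v -> ω(v); edges: set of frozenset({u, v})."""
    uncovered = set(edges)
    cover = set()
    while uncovered:
        # degree of each vertex among edges not yet covered
        deg = {v: sum(1 for e in uncovered if v in e) for v in weights}
        # take the vertex minimizing weight per uncovered adjacent edge
        v = min((v for v in deg if deg[v] > 0),
                key=lambda v: weights[v] / deg[v])
        cover.add(v)
        uncovered = {e for e in uncovered if v not in e}
    return cover
```

The ratio ω(v)/degree is recomputed after every choice, so this is an adaptive priority algorithm.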

Weighted Vertex Cover. With shortest path, a data item is an edge of the graph: γ = (e, ω(e)). With weighted vertex cover, a data item is a vertex of the graph: γ = (v, ω(v), adj_list(v)). (This is a stronger model than having the items be edges, because the algorithm gets more information from each node.) Theorem: No adaptive priority algorithm can achieve an approximation ratio better than 2.

Adaptive priority game between Solver and Adversary. The Adversary presents Γ0. In each round the Solver chooses the next data item γ under a freshly chosen ordering and commits to a decision σ, extending S_sol = {(γ_7, σ_7), (γ_4, σ_4), …}; the Adversary then removes items, yielding Γ1, Γ2, Γ3, …. The game ends when: 1. the Adversary outputs S_adv = {(γ_7, σ*_7), (γ_4, σ*_4), (γ_2, σ*_2)}; 2. the Solver is awarded payoff f(S_sol)/f(S_adv).

The Adversary chooses instances that are graphs K_{n,n}. The weight function is ω: V → {1, n²}.

The game. Data items: each node appears in Γ0 as two separate data items, with weights 1 and n². Solver's move: chooses a data item and commits to a decision. Adversary's move: removes from the next Γt the data item corresponding to the node just committed, and …

The Adversary's strategy is to wait until: Event 1 — the Solver accepts a node of weight n². Event 2 — the Solver rejects a node of any weight. Event 3 — the Solver has committed to all but one of the nodes on one side of the bipartition.

Event 1: the Solver accepts a node with ω(v) = n². The Adversary chooses side B of the bipartition as a cover and incurs cost n. The cost of a cover for the Solver is at least n² + n.

Event 2: the Solver rejects a node of any weight. The Adversary chooses side A of the bipartition as a cover. The Solver must then choose side B of the bipartition as a cover.

Event 3: the Solver commits to n−1 nodes with ω(v) = 1 on one side of K_{n,n}. The Adversary chooses side B of the bipartition as a cover and incurs cost n. The cost of a cover for the Solver is 2n.

Some of our results: weighted vertex cover — factor of 2 for adaptive priority.

Some of our results: facility location — factor of log n for adaptive priority.

Facility location problem. Instance: a set of cities and a set of facilities. The set of cities is C = {1, 2, …, n}. Each facility f_i has an opening cost cost(f_i) and connection costs for each city: {c_i1, c_i2, …, c_in}. Problem: open a collection of facilities S such that each city is connected to at least one facility. Objective function: minimize the opening and connection costs, min( Σ_{f_i ∈ S} cost(f_i) + Σ_{j ∈ C} min_{f_i ∈ S} c_ij ).
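The objective function above can be evaluated directly; a small helper (the names and data layout are mine, not from the slides) computing the cost of a candidate facility set S:

```python
def fl_cost(open_costs, conn, S):
    """open_costs[i]: opening cost of facility i.
    conn[i][j]: cost of connecting city j to facility i.
    S: set of opened facility indices (must be non-empty)."""
    opening = sum(open_costs[i] for i in S)
    n_cities = len(next(iter(conn.values())))
    # each city connects to its cheapest opened facility
    connection = sum(min(conn[i][j] for i in S) for j in range(n_cities))
    return opening + connection
```

For example, with two facilities of opening cost 4 and two cities, opening both facilities can beat opening one when each facility is cheap for only one city.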

[AB02] result Theorem: No adaptive priority algorithm can achieve an approximation ratio better than log(n) for facility location in arbitrary spaces

The Adversary presents the instance: Cities C = {1, 2, …, n}, where n = 2^k. Facilities: each facility has opening cost n; city connection costs are 1 or ∞; each facility covers exactly n/2 cities; cover(f_j) = {i ∈ C | c_ji = 1}. C_u denotes the set of cities not yet covered by the Algorithm's solution.

Adversary's strategy. At the beginning of each round t: the Adversary chooses S_t to consist of facilities f such that f ∈ S_t iff |cover(f) ∩ C_u| = n/2^t; the number of uncovered cities |C_u| is n/2^{t−1}. Two facilities are complementary if together they cover all cities in C. In every round t, S_t consists of complementary facilities.

The game. (Figure: the set of uncovered cities C_u.)

End of the game. Either the Algorithm opened log(n) facilities or it failed to produce a valid solution. The cost of the Algorithm's solution is n·log(n) + n. The Adversary opens two complementary facilities, incurring total cost 2n + n = 3n.

Some of our results: facility location — factor of log n for adaptive priority.