
11 -1 Chapter 12 On-Line Algorithms

11 -2 On-Line Algorithms On-line algorithms are used to solve on-line problems, in which the input arrives piece by piece. The disk scheduling problem: the requests to the disk server are not known in advance and arrive one by one. The paging problem: we do not know which pages will be accessed before our programs are executed. When the data arrive on-line, we must still take an action for each datum as soon as it arrives. Since no complete information is available, an action that seems correct now may turn out to be wrong later, and none of the actions can be reversed. On-line algorithms are therefore all approximation algorithms in the sense that they can never guarantee to produce optimal solutions.

11 -3 Competitive Analysis Let C_onl (C_off) denote the cost of executing an on-line (optimal off-line) algorithm on the same data set. If C_onl ≤ c·C_off + b, where b is a constant, we say that the performance ratio of this on-line algorithm is c and that the algorithm is c-competitive.
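As a quick illustration (not from the slides), the sketch below computes, from recorded per-instance costs, the smallest c for which C_onl ≤ c·C_off + b holds; the function name and the sample costs are assumptions.

```python
# Minimal sketch: given on-line and off-line costs measured on the same
# request sequences, find the smallest c with C_onl <= c * C_off + b.
def competitive_bound(online_costs, offline_costs, b=0.0):
    return max((c_onl - b) / c_off
               for c_onl, c_off in zip(online_costs, offline_costs))

# Hypothetical costs, consistent with a 2-competitive algorithm.
print(competitive_bound([10, 7, 12], [6, 4, 11]))   # 1.75
```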

11 -4 On-Line Euclidean Spanning Tree Problem Given a set of points in the plane, our goal is to construct a spanning tree on these points whose total length is as small as possible. The points are revealed one by one. Whenever a point is revealed, some action must be taken to connect this point to the tree constructed so far.

11 -5 A Greedy Algorithm for the On-Line Euclidean Spanning Tree Problem Assume that n points v_1, v_2,..., v_n are revealed in this order. When v_k arrives, add the shortest edge between v_k and v_1, v_2,..., v_{k−1} to the spanning tree constructed so far.
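A minimal sketch of this greedy rule (function and variable names are assumptions, not from the slides); each arriving point is compared with all earlier points, so the whole construction takes O(n²) time.

```python
# When point v_k arrives, connect it to the closest previously revealed point.
from math import dist   # Euclidean distance, Python 3.8+

def online_spanning_tree(points):
    """points are revealed in the given order; returns the edges and total length."""
    edges, total = [], 0.0
    for k in range(1, len(points)):
        # shortest edge between v_k and v_1, ..., v_{k-1}
        j = min(range(k), key=lambda i: dist(points[k], points[i]))
        edges.append((j, k))
        total += dist(points[k], points[j])
    return edges, total

print(online_spanning_tree([(0, 0), (3, 0), (3, 4), (0, 4)]))
# ([(0, 1), (1, 2), (2, 3)], 10.0): edge lengths 3 + 4 + 3
```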

11 -6 A Greedy Algorithm for the On-Line Euclidean Spanning Tree Problem Points arrive in the order specified.

11 -7 Analysis The algorithm is O(log n)-competitive. Let S denote the set of n points, let l denote the length of a minimal spanning tree constructed on S, and let T_onl denote the spanning tree constructed by our on-line algorithm.

11 -8 Analysis The kth largest edge of T_onl has length at most 2l/k, for 1 ≤ k ≤ n − 1. Let S_k be the set of points whose additions to T_onl caused T_onl to have edges of length larger than 2l/k. The length of an optimal traveling salesperson tour on S_k must be larger than |S_k|·(2l/k). Since the length of an optimal traveling salesperson tour on a set of points is at most twice the length of a minimal spanning tree on the same set of points, the length of a minimal spanning tree on S_k is greater than |S_k|·(l/k). Since the length of a minimal spanning tree on S_k is less than that on S, we have |S_k|·(l/k) < l, or equivalently, |S_k| < k.
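Summing the per-edge bound over k yields the total-length claim on the next slide; a short derivation, using l, n, and T_onl as defined above:

```latex
\[
  \mathrm{length}(T_{\mathrm{onl}})
  \;\le\; \sum_{k=1}^{n-1} \frac{2l}{k}
  \;=\; 2l \sum_{k=1}^{n-1} \frac{1}{k}
  \;=\; 2l\,H_{n-1}
  \;=\; O(l \log n).
\]
```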

11 -9 Analysis The total length of T_onl is thus at most l·O(log n); that is, (the length of T_onl)/l is O(log n). A lower bound on the competitive ratio of the problem is Ω(log n / log log n), so the algorithm is nearly optimal.

The On-Line k-Server Problem We are given a graph with n vertices in which each edge is associated with a positive length. There are k servers stationed at k of the vertices, where k < n. Given a sequence of requests, each located at some vertex, we must decide how to move the servers around to satisfy the requests. The cost of serving a request is the total distance moved by the servers to satisfy it.

The On-Line k-Server Problem Three servers s_1, s_2, and s_3 are located at vertices a, e, and g, respectively. Suppose a request arrives at vertex i. Then one possible move is to move s_2 to vertex i.

A Worst Case for the Greedy On-Line k-Server Algorithm The greedy algorithm always moves the server nearest to the request.
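The figure for this slide is not reproduced here; the sketch below (server positions and requests on a line are illustrative assumptions) shows the same effect: the purely greedy rule shuttles one server back and forth indefinitely while an off-line schedule pays a fixed cost once.

```python
# The purely greedy rule: always move the server nearest to the request.
def greedy_cost(servers, requests):
    servers = list(servers)
    cost = 0.0
    for r in requests:
        i = min(range(len(servers)), key=lambda j: abs(servers[j] - r))
        cost += abs(servers[i] - r)
        servers[i] = r
    return cost

# Two servers at 0 and 100; requests alternate between the nearby points 49 and 51.
print(greedy_cost([0, 100], [49, 51] * 500))   # 49 + 2 * 999 = 2047.0
# Greedy pays 2 per request forever, while an off-line schedule moves 0 -> 49
# and 100 -> 51 once (total 98) and pays nothing afterwards, so the ratio of
# the two costs grows without bound.
```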

The Modified Greedy On-Line k-Server Algorithm on Planar Trees A server s_i is active with respect to a request located at x if there is no other server in the interval (d_i, x], where d_i denotes the location of s_i. Method: when a request is located at x, move all servers that are active with respect to x continuously at the same speed towards x until one of them reaches x. If an active server becomes inactive during this movement, it halts.
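A rough sketch of this rule for the special case in which all points lie on a line (a path is the simplest planar tree); the positions used in the example are assumptions.

```python
# "Modified greedy" on a line: the nearest server on each side of the request
# is active; active servers move toward the request at the same speed until
# one of them reaches it.
def modified_greedy_cost(servers, requests):
    servers = list(servers)
    cost = 0.0
    for x in requests:
        left = [s for s in servers if s <= x]
        right = [s for s in servers if s > x]
        if left and right:
            i = servers.index(max(left))    # nearest server on the left
            j = servers.index(min(right))   # nearest server on the right
            d = min(x - servers[i], servers[j] - x)
            servers[i] += d                 # both active servers move distance d;
            servers[j] -= d                 # the closer one reaches x
            cost += 2 * d
        else:
            # all servers are on one side of x: only the nearest one is active
            i = min(range(len(servers)), key=lambda k: abs(servers[k] - x))
            cost += abs(servers[i] - x)
            servers[i] = x
    return cost

# On the worst case for the purely greedy rule, the servers settle at 49 and 51
# after the first request and nothing more is ever paid.
print(modified_greedy_cost([0, 100], [49, 51] * 500))   # 98.0
```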

Analysis Let Φ_i' denote the value of the potential function after the adversary moves in response to the ith request and before our algorithm makes any move for the ith request. Let Φ_i denote the value of the potential function after our algorithm makes its move for the ith request and before the (i + 1)th request arrives.

Analysis Let O_i and A_i denote the costs of our algorithm and of the adversary for the ith request, respectively, and let O and A denote the total costs of our algorithm and of the adversary after all requests are served. If we can prove that (1) Φ_i' − Φ_{i−1} ≤ β·A_i for 1 ≤ i ≤ n and some β, and (2) Φ_i − Φ_i' ≤ −α·O_i for 1 ≤ i ≤ n and some α, then summing over all i gives α·O ≤ β·A + Φ_0, so the algorithm is (β/α)-competitive.

Analysis At any time instant, let the k servers of our algorithm be located at b_1, b_2,..., b_k and the k servers of the adversary be located at a_1, a_2,..., a_k. Define a bipartite graph with vertex sets v_1, v_2,..., v_k and v_1', v_2',..., v_k', where v_i (v_i') represents b_i (a_i) and the weight of edge (v_i, v_j') is the distance between b_i and a_j. Let M_min denote the weight of a minimum weighted matching in this bipartite graph. Our potential function is defined as Φ = k·M_min + Σ_{i<j} d(b_i, b_j), where d(b_i, b_j) is the distance between our servers b_i and b_j.

Analysis Assume that q servers of our algorithm each move a distance d for the ith request, so that O_i = q·d. One can show that the adversary's move increases the potential by at most k·A_i, and that the move of our q servers decreases the potential by at least q·d = O_i. Finally, applying the summation argument above with α = 1 and β = k gives O ≤ k·A + Φ_0.

Analysis The algorithm is k-competitive. Since k is also a lower bound on the competitive ratio of the problem, the algorithm is optimal.

The Obstacle Traversal Problem There is a set of square obstacles whose sides are all parallel to the axes; the length of each side is at most 1. There are a starting point, denoted s, and a goal point, denoted t. The obstacles are not known in advance: the searcher learns about an obstacle only when it hits one. Our job is to find a path from s to t that avoids the obstacles and is as short as possible.

An Algorithm Based on the Balance Strategy Assume that the line from s to t makes an angle θ with the horizontal axis; let θ also denote the direction from s to t, and let d denote the distance between s and t.

An Algorithm Based on the Balance Strategy Three cases: Case 1: The searcher is traveling between obstacles. Case 2: The searcher hits the horizontal side of an obstacle. That is, it hits AD. Case 3: The searcher hits the vertical side of an obstacle. That is, it hits AB.

An Algorithm Based on the Balance Strategy Rule 1: When the searcher is traveling between obstacles, it travels in the direction θ; that is, it travels as if there were no obstacles. Rule 2: When the searcher hits the horizontal side AD of a square at a point E, it travels from E to the corner D and then goes up to a point F such that EF is parallel to the s-t line. Afterwards, it resumes the direction θ. The detour ratio is (|ED| + |DF|) / |EF| = cos θ + sin θ.
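The identity in Rule 2 comes from the right triangle with legs ED and DF and hypotenuse EF, where EF makes the angle θ with the horizontal; a short derivation:

```latex
\[
  |DF| = |ED|\tan\theta, \qquad |EF| = \frac{|ED|}{\cos\theta},
  \qquad\text{so}\qquad
  \frac{|ED| + |DF|}{|EF|}
  = (1 + \tan\theta)\cos\theta
  = \cos\theta + \sin\theta .
\]
```

The same computation gives the ratio stated in Rule 3.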

An Algorithm Based on the Balance Strategy Rule 3: If searcher P hits AB within the interval BG at a point H, it goes up to B and then travels to the right until it reaches a point I such that HI is parallel to the s-t line. Afterwards, it resumes the direction θ. The detour ratio is (|HB| + |BI|) / |HI| = cos θ + sin θ.

An Algorithm Based on the Balance Strategy Rule 4: If searcher P hits AB within the interval AG, it either goes up or goes down. If it goes up, it goes to B and turns to the right until it hits corner C, as shown in (a). If it goes down, it goes to A and turns to the right until it hits corner D, as shown in (b). After hitting the corner, it resumes the direction θ.

An Algorithm Based on the Balance Strategy Let d_1 = |JB| + |BC| and d_2 = |JA| + |AD| denote the lengths of the two possible detours from the hit point J. It was proved that at least one of the two corresponding detour ratios is no more than 3/2.

An Algorithm Based on the Balance Strategy We partition the interval AG into equal segments of a suitably chosen small length. A segment is labeled up if going up from its lowest point achieves a detour ratio of at most 3/2, and labeled down if going down from its lowest point does. A segment is "pure" if it has only one label, and "mixed" otherwise. Let J_i be the lowest point of the ith segment and let J_i' be the intersection of the line from J_i in the direction θ with side CD. If the ith segment is pure, we define ρ_i = |DJ_i'| / |CJ_i'| if the segment is labeled only up, and ρ_i = |CJ_i'| / |DJ_i'| if the segment is labeled only down.

An Algorithm Based on the Balance Strategy Rule 5: Assume that searcher P hits the interval AG at a point J that belongs to the ith segment. Case 1: J is above the s-t line. Check the label of the ith segment. If it is labeled down, move down. Otherwise, check the balance: if the balance is at least k, move down and subtract k from the balance; otherwise, add k to the balance and move up. Case 2: J is below the s-t line. Check the label of the ith segment. If it is labeled up, move up. Otherwise, check the balance: if the balance is at least k, move up and subtract k from the balance; otherwise, add k to the balance and move down.

An Algorithm Based on the Balance Strategy Rule 6: If searcher P reaches the same x-coordinate or y-coordinate as the goal, it goes directly to the goal.

Analysis The distance traveled by our searcher using this on-line algorithm is no more than 3d/2 when d is large. Our algorithm is optimal, because no on-line algorithm for this problem can have a competitive ratio less than 3/2.

The On-Line Bipartite Matching Problem A bipartite matching M of a bipartite weighted graph G = (V, E) with vertex bipartition R and B, each of cardinality n, is a subset of E in which no two edges share a vertex and every edge has one endpoint in R and one in B. The vertices in R are all known to us in advance, while the vertices in B are revealed one by one. After the ith vertex of B arrives, it must be matched with an unmatched vertex in R, and this decision cannot be changed later. Our goal is to keep the total weight of this on-line matching small.

A Lower Bound Let b_1,..., b_n be the n vertices of B. Let r_i, 1 ≤ i ≤ n, denote the vertex matched with b_i when b_i appears. The weight of (b_1, r_i) is 1 for all i. For i = 2, 3,..., n, the weight of (b_i, r_j) is 0 if j < i and 2 if j ≥ i.

A Lower Bound For any on-line algorithm with b_1, b_2,..., b_n revealed in this order, the total matching cost is 1 + 2(n − 1) = 2n − 1. The optimal cost for an off-line algorithm is 1: r_1 is matched with b_2 at cost 0, r_2 is matched with b_3 at cost 0,..., and r_n is matched with b_1 at cost 1. Therefore no on-line bipartite matching algorithm can achieve a competitive ratio less than 2n − 1.
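The construction above is adaptive: whichever free vertex the on-line algorithm picks for b_i becomes r_i, and the weights seen later are fixed accordingly. The sketch below (names and the greedy opponent are illustrative assumptions) plays this adversary against an arbitrary on-line rule and reports both costs.

```python
# Adaptive adversary for on-line bipartite matching, following the
# construction on the previous slides.
def adversary_game(n, online_choice):
    """online_choice(weights, free) returns the index of a free R-vertex."""
    matched = []                      # matched[j] = index of r_{j+1}
    online_cost = 0
    for i in range(n):                # b_1, ..., b_n arrive one by one
        if i == 0:
            weights = [1] * n         # w(b_1, r) = 1 for every r
        else:                         # w(b_i, r) = 0 if r was matched earlier, 2 otherwise
            weights = [0 if r in matched else 2 for r in range(n)]
        free = [r for r in range(n) if r not in matched]
        r = online_choice(weights, free)
        online_cost += weights[r]
        matched.append(r)
    # off-line optimum: b_{j+1} takes r_j at weight 0 and b_1 takes r_n at weight 1
    return online_cost, 1

greedy = lambda weights, free: min(free, key=lambda r: weights[r])
print(adversary_game(5, greedy))      # (9, 1): the on-line cost is 2n - 1 = 9
```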

An Algorithm Based on the Compensation Strategy Let M'_{i−1} denote our on-line matching before b_i is revealed. Let M_i be an optimal bipartite matching between {b_1, b_2,..., b_i} and R such that, among all optimal bipartite matchings between {b_1, b_2,..., b_i} and R, the difference |M_i − M'_{i−1}| is the smallest.

An Algorithm Based on the Compensation Strategy Let R_i denote the set of vertices of R that are matched in M_i. We can prove that R_i contains exactly one vertex that is not in R_{i−1}; call this new vertex r_i. Since in our on-line matching M'_{i−1}, r_j is already matched with b_j for j = 1, 2,..., i − 1, we compensate by matching r_i with b_i, which gives the new on-line matching M'_i.
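A rough sketch of the strategy (hypothetical names; it uses SciPy's linear_sum_assignment to recompute an optimal off-line matching at every step and omits the tie-breaking rule that keeps M_i closest to the previous matching).

```python
# Compensation strategy sketch: after b_i arrives, recompute an optimal
# matching M_i of {b_1, ..., b_i} into R and match b_i with a vertex of R
# that M_i uses but the on-line matching does not use yet.
import numpy as np
from scipy.optimize import linear_sum_assignment

def compensation_matching(weights):
    """weights[i][r] = weight of edge (b_{i+1}, r); rows are revealed one by one."""
    online = {}                                   # b index -> r index
    for i in range(len(weights)):
        _, cols = linear_sum_assignment(np.array(weights[: i + 1]))
        used_by_optimal = set(cols)               # R_i: the R-vertices used by M_i
        new_r = next(r for r in used_by_optimal if r not in online.values())
        online[i] = int(new_r)                    # "compensate": match b_i with the new vertex
    return online, sum(weights[i][r] for i, r in online.items())

# Tiny illustrative instance (the weights are assumptions).
w = [[1, 4, 5],
     [2, 1, 6],
     [3, 2, 2]]
print(compensation_matching(w))                   # ({0: 0, 1: 1, 2: 2}, 4)
```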

An Algorithm Based on the Compensation Strategy

Analysis Our algorithm is (2n − 1)-competitive. By the lower bound above, our algorithm is optimal.

The On-Line m-Machine Problem We are given m identical machines and jobs are arriving one by one. The execution time for the ith job is known when the ith job arrives. As soon as a job arrives, it must be assigned immediately to one of the m machines. The goal is to schedule the jobs nonpreemptively on the m machines so as to minimize the makespan, the completion time of the last job.

List Algorithm Assign each arriving job to the machine that currently has the least total processing time. (This algorithm is (2 − 1/m)-competitive.) Example: we are given six jobs, denoted j_1, j_2,..., j_6, whose execution times are 1, 2,..., 6, respectively. Solution by the List Algorithm Off-Line Optimal Solution
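A minimal sketch of the List algorithm; the machine count m = 3 used in the example call is an assumption, since the slide's figures are not reproduced here.

```python
# Each arriving job goes to the machine with the least total processing time.
import heapq

def list_schedule(m, jobs):
    loads = [(0, machine) for machine in range(m)]   # (current load, machine id)
    heapq.heapify(loads)
    for t in jobs:
        load, machine = heapq.heappop(loads)         # least loaded machine
        heapq.heappush(loads, (load + t, machine))
    return max(load for load, _ in loads)            # makespan

print(list_schedule(3, [1, 2, 3, 4, 5, 6]))
# 9, while an off-line optimum achieves 7 with the pairs {1,6}, {2,5}, {3,4}
```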

An Algorithm Based on the Moderation Strategy Let a_i denote the execution time of the ith arriving job, 1 ≤ i ≤ n. Assume m ≥ 70. Let α ∈ [0.445 − 1/(2m), 0.445 + 1/(2m)] be chosen so that αm is an integer. At time i, that is, when i jobs have been scheduled, let R_i be the subsequence of the first αm machines on the list and L_i the subsequence of the last m − αm machines.

An Algorithm Based on the Moderation Strategy Let the sequences of heights of R_i and L_i be denoted Rh_i and Lh_i, respectively. Let A(P) and M(P) denote the average and the minimum, respectively, of the heights in P, where P is a sequence of heights. Method: when job i + 1 arrives, place it on the first machine in L_i if M(Lh_i) + a_{i+1} ≤ (2 − ε)·A(Rh_i); otherwise, place it on the first machine on the list R_i, the one with the least height overall. If necessary, permute the list of machines so that the heights remain nondecreasing. This algorithm is (2 − ε)-competitive for m ≥ 70, for a small constant ε > 0.
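A rough sketch of the placement rule described above; the parameter α and the threshold factor c are taken as inputs, and the default values below are illustrative assumptions rather than the exact constants from the analysis (which also assumes m ≥ 70).

```python
# Moderation strategy sketch: keep machine heights sorted in nondecreasing
# order, split them into R (the first alpha*m machines) and L (the rest), and
# place each job either on the shortest machine of L or on the shortest
# machine overall, depending on the stated threshold test.
def moderation_schedule(m, jobs, alpha=0.445, c=1.9):
    heights = [0.0] * m
    split = max(1, int(alpha * m))               # |R| = alpha * m
    for a in jobs:
        R = heights[:split]
        if heights[split] + a <= c * (sum(R) / len(R)):   # M(Lh) + a <= c * A(Rh)
            heights[split] += a                  # first (shortest) machine of L
        else:
            heights[0] += a                      # least loaded machine overall
        heights.sort()                           # keep heights nondecreasing
    return max(heights)                          # makespan

print(moderation_schedule(70, list(range(1, 101))))
```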