Called the Interval Scheduling Problem. It is the simplest version of a whole class of scheduling problems: – Can add weights. – Can add multiple resources. – Can ask to schedule all the requests so as to minimize some objective function. – … Interval Scheduling.
How do we pick the largest number of non-overlapping intervals? In principle: – Decide which interval to pick first, say I1, based on some rule. – Remove all intervals that overlap with I1 from consideration. – Pick the next one using the same rule, and so on. Such rule-based algorithms, especially when the rule is local, are called greedy algorithms. – Careful: they may not always work. Interval Scheduling
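This rule-based template can be written down as a short sketch. The Python below is illustrative and not from the slides; the particular rule is passed in as a parameter:

```python
def greedy_select(intervals, rule):
    """Generic rule-based selection: repeatedly apply `rule` to pick an
    interval, then discard it and everything that overlaps it."""
    chosen = []
    remaining = list(intervals)          # each interval is a (start, finish) pair
    while remaining:
        pick = rule(remaining)           # e.g. earliest start, shortest duration, ...
        chosen.append(pick)
        remaining.remove(pick)
        # keep only the intervals that do not overlap the picked one
        remaining = [iv for iv in remaining
                     if iv[1] <= pick[0] or iv[0] >= pick[1]]
    return chosen
```

The rules discussed next are different choices of the rule parameter.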
Some first rules for this problem. Suppose we sort the intervals by their start times. Rule 1: Among the available intervals, pick the one that starts the earliest. – Intuitively, this should maximize the usage of the resource. – But intuition can often be misleading. Interval Scheduling
Early Start rule does not work! Interval Scheduling
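The counterexample figure is not reproduced here; a small concrete instance (illustrative data, not from the slides) makes the same point:

```python
# One long request that starts earliest overlaps three short,
# mutually compatible requests.
intervals = [(0, 10), (1, 2), (3, 4), (5, 6)]

# Rule 1: always take the remaining request with the earliest start time.
chosen, remaining = [], sorted(intervals)         # sorted by start time
while remaining:
    pick = remaining.pop(0)                       # earliest start among the rest
    chosen.append(pick)
    remaining = [iv for iv in remaining if iv[0] >= pick[1]]

print(chosen)      # [(0, 10)]: Rule 1 accepts only one request
# An optimal schedule accepts (1, 2), (3, 4), (5, 6), i.e. three requests.
```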
Why did Rule 1 fail? – At least from the picture, we see that the request that starts earliest may not release the resource early enough. – We then miss several other requests. Rule 2: Shortest Duration – among the available intervals, pick the one of shortest duration. – It should free the resource as quickly as possible. – Does it work? – Question: Check whether this works. Interval Scheduling
Rule 2 also does not work. Interval Scheduling
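Again, a small concrete instance (illustrative data, not from the slides) shows the failure:

```python
# The shortest request overlaps both of the longer ones, which are
# compatible with each other (one ends exactly when the other begins).
intervals = [(0, 4), (3, 5), (4, 8)]

# Rule 2: always take the remaining request with the shortest duration.
chosen, remaining = [], list(intervals)
while remaining:
    pick = min(remaining, key=lambda iv: iv[1] - iv[0])    # shortest duration
    chosen.append(pick)
    remaining = [iv for iv in remaining
                 if iv[1] <= pick[0] or iv[0] >= pick[1]]   # keep non-overlapping only

print(chosen)      # [(3, 5)]: Rule 2 accepts only one request
# An optimal schedule accepts (0, 4) and (4, 8), i.e. two requests.
```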
The second rule did not work because the interval with the shortest duration may itself make several mutually non-overlapping intervals unavailable. Yet another attempt. Rule 3: Early Finish – pick the request that finishes the earliest. – Intuitively, this frees the resource for future requests as early as possible. – Skeptically: will it work, given the failure of the other rules? Interval Scheduling
Algorithm EarlyFinish(I)
Begin
  Sort the requests by increasing order of finish times as S = { r1, r2, …, rn }
  i = 1; A = ∅
  While S is not empty do
    Add ri to the solution A
    Delete from S all requests that overlap with ri
    i = Next(i)   // index of the next compatible request
  End-while
  Return A
End
Interval Scheduling
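A runnable version of this pseudocode, as a Python sketch; the explicit deletion step is replaced by simply skipping incompatible requests, which is equivalent:

```python
def early_finish(requests):
    """Earliest-finish-time greedy: `requests` is a list of (start, finish)
    pairs. Returns a maximum-size set of mutually compatible requests."""
    schedule = []
    last_finish = float("-inf")
    # Sort by increasing finish time, i.e. f(r1) <= f(r2) <= ... <= f(rn).
    for start, finish in sorted(requests, key=lambda r: r[1]):
        if start >= last_finish:        # compatible with everything accepted so far
            schedule.append((start, finish))
            last_finish = finish
        # else: this request overlaps the last accepted one, so skip (delete) it
    return schedule

print(early_finish([(0, 10), (1, 2), (3, 4), (5, 6)]))   # [(1, 2), (3, 4), (5, 6)]
```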
Question: What is the runtime of the algorithm? Give a brief justification. Interval Scheduling
In our proof, we let O be any optimal (best possible) solution: O = {o1, o2, …, om} is the set of requests accepted by the optimal solution. Let A be the solution produced by our algorithm (according to Rule 3): A = {a1, a2, …, ak}. Ideally, we would have A = O, meaning that all elements of A are in O and vice versa. That may not always be true, since there can be several optimal solutions. We therefore settle for showing |A| = |O|. Interval Scheduling
We therefore prove that |A| = |O|. To do so, we show that at every step our algorithm releases the resource on or before the optimal solution O does. – In a way, the algorithm "stays ahead" of the game. We then use this property to show, via contradiction, that indeed |A| = |O|. Interval Scheduling
Proof that the Algorithm Stays Ahead, by induction. Let s(i) and f(i) denote the start and finish times of request i. List the elements of A in the order the algorithm accepted them, and let us also sort the elements of O in increasing order of their finish times. We want to show that for every i between 1 and k (both inclusive), f(ai) <= f(oi). Base case: i = 1. By the choice of the algorithm, we pick the request with the earliest finish time overall. So f(a1) <= f(o1), always. Interval Scheduling
Hypothesis: Let us assume that for the ith request in A, ai, it holds that f(ai) <= f(oi). Step: Let us consider ai+1. We have s(ai+1) >= f(ai) and s(oi+1) >= f(oi). By the hypothesis, we also have that f(ai) <= f(oi). Putting these together, we have that s(oi+1) >= f(oi) >= f(ai). So both requests ai+1 and oi+1 are available at the (i+1)st step of the algorithm. Since the algorithm picked ai+1, the available request with the earliest finish time, rather than oi+1, it follows that f(ai+1) <= f(oi+1). Interval Scheduling
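The inductive step can be restated as a short chain of implications; this just summarizes the argument above in symbols:

```latex
\[
  s(o_{i+1}) \;\ge\; f(o_i) \;\ge\; f(a_i)
  \quad\Longrightarrow\quad
  o_{i+1} \text{ is still available at step } i+1
  \quad\Longrightarrow\quad
  f(a_{i+1}) \;\le\; f(o_{i+1}),
\]
% the last implication holds because the algorithm picks the available
% request with the earliest finish time.
```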
Why does the above help? It tells us that the algorithm CAN accept as many requests as the best solution did. Now we show that |A| = |O|. Suppose, for contradiction, that |A| = k and |O| = m with m > k. Apply the above result to the request ak: f(ak) <= f(ok). Since s(ok+1) >= f(ok) >= f(ak), the request ok+1 is still available to the algorithm after it accepts ak. So the algorithm could not have stopped without considering ok+1, a contradiction, since it stops only when no compatible request remains. Interval Scheduling
Our algorithm follows what is called a greedy algorithm design. The design principle is based on the following ideas: – The solution is built incrementally. – At each step, the algorithm can look at the currently built-up solution and the input, – and make a decision (a decision that cannot be changed later). The decision is made using a rule that is called the greedy rule. The Algorithm Design
Notice that not all greedy rules lead to the optimal solution. Therefore, in general, we need to prove that the greedy rule being used works. It is good to know that there are also proof design strategies: – Algorithm Stays Ahead is one such strategy. – Structure-based proof. – Exchange argument. The Algorithm Design
Consider the following scheduling problem. It occurs in many settings such as register allocation, classroom allocation, coloring, … Formally, there are requests with start and finish times. We have to satisfy all the requests using as few resources as possible. The constraint is that the same resource cannot be used by more than one request at any given time. Example – Structure Based
There are several ways to solve this problem in the literature. One can convert it to a problem on graphs – called Interval Graph Coloring. We will not use graphs here, but the solutions have much in common. Resource Allocation
How many resources would ANY solution need? For each request i, denote the interval of the request as the line segment from s(i) to f(i). Define the depth at a point in time as the number of intervals that pass over that point. Define the depth of an input as the maximum depth over all points in time. Since a resource cannot be used simultaneously by overlapping intervals, and all intervals passing over a common point pairwise overlap, any solution needs at least depth(Input) many resources. Resource Allocation
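The depth can be computed by a sweep over start and finish events. The Python below is an illustrative sketch (names are not from the slides), treating a request that starts exactly when another finishes as non-overlapping:

```python
def depth(intervals):
    """Maximum number of intervals that are live at any single point in time."""
    events = []
    for s, f in intervals:
        events.append((s, +1))    # a request begins: one more interval is live
        events.append((f, -1))    # a request ends: one fewer interval is live
    events.sort(key=lambda e: (e[0], e[1]))   # at equal times, process finishes first
    live = best = 0
    for _, delta in events:
        live += delta
        best = max(best, live)
    return best

print(depth([(0, 4), (1, 5), (2, 6), (5, 7)]))   # 3
```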
The lower bound on the number of resources is a structural property: it does not depend on what algorithm is used. Even the best possible (optimal) solution needs that many resources. Now we should just think of designing an algorithm that uses no more than this number of resources. – Matching the lower bound then also works as a proof of optimality. Resource Allocation
Our algorithm is based on the following simple rule: start a new resource only if no previously started resource is available. Viewed as steps of the greedy algorithm: – The incremental step is to assign a resource to the current request. – The algorithm looks at the currently available resources, and – DECIDES whether to start one more resource. – Of course, once started, a resource cannot be taken back. Resource Allocation
Question: Express the above as an algorithm and state and justify its runtime. Resource Allocation
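One possible way to express this rule is sketched below in Python (an illustrative sketch, not necessarily the formulation intended by the slides; it keeps a min-heap of finish times, one per open resource, and the runtime justification is left as asked above):

```python
import heapq

def allocate(requests):
    """Assign each (start, finish) request to a resource, opening a new
    resource only when no already-open resource is free at the start time."""
    free_at = []          # min-heap of (finish_time, resource index) for open resources
    assignment = []       # (request, resource index) pairs
    resources = 0
    for start, finish in sorted(requests):         # process requests by start time
        if free_at and free_at[0][0] <= start:
            _, res = heapq.heappop(free_at)        # reuse the earliest-freed resource
        else:
            res = resources                        # no free resource: start a new one
            resources += 1
        heapq.heappush(free_at, (finish, res))
        assignment.append(((start, finish), res))
    return resources, assignment

used, plan = allocate([(0, 4), (1, 5), (2, 6), (5, 7)])
print(used)    # 3, which matches the depth of this input
```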
We will show that our algorithm uses exactly d = depth(Input) many resources. One can in fact argue that whenever our algorithm opens its dth resource, for any d, there is a point in time covered by d intervals, so the input has depth at least d. Resource Allocation
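As a quick sanity check of this claim (an illustrative test only; it reuses the depth() and allocate() sketches given earlier, which are assumptions of this note and not part of the slides):

```python
import random

random.seed(0)
for _ in range(1000):
    reqs = []
    for _ in range(random.randint(1, 8)):
        s = random.randint(0, 20)
        reqs.append((s, s + random.randint(1, 10)))
    # number of resources opened by the greedy rule == depth of the input
    assert allocate(reqs)[0] == depth(reqs)
print("greedy resource count matched the depth on all random inputs")
```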
This example is still about scheduling. It turns out that scheduling is a vast and fertile area of research – more papers are being written even now… Recall the proof of our Generic MST Algorithm: – We modified an MST T to get another tree T' such that Wt(T') <= Wt(T). – Moreover, T' is (closer to) what the algorithm builds. – So we modify the optimal solution to make it look like our solution, without diluting its quality. An Example – Exchange Argument