1
Scheduling Master-Slave Multiprocessor Systems Professor: Dr. G S Young Speaker: Darvesh Singh
2
Abstract
1. Define the master-slave multiprocessor scheduling model.
2. Provide several applications of the model.
3. Develop O(n log n) algorithms for some of the problems formulated.
4. Show that some of the problems are NP-hard.
3
Introduction
The problem of scheduling a multiprocessor computer system has received considerable attention. Here, a model is developed to schedule a parallel computer system in which the parallel computer operates under the control of a host processor. The host processor is referred to as the master processor, and the processors in the parallel computer are referred to as the slave processors, e.g. the nCube hypercube. When programming such a system, one writes a program that runs on the master computer. This is a sequential program that spawns parallel tasks to be run on the slave processors. The number of parallel tasks spawned is always less than or equal to the number of slave processors.
4
Execution
In the execution of such a computer system there are, in general, three types of time intervals:
● only the master is active
● only slaves are active
● both the master and slaves are active
To execute each task on a slave, three activities are needed:
● Pre-processing: the work the master has to do to collect the data needed by the slave, including the overhead involved in initiating the transfer of this data as well as of the code to be run by the slave.
● Slave work: the work the slave must do to complete the assigned computational task: receive the data and code from the master, perform the computation, and transfer the results back to the master. Delays may be included.
● Post-processing: the work the master must do to receive the results and store them in the desired format. It also includes any checking or data-combining work the master may do on the results.
Examples: a) multiplication of matrices, b) computer vision or VLSI CAD workloads, c) a single processor using the fork-and-join method.
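A minimal sketch of the per-job data implied by these three activities (the field names a, b, c anticipate the task-time notation introduced on later slides; illustrative Python, not code from the paper):

```python
from dataclasses import dataclass

@dataclass
class Job:
    a: float  # pre-processing time on the master (collect data, initiate transfer)
    b: float  # slave work time (receive data/code, compute, return results, delays)
    c: float  # post-processing time on the master (receive, check, combine results)
```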
5
Some more examples
Industrial settings:
1. A consolidator receives orders to manufacture. The pre-processing consists of:
   1. selecting the slave agencies
   2. assembling the raw material
   3. loading the trucks, inspecting the consignment, and shipping it.
2. The slave agencies then need to:
   1. wait for the raw material
   2. inspect the goods
   3. manufacture and load the goods onto the trucks
   4. perform an inspection and ship.
   These activities, together with the delays, represent the slave work. When the goods arrive at the consolidator, they are inspected and inventoried; this represents the post-processing.
3. Essentially the same process takes place in certain maintenance/repair environments and industries.
6
The master-slave scheduling model defined here has the following attributes:
1. There is a single master processor.
2. There are as many slave processors as there are parallel jobs.
3. Associated with each job are three tasks: pre-processing, slave work, and post-processing, which must be performed in that order.
7
The master-slave scheduling model may be regarded as a generalized job shop, as described below:
1. The job shop has two classes of machines: master and slave.
2. There is exactly one master machine, and the number of slave machines equals the number of jobs.
3. Each job has three tasks to be done in order: the 1st and 3rd on the master (M), the 2nd on a slave (S).
Let a_i > 0, b_i > 0 and c_i > 0, respectively, denote the times needed to perform the three tasks associated with job i, and let n denote the number of jobs as well as the number of slaves.
Figure 1(a) shows a possible schedule for the case n = 2, (a_1, b_1, c_1) = (2, 6, 1), and (a_2, b_2, c_2) = (1, 2, 3).
8
In this schedule the pre-processing of job 1 is handled first by the master, and all other tasks begin at the earliest possible time (M = master processor; S1, S2 = slaves). The finish time is 9, the earliest possible for Figure 1(a). The schedule that results when the master pre-processes job 2 first, again with all other tasks begun at the earliest possible time, is shown in Figure 1(b); its finish time is 10. The mean finish time is 8.5 for schedule (a) and 8 for schedule (b). We use the notation a_i, b_i, c_i to denote both the tasks of job i and the times needed to complete these tasks. When scheduling a parallel system using this model, we are interested in schedules that minimize the finish time.
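The following sketch (illustrative Python, not code from the paper) reproduces these numbers: it simulates a schedule in which the master first does all pre-processing in the given order and every remaining task starts at the earliest possible time, as in both schedules of Figure 1.

```python
def earliest_start_schedule(jobs, pre_order):
    """Master pre-processes jobs in pre_order; each slave starts as soon as its
    pre-processing finishes; each post-processing task starts as soon as its
    slave is done and the master is free.  jobs[i] = (a_i, b_i, c_i).
    Returns (finish time, mean finish time)."""
    t = 0
    slave_done = {}
    for i in pre_order:
        a, b, _ = jobs[i]
        t += a                                  # master busy with pre-processing of job i
        slave_done[i] = t + b                   # slave i runs immediately afterwards
    finish = {}
    master_free = t
    for i in sorted(pre_order, key=slave_done.get):   # post-process as slaves complete
        finish[i] = max(master_free, slave_done[i]) + jobs[i][2]
        master_free = finish[i]
    return max(finish.values()), sum(finish.values()) / len(finish)

jobs = [(2, 6, 1), (1, 2, 3)]                   # the Figure 1 instance
print(earliest_start_schedule(jobs, [0, 1]))    # (9, 8.5)  -- schedule (a)
print(earliest_start_schedule(jobs, [1, 0]))    # (10, 8.0) -- schedule (b)
```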
9
NOTE: In Figure 1, in both schedules, once the processing of a job begins the job is processed continuously until completion. Schedules with this property are said to have no wait in process, a requirement that arises in industrial applications.
DISCIPLINES
● Neither of the schedules uses preemption, so preemptions on the slave processors are unnecessary.
● Another interesting feature of the schedules of Figure 1 is that in one the post-processing is done in the reverse order of the pre-processing, while in the other the pre- and post-processing orders are the same.
● E.g. the reverse order could simplify the post-processing if the master uses a stack to hold job information; similarly, if the master uses a queue to maintain this information, we might require the post-processing to be done in the same order as the pre-processing.
● Another possible discipline requires the master to complete all the pre-processing tasks before beginning the first post-processing task.
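As a concrete reading of the no-wait-in-process property, the hypothetical helper below checks that each job's three tasks run back to back (jobs[i] = (a_i, b_i, c_i); the start-time lists are assumptions for illustration):

```python
def has_no_wait_in_process(jobs, pre_start, post_start):
    """A schedule has no wait in process iff, for every job i, the slave task
    starts right after pre-processing and post-processing starts right after
    the slave task, i.e. post_start[i] == pre_start[i] + a_i + b_i."""
    return all(post_start[i] == pre_start[i] + a + b
               for i, (a, b, _) in enumerate(jobs))
```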
10
Moreover, for industrial settings one could generalize the model to permit several master processors. In both the computer and industrial settings one could have nonhomogeneous slave processors; in this case, each slave task has a list of slaves on which it can be run and the time needed on each slave.
Various kinds of schedule optimization problems arise:
● Obtaining MFTNW (minimum finish time, no wait in process) schedules is shown to be NP-hard in the following cases:
o each job's pre-processing must be done before its post-processing;
o the pre-processing and post-processing orders are required to be the same.
● O(n log n) algorithms are developed to minimize the finish time in the following cases:
o the pre- and post-processing orders are the same;
o the post-processing order is the reverse of the pre-processing order.
11
No Wait In Process
The NP-hardness proofs use the subset sum problem, which is known to be NP-hard [GARE79].
Input: a collection of positive integers x_i, 1 ≤ i ≤ n, and a positive integer M.
Output: "Yes" iff there is a subset with sum exactly equal to M.
● In the no-wait case, the master processor cannot preempt any job, as such a preemption would violate the no-wait constraint. Since a_{n+1} > c_{n+1} > b_{n+2}, the pre- and post-processing tasks of job n+1 cannot be done while a slave is working on job n+2, and vice versa. The finish time of every no-wait schedule is therefore at least f, the total task time of these two jobs.
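To make the subset sum decision problem concrete, here is a small pseudo-polynomial reachable-sums sketch (illustration only; it plays no role in the NP-hardness reduction itself):

```python
def subset_sum(xs, M):
    """Return True iff some subset of the positive integers xs sums exactly to M."""
    reachable = {0}                      # sums achievable with the items seen so far
    for x in xs:
        reachable |= {s + x for s in reachable if s + x <= M}
    return M in reachable

print(subset_sum([3, 5, 7, 11], 12))     # True  (5 + 7)
print(subset_sum([3, 5, 7, 11], 13))     # False
```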
12
There are exactly two templates of schedules with this length: one has job n+1 processed before job n+2, and the other has n+2 preceding n+1.
Figure 2: Template for the NP-hardness proof.
13
Cases
Case 1: There is at least one job whose pre-processing is done before a_{n+1} and whose post-processing is done after a_{n+1}.
Case 2: There is at least one job whose pre- and post-processing are both done before a_{n+1}.
Case 3: Task a_{n+1} is the first task scheduled (Figure 3(b)).
Figure 3: Templates for the order-preserving NP-hardness proof.
14
Same Pre- & Post-Processing Orders
Here we develop an O(n log n) algorithm to construct an order-preserving minimum finish time (OPMFT) schedule. Without loss of generality, we need only consider schedules satisfying the following restrictions:
R1: The schedules are non-preemptive.
R2: Slave tasks begin as soon as their corresponding pre-processing tasks are complete.
R3: Each post-processing task begins as soon after the completion of its slave task as is consistent with the order-preserving constraint.
Some properties of order-preserving schedules that satisfy these assumptions are developed.
Definition: A canonical order-preserving schedule (COPS) is an order-preserving schedule in which (a) the master processor completes the pre-processing tasks of all jobs before beginning any of the post-processing tasks, and (b) the pre-processing tasks begin at time zero and complete at time ∑_{i=1}^{n} a_i.
Because of restrictions R1, R2 and R3, every COPS is uniquely described by giving the order in which the pre-processing is done.
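Since a COPS is determined by its pre-processing order alone, its finish time can be computed directly. Below is an illustrative sketch under R1–R3 (it only evaluates a given order; it is not the paper's O(n log n) ordering rule):

```python
def cops_finish_time(jobs, order):
    """Finish time of the canonical order-preserving schedule (COPS) for a given
    pre-processing order.  jobs[i] = (a_i, b_i, c_i).  All pre-processing is done
    first, back to back from time zero (R1, R2); post-processing uses the same
    order, each task starting as early as the constraints allow (R3)."""
    slave_done = {}
    t = 0
    for i in order:                      # master: pre-processing back to back from time 0
        a, b, _ = jobs[i]
        t += a
        slave_done[i] = t + b            # slave i starts immediately (R2)
    master_free = t                      # = sum of all a_i: all pre-processing done first
    for i in order:                      # post-processing in the same order
        master_free = max(master_free, slave_done[i]) + jobs[i][2]
    return master_free

jobs = [(2, 6, 1), (1, 2, 3)]            # the Figure 1 instance
print(cops_finish_time(jobs, [1, 0]))    # 10 -- matches Figure 1(b), which is order preserving
```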
15
Reverse Order Post-Processing
While there are no-wait-in-process master-slave instances that are infeasible when the post-processing order is required to be the reverse of the pre-processing order, this is not the case when the no-wait constraint is removed. For any given pre-processing permutation σ, we can construct a reverse-order schedule as below:
1. The master pre-processes the n jobs in the order σ.
2. Slave i begins the slave processing of job i as soon as the master completes its pre-processing.
3. The master begins the post-processing of the last job (say k) in σ as soon as its slave task is complete.
4. The master begins the post-processing of job j ≠ k at the later of the two times:
a. when it has finished the post-processing of the successor of j in σ;
b. when slave j has finished b_j.
Schedules constructed in the above manner are referred to as canonical reverse-order schedules (CROS). If σ is given, then the corresponding CROS is unique. It is easy to establish that every minimum finish-time reverse-order schedule is a CROS.
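A sketch of steps 1–4 above (illustrative Python; σ is passed as a list of job indices and jobs[i] = (a_i, b_i, c_i)):

```python
def cros_finish_times(jobs, sigma):
    """Canonical reverse-order schedule (CROS) for pre-processing order sigma:
    pre-processing in sigma's order, post-processing in the reverse order."""
    slave_done = {}
    t = 0
    for i in sigma:                       # step 1: master pre-processes in order sigma
        a, b, _ = jobs[i]
        t += a
        slave_done[i] = t + b             # step 2: slave i starts as soon as a_i is done
    finish = {}
    master_free = t                       # master is free once all pre-processing is done
    for i in reversed(sigma):             # steps 3-4: post-processing in reverse order,
        start = max(master_free, slave_done[i])   # as soon as the slave finishes and the
        finish[i] = start + jobs[i][2]            # previously post-processed job is done
        master_free = finish[i]
    return finish                         # completion time of each job's post-processing

jobs = [(2, 6, 1), (1, 2, 3)]             # the Figure 1 instance: its CROS is schedule (a)
print(cros_finish_times(jobs, [0, 1]))    # {1: 8, 0: 9} -> finish time 9
```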
16
Conclusion
● We have introduced the master-slave scheduling model.
● We have shown that obtaining minimum finish-time schedules under the no-wait-in-process constraint is NP-hard both when the schedule is required to be order preserving and when no constraint is imposed between the pre- and post-processing orders.
● The no-wait-in-process minimum finish time problem is solvable in O(n log n) time when the post-processing order is required to be the reverse of the pre-processing order.
17
References
Sartaj Sahni, "Scheduling Master-Slave Multiprocessor Systems," Computer and Information Sciences Department, University of Florida.
[GARE79] M. R. Garey and D. S. Johnson, Computers and Intractability: A Guide to the Theory of NP-Completeness, W. H. Freeman, 1979.
18
Thank You