Scheduling Master-Slave Multiprocessor Systems. Professor: Dr. G. S. Young. Speaker: Darvesh Singh.


Abstract 1. Define the master-slave multiprocessor scheduling model. 2. Provide several applications of the model. 3. Develop O(n log n) algorithms for some of the problems formulated. 4. Show that some of the problems are NP-hard.

Introduction The problem of scheduling a multiprocessor computer system has received considerable attention. Here, a model is developed to schedule a parallel computer system in which the parallel computer operates under the control of a host processor. The host processor is referred to as the master processor, and the processors in the parallel computer are referred to as the slave processors (e.g., the nCube hypercube). When programming such a system, one writes a program that runs on the master computer. This is a sequential program that spawns parallel tasks to be run on the slave processors. The number of parallel tasks spawned is always less than or equal to the number of slave processors.

Execution During the execution of such a computer system there are, in general, three types of time intervals: ●only the master is active ●only a slave is active ●both the master and a slave are active. To execute each task on a slave, three activities are needed: ●Pre-processing: the work the master must do to collect the data needed by the slave; it includes the overhead involved in initiating the transfer of this data as well as of the code to be run by the slave. ●Slave work: the work the slave must do to complete the assigned computational task: receive the data and code from the master, perform the computation, and transfer the results back to the master. Delays may occur. ●Post-processing: the work the master must do to receive the results and store them in the desired format; it also includes any checking or data-combining work the master may do on the results. Examples: a) multiplication of matrices, b) computer vision or VLSI CAD computations, c) a single processor using the fork-and-join method.
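The three per-job activities can be modeled as a simple record. This is a sketch of our own (the class and field names are not from the source); the fields follow the a, b, c time notation used for jobs in this model.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Job:
    a: int  # pre-processing time on the master
    b: int  # slave work time (including any transfer delays)
    c: int  # post-processing time on the master

# a hypothetical job: 2 units of pre-processing, 6 of slave work, 1 of post-processing
job = Job(a=2, b=6, c=1)
```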

Some more examples Industrial settings: 1. A consolidator receives orders to manufacture. The consolidator (the master) must: a. select slave agencies, b. assemble the raw material, c. load the trucks, inspect the consignment, and ship. 2. The slave processors (the agencies) need to: a. wait for the raw material, b. inspect the goods, c. manufacture and load the goods on the trucks, d. perform inspection and ship. These activities, together with the delays, represent the slave work. When the goods arrive at the consolidator, they are inspected and inventoried; this represents the post-processing. 3. The same process takes place in certain maintenance/repair environments and industries.

The master-slave scheduling model defined has the following attributes: 1. there is a single master processor, 2. there are as many slave processors as parallel jobs, 3. associated with each job there are three tasks: pre-processing, slave work, and post-processing, performed in that order.

The master-slave scheduling model may be regarded as a generalized job shop as described below: 1. the job shop has two classes of machines: master and slave, 2. there is exactly one master machine, and the number of slave machines equals the number of jobs, 3. each job has three tasks to be done in order: the 1st and 3rd on the master (M), the 2nd on a slave (S). Let a_i > 0, b_i > 0 and c_i > 0, respectively, denote the times needed to perform the three tasks associated with job i, and let n = the number of jobs as well as the number of slaves. Figure 1(a) shows a possible schedule for the case n = 2, (a_1, b_1, c_1) = (2, 6, 1) and (a_2, b_2, c_2) = (1, 2, 3).

In this schedule the pre-processing of job 1 is handled first by the master, and all other tasks begin at the earliest possible time, where M = master processor and S1, S2 = slaves. The finish time is 9 (the earliest possible for Figure 1(a)). The schedule that results when the master pre-processes job 2 first, with all other tasks again begun at the earliest possible time, is shown in Figure 1(b); its finish time is 10. The mean finish time is 8.5 for schedule (a) and 8 for schedule (b). We use the notation a_i, b_i, c_i to represent both the tasks of job i and the times needed to complete them. When scheduling a parallel system using this model, we are interested in schedules that minimize the finish time.
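The two schedules of Figure 1 can be checked with a small simulation. This is our own sketch, not code from the paper; it assumes the master performs all pre-processing first (in the given order) and then post-processes each job greedily as soon as that job's slave work is done and the master is free.

```python
def simulate(jobs, order):
    """jobs[i] = (a_i, b_i, c_i); order = pre-processing order of job indices.
    Returns (completion time of each job, overall finish time)."""
    t = 0
    slave_done = {}
    for i in order:
        t += jobs[i][0]                 # master pre-processes job i
        slave_done[i] = t + jobs[i][1]  # slave i starts immediately
    master_free = t                     # master busy until all pre-processing ends
    completion = {}
    pending = set(order)
    while pending:
        # post-process the available job whose slave work finishes earliest
        i = min(pending, key=lambda j: slave_done[j])
        pending.remove(i)
        start = max(master_free, slave_done[i])
        master_free = start + jobs[i][2]
        completion[i] = master_free
    return completion, master_free

jobs = [(2, 6, 1), (1, 2, 3)]
_, f_a = simulate(jobs, [0, 1])   # pre-process job 1 first: finish time 9
_, f_b = simulate(jobs, [1, 0])   # pre-process job 2 first: finish time 10
```

For the Figure 1 data this reproduces finish times 9 and 10, and mean finish times 8.5 and 8.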

NOTE: In the figure, in both schedules, once the processing of a job begins, the job is processed continuously until completion. Schedules with this property are said to have no-wait-in-process; this discipline is important in industrial applications. DISCIPLINES ●Neither of the schedules uses preemption, so preemptions on the slave processors are unnecessary. ●Another interesting feature of the schedules of Figure 1 is that in one the post-processing is done in the reverse order of the pre-processing, while in the other the pre- and post-processing orders are the same. ●E.g., the reverse order could simplify the post-processing if the master uses a stack to hold job information; similarly, if the master uses a queue to maintain this information, we might require that the post- and pre-processing be done in the same order. ●In either case, the master completes all the pre-processing tasks before beginning the first post-processing task.

Moreover, for the industrial setting, one could generalize the model to permit several master processors. In both the computer and industrial settings, one could have nonhomogeneous slave processors; in this case, with each slave task we have a list of slaves on which it can be run and the time needed on each slave. Various kinds of schedule optimization problems: ●We show that obtaining MFTNW (minimum finish time, no-wait-in-process) schedules is NP-hard when: o each job's pre-processing must only be done before its post-processing, or o the pre-processing and post-processing orders are required to be the same. ●We develop O(n log n) algorithms to minimize the finish time when: o the pre- and post-processing orders are the same, or o the pre-processing order is the reverse of the post-processing order.

No Wait In Process The NP-hard proofs use the subset sum problem, which is known to be NP-hard [GARE79]. Input: a collection of positive integers x_i, 1 ≤ i ≤ n, and a positive integer M. Output: "yes" iff there is a subset with sum exactly equal to M. ●In the no-wait case, the master processor cannot preempt any job, as such a preemption would violate the no-wait constraint. Since a_{n+1} > c_{n+1} > b_{n+2}, the pre- and post-processing tasks of job n+1 cannot be done while a slave is working on job n+2, and vice versa. The finish time of every no-wait schedule is therefore at least f, the sum of the task times of these two jobs.
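As a concrete reminder of the reduction's source problem, subset sum itself can be decided in pseudo-polynomial time by dynamic programming. This is a standard sketch, not part of the paper; NP-hardness is with respect to M encoded in binary.

```python
def subset_sum(xs, M):
    """Return True iff some subset of the positive integers xs sums to exactly M.
    Dynamic programming over the set of reachable sums: O(n * M) time."""
    reachable = {0}
    for x in xs:
        # extend every previously reachable sum by x, capped at M
        reachable |= {s + x for s in reachable if s + x <= M}
    return M in reachable
```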

There are exactly two templates of schedules with this length: one has job n+1 processed before job n+2, and the other has job n+2 preceding job n+1, i.e.: Figure 2: Template for the NP-hard proof

Cases Case 1: there is at least one job whose pre-processing is done before a_{n+1} and whose post-processing is done after a_{n+1}. Case 2: there is at least one job whose pre- and post-processing are both done before a_{n+1} (Figure 3(b)). Case 3: task a_{n+1} is the first task scheduled. Figure 3: Template for the order-preserving NP-hard proof

Same Pre- & Post-Processing Orders Here we develop an O(n log n) algorithm to construct an order-preserving minimum finish time (OPMFT) schedule. Without loss of generality, we need only consider schedules satisfying the following restrictions: R1: the schedule is non-preemptive. R2: each slave task begins as soon as its corresponding pre-processing task is complete. R3: each post-processing task begins as soon after the completion of its slave task as is consistent with the order-preserving constraint. Some properties of order-preserving schedules satisfying these assumptions are developed. Definition: a canonical order-preserving schedule (COPS) is an order-preserving schedule in which (a) the master processor completes the pre-processing tasks of all jobs before beginning any of the post-processing tasks, and (b) the pre-processing tasks begin at time zero and complete at time ∑_{i=1}^{n} a_i. Because of restrictions R1, R2 and R3, every COPS is uniquely described by giving the order in which the pre-processing is done.
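Since a COPS is determined by its pre-processing order, its finish time is easy to compute. The sketch below is our own evaluation of a given order under restrictions R1–R3; it is not the paper's O(n log n) algorithm for finding the best order.

```python
def cops_finish_time(jobs, order):
    """Finish time of the canonical order-preserving schedule (COPS)
    for a given pre-processing order; jobs[i] = (a_i, b_i, c_i).
    Post-processing is done in the same order as pre-processing."""
    t = 0
    slave_done = {}
    for i in order:
        t += jobs[i][0]                 # master pre-processes job i (R1, R2)
        slave_done[i] = t + jobs[i][1]  # slave i starts immediately
    master_free = t                     # all pre-processing done at sum(a_i)
    for i in order:                     # same order for post-processing (R3)
        start = max(master_free, slave_done[i])
        master_free = start + jobs[i][2]
    return master_free
```

For the Figure 1 jobs, pre-processing job 2 first gives a COPS finish time of 10.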

Reverse Order Post-Processing While there are no-wait-in-process master-slave instances that are infeasible when the post-processing order is required to be the reverse of the pre-processing order, this is not the case when the no-wait constraint is removed. For any given pre-processing permutation (call it σ), we can construct a reverse-order schedule as below: 1. the master pre-processes the n jobs in the order σ. 2. slave i begins the slave task of job i as soon as the master completes its pre-processing. 3. the master begins the post-processing of the last job (say k) in σ as soon as its slave task is complete. 4. the master begins the post-processing of job j ≠ k at the later of the two times: a. when it has finished the post-processing of the successor of j in σ, b. when slave j has finished b_j. Schedules constructed in the above manner are referred to as canonical reverse-order schedules (CROS). If σ is given, then the corresponding CROS is unique. It is easy to establish that every minimum finish-time reverse-order schedule is a CROS.
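Steps 1–4 above translate directly into a finish-time computation for a given permutation. This is our own sketch; the permutation is represented as a list of job indices.

```python
def cros_finish_time(jobs, order):
    """Finish time of the canonical reverse-order schedule (CROS) for the
    pre-processing permutation `order`; jobs[i] = (a_i, b_i, c_i).
    Post-processing runs in the reverse of the pre-processing order."""
    t = 0
    slave_done = {}
    for i in order:                     # step 1: pre-process in the given order
        t += jobs[i][0]
        slave_done[i] = t + jobs[i][1]  # step 2: slave starts immediately
    master_free = t
    for i in reversed(order):           # steps 3-4: post-process in reverse order
        start = max(master_free, slave_done[i])
        master_free = start + jobs[i][2]
    return master_free
```

For the Figure 1 jobs, cros_finish_time([(2, 6, 1), (1, 2, 3)], [0, 1]) gives 9, matching schedule (a), whose post-processing order is the reverse of its pre-processing order.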

Conclusion ●We have introduced the master-slave scheduling model. ●We have shown that obtaining minimum finish-time schedules under the no-wait-in-process constraint is NP-hard both when the schedule is required to be order preserving and when no constraint is imposed between the pre- and post-processing orders. ●The no-wait-in-process minimum finish time problem is solvable in O(n log n) time when the post-processing order is required to be the reverse of the pre-processing order.

References ●Sartaj Sahni, "Scheduling Master-Slave Multiprocessor Systems," Computer and Information Sciences Department, University of Florida. ●[GARE79] M. R. Garey and D. S. Johnson, Computers and Intractability: A Guide to the Theory of NP-Completeness, W. H. Freeman, 1979.

Thank You