Analysis of cooperation in multi-organization Scheduling Pierre-François Dutot (Grenoble University) Krzysztof Rzadca (Polish-Japanese school, Warsaw) Fanny Pascual (LIP6, Paris) Denis Trystram (Grenoble University and INRIA) NCST, CIRM, May 13, 2008

Goal The evolution of high-performance execution platforms leads to distributed entities (organizations) which have their own « local » rules. Our goal is to investigate how cooperation can lead to a better use of a global computing system built from a collection of autonomous clusters. Work partially supported by the CoreGRID Network of Excellence of the EC.

Main result We show in this work that it is always possible to produce an efficient collaborative solution (i.e. one with a theoretical performance guarantee) that respects the organizations’ selfish objectives. A new algorithm with a theoretical worst-case analysis, plus experiments for a case study (each cluster has the same objective: makespan).

Target platforms Our view of computational grids is a collection of independent clusters belonging to distinct organizations.

Computational model (one cluster) Independent applications are submitted locally on a cluster. They are represented by a precedence task graph. An application is a parallel rigid job. Let us briefly recall the model (close to the one Andrei Tchernykh presented this morning). See Feitelson for more details and a classification.

Cluster: each cluster has a local queue of submitted jobs J1, J2, J3, …

Job model Rigid jobs: the number of processors is fixed. A job i requires q_i processors during a runtime p_i; its computational area is the rectangle q_i × p_i, which includes some overhead. Jobs are split into high jobs (those which require more than m/2 processors) and low jobs (the others).
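
A minimal Python sketch of this job model; the names (RigidJob, is_high, area) are illustrative assumptions, not taken from the slides:

from dataclasses import dataclass

@dataclass
class RigidJob:
    """A rigid parallel job: a fixed number of processors for a fixed runtime."""
    q: int      # number of required processors (q_i)
    p: float    # runtime (p_i)

    def area(self) -> float:
        # computational area = processors x time
        return self.q * self.p

    def is_high(self, m: int) -> bool:
        # 'high' jobs need more than half of the m processors of a cluster
        return self.q > m / 2

# example on a cluster with m = 8 processors
m = 8
jobs = [RigidJob(q=5, p=2.0), RigidJob(q=2, p=4.0)]
high = [j for j in jobs if j.is_high(m)]       # only the first job is high
low  = [j for j in jobs if not j.is_high(m)]   # the second job is low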

Scheduling rigid jobs: packing algorithms Scheduling independent rigid jobs may be viewed as a 2D packing problem (strip packing) on a strip of width m (the number of processors). It is solved with a list algorithm (off-line).

Organization k: there are n organizations, each one owning a cluster of m processors (identical for the sake of presentation) with its local queue of submitted jobs J1, J2, J3, …

n organizations (here, n = 3: red 1, green 2 and blue 3). Each organization k aims at minimizing « its » own makespan C_max(O_k) = max_i C_i,k. We also want the global makespan (the maximum over all the local ones) to be minimized. Notation: C_max(O_k) is the makespan of organization k (its jobs), C_max(M_k) is the makespan of cluster k (its machines).

Problem statement MOSP: minimization of the « global » makespan under the constraint that no local makespan is increased. Consequence: the restricted instance with n = 1 (one organization), m = 2 and sequential jobs is the classical two-machine scheduling problem, which is NP-hard. Thus, MOSP is NP-hard.
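
Written out (a minimal formalization consistent with the notation above; C_max^loc(O_k) denotes the makespan of organization k in its initial, purely local schedule):

\min \ \max_{1 \le k \le n} C_{\max}(O_k)
\quad \text{subject to} \quad C_{\max}(O_k) \le C_{\max}^{\mathrm{loc}}(O_k) \ \ \forall k,
\qquad \text{where } C_{\max}(O_k) = \max_i C_{i,k}.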

Multi-organizations Motivation: a non-cooperative solution is that all the organizations compute only their local jobs (« my jobs first » policy). However, such a solution can be arbitrarily far from the global optimum (the ratio grows to infinity with the number of organizations n). See the next example with n = 3 and jobs of unit length, comparing no cooperation with cooperation (optimal) for O1, O2, O3.
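
As an illustration of the unbounded ratio (an assumed instance, not necessarily the one drawn on the slide): suppose organization 1 owns n·m sequential unit jobs while the other n − 1 organizations own none. Then

C_{\max}^{\text{my jobs first}} = \frac{n m}{m} = n,
\qquad
C_{\max}^{\text{cooperation}} = \frac{n m}{n m} = 1,
\qquad
\frac{C_{\max}^{\text{my jobs first}}}{C_{\max}^{\text{cooperation}}} = n \xrightarrow{\;n \to \infty\;} \infty.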

Preliminary results Single-organization scheduling (packing). Resource-constrained list algorithm (Garey and Graham, 1975). List scheduling: (2 − 1/m) approximation ratio. HF: Highest First schedules (sort the jobs by decreasing number of required processors). Same theoretical guarantee, but better from the practical point of view.
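
A minimal Python sketch of list scheduling and of its Highest First variant for rigid jobs on a single cluster of m processors (an illustration under the assumptions above, not the authors' implementation; jobs are given as (q, p) pairs):

import heapq

def list_schedule(jobs, m):
    # Event-driven list scheduling: at every completion event, start (in list
    # order) every remaining job that fits on the currently idle processors.
    pending = list(range(len(jobs)))
    start = [None] * len(jobs)
    free, t = m, 0.0
    events = []                       # min-heap of (finish_time, released_procs)
    while pending or events:
        still_pending = []
        for i in pending:
            q, p = jobs[i]
            if q <= free:
                start[i] = t
                free -= q
                heapq.heappush(events, (t + p, q))
            else:
                still_pending.append(i)
        pending = still_pending
        if not events:
            break                     # only infeasible jobs (q > m) remain
        t, released = heapq.heappop(events)
        free += released
        while events and events[0][0] == t:
            free += heapq.heappop(events)[1]
    return start, t                   # t is the makespan

def hf_schedule(jobs, m):
    # Highest First: run list scheduling on the jobs sorted by decreasing q.
    order = sorted(range(len(jobs)), key=lambda i: -jobs[i][0])
    start, cmax = list_schedule([jobs[i] for i in order], m)
    return {order[k]: start[k] for k in range(len(jobs))}, cmax

# example: three rigid jobs on m = 4 processors
starts, cmax = hf_schedule([(3, 2.0), (2, 1.0), (1, 3.0)], 4)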

Analysis of HF (single cluster) Proposition. All HF schedules have the same structure, consisting of two consecutive zones: a high-utilization zone (I), in which more than 50% of the processors are busy, followed by a low-utilization zone (II). Proof (2 steps): by contradiction, no high job appears after zone (II) starts.

Collaboration between clusters can substantially improve the results. Algorithms more sophisticated than simple load balancing are possible: matching certain types of jobs may lead to bilaterally profitable solutions. (Compare no cooperation with cooperation for O1 and O2.)

If we cannot worsen any local makespan, the global optimum cannot be reached. Compare the local schedules, the globally optimal schedule, and the best solution that does not increase C_max(O_1): this yields a lower bound greater than 3/2 on the approximation ratio of any algorithm that respects the local constraints.

Multi-Organization Load-Balancing
1. Each cluster schedules its local jobs with Highest First. Compute LB = max(p_max, W/(n·m)), where p_max is the longest runtime and W the total work (area) of all jobs; LB is a lower bound on the optimal global makespan.
2. Unschedule all jobs that finish after 3·LB.
3. Divide them into two sets, Ljobs (low) and Hjobs (high).
4. Sort each set according to the Highest First order.
5. Schedule the jobs of Hjobs backwards from 3·LB on all possible clusters.
6. Then, fill the gaps with the Ljobs in a greedy manner.
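
A coarse Python sketch of the skeleton of this procedure. It relies on strong simplifying assumptions that are not from the slides: each cluster's kept local schedule is summarized only by its completion time local_end[k], the whole window [local_end[k], 3·LB] is treated as free on all m processors, and low jobs are packed with a naive per-processor rule. The names (multi_org_lb, local_schedules, …) are illustrative, and the paper's actual placement rules are more careful:

def multi_org_lb(local_schedules, jobs, n, m):
    # local_schedules: list of n dicts {job_id: (start, finish)} produced by
    # local Highest First scheduling; jobs: {job_id: (q, p)}.
    p_max = max(p for (_, p) in jobs.values())
    W = sum(q * p for (q, p) in jobs.values())
    LB = max(p_max, W / (n * m))            # lower bound on the optimal makespan
    horizon = 3 * LB

    # steps 1-2: keep local jobs finishing by 3*LB, unschedule the others
    migrated, local_end = [], [0.0] * n
    for k, sched in enumerate(local_schedules):
        for job_id, (s, f) in sched.items():
            if f > horizon:
                migrated.append(job_id)
            else:
                local_end[k] = max(local_end[k], f)

    # steps 3-4: split into high/low jobs, each sorted in Highest First order
    hjobs = sorted((j for j in migrated if jobs[j][0] > m / 2),
                   key=lambda j: -jobs[j][0])
    ljobs = sorted((j for j in migrated if jobs[j][0] <= m / 2),
                   key=lambda j: -jobs[j][0])

    placement = {}
    # step 5: high jobs cannot overlap each other on a cluster, so they simply
    # stack backwards from the 3*LB horizon; pick the cluster with the most room
    high_front = [horizon] * n
    for j in hjobs:
        q, p = jobs[j]
        k = max(range(n), key=lambda c: high_front[c] - local_end[c])
        s = high_front[k] - p
        # feasibility (s >= local_end[k]) is what the paper's analysis proves
        # for the real algorithm; this sketch simply assumes it
        placement[j] = (k, s)
        high_front[k] = s

    # step 6: fill the windows [local_end[k], high_front[k]] with low jobs,
    # using a naive greedy rule per processor (a deliberate simplification)
    free_at = [[local_end[k]] * m for k in range(n)]
    for j in ljobs:
        q, p = jobs[j]
        for k in range(n):
            procs = sorted(range(m), key=lambda i: free_at[k][i])[:q]
            s = max(free_at[k][i] for i in procs)
            if s + p <= high_front[k]:
                placement[j] = (k, s)
                for i in procs:
                    free_at[k][i] = s + p
                break
        # low jobs fitting on no cluster stay unplaced in this sketch
    return placement, LB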

Let us consider a cluster whose last job finishes before 3·LB. (Animation: the Hjobs are placed backwards from 3·LB, then the remaining gaps are filled with the Ljobs.)

Notice that the centralized scheduling mechanism is not work stealing; moreover, the idea is to change the local schedules as little as possible.

Feasibility (insight). (Figure: the high-utilization zones (I) and low-utilization zones (II) of the local schedules relative to the 3·LB bound.)

Sketch of analysis Proof by contradiction: let us assume that the schedule is not feasible, and call x the first job that does not fit on any cluster. Case 1: x is a small (low) job: a global surface argument gives a contradiction. Case 2: x is a high job: much more complicated, see the paper for the technical details.
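
A rough rendering of the surface argument for Case 1 (this reconstruction is an assumption; the paper's exact constants and bookkeeping may differ). If the low job x, with q_x ≤ m/2 and p_x ≤ p_max ≤ LB, cannot start at any time in [0, 3·LB − p_x] on any cluster, then throughout that interval more than m − q_x ≥ m/2 processors are busy on every cluster, hence

W - q_x p_x \;\ge\; n \cdot \frac{m}{2}\,(3\,LB - p_x) \;\ge\; n \cdot \frac{m}{2}\cdot 2\,LB \;=\; n\,m\,LB \;\ge\; W,

a contradiction since q_x p_x > 0.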

3-approximation (by construction); this bound is tight. (Tightness example: compare the local HF schedules, the optimal (global) schedule, and the schedule produced by multi-organization load-balancing.)

Improvement We add an extra load-balancing procedure. (Figure, for five organizations O1–O5: local schedules → multi-org LB → load balance → compact.)

Some experiments

Conclusion We proved in this work that cooperation may help to reach a better global performance (for the makespan). We designed a 3-approximation algorithm. It can be extended to organizations of any size (under an extra hypothesis). Based on this, a lot of interesting open problems remain, including the study of the problem for different local policies or objectives (Daniel Cordeiro).

Thanks for your attention. Do you have any questions?

Using Game Theory? We propose here a standard approach based on Combinatorial Optimization. Cooperative Game Theory may also be useful, but it assumes that the players (organizations) can communicate and form coalitions; the members of a coalition split the sum of their payoffs after the end of the game. We assume here a centralized mechanism and no communication between the organizations.