Multi-users, multi-organizations, multi-objectives: a single approach. Denis Trystram (Grenoble University and INRIA). A collection of results from 3 papers.


Multi-users, multi-organizations, multi-objectives: a single approach. Denis Trystram (Grenoble University and INRIA). A collection of results from 3 papers, with: Pierre-François Dutot (Grenoble University), Krzysztof Rzadca (Polish-Japanese computing school, Warsaw), Fanny Pascual (LIP6, Paris), Erik Saule (Grenoble University). Aussois, May 19, 2008.

Goal. The evolution of high-performance execution platforms leads to physically or logically distributed entities (organizations) which have their own « local » rules. Each organization is composed of multiple users who compete for the resources and aim to optimize their own objectives. Goal: construct a framework for studying such problems. Work partially supported by the CoreGRID Network of Excellence of the EC.

Contents. Brief review of basic (computational) models. Multi-users scheduling (1 resource). Multi-users scheduling (m resources). Multi-organizations scheduling (1 objective). Multi-organizations with mixed objectives.

Computational model. A set of users have some (parallel) applications to execute on a (parallel) machine. The « machine » may belong to one or to several organizations. The objectives of the users are not always the same.

Multi-users optimization. Let us start with a simple case: several users compete for resources belonging to the same organization. System-centered problems (Cmax, load balancing) vs. user-centered problems (minsum, maxstretch, flowtime). Motivation: take the diversity of the users' wishes and needs into account.

A simple example. Blue (4 tasks of durations 3, 4, 4 and 5) has a program to compile (objective: Cmax). Red (3 tasks of durations 1, 3 and 6) is running experiments (objective: ΣC_i). m = 3 machines. Global LPT schedule: Cmax = 9, ΣC_i = 23.

A simple example (continued). On the same instance, an SPT schedule for Red gives Cmax = 8 and ΣC_i = 15.
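The greedy list scheduling behind the LPT schedule above can be sketched in a few lines. This is an illustrative reconstruction (not code from the talk); the task durations are those of the slide, and the `list_schedule` helper is a name introduced here.

```python
# Illustration of the slide's example: greedy list scheduling in LPT
# (longest processing time first) order on m = 3 machines.
import heapq

def list_schedule(durations, m):
    """Greedy list scheduling: each task goes to the currently
    least-loaded machine. Returns the completion time of every task,
    in the given order."""
    loads = [0] * m
    heapq.heapify(loads)
    completions = []
    for p in durations:
        start = heapq.heappop(loads)   # least-loaded machine
        finish = start + p
        completions.append(finish)
        heapq.heappush(loads, finish)
    return completions

blue = [3, 4, 4, 5]   # user optimizing Cmax
red = [1, 3, 6]       # user optimizing the sum of completion times

# Global LPT: all tasks merged, longest first.
lpt_order = sorted(blue + red, reverse=True)
cmax = max(list_schedule(lpt_order, 3))
print(cmax)  # 9, as on the slide
```

Scheduling Red's short tasks first (SPT for Red) trades a slightly larger load imbalance for a much smaller ΣC_i, which is exactly the tension the example illustrates.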

Description of the problem. Instance: k users; user u submits n(u) tasks; the processing time of task i belonging to user u is p_i(u), and its completion time is C_i(u). Each user can choose his or her objective among C_max(u) = max_i C_i(u) or ΣC_i(u) (weighted or not). Multi-user scheduling problem: MUSP(k': ΣC_i; k'': C_max), where k' + k'' = k.

Complexity. [Agnetis et al. 2004], case m = 1: MUSP(2: ΣC_i) is NP-hard in the ordinary sense; MUSP(2: C_max) and MUSP(1: ΣC_i; 1: C_max) are polynomial. Thus, on m machines, all variants of this problem are NP-hard. We are looking for (multi-objective) approximations.

MUSP(k: C_max). Inapproximability: no algorithm can be better than a (1, 2, …, k)-approximation. Proof: consider the instance where each user has one unit task (p_i(u) = 1) on one machine (m = 1). C_max*(u) = 1 for every user, and there is no other choice than scheduling the k unit tasks one after the other: the users finish at times 1, 2, …, k, so in particular there exists a user u with C_max(u) = k.

MUSP(k: C_max). Algorithm (multiCmax): given a ρ-approximation schedule for each user, i.e. C_max(u) ≤ ρ·C_max*(u), sort the users by increasing values of C_max(u) and schedule them in this order. Analysis: multiCmax is a (ρ, 2ρ, …, kρ)-approximation.
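The ordering step of multiCmax can be sketched on a single machine (a simplifying assumption; the function name `multi_cmax` is introduced here). Each user's schedule is treated as one block of length C_max(u); the blocks run shortest first, so the u-th user in the sorted order finishes by the sum of the first u block lengths, which is at most u·C_max(u) and hence at most u·ρ·C_max*(u).

```python
# Sketch of multiCmax on one machine: run the users' blocks one after
# another, shortest block first, and report each user's finishing time
# in the new order.

def multi_cmax(cmax_per_user):
    order = sorted(cmax_per_user)   # increasing C_max(u)
    finish, finishes = 0, []
    for c in order:
        finish += c                 # blocks are concatenated
        finishes.append(finish)
    return finishes

print(multi_cmax([5, 2, 3]))  # [2, 5, 10]
```

The u-th finishing time is the sum of the u smallest block lengths, which is bounded by u times the u-th smallest one; this is the prefix-sum argument behind the (ρ, 2ρ, …, kρ) guarantee.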

MUSP(k: ΣC_i). Inapproximability: no algorithm can be better than a ((k+1)/2, (k+2)/2, …, k)-approximation. Proof: consider the instance where each user has x tasks with p_i(u) = 2^(i-1). Optimal schedule for a user alone: ΣC_i* = 2^(x+1) − (x+2). SPT is Pareto optimal (3 users: blue, green and red). For all u, ΣC_i^SPT(u) = k(2^x − (x+1)) + (2^x − 1)·u; the ratio to the optimal tends to (k+u)/2 for large x.
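The closed form for the single-user optimum can be checked directly: with tasks p_i = 2^(i-1) in SPT order on one machine, the completion times are C_i = 2^i − 1, and summing the geometric series gives the slide's formula. The helper name `opt_flowtime` is introduced here for illustration.

```python
# Sanity check of the claimed optimum: one user alone, one machine,
# tasks p_i = 2^(i-1) for i = 1..x, scheduled in SPT order.
# C_i = 2^i - 1, so sum_{i=1}^{x} (2^i - 1) = 2^(x+1) - (x + 2).

def opt_flowtime(x):
    completion, total = 0, 0
    for i in range(1, x + 1):
        completion += 2 ** (i - 1)   # C_i = 2^i - 1
        total += completion
    return total

for x in range(1, 12):
    assert opt_flowtime(x) == 2 ** (x + 1) - (x + 2)
print("closed form verified")
```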

MUSP(k: ΣC_i). Algorithm Aggreg (single machine): let σ(u) be the SPT schedule of user u; construct a global schedule by merging the tasks in increasing order of their completion times in the σ(u), i.e. a global SPT order.

MUSP(k: ΣC_i). Analysis: Aggreg is a (k, k, …, k)-approximation.
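The single-machine Aggreg merge can be sketched as follows. This is an illustrative reconstruction with made-up task data; the helper `flowtimes_by_user` is a name introduced here. It merges all tasks in global SPT order, then compares each user's flowtime against what that user would get alone, which is the (k, …, k) guarantee stated above.

```python
# Sketch of Aggreg on a single machine: global SPT merge of the users'
# individual SPT schedules, then per-user flowtime comparison.

def flowtimes_by_user(tasks):
    """tasks: list of (user, processing_time). Schedule in global SPT
    order on one machine; return each user's sum of completion times."""
    t, sums = 0, {}
    for user, p in sorted(tasks, key=lambda up: up[1]):
        t += p                              # completion of this task
        sums[user] = sums.get(user, 0) + t
    return sums

tasks = [("A", 1), ("A", 4), ("B", 2), ("B", 3)]
merged = flowtimes_by_user(tasks)
alone = {u: flowtimes_by_user([t for t in tasks if t[0] == u])[u]
         for u in ("A", "B")}
k = 2
for u in ("A", "B"):
    assert merged[u] <= k * alone[u]        # the (k, ..., k) guarantee
print(merged, alone)  # {'A': 11, 'B': 9} {'A': 6, 'B': 7}
```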

MUSP(k: ΣC_i). Algorithm (extension to m machines): the previous property still holds on each machine (apply SPT individually on each machine, then merge the local schedules machine by machine). Analysis: we obtain the same bound as before.

Mixed case MUSP(k': ΣC_i; (k−k'): C_max). A similar analysis can be done; see the paper with Erik Saule for more details.

Complicating the model: Multi-organizations

Context: computational grids. A collection of independent clusters, each managed locally by an « organization ». [Figure: organizations O_1, O_2, O_3 with m_1, m_2, m_3 machines.]

Preliminary: single resource (cluster). Independent applications are submitted locally on a cluster. They are represented by a precedence task graph. An application is viewed as a usual sequential task or as a parallel rigid job (see [Feitelson and Rudolph] for more details and a classification).

Cluster. [Figure: the local queue of submitted jobs J_1, J_2, J_3, …]

Jobs. Rigid jobs: the number of processors is fixed. A job is represented as a rectangle of q_i required processors by runtime p_i (its computational area, plus some overhead). Useful definitions: high jobs are those which require more than m/2 processors; low jobs are the others.

Scheduling rigid jobs: packing algorithms (batch). Scheduling independent rigid jobs may be viewed as a 2D packing problem (strip packing of width m).

Multi-organizations. n organizations; organization k owns a cluster of m_k processors, each with its own local queue of submitted jobs.

Users submit their jobs locally. [Figure: organizations O_1, O_2, O_3.]

The organizations can cooperate. [Figure: organizations O_1, O_2, O_3.]

Constraints. C_max(O_k): maximum finishing time of the jobs belonging to O_k. Each organization aims at minimizing its own makespan. [Figure: local schedules of O_1, O_2, O_3 with their local makespans C_max^loc(O_k).]

Problem statement. MOSP: minimization of the « global » makespan under the constraint that no local makespan is increased. Consequence: the restricted instance with n = 1 (one organization), m = 2 and sequential jobs is the classical two-machine scheduling problem, which is NP-hard. Thus, MOSP is NP-hard.

Multi-organizations. Motivation: the non-cooperative solution, in which every organization computes only its local jobs (« my jobs first » policy), can be arbitrarily far from the global optimum (the ratio grows to infinity with the number of organizations n). [Figure: example with n = 3 and jobs of unit length, no cooperation vs. with cooperation (optimal).]

More sophisticated algorithms than simple load balancing are possible: matching certain types of jobs may lead to bilaterally profitable solutions. However, this is a hard combinatorial problem. [Figure: no cooperation vs. with cooperation for O_1 and O_2.]

Preliminary results. List scheduling: (2 − 1/m)-approximation ratio for the variant with resource constraints [Garey-Graham 1975]. HF (Highest First) schedules: sort the jobs by decreasing number of required processors. Same theoretical guarantee, but HF performs better in practice.
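A Highest First list schedule for rigid jobs can be simulated event by event. This is an illustrative sketch under simplifying assumptions (a single free-processor counter, jobs started greedily whenever they fit, scanning the queue in HF order); the function name `hf_schedule` and the sample instance are introduced here, not taken from the paper.

```python
# Sketch of Highest First list scheduling for rigid jobs on m processors:
# jobs sorted by decreasing processor requirement; at every completion
# event, start any queued job that currently fits, in HF order.

def hf_schedule(jobs, m):
    """jobs: list of (q, p) = (processors required, runtime).
    Returns the makespan of the greedy Highest First schedule."""
    queue = sorted(jobs, key=lambda j: -j[0])   # highest q first
    running = []                                # list of (end_time, q)
    t, free, cmax = 0, m, 0
    while queue or running:
        still = []
        for q, p in queue:                      # start what fits now
            if q <= free:
                free -= q
                running.append((t + p, q))
                cmax = max(cmax, t + p)
            else:
                still.append((q, p))
        queue = still
        if not running:
            break                               # nothing can ever start
        t = min(end for end, _ in running)      # next completion event
        freed = sum(q for end, q in running if end == t)
        running = [(end, q) for end, q in running if end > t]
        free += freed
    return cmax

print(hf_schedule([(3, 2), (2, 3), (2, 2), (1, 4)], 4))  # 6
```

On the sample instance, the (3, 2) job blocks both width-2 jobs at time 0, but the width-1 job slips in; this skipping of blocked jobs is what distinguishes greedy list scheduling from strict FIFO.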

Analysis of HF (single cluster). Proposition: all HF schedules have the same structure, consisting of two consecutive zones of high (I) and low (II) utilization (in the high-utilization zone, more than 50% of the processors are busy). Proof (2 steps): by contradiction, no high job appears after zone (II) starts.

If we cannot worsen any local makespan, the global optimum cannot be reached: the lower bound on the approximation ratio is greater than 3/2. [Figure: local schedules, the globally optimal schedule, and the best solution that does not increase C_max(O_1).]

Using Game Theory? We propose here a standard approach using combinatorial optimization. Cooperative game theory may also be useful, but it assumes that players (organizations) can communicate and form coalitions; the members of a coalition split the sum of their payoffs after the end of the game. We assume here a centralized mechanism and no communication between the organizations.

Multi-Organization Load-Balancing. 1. Each cluster runs its local jobs with Highest First; let LB = max(p_max, W/(n·m)) (p_max: longest runtime, W: total work). 2. Unschedule all jobs that finish after 3LB. 3. Divide them into two sets (Ljobs and Hjobs). 4. Sort each set according to the Highest First order. 5. Schedule the jobs of Hjobs backwards from 3LB on all possible clusters. 6. Then fill the gaps with Ljobs in a greedy manner.
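The lower bound LB of step 1 can be illustrated with a deliberately simplified sketch, restricted to sequential jobs (an assumption; the paper handles rigid parallel jobs, where the backward placement of Hjobs matters). With n clusters of m processors, LB = max(p_max, W/(n·m)) lower-bounds any schedule, and even plain greedy rebalancing stays well within the 3·LB horizon. The function name `rebalance` is introduced here.

```python
# Simplified sketch of the 3*LB horizon: sequential jobs only, n clusters
# of m processors treated as n*m machines, greedy LPT rebalancing.
import heapq

def rebalance(durations, n, m):
    LB = max(max(durations), sum(durations) / (n * m))
    loads = [0.0] * (n * m)
    heapq.heapify(loads)
    for p in sorted(durations, reverse=True):
        heapq.heappush(loads, heapq.heappop(loads) + p)
    cmax = max(loads)
    assert cmax <= 3 * LB          # every job fits before 3*LB
    return LB, cmax

LB, cmax = rebalance([5, 3, 3, 2, 2, 1, 1, 1], 2, 2)
print(LB, cmax)  # 5 5.0
```

For sequential jobs the greedy bound is in fact 2·LB; the extra room up to 3·LB is what the algorithm spends on packing the high (wide) jobs in the rigid case.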

Illustration: consider a cluster whose last job finishes before 3LB; the Hjobs are stacked backwards from 3LB and the remaining gaps are filled with Ljobs. [Figure: successive frames of the placement within the 3LB horizon.]

Feasibility (insight). [Figure: the 3LB horizon decomposed into high-utilization zones (I) and a low-utilization zone (II).]

Sketch of analysis. Proof by contradiction: assume the resulting schedule is not feasible, and let x be the first job that does not fit in any cluster. Case 1: x is a small (low) job; a global surface argument gives a contradiction. Case 2: x is a high job; this case is much more involved, see the paper for the technical details.

Guarantee. Proposition: 1. The previous algorithm is a 3-approximation (by construction). 2. The bound is asymptotically tight. Consider the following instance: m clusters, each with 2m − 1 processors. The first organization has m short jobs, each requiring the full machine (duration ε), plus m jobs of unit length requiring m processors each. Each of the m − 1 other organizations owns m sequential jobs of unit length.

Local HF schedules

Optimal (global) schedule: Cmax* = 1 + ε

Multi-organization load-balancing: Cmax=3

Improvement. We add an extra load-balancing (compaction) procedure. [Figure: local schedules → Multi-org LB → load balance → compact, for organizations O_1 … O_5.]

Some experiments

Link with Game Theory? We propose an approach based on combinatorial optimization; could game theory be used instead? Players: organizations or users. Objectives: makespan, minsum, mixed. Strategy: to cooperate or not; global objective: minimize the makespan. Cooperative game theory assumes that players communicate and form coalitions. In non-cooperative game theory, the key concept is the Nash equilibrium: a situation where no player has an interest in unilaterally changing its strategy. Price of stability: the ratio of the best Nash equilibrium to the optimal solution.

Conclusion. A single unified approach based on multi-objective optimization for taking the users' needs and wishes into account. MOSP: good guarantee for C_max; ΣC_i and the mixed case remain to be studied. MUSP: « bad » guarantees, but we cannot obtain better ones with low-cost algorithms.

Thanks for your attention. Do you have any questions?