1 SIMGRID_Scheduler: A Simulated Scheduling System for GRID Environment
Marcello CASTELLANO, Giacomo PISCITELLI and Nicola SIMEONE, Department of Electrical and Electronic Engineering, Politecnico di Bari
Domenico DIBARI, Eugenio NAPPI, INFN Bari

2 Why Simulate?
We do not have a working GRID system yet
We can test scheduling algorithms without occupying physical resources
The simulated environment behaves as the physical one should (no bugs and failures)
Simulation is faster (we do not have to wait for the real execution of the job)
Goal: evaluation of super-scheduling algorithms

3 Identifying the Broker/Scheduler System
The Grid Architecture (diagram): USERS, WORKLOAD MANAGEMENT SYSTEM, GRID INFORMATION SPACE, RESOURCES

4 Identifying the Broker/Scheduler System
Workload Management System Architecture (diagram): USER INTERFACE, BROKER, SCHEDULER, JOB SUBMISSION SERVICES, LOGGING & BOOKKEEPING, GRID INFORMATION SERVICE and RESOURCES, exchanging job specifications (plain and specialized), job submit events, job status and resource info
Ref: Integrating GRID tools to build a Computing resource broker: activities of DataGrid WP1

5 Identifying the Broker/Scheduler System
Broker/Scheduler System Architecture (diagram): the SCHEDULER and the BROKER exchange job specifications, job specifications plus resource specifications, resource info and job status

6 Formalizing the Model

7 Formalizing the Model: How it works
Workflow (diagram): USER, SCHEDULER, SCHEDULED JOBS buffer, BROKER, INFORMATION SPACE and LOCAL RESOURCE MANAGERS, connected by the SCHEDULE, MATCHING, DISPATCH and RESCHEDULE operations

8 Formalizing the Model: Configuration Files
There are 3 configuration files:
– CL.INI (Computing Level configuration file)
– RG.INI (Request Generator configuration file)
– SG.INI (Simulator configuration file)

9 Formalizing the Model: Computing and Storage Resources
An Example - The Configuration File - CL.INI
%number of computing element: 5
%maxtime,maxjobsinqueue,maxcount,totalnodes,freenodes,maxtotalmemory,maxsinglememory,refresh
#1 1000,5,10,5,5,64,256,5
#2 600,10,10,5,5,64,256,5
#3 800,10,10,5,5,64,256,5
#4 500,7,10,5,5,64,256,5
#5 1000,10,10,5,5,64,256,5
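As an illustration only (not part of the original slides), a minimal C++ sketch of how one CL.INI record such as "#1 1000,5,10,5,5,64,256,5" could be parsed; the struct and field names are assumptions, not the simulator's actual code.

#include <algorithm>
#include <sstream>
#include <string>

struct ComputingElement {
    int id = 0;
    int maxTime = 0;          // maxtime [s]
    int maxJobsInQueue = 0;   // maxjobsinqueue
    int maxCount = 0;         // maxcount
    int totalNodes = 0;       // totalnodes
    int freeNodes = 0;        // freenodes
    int maxTotalMemory = 0;   // maxtotalmemory [Mb]
    int maxSingleMemory = 0;  // maxsinglememory [Mb]
    int refresh = 0;          // refresh period [s]
};

// Parse a record line such as "#2 600,10,10,5,5,64,256,5".
ComputingElement parseComputingElement(const std::string& line) {
    ComputingElement ce;
    std::istringstream in(line);
    char hash;
    in >> hash >> ce.id;                 // reads "#2"
    std::string rest;
    std::getline(in, rest);              // " 600,10,10,5,5,64,256,5"
    std::replace(rest.begin(), rest.end(), ',', ' ');
    std::istringstream fields(rest);
    fields >> ce.maxTime >> ce.maxJobsInQueue >> ce.maxCount
           >> ce.totalNodes >> ce.freeNodes >> ce.maxTotalMemory
           >> ce.maxSingleMemory >> ce.refresh;
    return ce;
}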

10 Formalizing the Model: Users
An Example - The Configuration File - RG.INI
%Final Time [s]: 3000;
%Request Rate (probability of 1 request in 1 sec) [%]: 10;
%Max number of executions of the same job requested (maxcount): 3;
%Min memory needed by job [Mb] (maxminmem): 16;
%Max memory needed by job [Mb] (maxmaxmem): 128;
%Max Duration Time for a job [s] (maxduration): 500;
%Max Delay Time for a job [s] (maxdelay): 100;
Job Specifications | How we generate them
Number of executions of the same job | Uniformly between 1 and maxcount
Minimum memory needed | Uniformly between 1 and maxminmem
Maximum memory needed | Uniformly between minmem and maxmaxmem
Duration (foreseen) | Uniformly between 1 and maxduration
Delay time | Uniformly between 1 and maxdelay
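A minimal sketch (an assumption, not the original Request Generator code) of how one job request could be drawn from the RG.INI parameters with the uniform distributions of the table above; the JobRequest struct and function names are hypothetical.

#include <random>

struct JobRequest {
    int executions;  // number of executions of the same job
    int minMemory;   // minimum memory needed [Mb]
    int maxMemory;   // maximum memory needed [Mb]
    int duration;    // foreseen duration [s]
    int delayTime;   // delay time [s]
};

JobRequest generateRequest(std::mt19937& rng, int maxcount, int maxminmem,
                           int maxmaxmem, int maxduration, int maxdelay) {
    JobRequest r;
    r.executions = std::uniform_int_distribution<int>(1, maxcount)(rng);
    r.minMemory  = std::uniform_int_distribution<int>(1, maxminmem)(rng);
    r.maxMemory  = std::uniform_int_distribution<int>(r.minMemory, maxmaxmem)(rng);
    r.duration   = std::uniform_int_distribution<int>(1, maxduration)(rng);
    r.delayTime  = std::uniform_int_distribution<int>(1, maxdelay)(rng);
    return r;
}

// With the RG.INI above: generateRequest(rng, 3, 16, 128, 500, 100).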

11 Formalizing the Model: Information Space
It reports the state of the resources
It is updated by the resources every refresh seconds or by an event
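A possible shape for an Information Space entry (an assumed sketch, not the actual data structure): each computing element publishes its state, either every refresh seconds or immediately when an event occurs.

#include <map>

struct ResourceState {
    int freeNodes;      // currently free nodes
    int freeMemory;     // available memory [Mb]
    int jobsInQueue;    // jobs waiting at the local resource manager
    double lastUpdate;  // simulation time of the last update [s]
};

// The Information Space maps a computing element id to its last reported state.
using InformationSpace = std::map<int, ResourceState>;

// Called every refresh seconds by each resource, or immediately on an event.
void publish(InformationSpace& space, int ceId, const ResourceState& state) {
    space[ceId] = state;
}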

12 Formalizing the Model: Scheduler
The scheduler calculates the index using the chosen algorithm and puts the user request in the buffer
An Example - The Configuration File - SG.INI
%Broker Rate (number of requests managed by the broker in 1 s) [1/s]: 4;
%Scheduling policy 0=EDF 1=FCFS 2=SJF: 0;
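One possible way to compute the scheduling index for the three policies of SG.INI (a sketch under the assumption that a smaller index means higher priority in the buffer; the Request fields are hypothetical).

enum Policy { EDF = 0, FCFS = 1, SJF = 2 };  // as encoded in SG.INI

struct Request {
    double arrivalTime;  // [s]
    double deadline;     // [s]
    double duration;     // foreseen duration [s]
};

// The buffer of scheduled jobs is kept ordered by this index (lower goes first).
double schedulingIndex(const Request& r, Policy p) {
    switch (p) {
        case EDF:  return r.deadline;     // Earliest Deadline First
        case SJF:  return r.duration;     // Shortest Job First
        case FCFS:
        default:   return r.arrivalTime;  // First Come First Served
    }
}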

13 Formalizing the Model: Broker
The broker gets the first request in the buffer
It queries the Information Space
It dispatches the job
If it does not find a suitable resource, it resubmits the request to the scheduler
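A sketch of this broker step (names and the first-fit matching rule are assumptions, reusing the hypothetical JobRequest and InformationSpace types from the sketches above).

#include <deque>
#include <optional>

std::optional<int> findSuitableResource(const InformationSpace& space,
                                        const JobRequest& job) {
    for (const auto& [ceId, state] : space)
        if (state.freeNodes > 0 && state.freeMemory >= job.maxMemory)
            return ceId;  // first resource that satisfies the request
    return std::nullopt;
}

void brokerStep(std::deque<JobRequest>& buffer, const InformationSpace& space,
                void (*dispatch)(int ceId, const JobRequest&),
                void (*reschedule)(const JobRequest&)) {
    if (buffer.empty()) return;
    JobRequest job = buffer.front();   // get the first request in the buffer
    buffer.pop_front();
    if (auto ce = findSuitableResource(space, job))
        dispatch(*ce, job);            // submit to the local resource manager
    else
        reschedule(job);               // resubmit the request to the scheduler
}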

14 Metrics
Cumulative value [0..100]: Σ_{i : time_i ≤ deadline_i} value_i, i.e. the sum of the values of the jobs that complete within their deadline, scaled to the range [0..100]
Missed deadlines [per cent]
System usage [per cent]
Average queue time [per cent of total time]
Average execution time [per cent of total time]
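A sketch of how the cumulative value metric could be computed, under the assumption (not stated explicitly on the slide) that it is the fraction of total job value earned by jobs completed within their deadline, scaled to 100.

#include <vector>

struct FinishedJob {
    double completionTime;  // [s]
    double deadline;        // [s]
    double value;           // value assigned to the job
};

double cumulativeValue(const std::vector<FinishedJob>& jobs) {
    double earned = 0.0, total = 0.0;
    for (const FinishedJob& j : jobs) {
        total += j.value;
        if (j.completionTime <= j.deadline)
            earned += j.value;  // only jobs that meet their deadline count
    }
    return total > 0.0 ? 100.0 * earned / total : 0.0;  // in [0..100]
}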

15 Status Report
We have implemented 3 scheduling algorithms:
– FCFS: First Come First Served
– EDF: Earliest Deadline First
– SJF: Shortest Job First
More algorithms will be added
We have developed a first-level simulation
Brokering strategies need to be improved

16 Technologies
UML is used to analyze and design the system model and the simulation software (Telelogic Tau)
C++ is used to develop the software so that we can easily:
– reuse the same code to implement a real super-scheduler
– use code developed by others (WP1)

17 FCFS SIMULATION RESULTS
Cumulative value [0..100]: 79
Missed deadlines [per cent]: 23
System usage [per cent]: 65
Average queue time [per cent of total time]: 13
Average execution time [per cent of total time]: 87
Final time: 10895

18 EDF SIMULATION RESULTS
Cumulative value [0..100]: 85
Missed deadlines [per cent]: 16
System usage [per cent]: 65
Average queue time [per cent of total time]: 10
Average execution time [per cent of total time]: 90
Final time: 10998

19 SJF SIMULATION RESULTS
Cumulative value [0..100]: 85
Missed deadlines [per cent]: 15
System usage [per cent]: 65
Average queue time [per cent of total time]: 10
Average execution time [per cent of total time]: 90
Final time: 10913

20 Next Steps
To consider the HEP computing model (e.g. the MONARC project)
To choose a class of applications (e.g. ALICE experiment applications)
To determine an appropriate scheduling algorithm
To evaluate the right parameters
in order to design and develop a real Community Broker Scheduler using the available DATAGRID tools