Towards an agent integrated speculative scheduling service
László Csaba Lőrincz, Attila Ulbert, Tamás Kozsik, Zoltán Horváth
ELTE, Department of Programming Languages and Compilers


Towards an agent integrated speculative scheduling service
László Csaba Lőrincz, Attila Ulbert, Tamás Kozsik, Zoltán Horváth
ELTE, Department of Programming Languages and Compilers, Budapest, Hungary

Motivation
- Data-intensive, parameter-sweep applications
- Decisions:
  - where to put the data (and when)
  - where to execute the job
- Optimization:
  - Replication
  - Resource utilization
  - Response time (get the results ASAP)
  - …
- Centralized approach vs. decentralized approach: more information, better optimization

Overview
- Gather information about the behavior of the job and the resources of the grid
- Schedule the job
(Diagram: Monitor -> Analyze -> Schedule; the monitor emits a log, the analyzer an extended job description)

Monitoring
- The developer does not have to provide the data access pattern descriptions
- Information gathered:
  - Data access: I/O operations
  - CPU & memory consumption
- Possible approaches:
  - altering the source code,
  - the compiler,
  - the run-time system,
  - the operating system.
- Works with legacy (binary) libraries and applications

Monitoring
- Altering the run-time system:
  - Black-box approach (source code is not needed)
  - Transparent
  - Special shared library -> platform-dependent
- File handling: stdio.h, fcntl.h, unistd.h
- CPU & memory: /proc/cpuinfo (BogoMips), /proc
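As a rough illustration of what the altered run-time layer records, here is a minimal Python sketch; the real system interposes on the C I/O routines via a shared library, so the wrapper class, file name, and log format below are made up for this example:

```python
import time

class MonitoredFile:
    """Proxy that logs every read as (op, offset, size, duration) -- illustrative only."""
    def __init__(self, path, log):
        self._f = open(path, "rb")
        self._log = log

    def read(self, size):
        start = time.perf_counter()
        offset = self._f.tell()
        data = self._f.read(size)
        self._log.append(("read", offset, len(data), time.perf_counter() - start))
        return data

    def close(self):
        self._f.close()

# usage: produce the access log that the analyzer later processes
log = []
with open("input.dat", "wb") as f:        # create a small test file
    f.write(b"x" * 10000)
mf = MonitoredFile("input.dat", log)
while mf.read(5000):                      # read until EOF in 5000-byte blocks
    pass
mf.close()
# log now holds one entry per read() call
```

The same idea extends to write, seek, open and close; in the binary (black-box) setting the interposition happens below the application, so legacy executables need no change.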

The analyzer
- Processes the log: O(n)
- Strategy detection: O(1)
  - Direction of file access
  - Block size
  - Timing characteristics
- Configurable:
  - Detailed or more abstract (compact) description
  - Behavior variation
  - Progress detection threshold
  - Access & datablock log size
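The direction and block-size detection can be sketched as a single O(n) pass over the log. This assumes the log is a sequence of (offset, size) pairs; the heuristics below are illustrative, not the analyzer's exact rules:

```python
def detect_strategy(accesses):
    """Single O(n) pass over (offset, size) pairs:
    infer the access direction and the dominant block size."""
    if not accesses:
        return None
    forward = backward = 0
    sizes = {}
    prev_off = accesses[0][0]
    for off, size in accesses:
        if off > prev_off:
            forward += 1
        elif off < prev_off:
            backward += 1
        prev_off = off
        sizes[size] = sizes.get(size, 0) + 1
    direction = "sequential-forward" if forward >= backward else "backward"
    block_size = max(sizes, key=sizes.get)   # most frequent request size
    return {"direction": direction, "block_size": block_size}

# e.g. a job that scans a file forwards in 4999-byte blocks
log = [(i * 4999, 4999) for i in range(100)]
print(detect_strategy(log))
```

Each log entry is examined once and updates constant-size state, which matches the O(n) processing / O(1)-per-entry detection split above.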

Extended job description

<file_out>
  <datablock min_pos_absolute="0" max_pos_absolute=" "
             min_pos_relative="0" max_pos_relative=" "
             step="5000" size="5000" />
  <timing op_time="0" op_mips="0"
          avg_op_time=" " avg_op_mips=" " />
</file_out>
<file_in>
  <datablock min_pos_absolute="0" max_pos_absolute="905001"
             min_pos_relative="0" max_pos_relative=" "
             step="4999" size="4999" />
  <datablock min_pos_absolute="910000" max_pos_absolute=" "
             min_pos_relative=" " max_pos_relative=" "
             step="4999" size="4999" />
  <timing op_time="0" op_mips="0"
          avg_op_time=" " avg_op_mips=" " />
</file_in>
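Such a description can be consumed programmatically, for example to compute how many bytes of a file the job will actually touch, which is what makes filtered transfer possible. A sketch, using made-up attribute values for a fully filled-in <file_in> fragment:

```python
import xml.etree.ElementTree as ET

# Hypothetical, fully filled-in variant of a <file_in> description;
# all attribute values here are invented for illustration.
desc = """
<file_in>
  <datablock min_pos_absolute="0" max_pos_absolute="905001"
             min_pos_relative="0" max_pos_relative="4999"
             step="4999" size="4999"/>
  <datablock min_pos_absolute="910000" max_pos_absolute="1000000"
             min_pos_relative="4999" max_pos_relative="9998"
             step="4999" size="4999"/>
</file_in>
"""

root = ET.fromstring(desc)
touched = 0                      # total bytes the job will read from this file
for db in root.iter("datablock"):
    lo = int(db.get("min_pos_absolute"))
    hi = int(db.get("max_pos_absolute"))
    step = int(db.get("step"))
    size = int(db.get("size"))
    blocks = (hi - lo) // step + 1   # number of accessed blocks in this range
    touched += blocks * size
print(touched)
```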

Scheduling strategies
- Scheduler:
  - Chooses the Computing Element for the job execution
  - Issues replication commands (for the Replica Manager)
- Assumptions:
  - A job (single thread) utilizes 100% of a CPU
  - Files are opened at the beginning of the execution and closed when the job terminates
  - Preceding jobs are finished -> input files can be transferred

Scheduling
- Based on the job description and the current resource consumption in the grid
(Diagram: the Scheduler, informed by the job description and the GIS, issues replicate commands to the Replica Manager and schedules the job onto CE1 or CE2; SE1 holds file1 and file2, SE2 holds file3, SE3 holds file4; the links range from 10 Mbit to 1 Gbit)

Scheduling – static data feeder
- Based on the job description and the performance of CEs, estimate the execution time
- Output: list of CEs + commands for the Replica Manager
- Job execution:
  1. input files to SEs (download)
  2. run the job
  3. outputs to the destination (upload)
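The CE selection can be sketched as a simple cost model over the three steps above. The formula (transfer time plus compute time from the CE's MIPS rating) and all the numbers are illustrative assumptions, not the scheduler's exact estimator:

```python
def estimate_time(ce, job):
    """Crude completion-time estimate for one Computing Element:
    input transfer + CPU work / CE speed + output transfer."""
    t_in = job["input_bytes"] / ce["bandwidth_Bps"]     # 1: download inputs
    t_cpu = job["mega_instructions"] / ce["mips"]       # 2: run the job
    t_out = job["output_bytes"] / ce["bandwidth_Bps"]   # 3: upload outputs
    return t_in + t_cpu + t_out

# hypothetical job and CEs (a fast CPU on a slow link vs. the reverse)
job = {"input_bytes": 300e6, "mega_instructions": 5e6, "output_bytes": 10e6}
ces = [
    {"name": "CE1", "mips": 2000, "bandwidth_Bps": 100e6 / 8},  # 100 Mbit link
    {"name": "CE2", "mips": 1000, "bandwidth_Bps": 1e9 / 8},    # 1 Gbit link
]
ranked = sorted(ces, key=lambda ce: estimate_time(ce, job))
print([ce["name"] for ce in ranked])
```

With these numbers the CPU term dominates, so the faster CE wins despite its slower link; with a larger input file the ranking could flip, which is exactly why the estimate combines both terms.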

Scheduling – agent integrated
- FileAccessAgent:
  - transfers files among Storage and Computing Elements
  - source agent: on a SE
  - destination agent: collects the necessary files
  - replicating files of a CE node is possible
  - filtered data transfer (copy relevant file parts)
  - takes into account the status of multiple jobs
- JobManagementAgent:
  - coordinates the destination FileAccessAgent
  - starts the FAA
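The filtered data transfer idea, copying only the file parts the job will read as given by the datablock entries, can be sketched as follows; the function name, file names, and datablock values are hypothetical:

```python
def filtered_copy(src, dst, datablocks):
    """Copy only the byte ranges described by (min_pos, max_pos, step, size)
    datablock entries, instead of the whole file."""
    with open(src, "rb") as fin, open(dst, "wb") as fout:
        for lo, hi, step, size in datablocks:
            pos = lo
            while pos <= hi:
                fin.seek(pos)
                fout.seek(pos)            # keep original offsets in the copy
                fout.write(fin.read(size))
                pos += step

# a 1 MB source file of which the job only reads the first two 5000-byte blocks
with open("src.dat", "wb") as f:
    f.write(bytes(range(256)) * 4096)
filtered_copy("src.dat", "dst.dat", [(0, 9000, 5000, 5000)])
```

The destination receives only the touched ranges, so a job that reads a small slice of a 300 MB input triggers a correspondingly small transfer.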

Scheduling – agent integrated
- Static DF + FileAccessAgent:
  1. the user submits the job
  2. find the target CE using the job description
  3. if the CE does not provide enough disk space, select the best SE to which input files should be mirrored
  4. a FileAccessAgent is sent to the source SE and the destination node
  5. the FileAccessAgent of the job copies the input files (to the CE, or the SE if necessary/possible)
  6. the job is executed
  7. the FileAccessAgent copies the output files to the destination node; the next job can be started

Agent integrated – Condor-G
- Java Agent DEvelopment Framework (JADE):
  - Java-based
  - can be distributed across machines
  - FIPA communication model (FIPA is an IEEE Computer Society standards organization that promotes agent-based technology and the interoperability of its standards with other technologies)
  - support for HTTP-based transport
  - agents are submitted as an input file of a shell script

Agent integrated – Condor-G

universe = vanilla
executable = runjade
output = jadesend.out
error = jadesend.err
log = jadesend.log
arguments = -host n01 -container a1:A1
transfer_input_files = agent.jar,jade.jar,jadeTools.jar,iiop.jar,commons-codec-1.3.jar
WhenTOTransferOutput = ON_EXIT
requirements = (machine == "n02")
queue

Simulation
- Extended OptorSim v2.0; CE configurations were extended with MIPS values
- EDG topology
- static data feeder and agent integrated scheduler
- single source shortest path searching, 300 MB input file
- 4 simulation groups:
  - two different job descriptions
  - 1/6 and 4/6 of the jobs have behavior description
  - 100, 500, 1000 jobs in a group

Simulation

Conclusions and future work
- Optimization based on extended job description
- Static and dynamic data feeder strategies; implementation: Hungarian ClusterGrid
- Monitoring and analysis will be unified and implemented in the Grid middleware
- Refined job descriptions
- Communication patterns for parallel applications
- More thorough analysis of the scheduling methods:
  - What happens when things go wrong?
  - How should it be handled?