
For Michel Hello, you can find three parts in this ppt; feel free to take what you need.
- The tool built by GRAAL for the Grid'5000 community: GRUDU
- The Cosmo experiments
- The CERFACS experiments
I kept the selection broad so that you can also see the context. If you have any questions… GRUDU part: COSMO part: CERFACS part:

GRUDU PART

DIET Resources Tool Manages the grid resources used by the application. Currently used only for the Grid'5000 platform; provides several operations that facilitate access to this platform. Main goals:
- Displaying the status of the platform (grid/site/job level)
- Resource allocation through OAR (v1 & v2 are supported)
- Resource monitoring through Ganglia (site/job nodes)
- Deployment management with a GUI for KaDeploy (multiple sites at a time)
- A terminal emulator (connection to the access frontend, a site frontend, or the main node of a job)
- A file transfer manager (local/remote transfers and synchronization features)

GRUDU: Grid'5000 Reservation Utility for Deployment Usage. Web:

GRUDU – Resources Allocation Reservation of resources (OAR1 & OAR2):
- Time parameters: date and reservation walltime
- Queue
- oargridsub behaviour / script to launch
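Under the hood, these reservations go through the standard OAR command-line tools. The sketch below shows, in hedged form, what an equivalent advance reservation could look like when wrapped in Python; the frontend host name, dates and sizes are placeholders, and the flags shown (-r reservation date, -l resource request, -q queue) reflect common OAR usage and may differ between OAR v1 and v2.

```python
import subprocess

def reserve_nodes(site_frontend, nodes, walltime, start, queue="default"):
    """Sketch of an advance reservation similar to what GRUDU issues via OAR.

    Assumes SSH access to the site frontend and oarsub-style flags
    (-r reservation date, -l resource request, -q queue); details may
    differ between OAR v1 and v2.
    """
    oar_cmd = (
        f"oarsub -q {queue} "
        f"-l nodes={nodes},walltime={walltime} "
        f"-r '{start}'"
    )
    # Run the reservation on the site frontend over SSH (host name is a placeholder).
    result = subprocess.run(
        ["ssh", site_frontend, oar_cmd],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

if __name__ == "__main__":
    # Hypothetical example: 4 nodes for 2 hours on the Lyon frontend.
    print(reserve_nodes("frontend.lyon.grid5000.fr", 4, "2:00:00",
                        "2008-06-01 20:00:00"))
```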

GRUDU – Monitoring Monitoring of the status of the grid, of a site, or of a job; instantaneous and historical data are collected through Ganglia.
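For the instantaneous data, Ganglia's gmond daemon answers a plain TCP connection on its default port (8649) with an XML dump of the cluster metrics. The sketch below shows one way such data could be read outside of GRUDU; the host name and metric name are placeholders.

```python
import socket
import xml.etree.ElementTree as ET

def fetch_ganglia_metric(host, port=8649, metric="load_one"):
    """Read the XML dump returned by gmond on connection and extract one metric per host.

    Sketch only: assumes gmond is reachable on its default TCP port and that
    the requested metric name exists in the cluster configuration.
    """
    chunks = []
    with socket.create_connection((host, port), timeout=10) as sock:
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    tree = ET.fromstring(b"".join(chunks))
    values = {}
    for host_el in tree.iter("HOST"):
        for metric_el in host_el.iter("METRIC"):
            if metric_el.get("NAME") == metric:
                values[host_el.get("NAME")] = metric_el.get("VAL")
    return values

if __name__ == "__main__":
    # Placeholder host: any node of a reserved job running gmond.
    print(fetch_ganglia_metric("node-1.lyon.grid5000.fr"))
```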

GRUDU – KaDeploy/JFTP GUI for KaDeploy job deployments; file transfer interface (local/remote copies and rsync on Grid'5000).
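On the command line, the same deployment step goes through KaDeploy. A hedged sketch of the equivalent call, wrapped in Python, is shown below; the node file, environment name and kadeploy3 flags (-f node file, -e registered environment) are assumptions reflecting common usage and may differ with the KaDeploy version installed on Grid'5000.

```python
import subprocess

def deploy_image(site_frontend, nodefile, environment):
    """Sketch of a KaDeploy run similar to what GRUDU triggers on one site.

    Assumes kadeploy3-style flags (-f node file, -e environment name);
    adapt to the KaDeploy version installed on the target site.
    """
    cmd = f"kadeploy3 -f {nodefile} -e {environment}"
    subprocess.run(["ssh", site_frontend, cmd], check=True)

if __name__ == "__main__":
    # Placeholders: a node file produced by OAR and a registered environment name.
    deploy_image("frontend.lyon.grid5000.fr", "~/nodes.txt", "diet-experiment-image")
```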

COSMOLOGY PART

Large scale experiment: the DIET/Ramses case Validation of the DIET architecture at large scale, over the different administrative domains of Grid'5000, in the framework of the LEGO project (ANR CICG05-11).
- Goal: launch as many Ramses executions as possible (Ramses is a grid-based hydro solver developed at DAPNIA/CEA for cosmological simulations)
- Stress DIET over a large number of machines and over a long period of time
- But also stress Grid'5000
- KaDeploy image with DIET and all the mandatory tools
- 12 clusters on 7 sites: 979 machines for 48 hours
- 1 MA, 12 LA, 29 SeDs
- 1824 processors dedicated to Ramses

Large scale experiment on Grid'5000:
- Requests submitted via DIET
- 1824 processors dedicated to Ramses
- 59 simulations (33 complete, 26 partial), equivalent to 368 days on 1 processor
- GalaxyMaker & MoMaF: web interface for the submission of parameter sweep jobs
- Workload modeling for scheduling predictions
- Workflow / data management
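A parameter sweep like the one submitted through the GalaxyMaker & MoMaF web interface amounts to generating one independent job per combination of parameter values. The sketch below illustrates that idea only; the parameter names and values are invented, the real ones come from the web interface.

```python
import itertools

# Hypothetical parameter ranges for a GalaxyMaker-style sweep;
# the real parameter names and values come from the web interface.
PARAMETER_SPACE = {
    "star_formation_efficiency": [0.01, 0.02, 0.05],
    "feedback_efficiency": [0.1, 0.2],
    "random_seed": [1, 2, 3],
}

def generate_jobs(space):
    """Yield one job description per combination of parameter values."""
    names = sorted(space)
    for values in itertools.product(*(space[n] for n in names)):
        yield dict(zip(names, values))

if __name__ == "__main__":
    jobs = list(generate_jobs(PARAMETER_SPACE))
    print(f"{len(jobs)} independent jobs to submit through DIET")
    print(jobs[0])
```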

Workflow

GalaxyMaker execution time model

GalaxyMaker output size model

MoMaF execution time model
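The three slides above only give the titles of the GalaxyMaker and MoMaF models; as a purely illustrative sketch of the kind of workload model used for scheduling predictions, the code below fits an affine execution time model to invented measurements with a least-squares fit. Neither the numbers nor the model form are taken from the slides.

```python
import numpy as np

# Invented past measurements: input size (e.g. number of halos) vs. runtime in seconds.
sizes = np.array([1e4, 5e4, 1e5, 5e5, 1e6])
runtimes = np.array([12.0, 55.0, 110.0, 540.0, 1100.0])

# Fit a simple affine model runtime ~ a * size + b by least squares.
a, b = np.polyfit(sizes, runtimes, deg=1)

def predict_runtime(size):
    """Predicted execution time for a run of the given input size (sketch only)."""
    return a * size + b

if __name__ == "__main__":
    print(f"model: t(n) = {a:.2e} * n + {b:.1f}")
    print(f"predicted runtime for n=2e5: {predict_runtime(2e5):.0f} s")
```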

Large scale experiment: the DIET/Ramses case Use of the DIET DashBoard:  20 seconds for the reservation of 979 nodes  25 minutes for the deployment with KaDeploy  23 seconds for the deployment of the DIET platform Main difficulties:  Disk space on NFS storage  OmniORB not available on Itanium2  Sites not available for deployment

Conclusion
- DIET is a grid middleware designed for scheduling application tasks with a hierarchical architecture
- The DIET DashBoard provides DIET users with a full-featured framework for experiments and an easy way to manage Grid'5000
- The DIET Resources Tool provides the Grid'5000 community with a powerful tool dedicated to interaction with the grid: monitoring, reservation, deployment, etc.
- The DIET Resources Tool also exists as a standalone version, known as GRUDU, dedicated to the Grid'5000 community

Future Work
- Web-based version of the DIET DashBoard, used on the Decrypthon project: the WebBoard
- GUI for designing client/server applications
- DIET data management interface
- Support for other batch schedulers (such as LoadLeveler or SGE)
- Plugin-based architecture

CLIMATE PART

Introduction - Context Climate evolution and the global warming effect raise two problems:
- Long-term evolution (needs super-computers)
- Climate model parametrization (needs numerous simulations)

Introduction - Motivations The project aims to study the parametrization sensitivity of a climate model. A better understanding of the parametrization will provide better simulations, and once good parameters have been found, it will become possible to simulate the climate further into the future. This requires numerous independent simulations; the focus of this talk is minimizing the execution time of these independent simulations.

Ocean-Atmosphere scenarios Climate simulation over the 21st century.
- An experiment is composed of several scenarios
- A scenario is a chain of 1800 monthly simulations (150 years)
- The input of the (n+1)-th monthly simulation is the output of the n-th one
- The scenarios are independent of each other
[Diagram: a scenario as a chain Month 1 → Month 2 → … → Month 1799 → Month 1800]
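The dependency structure of one scenario is a plain chain, while the scenarios themselves can run in parallel. A minimal sketch of that structure is given below; run_month is a hypothetical stand-in for one coupled Ocean-Atmosphere monthly simulation.

```python
def run_month(month_index, previous_restart):
    """Hypothetical stand-in for one monthly Ocean-Atmosphere simulation.

    Takes the restart files produced by the previous month and returns
    the restart files it produces for the next one.
    """
    return f"restart_{month_index:04d}"

def run_scenario(n_months=1800, initial_restart="restart_0000"):
    """Run one scenario: a strict chain of monthly simulations (150 years)."""
    restart = initial_restart
    for month in range(1, n_months + 1):
        restart = run_month(month, restart)  # month n+1 depends on month n
    return restart

if __name__ == "__main__":
    # Scenarios are independent of each other, so several run_scenario()
    # calls could be distributed over the grid and executed in parallel.
    print(run_scenario(n_months=12))
```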

Outline: Introduction, Framework, Scheduling Strategies, Experimental Results, Conclusion & Future Work

Grid'5000: a multicore architecture Because of technical limitations, no more than one scenario can be executed on a single node. All nodes on Grid'5000 are bi-core or quad-core, which adds a new constraint: the size of a group has to be divisible by the number of cores per node of the cluster. Making groups of 12 processors is one possibility to reduce the loss. Loss due to this technical constraint:
- Few resources: loss between 1% and 13%
- More resources: loss between 1% and 5%
- Lots of resources: no more loss
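The loss quoted above comes from rounding a group size to a multiple of the number of cores per node, which leaves some cores idle. The sketch below makes that calculation explicit with invented group sizes; the real figures depend on the group sizes chosen by the scheduler.

```python
def idle_core_loss(desired_group_size, cores_per_node):
    """Fraction of processors lost when the group size is rounded down
    to a multiple of the cores per node (invented sizes, sketch only)."""
    usable = (desired_group_size // cores_per_node) * cores_per_node
    if usable == 0:
        return 1.0
    return (desired_group_size - usable) / desired_group_size

if __name__ == "__main__":
    for size in (26, 50, 121, 1000):  # few resources -> lots of resources
        print(f"group of {size:4d} processors on quad-core nodes: "
              f"{idle_core_loss(size, 4):.1%} lost")
```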

Simulations vs Experiments Accuracy of the simulations on 7 experiments:
- Bad with all post-processing tasks at the end (20.8% difference)
- Good if only the main tasks are considered (6.3% difference)
- Keeping one resource to execute the post-processing tasks during the experiment suppresses the simulation inaccuracy
A positive difference means the real execution was slower than expected.

Outline: Introduction, Framework, Scheduling Strategies, Experimental Results, Conclusion & Future Work

Conclusion Improved performance of a climate prediction application:
- Modeling of the application
- Proof of usage of Grid'5000 and DIET
- Scheduling on a real application
Scheduling is done at two levels:
- Groups of processors at the cluster level
- Distribution of scenarios at the grid level
The real implementation suffered from technical limitations. Simulations are quite precise, but one resource must be kept for the post-processing tasks.