Introduction to the EGEE project and to the EGEE Grid
Peter Kacsuk
Enabling Grids for E-sciencE (www.eu-egee.org)
Grid Computing School, Rio de Janeiro, 2006


Introduction to the EGEE project and to the EGEE Grid
Peter Kacsuk and Gergely Sipos, MTA SZTAKI
Grid Computing School, Rio de Janeiro, 2006

Acknowledgement
This tutorial is based on the work of many people:
– Fabrizio Gagliardi, Flavia Donno, Peter Kunszt
– Riccardo Bruno, Marc-Elian Bégin, Martin Polak
– the EDG developer team
– the EDG training team
– the NeSC training team
– the SZTAKI training team

Contents
– Introduction to EGEE-I and EGEE-II
– Introduction to the EGEE middleware model
– Evolution of the middleware
– Services in LCG
– Services in gLite

What is EGEE? (I)
EGEE (Enabling Grids for E-sciencE) is a seamless Grid infrastructure for the support of scientific research, which:
– integrates current national, regional and thematic Grid efforts, especially in HEP (High Energy Physics)
– provides researchers in academia and industry with round-the-clock access to major computing resources, independent of geographic location
[Figure: applications running on top of the Grid infrastructure, which in turn builds on the Géant network]

What is EGEE? (II)
Main features of the EGEE-I project:
– 70 leading institutions in 27 countries, federated in regional Grids
– 32 M euros of EU funding (2004–05), O(100 M) total budget
– aiming for a combined capacity of over 20,000 CPUs (the largest international Grid infrastructure ever assembled)
– ~300 dedicated staff

EGEE Community

EGEE-II
EGEE-II proposal submitted to the EU:
– submitted on 8 September 2005
– started 1 April 2006
Natural continuation of EGEE:
– Emphasis on providing an infrastructure for e-Science: increased support for applications, an increasingly multidisciplinary Grid infrastructure, and more involvement from industry
– Expanded consortium: > 90 partners in 32 countries (non-European partners in the USA, Korea and Taiwan)
– Related projects: a world-wide Grid infrastructure and increased international collaboration

EGEE Infrastructure Scale
– > 180 sites in 39 countries
– CPUs
– > 5 PB storage
– concurrent jobs per day
– > 60 Virtual Organisations
[Map: countries participating in EGEE]

EGEE Applications
> 20 applications from 7 domains:
– High Energy Physics
– Biomedicine
– Earth Sciences
– Computational Chemistry
– Astronomy
– Geophysics
– Financial Simulation
Further applications are under evaluation; applications are now moving from testing to routine, daily usage.

Related projects

Middleware in EGEE
Foundation Grid Middleware:
– security model and infrastructure
– Computing (CE) and Storage Elements (SE)
– accounting
– information providers and monitoring
– application independent
– evaluates and adheres to new standards
– emphasis on robustness/stability over new functionality
– deployed as a software distribution by grid operations
Higher-Level Grid Services (used by Applications):
– Workload Management
– Replica Management
– Visualization, Workflows, Grid economies, etc.
– provide specific solutions for supported applications
– host services from other projects
– change more rapidly than Foundation Grid Middleware
– deployed as application software using procedures provided by grid operations

gLite Middleware
Many VOs need to share resources through services for:
– accessing
– allocating
– monitoring
– accounting
Grid middleware is the layer between services and physical resources.
gLite: Lightweight Middleware for Grid Computing.

gLite – Background
Other Grid projects: Global Grid Forum (GGF), Open Grid Services Architecture (OGSA), EU DataGrid, AliEn, Globus, Condor, ...
[Timeline: LCG-1 → LCG-2 (Globus 2 based) → gLite-1 → gLite-3 (Web services based)]

LCG-2
To understand the evolution of gLite, let's start with LCG and its terminology.

What is the LHC Grid (LCG)?
– LHC stands for Large Hadron Collider, built by CERN
– The LHC will start operation in 2007, with many experiments collecting 5–6 PetaBytes of data per year
– The LHC Grid was built by CERN to provide the storage and computing capacity needed to process this huge data set
– The current version of the LHC Grid is called LCG-2
– It was built on software developed by the European DataGrid project and by the US GriPhyN project
– LCG-2 was the first EGEE infrastructure


The LCG-2 Architecture
– Local Computing: Local Application, Local Database
– Grid Application Layer: Data Management, Job Management, Metadata Management
– Collective Services: Information & Monitoring, Replica Manager, Grid Scheduler
– Underlying Grid Services: Computing Element services, Storage Element services, Authorization, Authentication & Accounting, Replica Catalog, Database Services, Logging & Bookkeeping
– Fabric services: Configuration Management, Node Installation & Management, Monitoring and Fault Tolerance, Resource Management, Fabric Storage Management

Main Logical Machine Types (Services) in LCG-2
– User Interface (UI)
– Information Service (IS)
– Computing Element (CE): Frontend Node, Worker Nodes (WN)
– Storage Element (SE)
– Replica Catalog (RC, RLS)
– Resource Broker (RB)

User Interface
The initial point of access to the LCG-2 Grid is the User Interface. This is a machine where:
– LCG users have a personal account
– the user's certificate is installed
The UI is the gateway to Grid services. It provides a Command Line Interface to perform the following basic Grid operations:
– list all the resources suitable to execute a given job
– replicate and copy files
– submit a job for execution on a Computing Element
– show the status of one or more submitted jobs
– retrieve the output of one or more finished jobs
– cancel one or more jobs
One or more UIs are available at each site that is part of the LCG-2 Grid.


Computing Element
A CE consists of homogeneous worker nodes. The Computing Element is the entry point into a queue of a batch system:
– the information associated with a Computing Element is limited to information relevant to the queue
– resource details relate to the underlying system
[Figure: a Grid Gate node running the gatekeeper and information service in front of a batch server and a farm of identical worker nodes (CPU: PIV, RAM: 2 GB, OS: Linux)]

Computing Element (CE)
Defined as a Grid batch queue and identified by the pair <hostname>:<port>/<batch-queue-name>. Several queues defined for the same hostname are considered different CEs. For example:
– adc0015.cern.ch:2119/jobmanager-lcgpbs-long
– adc0015.cern.ch:2119/jobmanager-lcgpbs-short
A Computing Element is built on a homogeneous farm of computing nodes (called Worker Nodes).
One node acts as a Grid Gate (GG), or front-end to the Grid, and runs:
– a Globus gatekeeper
– the Globus GRAM (Globus Resource Allocation Manager)
– the master server of a Local Resource Management System, which can be PBS, LSF or Condor
– a local Logging and Bookkeeping server
Each LCG-2 site runs at least one CE with a farm of WNs behind it.
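The CE identifier above has a fixed shape that tools can take apart mechanically. The following is an illustrative sketch, not part of LCG-2 itself; the helper name is invented:

```python
# Sketch: splitting a CE identifier of the form
# <hostname>:<port>/jobmanager-<lrms>-<queue> into its components.
# parse_ce_id is a hypothetical helper, not an LCG-2 API.

def parse_ce_id(ce_id):
    """Split an LCG-2 CE identifier into host, port, LRMS and queue."""
    hostport, jobmanager = ce_id.split("/", 1)
    host, port = hostport.split(":")
    # e.g. "jobmanager-lcgpbs-long" -> lrms "lcgpbs", queue "long"
    _, lrms, queue = jobmanager.split("-", 2)
    return {"host": host, "port": int(port), "lrms": lrms, "queue": queue}

ce = parse_ce_id("adc0015.cern.ch:2119/jobmanager-lcgpbs-long")
```

This makes the "several queues on one host are different CEs" rule concrete: the two example identifiers differ only in the final `queue` component.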


Storage Element (SE)
A Storage Element (SE) provides uniform access and services to large storage spaces. Each site includes at least one SE.
SEs use two protocols:
– GSIFTP for file transfer
– Remote File Input/Output (RFIO) for file access

Storage Resource Management (SRM)
Data are stored on disk pool servers or Mass Storage Systems. Storage resource management needs to take into account:
– transparent access to files (migration to/from the disk pool)
– space reservation
– file status notification
– lifetime management
The SRM (Storage Resource Manager) takes care of all these details. It is a Grid service that handles local storage interaction and provides a Grid interface to the outside world.
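The kind of bookkeeping an SRM front-end performs can be sketched in a few lines. This is a toy model under invented names, not the real SRM interface:

```python
# Toy model of SRM-style bookkeeping: space reservation and file
# lifetime management. Class and method names are illustrative only.

import time

class ToySRM:
    def __init__(self, capacity_gb):
        self.capacity_gb = capacity_gb
        self.reserved_gb = 0
        self.files = {}  # file name -> expiry timestamp

    def reserve_space(self, gb):
        """Reserve disk-pool space; refuse if the pool would overflow."""
        if self.reserved_gb + gb > self.capacity_gb:
            return False
        self.reserved_gb += gb
        return True

    def put(self, name, lifetime_s):
        """Register a file with a finite lifetime."""
        self.files[name] = time.time() + lifetime_s

    def is_alive(self, name):
        """File status notification: has the lifetime expired?"""
        return name in self.files and time.time() < self.files[name]

srm = ToySRM(capacity_gb=100)
ok = srm.reserve_space(40)    # fits in the pool
full = srm.reserve_space(70)  # would exceed capacity, refused
srm.put("raw_event_001.dat", lifetime_s=3600)
```

The point is only that reservation, lifetime and status are tracked behind one interface, so clients never talk to the disk pool or tape system directly.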


Information System (IS)
The Information System (IS) provides information about the LCG-2 Grid resources and their status.
The current IS is based on LDAP (Lightweight Directory Access Protocol): a directory service infrastructure, i.e. a specialized database optimized for reading, browsing and searching information.
The LDAP schema used in LCG-2 implements the GLUE (Grid Laboratory for a Uniform Environment) schema.
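Because the IS is an LDAP directory, queries against it are ordinary LDAP search filters over GLUE-schema attributes. A small sketch of composing such a filter string (the helper is hypothetical; attribute names follow the GLUE 1.x naming convention but treat them as illustrative):

```python
# Sketch: building an LDAP AND-filter over GLUE-style attributes,
# e.g. "(&(objectClass=GlueCE)(GlueCEStateFreeCPUs>=1))".
# glue_filter is an invented helper, not an LCG-2 tool.

def glue_filter(object_class, **attrs):
    """Compose an LDAP filter; each kwarg value carries its operator."""
    parts = [f"(objectClass={object_class})"]
    parts += [f"({name}{op_value})" for name, op_value in attrs.items()]
    return "(&" + "".join(parts) + ")"

f = glue_filter("GlueCE", GlueCEStateFreeCPUs=">=1")
```

In practice such a filter would be passed to an LDAP client (e.g. `ldapsearch`) pointed at a GRIS, GIIS or BDII endpoint.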

Information System (IS)
The IS is a hierarchical system with three levels, from bottom up:
– GRIS (Grid Resource Information Server) level (CE and SE level)
– Grid Index Information Server (GIIS) level (site level)
– top, centralized level (Grid level)
The Globus Monitoring and Discovery Service (MDS) mechanism is used at the GRIS level; the other two levels use the Berkeley DB Information Index (BDII) mechanism.

LCG-2 Hierarchical Information System
– BDII: Berkeley DB Information Index
– GIIS: Grid Index Information Server
– GRIS: Grid Resource Information Server
– CE: Computing Element
– SE: Storage Element

How is information collected and stored?
All services are allowed to enter information into the IS.
The BDII at the top:
– queries every GIIS every 2 minutes, and
– acts as a cache, storing information about the Grid status in its LDAP database
The BDII at the GIIS level:
– collects information from every GRIS every 2 minutes, and
– acts as a cache, storing information about the site status in its LDAP database
The GRIS updates information according to the MDS protocol.
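The two caching levels described above can be modelled with one tiny class: a site-level GIIS aggregates its GRISes, and the top-level BDII aggregates the GIISes, each answering queries purely from its cache between refresh cycles. A toy sketch, not gLite code:

```python
# Toy model of the BDII/GIIS caching hierarchy. Each Cache pulls from
# its sources on refresh() (every ~2 minutes in LCG-2) and serves
# query() from the cached copy only.

class Cache:
    def __init__(self, sources):
        self.sources = sources   # callables returning fresh records
        self.records = []

    def refresh(self):
        self.records = [rec for src in self.sources for rec in src()]

    def query(self):
        return list(self.records)

# Hypothetical GRIS publishers for a CE and an SE at one site:
gris_ce = lambda: [{"service": "CE", "freeCPUs": 12}]
gris_se = lambda: [{"service": "SE", "freeTB": 3}]

site_giis = Cache([gris_ce, gris_se])   # site level
top_bdii = Cache([site_giis.query])     # Grid level, reads the GIIS cache

site_giis.refresh()
top_bdii.refresh()
```

Note that the top BDII never contacts a GRIS directly; staleness is bounded by the sum of the two refresh intervals.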

How is information obtained?
All users can browse the catalogues. To obtain the information, a client should:
– ask the BDII for possible GIIS/GRIS servers
– directly query the GIIS/GRIS
– or use the BDII cache
The IS scales to ~1000 sites (MDS to far fewer: ~100).


Data Management
In LCG, data files are replicated on a temporary basis to many different sites, depending on where the data is needed.
Users and applications do not need to know where the data is located: they use logical file names, and the Data Management services are responsible for locating and accessing the data.
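The logical-to-physical mapping a replica catalogue provides can be sketched as a lookup table from a logical file name (LFN) to its physical replicas. The LFN, hostnames and helper below are invented for the example:

```python
# Sketch: resolving a logical file name to one physical replica.
# All names here are hypothetical; a real catalogue is a service,
# not an in-memory dict.

replica_catalog = {
    "lfn:/grid/atlas/run42/events.dat": [
        "gsiftp://se01.cern.ch/data/events.dat",
        "gsiftp://se.if.ufrj.br/data/events.dat",
    ],
}

def resolve(lfn, preferred_domain=None):
    """Return one replica, preferring the caller's domain if given."""
    replicas = replica_catalog[lfn]
    if preferred_domain:
        for url in replicas:
            if preferred_domain in url:
                return url
    return replicas[0]

near = resolve("lfn:/grid/atlas/run42/events.dat", preferred_domain="ufrj.br")
any_copy = resolve("lfn:/grid/atlas/run42/events.dat")
```

This is the sense in which users "use logical file names": the application only ever sees the LFN, and replica selection happens inside the data management layer.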


Job Management
The user interacts with the Grid via a Workload Management System (WMS). The goal of the WMS is distributed scheduling and resource management in a Grid environment.
What does it allow Grid users to do?
– submit their jobs
– execute them on the "best resources" (the WMS tries to optimize the usage of resources)
– get information about their status
– retrieve their output
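The "best resources" step is a matchmaking problem: filter the CEs that satisfy the job's requirements, then rank the survivors. The real WMS uses ClassAd-style requirement and rank expressions; the following is only a sketch of the idea, with invented CE records:

```python
# Toy matchmaking: keep resources matching all job requirements,
# then rank by free CPUs. CE names and fields are hypothetical.

ces = [
    {"id": "ce1.example.org", "os": "Linux",   "free_cpus": 4},
    {"id": "ce2.example.org", "os": "Linux",   "free_cpus": 40},
    {"id": "ce3.example.org", "os": "Solaris", "free_cpus": 90},
]

def match(job_requirements, resources):
    """Return the id of the best-ranked resource meeting all requirements."""
    candidates = [ce for ce in resources
                  if all(ce.get(k) == v for k, v in job_requirements.items())]
    return max(candidates, key=lambda ce: ce["free_cpus"])["id"]

best = match({"os": "Linux"}, ces)
```

Here ce3 has the most free CPUs but fails the OS requirement, so ce2 wins: requirements filter first, rank decides second.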


gLite – SOA
Functions are exposed as services with well-defined, self-contained, independent, message-based interfaces.
Messaging: services interact by messages over a common messaging infrastructure.
– SOAP (Web Services): the standard protocol for messaging among services
– WSDL: a language that exposes the service interface
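To make "service interaction by messages" concrete, here is a minimal SOAP 1.1 envelope built with the Python standard library. The operation name and namespace are invented for the example and do not correspond to any actual gLite service:

```python
# Sketch: constructing a SOAP 1.1 envelope. The operation
# ("jobSubmit") and body namespace are hypothetical.

import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def soap_envelope(operation, body_ns, **params):
    env = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(env, f"{{{SOAP_NS}}}Body")
    op = ET.SubElement(body, f"{{{body_ns}}}{operation}")
    for name, value in params.items():
        ET.SubElement(op, name).text = str(value)
    return ET.tostring(env, encoding="unicode")

msg = soap_envelope("jobSubmit", "urn:example:wms", jdlFile="hello.jdl")
```

A WSDL document would describe exactly which operations and parameter types a service's envelope may carry; the envelope itself is just structured XML like the above.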

gLite 3.0 – Service Decomposition
Five groups of high-level services, plus CLI & API.
[Figure legend: services marked "Available" or "Available Soon"]

gLite – Security Services

gLite – Security Services
Authentication: identifying entities (users, systems and services) when establishing a context for message exchange ("Who are you?").
Aim: provide a credential with a universal value that works for many purposes across many infrastructures, communities, VOs and projects.
gLite uses:
– the PKI (X.509) infrastructure, with CAs as trusted third parties
– MyProxy, extended by VOMS
Trust domain: the set of all EGEE CAs is our trust domain.
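The "trust domain" idea reduces to a membership test: a credential is acceptable only if its issuing CA belongs to the agreed set of EGEE CAs. A toy illustration with invented distinguished names; real gLite authentication of course verifies X.509 signatures and chains, not just issuer names:

```python
# Toy trust-domain check. DN strings and field names are hypothetical;
# this does not perform any cryptographic verification.

TRUSTED_CAS = {"/C=CH/O=CERN/CN=CERN CA", "/C=HU/O=NIIF/CN=NIIF CA"}

def accept(credential):
    """Accept only credentials issued inside the trust domain and
    still holding some remaining lifetime (in seconds)."""
    return credential["issuer"] in TRUSTED_CAS and credential["lifetime_s"] > 0

proxy = {"subject": "/C=HU/O=NIIF/CN=Grid User/CN=proxy",
         "issuer": "/C=HU/O=NIIF/CN=NIIF CA",
         "lifetime_s": 12 * 3600}
outsider = dict(proxy, issuer="/CN=Unknown CA")
```

Because every site trusts the same CA set, a single short-lived proxy credential works across the whole infrastructure, which is the "universal value" aim stated above.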

gLite – Information and Monitoring Services
Information services are vital low-level components of Grids.

gLite – Information and Monitoring Services
R-GMA: provides a uniform method to access and publish distributed information and monitoring data
– used for job and infrastructure monitoring in gLite 3.0
– work is ongoing to add authorization
Service Discovery:
– provides a standard set of methods for locating Grid services
– currently supports R-GMA, BDII and XML files as backends
– will add a local cache of information
– used by some DM and WMS components in gLite 3.0

gLite – Job Management Services

gLite – Job Management Services
Computing Element: the service that represents a computing resource and is responsible for job management (submission, control, etc.).
The CE may be used by:
– a generic client: an end user interacting directly with the Computing Element, or
– the Workload Manager, which submits a given job to an appropriate CE found by a matchmaking process
Two job submission models (according to user requests and site policies):
– PUSH (eager scheduling): jobs are pushed to the CE
– PULL (lazy scheduling): jobs come from the WMS when the CE has free slots
The CE must also provide information describing itself, and is responsible for collecting accounting information.
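The difference between the two submission models is who initiates the transfer of a job. A purely illustrative sketch with toy queues, not gLite code:

```python
# Toy PUSH vs PULL scheduling. In PUSH the WMS sends the next job to a
# chosen CE; in PULL the CE asks for work while it has free slots.

from collections import deque

wms_queue = deque(["job1", "job2", "job3"])  # jobs waiting at the WMS

def push(ce_jobs):
    """Eager scheduling: WMS pushes the next job to the CE."""
    ce_jobs.append(wms_queue.popleft())

def pull(ce_jobs, free_slots):
    """Lazy scheduling: CE pulls jobs only while it has free slots."""
    while free_slots > len(ce_jobs) and wms_queue:
        ce_jobs.append(wms_queue.popleft())

ce_a, ce_b = [], []
push(ce_a)                # WMS decides: ce_a receives job1
pull(ce_b, free_slots=2)  # ce_b asks for work: takes job2 and job3
```

In PUSH the scheduling decision lives in the WMS (matchmaking chooses the CE); in PULL the decision is driven by the CE's own free capacity, which matches site policies that want local control over load.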

gLite – Job Management Services
Workload Management: the WMS is a set of middleware components responsible for the distribution and management of jobs across Grid resources.
There are two core components of the WMS:
– WM: accepts and satisfies requests for job management. Matchmaking is the process of assigning the best available resource.
– L&B: keeps track of job execution in terms of events (Submitted, Running, Done, ...)
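The L&B view of a job is simply an append-only event log, with the current state being the most recent event. A toy sketch (class and job identifier invented; event names follow the slide):

```python
# Toy Logging & Bookkeeping: per-job event logs, where the job's
# current status is the last recorded event.

class ToyLB:
    def __init__(self):
        self.events = {}  # job id -> list of event names

    def log(self, job_id, event):
        self.events.setdefault(job_id, []).append(event)

    def status(self, job_id):
        """Current state = most recent event."""
        return self.events[job_id][-1]

lb = ToyLB()
job = "https://lb.example.org/job/42"   # hypothetical job identifier
for event in ("Submitted", "Running", "Done"):
    lb.log(job, event)
```

Keeping the full event history rather than a single status field is what later enables provenance queries ("when did this job start running, and where?").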

gLite – Job Management Services
Workload Management:
– The WMS helps the user access computing resources: resource brokering, management of job input/output, ...
– LCG-RB: GT2 + Condor-G; to be replaced when the gLite WMS proves its reliability
– gLite WMS: Web service (WMProxy) + Condor-G
  – management of complex workflows (DAGs) and compound jobs: bulk submission and shared input sandboxes; support for input files on different servers (scattered sandboxes)
  – support for shallow resubmission of jobs
  – collection of information from BDII, R-GMA and CEMon
  – support for parallel jobs (MPI) when the home directory is not shared
  – deployed for the first time with gLite 3.0

gLite – Job Management Services
Accounting:
– accumulates information about resource usage by users or groups of users (VOs)
– information on Grid services/resources needs sensors (resource metering, a metering abstraction layer, usage records)
– records are collected by the accounting system (queries by users, groups, or resource)
– Grid services should register themselves with a pricing service when accounting is used for billing purposes
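The query side of accounting amounts to aggregating per-job usage records by user, group (VO), or resource. A sketch with invented record fields, not the actual gLite usage-record format:

```python
# Toy accounting aggregation: sum CPU hours grouped by any record
# field. Records and field names are hypothetical examples.

records = [
    {"user": "alice", "vo": "atlas", "ce": "ce1", "cpu_hours": 10},
    {"user": "bob",   "vo": "atlas", "ce": "ce2", "cpu_hours": 5},
    {"user": "carol", "vo": "cms",   "ce": "ce1", "cpu_hours": 7},
]

def usage_by(key):
    """Aggregate CPU hours by an arbitrary record field."""
    totals = {}
    for r in records:
        totals[r[key]] = totals.get(r[key], 0) + r["cpu_hours"]
    return totals

by_vo = usage_by("vo")   # per-group totals
by_ce = usage_by("ce")   # per-resource totals
```

The same record set answers all three query shapes mentioned on the slide (users, groups, resources); only the grouping key changes.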

gLite – Job Management Services
Job Provenance
Logging and Bookkeeping service:
– tracks jobs during their lifetime (in terms of events)
– LBProxy for fast access
– L&B API and CLI to query jobs
– support for "CE reputability ranking": maintains recent statistics of job failures at CEs and feeds them back to the WMS to aid planning
Job Provenance: stores long-term job information
– supports job rerun
– if deployed, will also help unload the L&B
– not yet certified in gLite 3.0

gLite – Data Services

Summary
EGEE-II will continue to build a large Grid infrastructure.
LCG-2 and gLite 3.0 are complete middleware stacks:
– security infrastructure
– information system and monitoring
– workload management
– data management
gLite is available on the production infrastructure. Development is continuing to provide increased robustness, usability and functionality.