DIRAC in LHCb and beyond  Philippe Charpentier (LHCb-CERN)  Slides courtesy of A. Tsaregorodtsev  BigPanda Workshop, October 21, 2013

Plan 2  Motivation and brief history of the project  DIRAC grid middleware  Resources available to DIRAC users  Communities using DIRAC  DIRAC as a Service  Conclusions

Massive computations: HEP 3  LHC experiments pioneered the massive use of computational grids  10s of PBytes of data per year  100s of thousands of CPUs in 100s of centers  100s of users from 100s of institutions  CERN Director General Rolf Heuer on the Higgs discovery: “It was a global effort and it is a global success. The results today are only possible because of the extraordinary performance of the accelerators, including the infrastructure, the experiments, and the Grid computing.”  Other domains are catching up quickly with the HEP experiments  Life sciences, earth sciences, astrophysics, social sciences, etc.

Massive computations 4  Large HEP experiments have dedicated teams of experts to build their computing systems  Largely relying on dedicated grid resources  The level of computing expertise in other scientific domains is generally lower  Grouped around well-known applications and scientific portals  Developing new applications to run on the grid is still difficult  Need for convenient tools for small research groups with no local grid gurus  The experience of the HEP experiment developers can be very useful for non-HEP users

User problems on the grid 5  Complicated interfaces  Especially for non-computing experts  Frustration with failing resources and middleware  Why did my jobs work yesterday but not today?  Small communities find it difficult to organize collective work  Lack of expertise in high-level computing tasks  Massive jobs, massive data movement, etc.  Difficult to build custom services to orchestrate the execution of particular applications  Example: workflow managers  Small communities tend to become larger with time

Large community issues 6  Large user communities (Virtual Organizations) have specific problems  Dealing with heterogeneous resources  Various computing clusters, grids, etc.  Dealing with intra-community workload management  User group quotas and priorities  Priorities of different activities  Dealing with a variety of applications  Massive data productions  Individual user applications, etc.  HEP experiments are typical examples

DIRAC Grid Solution  The LHC experiments all developed their own middleware to address the above problems  PanDA, AliEn, glideinWMS, PhEDEx, …  DIRAC was originally developed for the LHCb experiment with the following goals:  Integrate all the heterogeneous computing resources available to the community  Provide a solution for both WMS and DMS tasks  Minimize human intervention at the sites providing resources  Make the grid convenient for the users:  Simpler, intuitive interfaces  Fault tolerance, quicker turnaround of user jobs  Enabling community policies 7

Towards general purpose middleware 8  The experience collected with the production grid system of a large HEP experiment is very valuable  Several new experiments expressed interest in using this software, relying on its utility proven in practice  In 2009 the core DIRAC development team decided to generalize the software to make it suitable for any user community:  Separate LHCb-specific functionality into a set of extensions to the generic core libraries  Introduce new services to make it a complete solution  Support multiple small groups with a single DIRAC installation  General refurbishing of the code, code management, deployment, documentation, etc.  The results of this work are presented in the following

DIRAC Workload Management 9

[Architecture diagram, slide 10: jobs from the Physicist User and the Production Manager reach the central Matcher Service; dedicated Pilot Directors (EGEE, NDG, EELA, CREAM) submit pilots to the EGI/WLCG grid, NDG grid, GISELA grid and CREAM CEs]

WMS: applying VO policies  In DIRAC both User and Production jobs are treated by the same WMS  Same Task Queue  This allows policies to be applied efficiently for the whole VO  Assigning job priorities to different groups and activities  Static group priorities are used currently  More powerful schedulers can be plugged in ● demonstrated with the MAUI scheduler  Applying VO policies in the central Task Queue dictates the use of multi-user Pilot Agents  It is not known a priori whose job has the highest priority at the moment of user job matching  DIRAC fully supports this mode of operation  Multi-user Pilot Jobs are submitted with a special “pilot” VOMS role  Using glexec on the WNs to track the identity of the payload owner 11
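For illustration, static group shares of this kind live in the DIRAC Configuration Service. A minimal sketch in CS syntax follows; the section layout follows DIRAC conventions, but the group names and share values are invented for the example and should be checked against the actual schema:

    # Sketch of static group priorities in DIRAC Configuration Service syntax.
    # Group names and values are illustrative, not from a real installation.
    Registry
    {
      Groups
      {
        lhcb_prod
        {
          JobShare = 500   # production activities get the larger share
        }
        lhcb_user
        {
          JobShare = 100   # individual analysis jobs
        }
      }
    }

The central Task Queue then hands the highest-priority waiting job, whatever its group, to the next matching multi-user pilot.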

WMS: using heterogeneous resources  Including resources in different grids and standalone clusters is simple with Pilot Jobs  Needs a specialized Pilot Director per resource type  Users just see new sites appearing in the job monitoring 12

Interware 13  DIRAC has all the necessary components to build ad-hoc grid infrastructures interconnecting computing resources of different types. This is why we speak of DIRAC as interware. [Diagram: a User Community accessing Grid A and Grid B through DIRAC]

DIRAC as a resource manager 14

Computing Grids 15  DIRAC was initially developed with a focus on accessing conventional grid computing resources  WLCG grid resources for the LHCb Collaboration  It fully supports gLite middleware based grids  EGI, GISELA, etc.  Using the gLite WMS or accessing CEs directly  OSG  Work is in progress to support ARC middleware based grids  NorduGrid  A successful demonstration was already done  Other types of grids can be supported  As long as we have customers needing them

Direct submission to CEs  The gLite WMS is now used just as a pilot deployment mechanism  Limited use of its brokering features  For jobs with input data the destination site is already chosen  Multiple Resource Brokers have to be used because of scalability problems  DIRAC supports direct submission to CEs  CREAM CEs, or batch clusters through an SSH tunnel  Can apply individual site policies  The site chooses how much load it can take (pull vs push paradigm)  Direct measurement of the site state by watching the pilot status info  This is a general trend  All the LHC experiments have declared that they will eventually abandon the gLite WMS 16

Clouds 17  VM scheduler developed for the Belle MC production system  Dynamic VM spawning taking Amazon EC2 spot prices and the Task Queue state into account  Discarding VMs automatically when no longer needed  The DIRAC VM scheduler, by means of dedicated VM Directors, is interfaced to  OCCI compliant clouds:  OpenStack, OpenNebula  CloudStack  Amazon EC2

Ongoing VMDIRAC work 18 VMDIRAC – multiple cloud broker  Gives transparent access to multiple clouds with optimized dynamic allocation of Virtual Machines (VMs)  Intensive development now  Different access methods, VM contextualization, VM scheduling policies  Part of the EGI Cloud Task Force activities

Standalone computing clusters 19  Dedicated Pilot Director per group of sites  Off-site Director  The site delegates control to the central service  The site must only define a dedicated local user account  Payload submission goes through an SSH tunnel  The site can be a single computer or a cluster with a batch system  LSF, BQS, SGE, PBS/Torque, Condor  More to come:  OAR, SLURM, LoadLeveler, etc.  The user payload is executed with the owner's credentials  No security compromises with respect to external services
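Since the site side only needs a local account reachable by SSH, the whole cluster description lives on the DIRAC side, in the configuration. An illustrative sketch in CS syntax (the host, site, and option names here are placeholders, not a real setup):

    # Illustrative CS sketch of a standalone Torque cluster reached over SSH.
    # All names and option spellings are placeholders.
    Resources
    {
      Sites
      {
        DIRAC
        {
          DIRAC.MyCluster.fr
          {
            CEs
            {
              cluster.example.fr
              {
                CEType = SSHTorque      # pilot submission over an SSH tunnel
                SSHUser = diracpilot    # the dedicated local account
                Queues
                {
                  long
                  {
                    maxCPUTime = 2880   # minutes
                  }
                }
              }
            }
          }
        }
      }
    }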

Standalone computing clusters 20 Examples:  DIRAC.Yandex.ru  1800 cores  Torque batch system, no grid middleware, access by SSH  Second largest LHCb MC production site  LRZ Computing Center, Munich  SLURM batch system, GRAM5 CE service  Gateway access by GSISSH  Considerable resources for biomed community (work in progress)  Mesocentre Aix-Marseille University  OAR batch system, no grid middleware, access by SSH  Open to multiple communities (work in progress)

BOINC Desktop Grids 21  On the client PC, three third-party components are installed:  VirtualBox hypervisor  Standard BOINC client  A special BOINC application  Starts the requested VM within VirtualBox  Passes the Pilot Job to the VM and starts it  Once the Pilot Job starts in the VM, the user PC becomes a normal DIRAC Worker Node  Work on interfacing DIRAC to EDGI resources is in progress

Other types of resources 22  Examples of other types of resources available via DIRAC  LHCb online filter farm  Volunteer grids based on BOINC+virtualization technology  EDGI resources  DIRAC attempts to provide access to all kinds of existing resources and services useful for its users  At the same time it provides its own solutions, e.g. catalogs, storage and computing element, transfer service, etc.  By design services from different providers can be used together in parallel not necessarily replacing one another  Example: use of DFC and LFC together, see below  The final choice of the services to use is left to the user

DIRAC Data Management 23

Data Management components  Storage Elements  gLite/EGI Storage Elements  Standard SRM interface  GridFTP protocol  Needs Globus libraries, limited number of platforms  Allow third-party transfers between them  Managed by the site managers within EGI SLAs  DIRAC Storage Elements  DISET based components  DIPS (DIRAC Secure Protocol)  Does not allow third-party transfers  Replication through a local cache  Third-party transfers will be available in the future  More Storage Elements can be included  FTP, SFTP, HTTP, BBFTP servers 24
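From the user side the SE type is hidden behind a uniform API. A sketch of an upload plus replication via the DIRAC Python API (assuming an installed client and a valid proxy; the LFN is a hypothetical example, DIRAC-USER and M3PEC-disk are the logical SE names quoted later in this talk, and method names may differ slightly between DIRAC versions):

    # Minimal sketch: upload a local file to a Storage Element and add a
    # second replica, via the DIRAC user-level Python API.
    from DIRAC.Core.Base import Script
    Script.parseCommandLine()  # bootstrap the DIRAC configuration

    from DIRAC.Interfaces.API.Dirac import Dirac

    dirac = Dirac()
    lfn = '/vo.formation.idgrilles.fr/user/j/jdoe/test.dat'  # hypothetical LFN

    result = dirac.addFile(lfn, 'test.dat', 'DIRAC-USER')    # upload + register
    if not result['OK']:
        print 'Upload failed:', result['Message']
    else:
        print dirac.replicateFile(lfn, 'M3PEC-disk')         # second replica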

Data Management components  File Catalogs  LCG File Catalog (LFC)  Part of the EGI middleware  Service provided by the NGI  ORACLE backend  Client tools: command line (CLI), Python API  Needs Globus libraries  No user metadata support  DIRAC File Catalog  DISET based components  Part of the DIRAC set of services  Community service  MySQL backend  Client tools: command line (CLI), Python API  Support for user metadata  Similar to the AMGA metadata service  More catalogs can be included  LHCb has developed several specific catalogs in the same framework  iRODS? 25

Data Management components  For DIRAC users the use of any Storage Element or File Catalog is transparent  It is up to a user community to choose the components to use  Different SE types can be mixed together  Several File Catalogs can be used in parallel  Complementary functionality  Redundancy  Depending on the DIRAC Configuration, users see  Logical Storage Elements  e.g. DIRAC-USER, M3PEC-disk  A Logical File Catalog 26

Interfaces 27

DIRAC: Secure Web Portal 28  Focus on the Web Portal as the main user tool for interactions with the grid  Intuitive, desktop-application-like interface  Ajax, Pylons, ExtJS JavaScript library  Monitoring and control of all activities  User registration, proxy upload  User job monitoring and manipulation, downloading results  Data manipulation and downloads  DIRAC systems configuration and management  Secure access  Standard grid certificates  Fine grained authorization rules

Web Portal: example interfaces 29

Application Web Portals  Specific application portals can be built in the DIRAC Web Portal framework  Community Application Servers  DIRAC RESTful interface  Language neutral  Suitable for portals written in Java, PHP, etc.  Other interfaces include  Extensive Python API  E.g. used by the GANGA user front-end  A rich set of command line tools (>200 commands) 30
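To give a flavor of the Python API, a minimal job submission sketch (the executable and names are just an example; method names may differ slightly between DIRAC versions):

    # Minimal job submission through the DIRAC Python API.
    from DIRAC.Core.Base import Script
    Script.parseCommandLine()  # bootstrap the DIRAC configuration

    from DIRAC.Interfaces.API.Dirac import Dirac
    from DIRAC.Interfaces.API.Job import Job

    job = Job()
    job.setName('api-demo')
    job.setExecutable('/bin/echo', arguments='Hello from DIRAC')
    job.setCPUTime(3600)  # requested CPU time, in seconds

    result = Dirac().submitJob(job)
    print result  # S_OK structure carrying the job ID on success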

DIRAC Framework 31

DIRAC Framework 32  DIRAC systems consist of well defined components with clear recipes for their development  Services, agents, clients, databases  The framework allows these components to be built easily, concentrating on the business logic of the applications  Development environment: Python, MySQL  Using base services for configuration, monitoring, logging, etc.  Specific functionality can in many cases be provided as plugin modules, e.g.  Data access policies  Job scheduling policies  All communications between the distributed components are secure  DISET custom client/service protocol  Focus on efficiency  Control and data transfer communications  X509, GSI security standards  Fine grained authorization rules
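As a flavor of these recipes, a DISET service is essentially a RequestHandler subclass: a method named export_X becomes the remote call X, and a types_X attribute declares the accepted argument types. A minimal sketch (the handler and method names are invented for the example):

    # Sketch of a minimal DISET service handler; names are illustrative.
    from DIRAC import S_OK, S_ERROR
    from DIRAC.Core.DISET.RequestHandler import RequestHandler

    class HelloHandler(RequestHandler):

      types_sayHello = [str]  # one string argument expected

      def export_sayHello(self, name):
        """Return a greeting; failures are reported with S_ERROR."""
        if not name:
          return S_ERROR('Empty name')
        return S_OK('Hello %s' % name)

Authorization rules for such a service are declared in the Configuration Service rather than in the code.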

DIRAC base services  Redundant Configuration Service  Provides service discovery and setup parameters for all the DIRAC components  Full featured proxy management system  Proxy storage and renewal mechanism  Support for multi-user pilot jobs  System Logging service  Collects essential error messages from all the components  Monitoring service  Monitors the behavior of services and agents  Security Logging service  Keeps traces of all the service access events 33

Belle II example 34 Distributed DIRAC servers: KEK (Japan), IHEP (China), IFJ (Poland)

Accounting 35  Comprehensive accounting of all the operations  Publication-quality plots  The plotting service can be used by users for their own data

DIRAC Users: large communities 36

LHCb Collaboration 37  Up to 40K concurrent jobs in ~120 distinct sites  Limited by the resources available to LHCb  10 mid-range servers hosting the DIRAC central services  Further optimizations to increase the capacity are possible  Hardware and database optimizations, service load balancing, etc.

LHCb Production system  Based on the DIRAC Transformation System  Multiple extensions and custom plugins  Data driven payload generation based on templates  Generating data processing and replication tasks  LHCb specific templates and catalogs 38

Belle II 39 Hideki Miyake, KEK  Belle II, KEK, Japan  DIRAC is chosen as the basis of the Computing Model for phase II of the experiment  2 GB/s DAQ rate  A combination of non-grid and grid sites and (commercial) clouds is a requirement  Belle II grid resources  WLCG, OSG grids  KEK Computing Center  Amazon EC2 cloud

Belle II 40  DIRAC Scalability tests Hideki Miyake, KEK

DIRAC dedicated installations 41  ILC/CLIC detector Collaboration  Bases its production system on DIRAC  MC simulations  The DIRAC File Catalog was developed to meet the ILC/CLIC requirements  BES III, IHEP, China  DIRAC is chosen for phase III  Using the DIRAC DMS: File Catalog, Transfer services  CTA  CTA started as an FG-DIRAC customer for DIRAC evaluation  Now using a dedicated installation at PIC, Barcelona  Using complex workflows

DIRAC in biomed 42  Use of computing resources in the biomed grid community  DIRAC instance provided by France-Grilles since June 2012  Source: Tristan Glatard, CREATIS

Virtual Imaging Platform 43 Tristan Glatard, CREATIS  Web portal with robot certificate  File transfers, user/group/application management  Workflow engine  Generates jobs, (re-)submits, monitors, replicates  DIRAC  Resource provisioning, job scheduling  Grid resources  biomed VO, LFC  Platform for medical image simulations at CREATIS, Lyon  Example of the combined use of an Application Portal and the DIRAC WMS

DIRAC Services 44

DIRAC as a service 45  The DIRAC client is easy to install  Part of a usual tutorial  DIRAC services are easy to install, but  Need dedicated hardware for hosting  Configuration and maintenance need expert manpower  Monitoring of computing resources  Small user communities cannot afford to maintain dedicated DIRAC services  Yet they still need easy grid access  Large grid infrastructures can provide DIRAC services for their users

France-Grilles DIRAC service 46  Started as a support for user tutorials  Several regional and university campus installations  Complex maintenance  Joint effort to provide the France-Grilles DIRAC service  Hosted by the CC/IN2P3, Lyon, T1 center  6 virtual servers, MySQL server  Distributed team of service administrators  5 participating universities

Services in CC/Lyon 47  Basic DIRAC services  WMS – managing user jobs  Job submission, monitoring, retrieval  Accounting of the resources consumed  DMS – basic user data management tasks  Access to standard grid Storage Elements  SRM, DIRAC  Replicating data between SEs  Providing a simple Storage Element in Lyon  DIRAC File Replica Catalog  DIRAC File Metadata Catalog  Web Portal  REST interface  OAuth2 authentication

FG-DIRAC users 48  France-Grilles users  15 VOs, 88 users registered  astro, auger, biomed, esr, euasia, gilda, glast.org, prod.vo.eu-eela.eu, vo.cta.in2p3.fr, vo.formation.idgrilles.fr, vo.france-asia.org, vo.france-grilles.fr, vo.msfg.fr, vo.mcia.org  1 robot user VIP/GateLab Biomed  More VOs and users can be added as necessary  In production since May 2012  First ~3 million jobs went through the system  Mostly biomed applications

FG-DIRAC users 49  Heavily used for grid tutorials  Example of a tutorial exercise -->  Using resources of the VO france-formation  Support for applications  Help in porting applications to the grid  Helping new communities to try out DIRAC for their production systems  Fermi-LAT  LSST  SuperB

Yesterday FG-DIRAC snapshot 50

Advanced services 51  More advanced services can be made available in CC Lyon  Following user demand  Transformation Service (automated job submission)  Replication Service (automated data replication)  Data integrity inspection  User storage consumption accounting  Support for MPI jobs  Others?  Hosting community DIRAC services  Specific services developed for particular communities can be hosted in the same infrastructure

FG-DIRAC server configuration 52  6 virtual servers (3 physical hosts)  8 cores, 16 GB RAM, 1 TB disk  ccdirac01 – secure services, configuration  ccdirac02 – Workload Management  ccdirac03 – Data Management  ccdirac04 – Storage Element, Accounting, Monitoring  ccdirac05 – Web Portal  ccdirac06 – REST Portal  MySQL server  30 GB, 100 connections  Redundant supporting services outside the CC in Lyon  CPPM, CREATIS, etc.

DIRAC national services 53 Other national DIRAC installations  GISELA Latin American grid  In production since 2010  Since 2012 the GISELA DIRAC services are provided by France-Grilles  IberGrid Spanish NGI  Inclusion of DIRAC services in other grid infrastructures is under discussion/construction:  China, Russia, Italy, …

DIRAC Service 54  The success of the France-Grilles and other DIRAC services shows that they bring clear advantages to multiple users and to whole user communities needing access to distributed computing resources  This is especially well seen during user tutorials  Therefore, we think that this can be a very useful facility for the users of any grid infrastructure project (NGI) and of the EGI project as a whole

Conclusions 55  Computational grids are no longer something exotic; they are used in daily work for various applications  The rich experience of using computational grids in the LHC experiments, as well as the developed tools, can now be shared with users in other experiments and in other scientific domains  DIRAC provides a framework for building distributed computing systems and a rich set of ready-to-use services. It is now used in a number of DIRAC service projects at regional and national levels  Services based on DIRAC technologies can help users get started in the world of distributed computing and reveal its full potential

Backup slides 56

DIRAC WMS 57  Jobs are submitted to the DIRAC Central Task Queue with the credentials of their owner (VOMS proxy)  Pilot Jobs are submitted by specific Directors to a Grid WMS with the credentials of a user with a special Pilot role  The Pilot Job fetches the user job and the job owner’s proxy  The User Job is executed with its owner’s proxy, used to access SEs, catalogs, etc.
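The pilot's side of this flow can be summarized in pseudocode (an illustrative sketch with invented function names, not the actual DIRAC pilot code):

    # Illustrative pseudocode of the pilot lifecycle described above.
    def run_pilot():
        capabilities = inspect_worker_node()   # platform, CPU time left, ...

        # Pull scheduling: ask the central Matcher for the highest-priority
        # waiting job that fits this node.
        match = request_job_from_matcher(capabilities)
        if not match:
            return  # nothing suitable is waiting; the pilot exits quietly

        user_job, owner_proxy = match
        # The payload runs with its owner's proxy (via glexec for
        # multi-user pilots), so SE and catalog access stays traceable.
        execute_with_proxy(user_job, owner_proxy)
        report_final_status(user_job)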

Advantages for site resource providers  No need for a variety of local batch queues per VO  One long queue per VO would be sufficient  A queue of  hours is a reasonable compromise  Site maintenance requirements  Reduced number of grid jobs  No need for specific VO configuration and accounting on sites  Priorities for various VO groups and activities  User level accounting is optional  On the whole it can lower the site entry threshold  Especially useful for newcomer sites 58

Request Management System  A Request Management System (RMS) accepts and executes asynchronously any kind of operation that can fail  Data upload and registration  Job status and parameter reports  Requests are collected by RMS instances at geographically distributed sites  Extra redundancy in RMS service availability  Requests are forwarded to the central Request Database  For keeping track of the pending requests  For efficient bulk request execution 59
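The typical failover pattern built on the RMS looks roughly like this (a sketch with hypothetical helper names, not the literal RMS client API): a failed synchronous operation is recorded as a request and retried asynchronously later.

    # Hypothetical sketch of the RMS failover pattern; helper names
    # are invented for illustration.
    def upload_with_failover(local_file, lfn, target_se):
        result = try_upload(local_file, lfn, target_se)  # synchronous attempt
        if result['OK']:
            return result
        # Park the data on any reachable SE and queue a replication
        # request; a nearby RMS instance accepts it and the central
        # Request Database schedules efficient bulk execution later.
        fallback_se = upload_to_failover_se(local_file, lfn)
        queue_request('ReplicateAndRemove', lfn, fallback_se, target_se)
        return {'OK': True, 'Value': 'deferred to RMS'}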

Support for MPI Jobs  MPI Service developed for applications in the EELA/GISELA Grid  Astrophysics, BioMed, Seismology applications  No special MPI support on the sites is required  MPI software is installed by Pilot Jobs  Possibility to use distributed file systems, e.g. Parrot  MPI ring usage optimization  Ring reuse for multiple jobs  Lower load on the gLite WMS  Variable ring sizes for different jobs 60

File Catalog Metadata 61  Similar functionality to the AMGA metadata service  But coupled with the replica catalog to boost efficiency  Metadata can be associated with each directory as key:value pairs to describe its contents  Int, Float, String, DateTime value types  Some metadata variables can be declared indices  Those can be used for data selections  Subdirectories inherit the metadata of their parents  Data selection with metadata queries. Example:  find . Meta1=Value1 Meta2>3 Meta2<5 Meta3=2,3,4  File-level metadata can also be defined
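The same kind of query is available programmatically. A sketch using the DIRAC file catalog client (the path and metadata names follow the example above; it assumes a configured client and proxy, and the exact method names may vary between DIRAC versions):

    # Sketch: set directory metadata and select files with a metadata query.
    from DIRAC.Core.Base import Script
    Script.parseCommandLine()

    from DIRAC.Resources.Catalog.FileCatalog import FileCatalog

    fc = FileCatalog()

    # key:value metadata on a directory, inherited by its subdirectories
    fc.setMetadata('/vo/user/j/jdoe/prod', {'Meta1': 'Value1', 'Meta3': 2})

    # Equivalent of: find . Meta1=Value1 Meta2>3 Meta2<5 Meta3=2,3,4
    query = {'Meta1': 'Value1',
             'Meta2': {'>': 3, '<': 5},
             'Meta3': [2, 3, 4]}
    result = fc.findFilesByMetadata(query, path='/vo/user/j/jdoe')
    print result.get('Value', [])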

DIRAC File Catalog evaluation 62  ILC/CLIC Collaboration experience  ~1M files  Intensive use of metadata, provenance data  [plot: replica information performance]  BES Collaboration made a thorough comparison of DFC vs AMGA  Similar performance  More suitable functionality  [plot: file search by metadata]

Data Management  Based on the Request Management System  Asynchronous data operations  Transfers, registration, removal  Two complementary replication mechanisms  Transfer Agent: user data, public network  FTS service: production data, private FTS OPN network  Smart pluggable replication strategies 63

DIRAC as a service  Necessity to manage multiple VOs with a single DIRAC installation  Per-VO pilot credentials  Per-VO accounting  Per-VO resources description  Pilot Directors are VO aware  Job matching takes the pilot's VO assignment into account  This is work in progress 64