
ARC’s view on the European (Grid) Middleware Initiative: role, objectives and migration plans
Balázs Kónya, Lund University, NorduGrid Collaboration
EGEE'09 Conference, 24 September 2009, Barcelona

Outline

- The Advanced Resource Connector
  – a highly efficient production middleware
  – with lots of promising new developments...
- The European Middleware, the way ARC would like to see it
- What can ARC offer for EMI/UMD/EGI?
- Conclusion

Thanks to Oxana Smirnova for the numerous nice slides.

ARC Today

- Reliable, efficient and easy-to-handle open source middleware, in production since 2002
- Best suited for high-throughput distributed computing
- Totally independent, very portable code base
- GSI-based
- Clear separation of the cluster and grid layers
  – No grid layer on the nodes (unless required by users)
  – Powerful input/output grid data handling by the front-end
    - Dramatically increases CPU utilization
    - Automatically allows for data caching
  – ARC front-end: all grid-related operations
  – ARC infosys: efficient, reliable, distributed, dynamic
- Resource discovery and brokering encapsulated in the client (see the sketch below)
  – No single point of failure; ARC clients act as “agents”
  – Redundancy, mobility, scalability
  – Based on a powerful client API, ARCLIB
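Since brokering lives entirely in the client, each ARC client builds its own view of the resources from the information system and ranks them locally. Below is a minimal Python sketch of that agent-style selection; the data structure, names and ranking heuristic are invented for illustration and are not the ARCLIB API.

```python
# Minimal sketch (not ARC/ARCLIB code): client-side brokering in the "agent" model.
# The client queries the information system for candidate clusters and picks a
# target itself, so no central broker is needed.
from dataclasses import dataclass

@dataclass
class ClusterInfo:
    name: str          # cluster front-end hostname
    free_slots: int    # free CPU slots reported by the infosys
    queued_jobs: int   # jobs waiting in the local queue
    runtime_envs: set  # advertised runtime environments

def broker(clusters, required_env, requested_cpus=1):
    """Return eligible clusters ranked by how quickly the job is likely to start."""
    eligible = [c for c in clusters if required_env in c.runtime_envs]
    # Prefer clusters that can start the job now; otherwise the shortest queue.
    return sorted(
        eligible,
        key=lambda c: (c.free_slots < requested_cpus, c.queued_jobs, -c.free_slots),
    )

if __name__ == "__main__":
    infosys_snapshot = [
        ClusterInfo("grid.example.org", 12, 3, {"APPS/HEP/ATLAS"}),
        ClusterInfo("hpc.example.net", 0, 40, {"APPS/HEP/ATLAS"}),
    ]
    best = broker(infosys_snapshot, "APPS/HEP/ATLAS")[0]
    print("submit to", best.name)
```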

Why our users like ARC

- Best suited for shared community resources:
  – Perfectly portable (RedHat & Co, Debian & Co)
  – Has interfaces to most major batch systems
    - Any new batch system can easily be plugged in
  – Minimal intrusiveness, minimal footprint
  – Orders of magnitude simpler installation and maintenance (compared to other solutions)
    - Suits anything from a 1-CPU “site” to a Top500-scale machine with thousands of cores
  – Very low Grid failure rate, transparent downtime handling
  – Versatile, portable CLI: ~14 MB in size, needs no root privileges
    - Does everything from SRM storage listing to brokering
⇒ Saves customers’ time and taxpayers’ money

ARC-enabled Applications

- Biophysics
- Biochemistry
- Computational chemistry
- Quantum chemistry
  – GAMESS
- Molecular dynamics
  – GAUSSIAN, DALTON, MOLDEN
- Bioinformatics
  – Taverna
  – BLAST, HMMER
  – eQTL
- Language studies
- Solid state physics
- Computational physics
- Mathematical crystallography
- Informatics, mathematical logic clause solving
- Automatic malware comparison
- Medical imaging
- Simulation of avalanche dynamics
- HEP
  – ATLAS, IceCube, CMS, ALICE, LHCb tested
- CO2 sequestration
- Other materials sciences

Disclaimer: the information shown here is incomplete and was collected by Oxana Smirnova in half an hour by asking people around and googling.

ARC performance: NDGF in ATLAS production

ARC performance: NDGF in ATLAS production (99% efficiency)

Continuous improvements triggered by production

- Support for LFC file transfers
- Dynamic transfer lists
- Faster job processing/throughput
- Grid-Manager scalability → job processing slaves
- Fair-share for job processing
- Distributed cache (NorduGrid-wide) → virtual T2
- Cache-aware job brokering (see the sketch below)

The results of these community-requested rapid developments are injected into production ARC releases (0.8.x).
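Cache-aware brokering simply folds cache contents into the ranking: a site that already holds a job's input files is cheaper to use than one that must stage them in first. A small illustrative Python sketch of that idea follows; the function names and cost model are hypothetical, not taken from ARC.

```python
# Illustrative only: rank sites so that cached input files reduce the estimated
# staging cost. Neither the names nor the cost model come from ARC.
def staging_cost(site_cache: set, input_files: list, cost_per_transfer: float = 1.0) -> float:
    """Estimated cost of staging the inputs not already cached at the site."""
    missing = [f for f in input_files if f not in site_cache]
    return cost_per_transfer * len(missing)

def rank_sites(site_caches: dict, input_files: list) -> list:
    """Order sites by how many input files they would still have to fetch."""
    return sorted(site_caches, key=lambda site: staging_cost(site_caches[site], input_files))

caches = {
    "siteA": {"lfn:/data/evgen.001", "lfn:/data/evgen.002"},
    "siteB": set(),
}
print(rank_sites(caches, ["lfn:/data/evgen.001", "lfn:/data/evgen.002"]))
# -> ['siteA', 'siteB']
```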

New developments, or: if it ain’t broken, why fix it?

It was started because...

HEDing towards a better ARC (HED: the Hosting Environment Daemon)

- Standard WS interfaces
- Better modularity
- Extensibility
- Self-sufficient core components
  – minimal 3rd-party dependencies
- Portability
- Re-designed security
- User-friendly
- Developer-friendly

Fruits of the non-intrusive development

- A-REX
  – The flagship HED service implementing a Computing Element (CE)
  – JSDL/BES/GLUE2 with ARC extensions (see the sketch below)
  – Available as part of the 0.8 production ARC release
  – Based on the good old Grid-Manager
  – Comes with all the production-triggered improvements
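For reference, a BES-style submission to a CE like A-REX carries a JSDL job description. The fragment below, embedded in a Python string and parsed with the standard library, is only a sketch of a minimal standard JSDL document; the ARC-specific extensions and the surrounding BES message are not shown, and the job content is made up.

```python
# A minimal JSDL job description of the kind a BES-compliant CE consumes.
# Only standard JSDL/JSDL-POSIX elements are shown; ARC extensions are omitted.
import xml.etree.ElementTree as ET

JSDL = """<jsdl:JobDefinition
    xmlns:jsdl="http://schemas.ggf.org/jsdl/2005/11/jsdl"
    xmlns:posix="http://schemas.ggf.org/jsdl/2005/11/jsdl-posix">
  <jsdl:JobDescription>
    <jsdl:JobIdentification>
      <jsdl:JobName>hello-grid</jsdl:JobName>
    </jsdl:JobIdentification>
    <jsdl:Application>
      <posix:POSIXApplication>
        <posix:Executable>/bin/echo</posix:Executable>
        <posix:Argument>hello</posix:Argument>
        <posix:Output>stdout.txt</posix:Output>
      </posix:POSIXApplication>
    </jsdl:Application>
  </jsdl:JobDescription>
</jsdl:JobDefinition>"""

ns = {
    "jsdl": "http://schemas.ggf.org/jsdl/2005/11/jsdl",
    "posix": "http://schemas.ggf.org/jsdl/2005/11/jsdl-posix",
}
root = ET.fromstring(JSDL)
print(root.find(".//posix:Executable", ns).text)  # -> /bin/echo
```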

Fruits of the non-intrusive development

- Libarcclient (including libarcdata2) and the arc* utilities
  – Implemented in C++, but comes with Python and Java wrappers
  – Modular and plugin-based, with powerful existing plugins for pre-WS ARC, gLite and UNICORE services and a variety of brokering algorithms (see the sketch below)
  – Backward compatible with previous ARC servers
  – Available on Windows and Mac OS X
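The plugin idea is simple: the client library loads a submitter module per middleware flavour and dispatches on the interface the target endpoint advertises. A generic Python illustration follows; none of these class or interface names are the libarcclient API, they only stand in for the pattern.

```python
# Generic illustration of a plugin-based client; not the libarcclient API.
# Each plugin knows one middleware flavour; the client dispatches on the
# interface name the endpoint advertises.
from abc import ABC, abstractmethod

class SubmitterPlugin(ABC):
    interface: str  # interface name the plugin handles

    @abstractmethod
    def submit(self, endpoint: str, job_description: str) -> str:
        """Submit a job and return its job ID."""

class PreWSArcSubmitter(SubmitterPlugin):
    interface = "example.prews.gridftpjob"
    def submit(self, endpoint, job_description):
        return f"gsiftp://{endpoint}/jobs/0001"   # placeholder

class BesSubmitter(SubmitterPlugin):
    interface = "example.ws.bes"
    def submit(self, endpoint, job_description):
        return f"https://{endpoint}/arex/0001"    # placeholder

PLUGINS = {p.interface: p() for p in (PreWSArcSubmitter, BesSubmitter)}

def submit(endpoint: str, advertised_interface: str, job_description: str) -> str:
    plugin = PLUGINS[advertised_interface]        # pick the plugin for this flavour
    return plugin.submit(endpoint, job_description)

print(submit("ce.example.org", "example.ws.bes", "<jsdl.../>"))
```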

Fruits of the non-intrusive development

- Chelonia, a distributed storage solution implemented within HED (see the sketch below)
  – Global namespace
  – Supports collections and sub-collections to any depth
  – Automatic replication
- A-Hash: a replicated database to store metadata
- Librarian: handles
  – metadata and the hierarchy of collections and files
  – the location of replicas
  – health data of the Shepherd services
- Bartender: high-level interface for the users and for other services
- Shepherd: manages storage services and provides a simple interface for storing files on storage nodes

Watch online or see the demo live at EGEE'09 in Barcelona.
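To make the division of labour concrete, here is a toy Python model of a Chelonia-style read path: the Bartender resolves a logical name through the Librarian and hands back a transfer URL served by a live Shepherd. The component names follow the slide; the data structures, URL format and health-check logic are invented for illustration.

```python
# Toy model of a Chelonia-style lookup, for illustration only.
# Librarian: logical name -> replica locations; Shepherd: holds the bytes;
# Bartender: the high-level entry point clients talk to.

LIBRARIAN = {  # logical path -> list of (shepherd host, local file id)
    "/atlas/run42/evgen.root": [("shepherd1.example.org", "f3a9"),
                                ("shepherd2.example.org", "b771")],
}

SHEPHERD_ALIVE = {"shepherd1.example.org": False, "shepherd2.example.org": True}

def bartender_get_transfer_url(logical_path: str) -> str:
    """Resolve a logical path to a transfer URL on a healthy Shepherd."""
    replicas = LIBRARIAN.get(logical_path, [])
    for host, file_id in replicas:
        if SHEPHERD_ALIVE.get(host):          # Librarian also tracks Shepherd health
            return f"https://{host}/chelonia/{file_id}"
    raise FileNotFoundError(f"no live replica for {logical_path}")

print(bartender_get_transfer_url("/atlas/run42/evgen.root"))
# -> https://shepherd2.example.org/chelonia/b771
```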

Fruits of the non-intrusive development

- ISIS
  – P2P information system backbone
  – stores service registrations
  – WS interface to insert/query registration info (see the sketch below)
  – a new-generation ARC service implemented within HED
- Security services
  – Charon
  – FruitFly
  – Shibbridge
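What such a registration might carry, and what a query against it could look like, sketched with plain Python dictionaries; the record fields and type strings are illustrative, not the ISIS schema.

```python
# Illustrative only: a service registration record and a simple query,
# standing in for the kind of insert/query a WS information backbone offers.
registrations = [
    {"id": "urn:example:ce-1", "type": "computing.element",
     "endpoint": "https://ce.example.org/arex", "expires": "2009-09-25T12:00:00Z"},
    {"id": "urn:example:storage-1", "type": "storage.bartender",
     "endpoint": "https://storage.example.org/bartender", "expires": "2009-09-25T12:00:00Z"},
]

def query(service_type: str) -> list:
    """Return the endpoints of all registered services of the given type."""
    return [r["endpoint"] for r in registrations if r["type"] == service_type]

print(query("computing.element"))
```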

European middleware landscape: Today

European middleware landscape: possible future without EMI

Roundabout built with EU funds

Harmonization takes us to a better future, but only if it is done the right way:
- Can’t keep everything
- Remove unnecessary obstacles
- Quick hacks are not the solution

European middleware landscape: Tomorrow

How can ARC contribute to European middleware?

- Convergence process:
  – Interoperability experience with gLite and UNICORE
  – Standardization background (GLUE, PGI, ...)
  – Ready-to-use development framework (HED)
  – Commitment to perform the agreed changes within the ARC software stack
- EMI distribution:
  – Easy-to-deploy components
    - Minimal and reasonable set of external dependencies
    - Distributed through standard channels
  – User-friendly components (easy-to-use clients)
  – Components with low operational and maintenance cost
  – Components available on multiple platforms
    - major Linuxes, Windows, Mac, Solaris
  – Highly efficient and reliable services

How can ARC contribute to European middleware? Inventory of the proposed ARC components:

- Computing:
  – Grid-Manager, A-REX, libarcclient, ng*, arc*, JURA
- Data:
  – Classic SE, libarcdata2, ng* data, arc* data, Chelonia
- Info:
  – Infoserver, Infoindex, ALIS, ISIS, libarcclient, Grid Monitor
- Security:
  – HED security, arcproxy
- Internal:
  – HED

Please note:
- Several components on the list are targets for phase-out
- There are components which we consider the basis for a new, commonly developed EMI component
- All ARC components are open for harmonization-related development

Summary

- ARC is a well-respected middleware, proven to be in many aspects superior to other analogous products
  – Deployed by a number of national infrastructures
  – Actively developed
  – Coordinated by NorduGrid (repositories, releases, issue tracking, documentation, support, etc.)
- ARC considers EGI/UMD/EMI as THE opportunity to achieve convergence of the middleware stacks
  – Standards-based interoperability of components
- ARC is prepared for the challenges posed by EMI/UMD
  – Strong commitment to the needed HARMONIZATION work
  – Wealth of experience concerning interoperability and harmonization
  – An available framework to carry out the necessary modifications
  – Can offer components which will bring added value to a European middleware distribution

Further information on ARC

- Wealth of information at www.nordugrid.org and on the project web pages
- The original ARC “white paper”:
  – "Advanced Resource Connector middleware for lightweight computational Grids", M. Ellert et al., Future Generation Computer Systems 23 (2007)
- An update covering the new components:
  – "ARC middleware: evolution towards standards-based interoperability", to appear in the CHEP09 proceedings
- Code:
  – svn.nordugrid.org -> arc0 and arc1 directories of the NorduGrid repository
  – download.nordugrid.org -> official source and binary packages, external software
- The community:
  – Check out and sign up for the nordugrid-discuss mailing list
  – Attend some of the Technical Meetings or Conferences