Enabling Grids for E-sciencE (INFSO-RI-508833) – www.eu-egee.org
VO Box Meeting: Summary & Observations
C. Loomis (LAL-Orsay)
Grid Deployment Board Meeting (CERN), 8 February 2006

Outline
–Goals
–Presentations
–Motivations for VO Box
–VO Box Services
–Observations
–Conclusions

Goals
Understand how the experiments are using the resources at a site and how they interact with various services located at the site and elsewhere. In particular, for VO Box services:
–Understand the services which run on the VO boxes, their interactions with other grid & non-grid services, and their operational implications.
–Determine which aspects of these services could be provided by common grid services (either extended or new services).

Meeting
Information available from the agenda page:
–Draft minutes.
–Draft sequence diagrams for use cases.
Participants:
–ALICE, ATLAS, LHCb
–Grid deployment group
–6 Tier 1 Centres
S. Bagnasco (ALICE), M. Branco (ATLAS), S. Campana (CERN IT), F. Carminati (ALICE), S. Gabriel (FZK), P. Girard (CC-IN2P3), C. Loomis (LAL, Chair), G. Merino (PIC), D. Salomoni (CNAF), M. Schulz (CERN IT-GD), J. Templon (NIKHEF), S. Traylen (RAL), A. Tsaregorodtsev (LHCb)

Presentations
ATLAS & LHCb
–Similar architectures
§Persistent state in a database
§Agents do work based on that state (a minimal sketch follows this slide)
–Asynchronous data transfers (both)
–Messaging & job management (LHCb)
ALICE
–Experiment interfaces to computing, storage & software
–Allows a "pull" model but using standard services
CMS
–Data management (PhEDEx)
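
The "persistent state in a database, agents act on that state" pattern described for ATLAS and LHCb can be illustrated with a minimal sketch. This is a hypothetical illustration, not the experiments' actual code: the SQLite table, the state names, and the do_transfer stub are all assumptions made for the example.

```python
import sqlite3
import time

# Hypothetical illustration of the pattern above: persistent state lives in a
# database, and a lightweight agent repeatedly picks up pending work, acts on
# it, and records the outcome.

def setup(conn):
    conn.execute("""CREATE TABLE IF NOT EXISTS transfers (
                        id INTEGER PRIMARY KEY,
                        source TEXT, destination TEXT,
                        state TEXT DEFAULT 'PENDING')""")
    conn.commit()

def do_transfer(source, destination):
    # Stand-in for the real asynchronous transfer machinery.
    print(f"transferring {source} -> {destination}")
    return True

def agent_cycle(conn):
    rows = conn.execute(
        "SELECT id, source, destination FROM transfers WHERE state = 'PENDING'"
    ).fetchall()
    for task_id, src, dst in rows:
        ok = do_transfer(src, dst)
        conn.execute("UPDATE transfers SET state = ? WHERE id = ?",
                     ("DONE" if ok else "FAILED", task_id))
    conn.commit()

if __name__ == "__main__":
    conn = sqlite3.connect("vobox_state.db")
    setup(conn)
    conn.execute("INSERT INTO transfers (source, destination) VALUES (?, ?)",
                 ("srm://site-a/file1", "srm://site-b/file1"))
    conn.commit()
    for _ in range(3):          # a real agent would loop indefinitely
        agent_cycle(conn)
        time.sleep(1)
```

Because the state survives restarts of the agent, the same pattern also supports asynchronous transfers: work queued while the agent is down is simply picked up on the next cycle.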

Motivations
Experiments will need application-level services to provide high-level functionality on top of the middleware services. VO Box motivations:
–Distributed services: load balancing, reliability, availability
–Better performance: optimized requests, lower latency
–Easier integration: other grids, application services
Needed in the short term to overcome deficiencies in the middleware. The longer term needs further discussion as those deficiencies are fixed.

Services
Necessary
–Interactive login (gsissh)
–Proxy renewal utilities
–Standard grid client tools
–Access to the shared experiment software area
Unnecessary
–Gatekeeper
–GridFTP
Limited, well-defined network access from:
–External VO services or users
–Jobs running on worker nodes (a connectivity sketch follows below)
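
One way to read "limited, well-defined network access" is that only an agreed, short list of ports on the VO Box should answer, and everything else (gatekeeper, GridFTP, ...) should be closed. The sketch below probes this from, say, a worker node; the host name, the gsissh port number, and the expected-port lists are assumptions for illustration, not a site standard (the gatekeeper and GridFTP ports shown are the conventional Globus defaults).

```python
import socket

# Hypothetical connectivity check: the VO Box should expose only a small,
# agreed set of ports (gsissh here) and nothing else.
VOBOX_HOST = "vobox.example-site.org"
EXPECTED_OPEN = {1975: "gsissh"}                       # example port; site-dependent
EXPECTED_CLOSED = {2119: "gatekeeper", 2811: "gridftp"}

def port_open(host, port, timeout=3.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def check():
    for port, name in EXPECTED_OPEN.items():
        status = "OK" if port_open(VOBOX_HOST, port) else "MISSING"
        print(f"{name:12s} port {port}: expected open   -> {status}")
    for port, name in EXPECTED_CLOSED.items():
        status = "OK" if not port_open(VOBOX_HOST, port) else "UNEXPECTEDLY OPEN"
        print(f"{name:12s} port {port}: expected closed -> {status}")

if __name__ == "__main__":
    check()
```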

Credential Handling
The handling of user and service credentials by application-level services raises a couple of policy issues.
Application-level service credentials
–Use of host certificates by services.
–Obtaining service certificates for the services.
Proxy handling implies a VO "superuser"
–Significantly alters the grid trust model
–Particularly problematic if a user is a member of multiple VOs
–ACLs (?) could separate "control" from "impersonation" (sketched below)
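
The suggested split between "control" and "impersonation" can be pictured as an ACL attached to a stored user proxy: a VO service may be allowed to manage the credential (renew, remove it) without being allowed to act as the user. The model below is purely illustrative; the right names, class, and DNs are hypothetical, not an agreed policy or an existing API.

```python
from dataclasses import dataclass, field

CONTROL = "control"          # renew, suspend, delete the stored proxy
IMPERSONATE = "impersonate"  # submit jobs / transfer data as the user

@dataclass
class StoredProxy:
    owner_dn: str
    acl: dict = field(default_factory=dict)   # subject DN -> set of rights

    def grant(self, subject_dn, right):
        self.acl.setdefault(subject_dn, set()).add(right)

    def allowed(self, subject_dn, right):
        return right in self.acl.get(subject_dn, set())

if __name__ == "__main__":
    proxy = StoredProxy(owner_dn="/DC=org/DC=example/CN=Some User")
    vo_agent = "/DC=org/DC=example/CN=vo-box-agent/alice.example.org"

    proxy.grant(vo_agent, CONTROL)               # agent may renew the proxy...
    print(proxy.allowed(vo_agent, CONTROL))      # True
    print(proxy.allowed(vo_agent, IMPERSONATE))  # False: it cannot act as the user
```

Such a separation would also help with the multiple-VO case: a VO agent holding only "control" rights never gains the ability to impersonate the user towards another VO.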

Security Infrastructure
Need a clear description of the grid security model, along with standard implementations and best practices.
Delegated credentials
–A copy of a proxy is not a delegated one (the distinction is sketched below).
–Standard code and a standard interface are needed
Attribute certificates
–VOMS-like tickets for application services
–Split out as an API to make them available to applications
Integration of MyProxy servers
–Finding the location of servers (embed in the proxy?)
–Controlling the configuration of servers
Certification of new implementations
–Large costs; should be a last-resort option
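
The point that "a copy of a proxy is not a delegated one" can be made concrete: in delegation the remote service generates its own key pair and only a signing request crosses the network, so no private key ever leaves its owner; copying the proxy instead ships the user's private key. The schematic below assumes the third-party cryptography package; the helper names and lifetimes are illustrative, and this is not the GSI delegation protocol itself.

```python
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa

def make_identity(common_name):
    """Self-signed certificate standing in for the user's existing proxy."""
    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, common_name)])
    cert = (x509.CertificateBuilder()
            .subject_name(name).issuer_name(name)
            .public_key(key.public_key())
            .serial_number(x509.random_serial_number())
            .not_valid_before(datetime.datetime.utcnow())
            .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(hours=12))
            .sign(key, hashes.SHA256()))
    return key, cert

# --- on the remote VO service: generate a key pair and a signing request ---
service_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
csr = (x509.CertificateSigningRequestBuilder()
       .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME,
                                                   "delegated proxy")]))
       .sign(service_key, hashes.SHA256()))

# --- on the user side: sign the request with the user's proxy key ----------
user_key, user_cert = make_identity("Some User")
delegated_cert = (x509.CertificateBuilder()
                  .subject_name(csr.subject)
                  .issuer_name(user_cert.subject)
                  .public_key(csr.public_key())
                  .serial_number(x509.random_serial_number())
                  .not_valid_before(datetime.datetime.utcnow())
                  .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(hours=2))
                  .sign(user_key, hashes.SHA256()))

# Only 'csr' and 'delegated_cert' cross the network; both private keys stay
# where they were created.  A plain copy of the proxy would instead ship
# 'user_key' to the service.
print(delegated_cert.subject, "issued by", delegated_cert.issuer)
```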

Group Credentials
Multiple people using a single credential
–Practical for large productions where there are few users
–Less well adapted to the analysis phase with a large number of users
–Raises accounting issues, especially with fabric-level services
–No strong need for this from the applications
Eliminate use of shared credentials
–Possible for each experiment to do so
–Typically complicates the architecture and implementation
User switching
–Can reduce overall scheduling costs for a set of jobs
–Significantly complicates accounting at the fabric level
–On balance, a weak motivation for this functionality

App. Service Framework
Generic, secure service container
–Would move security management back into the middleware
–Would provide a standard mechanism for controlling application services
–Requires significant development
–Not clear a single framework would satisfy all needs
–Not clear whether all security concerns are solved
Application services as special jobs
–Would need infrastructure for specifying special requirements (inbound network access, unlimited CPU, etc.)
–How would persistent state be handled? (Note: the same problem exists for generic middleware.)
–Would high-priority, low-latency scheduling help? (E.g. perhaps for software installation.)

File Transfer Issues
Need a reliable system which permits transfers to/from any storage element in the grid.
FTS
–Not ideal for the end user (complicated configuration, limited reach)
–Serious mismatch in security models
§Uses new proxies from a MyProxy server, not renewed proxies
§Having passwords floating around the grid compromises security
"VO-plugin" for services?
–Pre- and post-processing of transfers is needed (see the plug-in sketch below).
–Many questions with the plug-in model:
§Where and how are they run?
§With what credentials?
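
The "VO-plugin" idea amounts to a transfer service that invokes VO-supplied hooks before and after each transfer, for example to validate the source or register the replica in a catalogue. The sketch below shows the shape of such a plug-in model; the registry, hook names, and submit_transfer stub are hypothetical and this is not the FTS plug-in interface.

```python
from typing import Callable, Dict, List

PRE_HOOKS: Dict[str, List[Callable]] = {}
POST_HOOKS: Dict[str, List[Callable]] = {}

def register(vo: str, pre=None, post=None):
    """Let a VO attach pre- and post-processing hooks to its transfers."""
    if pre:
        PRE_HOOKS.setdefault(vo, []).append(pre)
    if post:
        POST_HOOKS.setdefault(vo, []).append(post)

def submit_transfer(source: str, destination: str) -> bool:
    # Stand-in for the real transfer machinery (FTS, srmcp, ...).
    print(f"copy {source} -> {destination}")
    return True

def transfer(vo: str, source: str, destination: str) -> bool:
    for hook in PRE_HOOKS.get(vo, []):
        hook(source, destination)             # e.g. stage or validate the source
    ok = submit_transfer(source, destination)
    if ok:
        for hook in POST_HOOKS.get(vo, []):
            hook(source, destination)         # e.g. register replica, checksum
    return ok

if __name__ == "__main__":
    register("atlas",
             pre=lambda s, d: print(f"  pre : checking source {s}"),
             post=lambda s, d: print(f"  post: registering {d} in the catalogue"))
    transfer("atlas", "srm://site-a/data/f1", "srm://site-b/data/f1")
```

The open questions from the slide remain visible even in this toy version: something has to decide where the hook code runs and with whose credentials it acts.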

Messaging
–Applications must be able to contact exterior services
–Needs to be reliable and secure (a store-and-forward sketch follows below)
–Can be used for logging, monitoring, service requests, ...
Notification
–Middleware cannot be a "closed system"; it needs to interact with non-middleware services
–Need to perform application-specific tasks based on the state of the grid
–E.g.: registration of a file after transfer, validation of a file, ...
Common solution needed:
–R-GMA (?)
–Dedicated system for messaging (?)
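
The reliability requirement is essentially store-and-forward: messages are spooled locally and only removed once delivery to the exterior service succeeds, so a network outage delays rather than loses them. The sketch below illustrates that idea only; the spool directory and the send() stub are assumptions, and this is not R-GMA or any particular messaging product.

```python
import json
import os
import time
import uuid

SPOOL_DIR = "msg-spool"

def enqueue(message: dict) -> str:
    """Write the message to the local spool before attempting delivery."""
    os.makedirs(SPOOL_DIR, exist_ok=True)
    path = os.path.join(SPOOL_DIR, f"{time.time():.6f}-{uuid.uuid4().hex}.json")
    with open(path, "w") as f:
        json.dump(message, f)
    return path

def send(message: dict) -> bool:
    # Stand-in for the real (secure) delivery to an exterior service.
    print("delivering:", message)
    return True

def flush_spool():
    if not os.path.isdir(SPOOL_DIR):
        return
    for name in sorted(os.listdir(SPOOL_DIR)):       # oldest first
        path = os.path.join(SPOOL_DIR, name)
        with open(path) as f:
            message = json.load(f)
        if send(message):
            os.remove(path)                          # delivered: drop from spool
        else:
            break                                    # retry on the next pass

if __name__ == "__main__":
    enqueue({"type": "file-registered", "lfn": "/grid/example/file1"})
    flush_spool()
```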

Outbound Network Access
Must define a grid-wide policy on outbound network access.
Outbound access not guaranteed:
–Significantly complicates the implementation of services
–Must provide a service to bridge the firewall for messages
–Ends up reinventing NAT functionality
–Simpler for the resource centre (maybe...)
Outbound access guaranteed:
–Large bandwidth not necessarily provided (data transfers should go through the appropriate services)
–No need to modify applications to contact services
–NAT could become a bottleneck

Common Software
Common services
–Commitment from experiments: requirements and usage
–Realization that switching is a cost for the experiments
–Need a faster development/deployment cycle
Grid service APIs
–Standard APIs for hiding differences between grids (sketched below)
–Reduced dependencies between services
–Evaluation of new protocols (e.g. xrootd)
§Does it provide better usability?
§Worry about having multiple protocols for the same service
§Possible integration with SRM
Overall, pragmatic discussions are needed to push toward convergence.
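
A "standard API hiding differences between grids" usually means application code programs against a small abstract interface while concrete backends map it onto each grid's native protocol or tool. The sketch below shows that shape; the class and method names and the two backends are hypothetical examples, not an existing library.

```python
from abc import ABC, abstractmethod

class StorageBackend(ABC):
    """Grid-independent interface the application code programs against."""
    @abstractmethod
    def copy(self, source: str, destination: str) -> None: ...
    @abstractmethod
    def exists(self, url: str) -> bool: ...

class SrmBackend(StorageBackend):
    def copy(self, source, destination):
        print(f"[srm]    srmcp-like copy {source} -> {destination}")
    def exists(self, url):
        print(f"[srm]    srm-ls-like check of {url}")
        return True

class XrootdBackend(StorageBackend):
    def copy(self, source, destination):
        print(f"[xrootd] xrdcp-like copy {source} -> {destination}")
    def exists(self, url):
        print(f"[xrootd] stat-like check of {url}")
        return True

def pick_backend(url: str) -> StorageBackend:
    # Protocol selection stays behind the API, out of the application code.
    return XrootdBackend() if url.startswith("root://") else SrmBackend()

if __name__ == "__main__":
    for src in ("srm://site-a/data/f1", "root://site-b//data/f2"):
        backend = pick_backend(src)
        if backend.exists(src):
            backend.copy(src, "file:///tmp/local-copy")
```

This also makes the worry about multiple protocols concrete: adding xrootd alongside SRM means another backend to maintain and certify, which is exactly the cost the slide flags.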

Responsibilities
System administrators
–Only responsible for maintaining the OS and the standard grid services installed on VO Boxes (e.g. gsissh).
–Ensure that releases are upgraded in a timely fashion.
Virtual organizations (experiments)
–Responsible for the installation, maintenance and operation of VO-specific services.
–Maintain well-defined releases of VO-specific services and ensure uniform installation of those releases.
Monitoring
–Generic SFT test for the grid services on the VO Box.
–SFT test for VO-specific services (a probe sketch follows below).
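
A generic VO Box probe in the spirit of an SFT sensor might check that gsissh answers and that the service proxy still has enough lifetime left, returning an exit code in the usual OK/WARNING/CRITICAL convention. The sketch below assumes the third-party cryptography package; the host name, port, proxy path, and warning threshold are assumptions, and this is not the real SFT test.

```python
import datetime
import socket
import sys

from cryptography import x509

VOBOX_HOST = "vobox.example-site.org"
GSISSH_PORT = 1975                       # example port; site-dependent
PROXY_FILE = "/opt/vobox/proxy/service.proxy"
MIN_HOURS_LEFT = 6

def gsissh_reachable() -> bool:
    try:
        with socket.create_connection((VOBOX_HOST, GSISSH_PORT), timeout=5):
            return True
    except OSError:
        return False

def proxy_hours_left(path: str) -> float:
    with open(path, "rb") as f:
        pem = f.read()
    # The proxy file holds several PEM blocks; take the first certificate.
    end = pem.index(b"-----END CERTIFICATE-----") + len(b"-----END CERTIFICATE-----")
    cert = x509.load_pem_x509_certificate(pem[:end])
    remaining = cert.not_valid_after - datetime.datetime.utcnow()
    return remaining.total_seconds() / 3600.0

def main() -> int:
    if not gsissh_reachable():
        print("CRITICAL: gsissh not reachable on", VOBOX_HOST)
        return 2
    hours = proxy_hours_left(PROXY_FILE)
    if hours < MIN_HOURS_LEFT:
        print(f"WARNING: proxy expires in {hours:.1f} h")
        return 1
    print(f"OK: gsissh up, proxy valid for {hours:.1f} h")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

The same skeleton, with the checks swapped out, would serve for the VO-specific tests the experiments are asked to provide.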

Conclusions
The VO Box discussion raised important policy and technical issues, both directly and indirectly related to the VO Box services.
Need further discussion:
–Finalize the report from the group with a suggested list of developments and actions to be taken.
–A face-to-face meeting or phone conference in early March for this.
Need longer-term discussion of the issues raised:
–Inclusion of other applications in the discussion
–Periodic re-evaluation of application-level services at sites
–Integration with the TCG?