Ian Bird LCG Project - CERN HEPiX - FNAL 25-Oct-2002


LCG and HEPiX
Ian Bird, LCG Project - CERN
HEPiX - FNAL, 25-Oct-2002

What is LCG? Why is it relevant to HEPiX?
Ian.Bird@cern.ch

LCG Project Goals
Goal - prepare and deploy the LHC computing environment:
- applications - tools, frameworks, environment, persistency
- computing system → global grid service
- cluster → automated fabric
- collaborating computer centres → grid
- CERN-centric analysis → global analysis environment
- central role of data challenges
This is not another grid technology project - it is a grid deployment project.

LCG Level 1 Milestones proposed to LHCC
- M1.1 - June 03 - First Global Grid Service (LCG-1) available
- M1.2 - June 03 - Hybrid Event Store (Persistency Framework) available for general users
- M1.3a - November 03 - LCG-1 reliability and performance targets achieved
- M1.3b - November 03 - Distributed batch production using grid services
- M1.4 - May 04 - Distributed end-user interactive analysis from "Tier 3" centre
- M1.5 - December 04 - "50% prototype" (LCG-3) available
- M1.6 - March 05 - Full Persistency Framework
- M1.7 - June 05 - LHC Global Grid TDR

LCG and its interactions
[Diagram: the LCG areas (Common Applications, GTA, Deployment, Fabric) and the GDB, linked to the experiments (via HEPCAL, AliEn), the grid projects (PPDG, iVDGL/VDT, GriPhyN, Globus, GLUE, EDG, NorduGrid), the Regional Centres, and CERN.]

Multi-dimensional problem
Regional Centres:
- host one or more experiments
- different RCs deploy different grid middleware in existing testbeds
- have different operational and security policies
Experiments:
- use middleware from various grid projects
- run at many regional centres
- provide applications that rely on specific middleware
Grid projects:
- provide middleware - which does not often (yet) interoperate
- are starting to collaborate on common solutions and interoperability
→ The Deployment area of LCG ties these all together.

Grid Deployment - goals of LCG-1
- Production service for the Data Challenges in 2H03 & 2004, focused on batch production work
- Experience in close collaboration between the Regional Centres: wide enough participation to understand the issues, but not too many sites initially
- Learn how to maintain and operate a global grid
- Focus on a production-quality service and all that implies: robustness, fault-tolerance, predictability, and supportability take precedence over functionality - but a minimum of functionality is needed for the service to be of value
- This requires a middleware support group with integration, certification, testing, packaging etc. responsibilities, and a support structure
- LCG should be integrated into the sites' physics computing services - it should not be something apart
- This requires coordination between participating sites on: policies and collaborative agreements; resource planning and scheduling; operations; support

What might LCG-1 look like?
User's perspective - requires:
- functionality adequate to provide an advantage over not using the distributed model
- straightforward to use: well-defined services, advice on how to use the system, help with problems
- failures should be understandable
- ability to determine the status of jobs and data
Sites' perspective:
- integrated into computer centre/IT infrastructures (including security)
- able to support the service
- able to allocate and manage resources - local autonomy where needed
Overall service perspective:
- performance and problem monitoring, accounting, etc.
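The "ability to determine the status of jobs" called for above could be wrapped in a small user-facing helper. This is a sketch under assumptions: the command name (`grid-job-status`), its one-line output, and the state names are hypothetical stand-ins for whatever the deployed middleware actually provides.

```python
import subprocess

# Map raw middleware states to the categories a user or helpdesk
# reasons about. The state names here are illustrative, not taken
# from any specific middleware release.
USER_STATE = {
    "Submitted": "queued",
    "Scheduled": "queued",
    "Running": "running",
    "Done": "finished",
    "Aborted": "failed",
}

def job_status(job_id, command="grid-job-status"):
    """Query a (hypothetical) status command and translate its output.

    Returns one of: queued, running, finished, failed, unknown.
    """
    result = subprocess.run(
        [command, job_id], capture_output=True, text=True
    )
    if result.returncode != 0:
        return "unknown"     # the query itself failed
    raw = result.stdout.strip()
    return USER_STATE.get(raw, "unknown")
```

The point of the translation table is the "failures should be understandable" requirement: raw middleware states are hidden behind a small, stable vocabulary.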

Requires agreements, collaboration, and coordination
LCG has to build the "virtual computer centre" (= the LHC computing environment), with all that is expected from a production service: user support, an operations group, "account" management, security, fabric management, etc.
Except this is now distributed across many countries and continents.
This requires agreements, collaboration, and coordination at all levels: management, system managers, user support, etc.

Grid Operations Centre
[Diagram: a user at a local site contacts local user support or the Call Centre; the Grid Operations Centre runs queries, monitoring & alarms, and corrective actions against the grid information service, grid operations, and grid logging & bookkeeping, and interacts with local operation at the sites, the Virtual Organisation, and the Network Operations Centre.]
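The operations flow on this slide is essentially an alarm-routing decision: monitoring raises an alarm, the operations centre either acts on it directly or hands it to local site operation or the call centre. A minimal sketch - the severity levels, field names, and routing targets are invented for illustration, not an actual LCG operations policy:

```python
def route_alarm(alarm):
    """Route a monitoring alarm to the next responsible party.

    `alarm` is a dict with a "severity" key and optional
    "auto_fixable" / "site" keys (all names are illustrative).
    """
    if alarm["severity"] == "info":
        return "log"                  # record in grid logging & bookkeeping
    if alarm.get("auto_fixable"):
        return "corrective-action"    # operations centre acts directly
    if alarm.get("site"):
        return "local-site-support"   # hand off to local operation
    return "call-centre"             # user-facing escalation
```

A fall-through chain like this makes the escalation path explicit, which is the property a distributed operations team needs to agree on.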

Deployment Summary
- Deploy middleware to support the essential functionality; the goal is to evolve and incrementally add functionality
- The added value is to robustify the middleware, support it, and make it into a 24x7 production service
How?
- Certification & test procedure with tight feedback to the developers - support agreements with the grid projects must be developed to ensure this
- Define the missing functionality and require it from the providers
- Provide documentation and training
- Provide the missing operational services
- Provide a 24x7 Operations and Call Centre: a guarantee to respond, and a single point of contact for a user
- Make the software easy to install, to facilitate new centres joining
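The certification & test step above is a gate: a candidate middleware release is promoted only if every check in the suite passes, and the failures are what gets fed back to the developers. A minimal sketch - the test names and the release representation are invented for illustration:

```python
def certify_release(release, tests):
    """Run (name, check) pairs against a candidate release.

    Returns (promoted, failures): the release is promoted only when
    every check passes; the failure list goes back to the developers.
    """
    failures = [name for name, check in tests if not check(release)]
    return (len(failures) == 0, failures)

# Illustrative checks; a real suite would exercise installation,
# job submission, data transfer, and the information services.
tests = [
    ("installs cleanly", lambda r: r.get("installed", False)),
    ("submits a test job", lambda r: r.get("job_ok", False)),
]

promoted, failures = certify_release(
    {"installed": True, "job_ok": False}, tests
)
# promoted is False; failures == ["submits a test job"]
```

Keeping the checks as named, independent entries is what makes the "tight feedback to developers" loop concrete: the gate output is exactly the list of things to fix.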

LCG Strategy
- Develop as little as possible: use existing middleware, tools and software; press developers to provide the missing functionality; negotiate support agreements
- Leverage existing experience: the various data grid projects and testbeds; TeraGrid, interoperability demonstrations, the GGF production grids area
- Actively encourage collaboration and coordination

Grid Deployment Teams - the plan
[Diagram: the suppliers' integration teams (common applications s/w; Trillium - US grid middleware; DataGrid middleware) provide tested releases to the LCG infrastructure (certification, build & distribution; coordination & operation; grid operation; user support; LCG call centre; ...), which in turn serves fabric operation at regional centres A, B, ... X, Y; HEPiX interests appear alongside.]

Coordination & Collaboration
There are many opportunities for common solutions, which should be actively pursued:
- HICB - JTB: existing & proposed new collaborative activities
- GLUE: schema definitions & interoperability work
- Validation and test suites
- Distribution and meta-packaging: interoperable distribution and configuration utilities were identified as a definite need by all the recent trans-Atlantic demonstration and validation work; support for this group comes from LCG, EDG, EDT, Trillium, and DataTAG
- Security czars: already talking, to address grid issues
- GGF: production grids, AAA, etc.
- LCG: grid deployment board, etc.

Summary of Issues that might be addressed by HEPiX/LCCWS
I know many of these are discussed by a plethora of grid projects and offshoots, but remember: more than ever before, we all have to work together coherently to make a grid work.
- Grid operations centre: TeraGrid, iVDGL
- User support - distributed helpdesk/call centre: iVDGL, TeraGrid, the Nordic grid collaborations, the GGF production grids area
- Helpdesk tools
- Certification process for operating environments
- Upgrade procedures
- Configuration management
- Joint OS version certification
- Packaging and installation, including applications
- User management
- Security, etc.
- Fabric management (see LCCWS)
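Joint OS version certification and configuration management, as listed above, reduce to comparing each site against an agreed baseline. A sketch under assumptions - the component names and version numbers are invented examples, not an actual LCG baseline:

```python
# Agreed minimum versions, as (major, minor) tuples.
# Component names and numbers are purely illustrative.
BASELINE = {
    "kernel": (2, 4),
    "globus": (2, 0),
}

def check_site(site_config):
    """Return the components that are missing or below the baseline.

    `site_config` maps component name -> installed (major, minor).
    """
    problems = []
    for component, minimum in BASELINE.items():
        installed = site_config.get(component)
        if installed is None or installed < minimum:
            problems.append(component)
    return sorted(problems)
```

For example, a site reporting `{"globus": (1, 1)}` would come back with both `globus` (too old) and `kernel` (not reported) flagged, which is exactly the list an upgrade procedure would work through.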

Proposal
- HEPiX is already (a lot of) the right people: already, or soon to be, deploying LCG and other grids in their computer centres
- Keep LCCWS associated with HEPiX
- Add a Grid Coordination/LCG interest group - like HEPNT or Storage - to address themes and issues of common interest
- Encourage new people to attend
- Line up specific talks by selected people to address the issues and to propose follow-on activities
- We need to solve the problems, not just talk about them
- This needs a coordinator & an agenda to make sure it happens - volunteers?