User Board - Supporting Other Experiments. Stephen Burke, RAL; Glenn Patrick.


Slide 1: User Board - Supporting Other Experiments. Stephen Burke, RAL; Glenn Patrick. GridPP 23, 9th September 2009.

Slide 2: User Board
- Meets ~3 times a year
- Representatives of all experiments
- Allocates CPU and disk at the Tier-1
  – And in principle at the Tier-2s
- Also discusses any topics of interest to users
  – But not a technical forum; mainly about policy
  – Once operational, the LHC experiments will obviously be the main strategic priority, but every attempt will also be made to resource the non-LHC experiments within the limits of available funding

Slide 3: Resources for non-LHC VOs
- Small experiments seem to prefer to use the Tier-1
  – So far the Tier-1 CPU has been under-utilised, but that may change when the LHC starts taking data
  – Storage can be allocated to small VOs in Castor, but it's a complex and heavyweight system if you just want a few TB of disk storage
    - Needs discussion with the Tier-1 team to set up, and takes some time, so give notice well in advance
    - Disk server size gives a quantised allocation
  – The Tier-1 team have to give priority to the LHC experiments, but they do support everyone
- Consider the Tier-2s too
  – Total resources are comparable to the Tier-1
  – The DPM Storage Element is much simpler than Castor (see the sketch after this list)
  – Grid model: jobs can run anywhere
  – Local support if you use your local site
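As an illustration of the simpler Tier-2 route, here is a minimal sketch of storing and retrieving a file with lcg-utils against a DPM storage element. The VO name, SE hostname and BDII hostname are placeholders, not real GridPP endpoints; ask your VO or site for the actual values.

    # Point lcg-utils at a top-level BDII (hostname assumed):
    export LCG_GFAL_INFOSYS=lcg-bdii.example.ac.uk:2170
    # Copy-and-register: upload the file to the SE and record it in the
    # LFC under a logical file name; prints the GUID of the new entry
    lcg-cr --vo myvo -d se01.example.ac.uk \
           -l lfn:/grid/myvo/data/run001.dat file:$PWD/run001.dat
    # Any grid job can later fetch it back by logical name alone
    lcg-cp --vo myvo lfn:/grid/myvo/data/run001.dat file:/tmp/run001.dat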

Slide 4: Other resources
- VOMS server: maintained by Manchester
- File Catalogue: can use the LFC at the Tier-1
- WMS (aka Resource Broker): primary instance at the Tier-1 (see the sketch after this list)
  – Other instances at Glasgow and Imperial
- User Interface: there should be one at your local site
  – The UI at the Tier-1 is restricted, but access can be granted if it's needed
- GANGA: job submission and management tool
  – Developed in the UK for ATLAS and LHCb, but now widely used and supported
- Documentation
  – GridPP web site
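To make the WMS route concrete, here is a minimal sketch of a job description and the matching gLite commands, assuming a gLite UI with your certificate installed; "myvo" and the file names are placeholders.

    # hello.jdl: about the smallest useful job description
    cat > hello.jdl <<'EOF'
    Executable    = "/bin/echo";
    Arguments     = "hello from the Grid";
    StdOutput     = "std.out";
    StdError      = "std.err";
    OutputSandbox = {"std.out", "std.err"};
    EOF

    voms-proxy-init --voms myvo                  # get a VOMS proxy first
    glite-wms-job-submit -a -o jobids hello.jdl  # -a: automatic delegation
    glite-wms-job-status -i jobids
    glite-wms-job-output -i jobids               # fetch the sandbox when Done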

Slide 5: Support for small VOs
- Janusz Martyniak: software/service support
  – Running an LFC
  – Support for use of VOMS, BDII, … (example after this list)
  – Gridification
- Stephen Burke: documentation, advice, troubleshooting
  – Use of MyProxy
  – Use of SRM, lcg-utils etc.
  – Middleware support and debugging
  – Pointers to documentation
- Ask for help via the UB or directly to us
  – New startup VOs need particular help to get started
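A common first question for a new VO is what resources it can actually see; that can be answered from the information system (BDII). A sketch, with an assumed BDII hostname and VO name:

    export LCG_GFAL_INFOSYS=lcg-bdii.example.ac.uk:2170  # assumed hostname
    lcg-infosites --vo myvo ce   # computing elements open to the VO
    lcg-infosites --vo myvo se   # storage elements and their free space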

Slide 6: Other support channels
- Two weekly meetings, both in EVO
  – dteam, Tuesdays at 11:00; deals mainly with Tier-2 issues
  – Tier-1 weekly, Wednesdays at 13:30
  – Both have slots for any experiment to discuss problems and requirements
- GGUS tickets
  – To report specific technical problems with any Grid site or middleware
- Training
  – Provided for EGEE (and EGI?) by Edinburgh (NeSC)
  – Not sure if any GridPP users have tried it
  – Do we need something customised for small HEP VOs?

Slide 7: Mailing lists
- UKHEPGRID
  – General announcements; low volume
- GRIDPP-USERS
  – Originally intended for user discussion, but never used in practice
  – Now used for user-oriented announcements
- TB-SUPPORT
  – Discussion list for site admins
  – Fairly active; user questions would probably get a helpful answer
- dteam
  – Internal list for the dteam, but users can address questions to it
- GRIDPP-UB
  – UB mailing list; low-volume discussion of resource/policy issues

Slide 8: General comments
- Please ask for help!
  – Sometimes people seem reluctant
  – The Grid has a reputation for problems; there is some justification for that, but people may give up too easily
  – Need to follow up if things aren't moving
  – Sites would like to know what's happening, good or bad
  – Don't just ask one person; they may be busy or not know the answer
- It is possible to solve Grid problems …
  – … sometimes …
  – Otherwise there is usually another way; maybe not ideal, but things can be made to work, and many users are using the Grid successfully

Slide 9: Random examples
- The "Maradona" error
  – A major cause of job failure since forever
  – Seems cryptic and hard to understand, so people just ignore it and resubmit: "The Grid is broken!"
  – There are several possible causes and it can take some effort to track down, but it can be fixed; it isn't inevitable. GGUS ticket to the site.
- Can't delete directories in SRM
  – Technically possible, but there was indeed no easy way
  – Submitted a Savannah bug; the fix took a few months to reach production
  – Now you can do it (lcg-del -d)
- Using MyProxy to automatically renew a VOMS proxy (sketch after this list)
  – Not obvious, but easy when you know how
  – Now documented
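The MyProxy recipe is roughly as follows; this is a sketch with placeholder server and SE names, since the documented GridPP specifics are not in this transcript.

    # One-off: store a long-lived credential on a MyProxy server.
    # -d uses the certificate DN as the username; -n allows retrieval
    # without a passphrase
    myproxy-init -s myproxy.example.ac.uk -d -n

    # Later, e.g. from cron: fetch a fresh 24-hour proxy, then re-attach
    # the VOMS attributes to the existing proxy (--noregen)
    myproxy-logon -s myproxy.example.ac.uk -d -t 24
    voms-proxy-init --voms myvo --noregen

    # And the directory deletion mentioned above (placeholder SRM path):
    lcg-del -d --vo myvo srm://se01.example.ac.uk/dpm/example.ac.uk/home/myvo/old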

Slide 10: Summary
- The LHC experiments are bound to get priority, especially as data-taking starts, but GridPP does support the other experiments too
- UK CPU resources are substantial
  – A small fraction of a large system is still a lot
  – Scope for opportunistic use when the LHC VOs are quiet: the original rationale for the Grid
- Storage is harder because it's long-term
  – But small experiments probably don't need a huge amount
- Consider the Tier-2s as well as the Tier-1
- Ask for help; it is available
  – Don't give up!

Slide 11: UK Grid usage