WP8 Status and Plans
F Harris (Oxford/CERN)
GridPP, Imperial College, 16 September 2002

Outline of presentation
– Overview of experiment plans for use of Grid facilities/services for tests and data challenges: ATLAS, ALICE, CMS, LHCb, BaBar, D0
– Status of ATLAS/EDG Task Force work
– Essential requirements for making 1.2.n usable by the broader physics user community
– Future activities of WP8 and some questions
– Summary

ATLAS
– Currently in the middle of Phase 1 of DC1 (Geant3 simulation, Athena reconstruction, analysis). Many sites are involved, in Europe, the US, Australia, Canada, Japan, Taiwan, Israel and Russia.
– Phase 2 of DC1 will begin Oct–Nov 2002, using the new event model.
– Plans for use of Grid tools in the Data Challenges:
  – Phase 1: the ATLAS-EDG Task Force is to repeat ~1% of the simulations already done, using EDG 1.2 at CERN, CNAF, Nikhef, RAL and Lyon: 9 GB input, 100 GB output, 2000 CPU hours.
  – Phase 2 will make larger use of Grid tools. Different sites may use different tools, and there will be (many?) more sites; this is still to be defined in September. ~10**6 CPU hours, 20 TB input to reconstruction, 5 TB output (how much of this on the testbed?).

ALICE
– ALICE assume that as soon as a stable version of 1.2.n is tested and validated it will be progressively installed on all EDG testbed sites.
– As new sites come online, an automatic tool will be used to submit test jobs of increasing output size and duration (see the sketch below).
– At the moment they do not plan a "data challenge" with EDG, but they do plan a data transfer test, as close as possible to the data transfer rate expected for a real production and analysis.
– Will concentrate on the AliEn/EDG and AliRoot/EDG interfaces, in particular for items concerning Data Management.
– Will use CERN, CNAF, Nikhef, Lyon, Turin and Catania for the first tests.
– CPU and storage requirements can be tailored to the availability of facilities on the testbed, but some scheduling and priorities will be needed.
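
A minimal sketch of what such an "automatic tool" could look like, assuming submission through the EDG command-line interface: it generates a JDL per step and submits jobs of growing duration and output size. The executable name, the submission command (dg-job-submit) and the step sizes are assumptions made for illustration, not ALICE's actual tool.

```python
#!/usr/bin/env python
"""Illustrative sketch only: submit EDG test jobs of increasing duration and
output size. Names and step sizes are assumptions, not ALICE's real tool."""

import subprocess

SUBMIT_CMD = "dg-job-submit"   # assumed EDG 1.2-era CLI name; adjust to your release
STEPS = [(60, 10), (600, 100), (3600, 1000)]   # (duration in seconds, output size in MB)

JDL_TEMPLATE = """\
Executable    = "alice_testjob.sh";
Arguments     = "{duration} {size_mb}";
StdOutput     = "testjob.out";
StdError      = "testjob.err";
InputSandbox  = {{"alice_testjob.sh"}};
OutputSandbox = {{"testjob.out", "testjob.err"}};
"""

def submit(duration, size_mb):
    """Write a JDL for one test job and hand it to the submission command."""
    jdl_name = "testjob_%ds_%dMB.jdl" % (duration, size_mb)
    with open(jdl_name, "w") as f:
        f.write(JDL_TEMPLATE.format(duration=duration, size_mb=size_mb))
    subprocess.call([SUBMIT_CMD, jdl_name])

if __name__ == "__main__":
    for duration, size_mb in STEPS:
        submit(duration, size_mb)
```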

CMS
– Currently running production for the DAQ Technical Design Report (TDR). This requires the full chain of CMS software and production tools, including Objectivity (licensing problem in hand).
– The 5% Data Challenge (DC04) will start in Summer 2003 and last ~7 months, producing 5*10**7 events. In the last month all data will be reconstructed and distributed to Tier-1/2 centres for analysis.
  – 1000 CPUs for 5 months, 100 TB output (LCG prototype).
– Use of Grid tools and facilities:
  – Will not be used for the current production.
  – Plan to use them in the DC04 production.
  – EDG 1.2 will be used for scale and performance tests (proof of concept): tests on the RB, RC and GDMP. Objectivity will be needed for the tests. Sites: IC, RAL, CNAF/BO, Padova, CERN, Nikhef, IN2P3, Ecole Polytechnique, ITEP. Some sites will do EDT + GLUE tests. CPU: ~50 CPUs distributed; storage: ~200 GB per site.
  – V2 will be necessary for DC04 starting in summer 2003 (it has the functionality required by CMS).

LHCb
– First intensive Data Challenge starts Oct 2002; intensive pre-tests are currently under way at all sites.
– Participating sites for 2002:
  – CERN, Lyon, Bologna, Nikhef, RAL
  – Bristol, Cambridge, Edinburgh, Imperial, Oxford, ITEP Moscow, Rio de Janeiro
– Use of the EDG Testbed:
  – Install the latest OO environment on testbed sites, with flexible job submission Grid/non-Grid (see the sketch below).
  – First tests (now) for MC + reconstruction + analysis with data stored to the Mass Store.
  – Large-scale production tests (by October).
  – Production (if the tests are OK): the aim is to do a percentage of the production on the Testbed. The total requirement is 500 CPUs for 2 months plus ~10 TB (10% should be OK on the testbed?).
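
As a sketch of the "flexible job submission Grid/non-Grid" idea: the same production script could be handed either to the EDG broker or to a local batch system, selected by a flag. The command names (dg-job-submit for EDG, LSF's bsub locally) and file names are assumptions for illustration, not LHCb's actual production tooling.

```python
#!/usr/bin/env python
"""Illustrative sketch: route the same production job to the Grid or to a
local batch system. Command and file names are assumptions."""

import subprocess
import sys

def submit_grid(jdl_file):
    # Hand the JDL to the EDG Resource Broker (assumed CLI name).
    return subprocess.call(["dg-job-submit", jdl_file])

def submit_local(script):
    # Fall back to the local batch system (LSF's bsub, as one example).
    return subprocess.call(["bsub", script])

if __name__ == "__main__":
    mode = sys.argv[1] if len(sys.argv) > 1 else "local"
    if mode == "grid":
        submit_grid("lhcb_mc_production.jdl")   # hypothetical JDL name
    else:
        submit_local("lhcb_mc_production.sh")   # hypothetical wrapper script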

BaBar Grid and EDG
– Target: have a production environment ready for all users by the end of this year
  – with attractive interface tools
  – customised to the SLAC site
– Have implemented local hacks to overcome problems with:
  – use of the LSF batch scheduler (uses AFS)
  – the AFS file system used for user home directories
  – batch workers located inside the IFZ (a security issue)
– Three parts of the Globus/EDG software were installed at SLAC: CE, WN and UI. The exercise clearly showed that they run fine together, and also with the RB at IC.
– There were problems with the old version of the RB; these should largely go away with the latest version.
– BaBar now have D. Boutigny on WP8/TWG.

D0 (Nikhef)
– Have already run many events on the testbeds of NIKHEF and SARA; wish to extend the tests to the whole testbed.
– D0 RPMs are already in the EDG releases and will be installed at all sites.
– Will set up a special VO and RC for D0 at NIKHEF on a rather short time scale.
– Jeff Templon, the NIKHEF representative in WP8, will report on this work.

ATLAS-EDG task force: members and sympathizers
– ATLAS: Jean-Jacques Blaising, Frederic Brochu, Alessandro De Salvo, Michael Gardner, Luc Goossens, Marcus Hardt, Roger Jones, Christos Kanellopoulos, Guido Negri, Fairouz Ohlsson-Malek, Steve O'Neale, Laura Perini, Gilbert Poulard, Alois Putzer, Di Qing, David Rebatto, Zhongliang Ren, Silvia Resconi, Oxana Smirnova, Stan Thompson, Luca Vaccarossa
– EDG: Ingo Augustin, Stephen Burke, Frank Harris, Bob Jones, Emanuele Leonardi, Mario Reale, Markus Schulz, Jeffrey Templon

Achievements so far
– A team of hard-working people across Europe in ATLAS and EDG (middleware + WP6 + WP8) has been set up (led by O. Smirnova with help from R. Jones and F. Harris).
– ATLAS software (release 3.2.1) is packed into relocatable RPMs, distributed and validated elsewhere (see the sketch below).
– With the GASS Cache problem worked around in EDG, 50% of the planned challenge has been performed (5 researchers × 10 jobs). Only the CERN testbed was fully available at the start, but this is changing fast.
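
For illustration, this is roughly what installing a set of relocatable RPMs under a site-chosen prefix looks like; the package file names and the prefix are invented, and `rpm --prefix` only relocates packages built with a Prefix: tag, which is what "relocatable" means here. This is a sketch in the spirit of the ATLAS 3.2.1 packaging, not the actual ATLAS installation procedure.

```python
#!/usr/bin/env python
"""Illustrative sketch: install relocatable RPMs under a chosen prefix.
Package names and the target directory are hypothetical."""

import subprocess

PREFIX = "/opt/atlas/3.2.1"                     # assumed target directory
PACKAGES = [
    "atlas-core-3.2.1-1.i386.rpm",              # hypothetical package names
    "atlas-simulation-3.2.1-1.i386.rpm",
    "atlas-reconstruction-3.2.1-1.i386.rpm",
]

for pkg in PACKAGES:
    # --prefix relocates the package payload away from its build-time location.
    subprocess.check_call(["rpm", "-ivh", "--prefix", PREFIX, pkg])
```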

In progress
– New set of challenges, including smaller input files (presentation and first results: Luc Goossens).
– All the core Testbed sites (1.2.2) are becoming available, plus FZK, so the rest of the challenge has a chance to be really distributed.
– Big-file replication can be done, avoiding GDMP and the Replica Manager (see the sketch below).
– With distributed input files, several jobs have already been steered by the RB to NIKHEF, following the requested input data; the rest of the batch went to CERN.
– A report is in preparation.
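
A minimal sketch of what "big-file replication avoiding GDMP and the Replica Manager" could mean in practice: a direct GridFTP copy between two Storage Elements using globus-url-copy (the standard Globus GridFTP client). The host names and paths are invented, and a valid Grid proxy (grid-proxy-init) is assumed to exist.

```python
#!/usr/bin/env python
"""Illustrative sketch: replicate a large file SE-to-SE with a direct GridFTP
copy. Host names and paths are invented for the example."""

import subprocess

SOURCE = "gsiftp://se.cern.example/flatfiles/atlas/dc1/input_partition_0001.zebra"
DEST   = "gsiftp://se.nikhef.example/flatfiles/atlas/dc1/input_partition_0001.zebra"

# Assumes a valid Grid proxy has already been created with grid-proxy-init.
subprocess.check_call(["globus-url-copy", SOURCE, DEST])
```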

Bottom line for the Task Force
– Major obstacles:
  – GASS Cache limitations (long jobs vs frequent submission) – being worked on
  – File transfer time limit in the data management tools – hopefully can be addressed soon
– Still, the ways around are known and quick fixes are deployed, allowing production-like jobs to run.
– The whole EDG middleware is very much in a development state, and things are changing (improving!) on a daily basis.

Essential requirements for making 1.2.n usable by the broader physics user community
– Top-level requirements:
  – The production testbed should be stable for weeks, not hours, and allow a spectrum of job submissions.
  – Reasonably easy-to-use basic functions for job submission, replica handling and mass storage utilisation (a sketch of the intended user workflow follows below).
  – Good, concise user documentation for all functions.
  – It should be easy for a user to get certificates and to get into the correct VO working environment.
– So what happens now, in today's reality? We have had very positive discussions at Budapest in joint meetings with the Workpackages:
  – The gass-cache and 20-minute file limit problems are the absolute top priority and are being pursued with patches right now. Let's hope we don't need a new version of Globus!
  – Wrap the data management complexity while waiting for version 2 (GDMP is too complex for the average user); an interim Replica Manager for single files is being tried out.
  – We need to clarify the use of mass storage (Castor, HPSS, RAL store) by multiple VOs, e.g. how the store is partitioned between VOs, and how a non-Grid user accesses the data.
  – Discussions are ongoing and interim solutions are being worked on.
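
To make the "basic functions for job submission" concrete, here is a hedged sketch of the workflow a user would go through: create a proxy, write a minimal JDL, submit it to the Resource Broker. grid-proxy-init is the standard Globus command; dg-job-submit / dg-job-status / dg-job-get-output are given as the EDG 1.2-era workload management CLI to the best of our knowledge, and the JDL content is a made-up "hello world" example, not a WP8 test job.

```python
#!/usr/bin/env python
"""Illustrative sketch of the basic user workflow on the EDG testbed.
Command names and JDL content are assumptions for illustration."""

import subprocess

JDL = """\
Executable    = "/bin/hostname";
StdOutput     = "hello.out";
StdError      = "hello.err";
OutputSandbox = {"hello.out", "hello.err"};
"""

def run(cmd):
    print("+ " + " ".join(cmd))
    subprocess.check_call(cmd)

if __name__ == "__main__":
    with open("hello.jdl", "w") as f:
        f.write(JDL)

    run(["grid-proxy-init"])              # authenticate: creates a short-lived proxy
    run(["dg-job-submit", "hello.jdl"])   # hand the job to the Resource Broker
    # One would then poll dg-job-status and finally fetch the output sandbox
    # with dg-job-get-output, using the job identifier printed at submission.
```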

More essential requirements on the use of 1.2
– We must put people and procedures in place for mapping the VO organisation onto testbed sites (e.g. quotas, priorities); a sketch of the underlying mapping follows below.
– We must clarify user support at sites (middleware + applications).
– Installation of applications software should not be combined with the system installation.
– Authentication & authorisation: can we streamline this procedure? (40-odd countries to accommodate for ATLAS!)
– Documentation (+ training – EDG tutorials for the experiments):
  – has to be user-oriented and concise
  – much good work is going on here (user guide + examples), about to be released.
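
Behind "mapping the VO organisation onto sites" sits the grid-mapfile, which maps each member's certificate DN to a local account. The sketch below generates such entries from a small VO membership table; the DNs and account names are invented, the "." prefix denotes a pool of accounts rather than a single fixed one, and in EDG this file was generated automatically from the VO membership servers rather than written by hand as done here.

```python
#!/usr/bin/env python
"""Illustrative sketch: build /etc/grid-security/grid-mapfile entries from a
VO membership table. DNs and account names are invented."""

VO_MEMBERS = {
    "atlas": ["/O=Grid/O=CERN/OU=cern.ch/CN=Ada Example",
              "/O=Grid/O=UKHEP/CN=Brian Example"],
    "lhcb":  ["/O=Grid/O=CNRS/CN=Claire Example"],
}

def grid_mapfile_lines(vo_members):
    # A leading "." marks a pool of accounts (e.g. atlas001, atlas002, ...)
    # rather than one fixed local account.
    for vo, dns in sorted(vo_members.items()):
        for dn in dns:
            yield '"%s" .%s' % (dn, vo)

if __name__ == "__main__":
    for line in grid_mapfile_lines(VO_MEMBERS):
        print(line)
```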

Some longer-term requirements
– Job submission should take into account the availability of space on SEs and the quota assigned to the user (e.g. for macro-jobs, say 500 jobs each generating 1 GB); a sketch of such a pre-submission check follows below.
– The Mass Store should be on the Grid in a transparent way (space management, archiving, staging).
– An easy-to-use replica management system is needed.
– Comments:
  – Are some of these 1.2.n rather than 2, i.e. increments in functionality in successive releases?
  – Task Force people should maintain a continuing dialogue with the developers (the data challenge managers from all VOs should be included in this dialogue).
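
As a purely hypothetical illustration of the first requirement: before submitting the 500 × 1 GB macro-job, check how many jobs actually fit within the free space on the target SE and the user's quota. The two query functions and all the numbers are invented placeholders; EDG 1.2 offered no such service, which is exactly why this is listed as a longer-term requirement.

```python
#!/usr/bin/env python
"""Illustrative sketch of a pre-submission space/quota check for a macro-job.
All numbers and the query functions are invented placeholders."""

N_JOBS = 500
OUTPUT_PER_JOB_GB = 1.0

def free_space_on_se_gb(se_name):
    # Placeholder: in a real system this would come from the information service.
    return 350.0

def user_quota_remaining_gb(user, se_name):
    # Placeholder: no quota service existed in EDG 1.2.
    return 600.0

def jobs_that_fit(se_name, user):
    limit_gb = min(free_space_on_se_gb(se_name), user_quota_remaining_gb(user, se_name))
    return int(limit_gb // OUTPUT_PER_JOB_GB)

if __name__ == "__main__":
    n = min(N_JOBS, jobs_that_fit("se.cnaf.example", "aexample"))
    print("Would submit %d of %d jobs now and defer the rest." % (n, N_JOBS))
```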

Future activities of WP8 and some questions
– The mandate of WP8 is to facilitate the interfacing of applications to the EDG middleware, to participate in the evaluation, and to produce the evaluation reports (writing starts very soon!).
– The Loose Cannons have been heavily involved in testing middleware components and have produced test software and documentation. This should be packaged for use by the Test Group (now strengthened and formalised). The LCs will be involved in liaising with the experiments testing their applications. The details of how this relates to the new EDG/LCG testing/validation procedure have to be worked out.
– WP8 has been involved in the development of application use cases and participates in the current ATF activities. This is continuing; LCG, via the GDB, will carry this on in a broader sense.
– We are interested in the feasibility of a common application layer running over the middleware functions. This issue goes into the domain of the current LCG deliberations.

Summary
– The current WP8 top-priority activity is the ATLAS/EDG Task Force work.
  – This has been very positive: it focuses attention on the real user problems, and as a result we review our requirements, design, etc. Remember the eternal cycle! We must maintain flexibility, with a continuing dialogue between users and developers.
– We will continue Task Force-flavoured activities with the other experiments.
– Current use of the Testbed is focused on the main sites (CERN, Lyon, Nikhef, CNAF, RAL), mainly for reasons of support given the unstable situation. Once stability is achieved (see the ATLAS/EDG work) we will expand to other sites, but we should be careful in the selection of those sites in the first instance; local support would seem essential.
– WP8 will maintain a role in the architecture discussions, and may be involved in some common application layer developments.
– THANKS to the members of IT and the middleware WPs for heroic efforts in the past months, and to Federico for laying the WP8 foundations.