CMS Report – GridPP Collaboration Meeting VI
Peter Hobson, Brunel University, 30/1/2003

CMS Status and Plans
- Progress towards GridPP milestones: workload management (Imperial), monitoring (Brunel), data management (Bristol)
- Future plans: CMS Data Challenge
- Issues: network performance, EDG
- Bristol, Brunel and Imperial (1.5 GridPP FTE in total)

CMS deliverables
The table shows the deliverables and milestones, and our progress during the first year of the GridPP project.

Production Monte Carlo
- GridPP deliverable "Production Monte Carlo for CMS" achieved on 30/9/2002.
- Key work since then was the "Stress Test":
  - verification of the portability of the CMS production environment into a grid environment;
  - verification of the robustness of the European DataGrid middleware in a production environment;
  - production of data for physics studies of CMS, possibly at the level of 1 million simulated events, which corresponds to (assuming 250 events per job):
    - 8000 jobs (4000 CMKIN and 4000 CMSIM) submitted to the system;
    - 8000 files created and registered in the Replica Management System (4000 ntuples and 4000 FZ files);
    - about 11 CPU years on a PIII 1 GHz CPU;
    - about 1.8 TB of data in FZ files (a back-of-the-envelope check follows below).
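A minimal arithmetic check of the figures quoted above. The event count, events-per-job and volumes are taken from this slide; the decimal unit convention and the per-file and per-job averages are assumptions for illustration, not official CMS accounting:

```python
# Back-of-the-envelope check of the Stress Test targets (illustrative only).

events_total   = 1_000_000      # target: ~1 million simulated events
events_per_job = 250            # assumed events per job

jobs_per_step = events_total // events_per_job   # CMKIN and CMSIM each
total_jobs    = 2 * jobs_per_step                # 4000 + 4000 = 8000 jobs
total_files   = total_jobs                       # one output file per job
                                                 # (4000 ntuples + 4000 FZ files)

print(f"jobs per step : {jobs_per_step}")        # 4000
print(f"total jobs    : {total_jobs}")           # 8000
print(f"total files   : {total_files}")          # 8000

# Rough volume and CPU averages implied by the quoted totals
# (decimal units assumed: 1 TB = 1e6 MB).
fz_volume_tb = 1.8
cpu_years    = 11
print(f"average FZ file size : {fz_volume_tb * 1e6 / jobs_per_step:.0f} MB")    # ~450 MB
print(f"average CPU per job  : {cpu_years * 365 * 24 / total_jobs:.0f} hours")  # ~12 h on a 1 GHz PIII
```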

Production Monte Carlo (slide content not captured in the transcript)

Production Monte Carlo – Imperial, UK (slide content not captured in the transcript)

Prototype Tier 1 centre
- GridPP deliverable "Prototype Regional Data Centre" is currently late, with an estimated delivery date of 28/2/2003.
- Large-scale testing of grid data management software took place as part of the CMS Stress Test: the first time all components necessary for the Regional Data Centre were available.
- 2,147 FZ files actually produced: ~500 GB of data successfully transferred using automated grid tools during production, including transfers to and from the mass storage systems at CERN and Lyon.
- The Replica Catalog (RC) service failed completely when the number of files registered was still significantly below that necessary for a Regional Data Centre.
- Even when functioning, the slow performance of the RC was the major bottleneck on file transfer speeds (e.g. a 200 Mb file could take 2 minutes to transfer, of which only 20 s was actually file copying; see the check below).
- EDG have already planned a replacement for the RC. This should fix the problems, but is not scheduled for release until Testbed 2.
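As a rough illustration of the bottleneck described above, using the example figures on this slide (a ~200 Mb file taking ~2 minutes end-to-end, of which only ~20 s was the copy itself); the exact numbers are indicative only:

```python
# Illustration of why the Replica Catalog dominated the transfer time.

file_size    = 200.0   # example file size, in the units quoted on the slide
total_time_s = 120.0   # end-to-end transfer time, including RC interactions
copy_time_s  = 20.0    # time spent actually moving bytes

overhead_s        = total_time_s - copy_time_s
overhead_fraction = overhead_s / total_time_s
slowdown          = total_time_s / copy_time_s

print(f"catalogue overhead : {overhead_s:.0f} s "
      f"({overhead_fraction:.0%} of the transfer)")              # 100 s, ~83%
print(f"effective rate     : {file_size / total_time_s:.1f} per second "
      f"vs {file_size / copy_time_s:.1f} for the copy alone "
      f"(a factor of {slowdown:.0f} slower)")                    # ~1.7 vs 10, 6x
```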

Adding R-GMA to BOSS
- GridPP deliverable "Initial Monitoring" achieved on 29/11/2002.
- BOSS is the job submission and tracking system used by CMS, but the monitoring part of BOSS is not "Grid enabled".
- Delivered functionality:
  - A dummy BOSS producer was written and used to check the transfer of job monitoring data via R-GMA from Imperial to Brunel (and vice versa), and for tasks lasting more than 24 hours.
  - An R-GMA producer has been integrated into the BOSS job wrapper dbUpdator, and a separate "receiver" written that consumes information from R-GMA and updates the local BOSS database (see the sketch below).
  - The producer/consumer pair has been tested successfully with multiple simultaneous jobs running on a machine at Brunel.
  - Deployment onto the testbed was postponed until after the CMS stress test and is now in progress.
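A schematic sketch of the producer/consumer flow described above. This is not the actual BOSS or R-GMA code: the class names, method names and table layout are hypothetical placeholders, and a plain Python list and SQLite database stand in for the R-GMA transport and the BOSS database.

```python
# Schematic sketch only: all names here are placeholders, NOT the real
# R-GMA or BOSS APIs; the point is the producer/consumer split.

import sqlite3
from dataclasses import dataclass

@dataclass
class JobUpdate:
    job_id: str   # BOSS job identifier
    key: str      # monitored quantity, e.g. "status" or "events_done"
    value: str

class JobMonitorProducer:
    """Runs inside the job wrapper: publishes one tuple per job update."""
    def __init__(self, transport):
        self.transport = transport          # stand-in for an R-GMA producer
    def publish(self, update: JobUpdate):
        self.transport.append(update)       # in reality: publish into an R-GMA table

class JobMonitorReceiver:
    """Runs near the BOSS server: consumes tuples and updates the local DB."""
    def __init__(self, transport, db_path=":memory:"):
        self.transport = transport          # stand-in for an R-GMA consumer
        self.db = sqlite3.connect(db_path)
        self.db.execute("CREATE TABLE IF NOT EXISTS boss_jobs"
                        " (job_id TEXT, key TEXT, value TEXT)")
    def drain(self):
        while self.transport:
            u = self.transport.pop(0)
            self.db.execute("INSERT INTO boss_jobs VALUES (?, ?, ?)",
                            (u.job_id, u.key, u.value))
        self.db.commit()

# Usage: the wrapper publishes updates; the receiver drains them into the BOSS DB.
queue = []                                   # stand-in for the R-GMA stream
JobMonitorProducer(queue).publish(JobUpdate("job-42", "status", "RUNNING"))
receiver = JobMonitorReceiver(queue)
receiver.drain()
print(receiver.db.execute("SELECT * FROM boss_jobs").fetchall())
```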

Adding R-GMA to BOSS
Only the interface classes are shown (figure not reproduced in this transcript); much more information is available from our web pages.

Grid Object Access
- Looking for a replacement for Objectivity; work has commenced on the POOL project.
- First internal release Oct 2002; first public release Dec 2002.
- Bristol is developing applications to stress-test the POOL public release as a CMS project.
- Excellent crossover with BaBar, as Bristol is also involved in the redesign of the BaBar event store; BaBar is evaluating POOL as a possible candidate framework.

Future: Data Challenge
- DC04 is a crucial milestone for CMS computing: an end-to-end test of our offline computing system at 25% scale.
- Simulates one month of data taking at 25 Hz; tests software, hardware, networks and organisation.
- The first step in the real scale-up to the exploitation phase; the data will be used directly in the preparation of the Physics TDR.
- Grid middleware is an important part of the computing system. What is used in the next few months is likely to be what remains in use afterwards (no refactoring).
- DC03 in the UK (starts July '03, five months):
  - Plan to produce ~50 TB of GEANT4 data at Tier 1 and Tier 2 sites, starting July '03.
  - All data stored at RAL: this means 60 Mb/s continuously into the RAL datastore for 4-5 months.
  - Data digitised at RAL with full background; 30 TB of digis shipped to CERN at 1 TB/day (>100 Mb/s continuously over the WAN; see the rate check below).
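A back-of-the-envelope check of the DC03 shipping figures quoted above, assuming decimal units (1 TB = 10^12 bytes); purely illustrative:

```python
# Sustained rate implied by the quoted 1 TB/day shipment of digis to CERN.

SECONDS_PER_DAY = 86_400

def mb_per_s(bytes_per_day: float) -> float:
    """Sustained rate in megabits per second for a given daily volume."""
    return bytes_per_day * 8 / SECONDS_PER_DAY / 1e6

daily_volume = 1e12                      # 1 TB of digis shipped per day
print(f"1 TB/day -> {mb_per_s(daily_volume):.0f} Mb/s sustained")   # ~93 Mb/s
# Consistent with the ">100 Mb/s continuously over WAN" figure once protocol
# and operational overheads are allowed for.

digis_total = 30e12                      # 30 TB of digis to ship to CERN
print(f"30 TB at 1 TB/day -> {digis_total / daily_volume:.0f} days of transfers")
```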

Data Challenge
- Some very serious technical challenges here.
- The work starts now; CMS milestones are oriented accordingly.
- If Grid tools are to be fully used, external projects must deliver.

Network Performance
- Networks are a big issue: all Grid computing relies on high-performance networks, but data transfer was a bottleneck in previous data challenges.
- Some initial progress in this area in 2002:
  - CMS (and BaBar) are using smart(er) transfer tools, with good results;
  - contacts made with PPNCG / WP7 and UCL.
- Starting to get a feel for where the bottlenecks are: most often in the local infrastructure. Progress in real understanding is being made.

Summary
Successes:
- R-GMA producer and consumer tools for BOSS job monitoring were released by Brunel.
- ORCA-7, built upon a prototype of the new ROOT-based persistency solution, has been released; initial performance testing of this release is in progress at Bristol.
- A new Grid-enabled IMPALA & BOSS framework was released and used during the CMS stress test at RAL and Imperial.
- Some "CMS" GridPP software is being evaluated by BaBar UK.
Problems:
- The "Prototype Regional Data Centre" deliverable has slipped, and is now expected to be achieved by the end of Feb '03. The primary reasons for this slippage are:
  - the very late release of "stable" EDG software;
  - functional problems with the data management components of the EDG release.
- These risks were noted in the original deliverables planning.