CMS Report – GridPP Collaboration Meeting V
Peter Hobson, Brunel University, 16/9/2002


CMS Status and Plans
  Progress towards GridPP milestones
    Workload management (ICSTM)
    Monitoring (Brunel)
    Data management (Bristol)
    See also separate talks tomorrow by Dave Colling and Owen Maroney
  All-hands demo
    See the talk/demonstration today by Sarah Marr & Dave Colling
  Future plans
    Data challenges in 03/04
    Network performance issues

Production computing
Phil Lewis (ICST&M, London): Workload Management
  Contributed to the multistage MC production
  Key contribution to current production MC
  [Chart on slide: total number of MC events produced/processed, in millions, for Q1 and Q2 of 2002]

Production computing
Dave Colling (ICST&M, London): Workload Management
  Contributed to the multistage MC production
    This includes the Objectivity part and the ability to run on sites that have no CMS software installed (in which case a DAR installation is performed first)
    This work is prevented from being fully effective only by the GASS cache bug
  Sheffield demo of the CMS portal
    Two stages of the MC production ran during the demo
  Participated in the CMS production tools grid review
    Working towards a unified grid approach for CMS grids in Europe and the US

Web portal
Sarah Marr (ICST&M, London): Workload Management

Manpower report
Barry MacEvoy (Imperial College London): CMS WP8, 0.5 FTE
Activities:
  Installation of an LCFG server to build and configure testbed nodes on the CMS farm
  Some work on multi-stage Monte Carlo production
  R-GMA interface to BOSS (in collaboration with Nebrensky et al.)
  Preparation of the CMS demo and associated literature
  Data analysis architecture design (just started)

Adding R-GMA to BOSS
Henry Nebrensky (Brunel): Monitoring, FTE
  BOSS is the job submission and tracking system used by CMS; BOSS is not "Grid enabled"
  Using EDG (WP3) R-GMA release 2
  Data is currently sent back directly from the WN to the BOSS database on the UI
    This is being replaced by an R-GMA producer and consumer (see the sketch below)
  Status today:
    An R-GMA schema and registry server is operating
    A mock BOSS job in C++ exists for test purposes
    This code is currently being integrated into BOSS
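The sketch below is purely illustrative and not taken from the slides: it shows the shape of the change described above, with a mock job wrapper handing each status tuple to a producer object instead of writing it straight into the BOSS database on the UI. The MockProducer class, its publish() method and the bossJobStatus table name are placeholders standing in for the real R-GMA producer interface, whose actual classes and methods are not reproduced here.

    // Illustrative sketch only: a mock "BOSS-like" job wrapper publishing status
    // tuples through a producer object instead of writing directly to the BOSS
    // database on the UI. MockProducer stands in for the real R-GMA producer
    // interface; its name and methods are assumptions, not the actual API.
    #include <iostream>
    #include <map>
    #include <string>

    class MockProducer {
    public:
        explicit MockProducer(const std::string& table) : table_(table) {}

        // A real producer would stream this tuple to the registry for consumers;
        // here it is just printed so the sketch is runnable on its own.
        void publish(const std::map<std::string, std::string>& row) const {
            std::cout << "INSERT INTO " << table_ << ":";
            for (const auto& kv : row)
                std::cout << " " << kv.first << "='" << kv.second << "'";
            std::cout << std::endl;
        }

    private:
        std::string table_;
    };

    int main() {
        MockProducer producer("bossJobStatus");   // hypothetical table name
        // Simulate the phases a wrapped job would report back from the worker node.
        for (const char* phase : {"SUBMITTED", "RUNNING", "DONE"})
            producer.publish({{"jobId", "42"}, {"status", phase}});
        return 0;
    }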

Testbed, Data Management
Owen Maroney (University of Bristol): Data Management
  Testbed site
    Operating with EDG 1.2
    Testbed in the GridPP VO
    Hosting the GridPP VO Replica Catalogue service
  Data management
    Testing of the GDMP, Replica Catalogue and Replica Manager services between Bristol, RAL and CERN
  Regional Data-Centre milestone requirements (rough numbers below):
    Replicate >10TB of CMS data between CERN and RAL, in >17k files, using Grid tools
    Store on the Datastore MSS at RAL
    Make it accessible anywhere on the Grid through the Replica Manager services
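A rough back-of-envelope check of the milestone numbers (not from the slides; the 100 Mbit/s sustained rate is an assumed figure used purely for illustration):

    // Back-of-envelope numbers for the replication milestone (>10 TB in >17k files).
    // The 100 Mbit/s sustained rate is an assumption, used purely for illustration.
    #include <cstdio>

    int main() {
        const double total_bytes = 10e12;     // 10 TB
        const double n_files     = 17000.0;
        const double rate_bit_s  = 100e6;     // assumed sustained WAN rate

        const double avg_file_MB   = total_bytes / n_files / 1e6;
        const double transfer_days = (total_bytes * 8.0 / rate_bit_s) / 86400.0;

        std::printf("average file size: %.0f MB\n", avg_file_MB);               // ~590 MB
        std::printf("transfer time at 100 Mbit/s: %.1f days\n", transfer_days); // ~9.3 days
        return 0;
    }

In other words, even at a sustained 100 Mbit/s the full sample is a multi-day transfer, above the current CMS peak utilisation quoted later in the talk.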

Grid Object Access
Tim Barrass (University of Bristol)
  Entered post on 1 September
  Implementation and testing of the new persistency layer for CMS
    Testing/rollout at the T1 and the interface to Grid services are the main interest
    Using the POOL framework, currently under development
  Also associated with BaBar (50%); will help with immediate developments of their data storage model

Data Challenge
DC04 is a crucial milestone for CMS computing
  An end-to-end test of our offline computing system at 25% scale
    Simulates data taking at 25Hz, for one month (rough scale check below)
    Tests software, hardware, networks and organisation
  The first step in the real scale-up to the exploitation phase
    Data will be used directly in the preparation of the Physics TDR
The steps:
  Generate simulated data through worldwide production ('DC03')
  Copy raw digitised data to the CERN Tier-0
  'Play back' through the entire computing system
    T0 plus 2 or 3 proto-T1s operational (US, UK, ...), many T2s
    Analyses, calibrations and DQM checks performed at T2 and T3 centres
  Grid middleware is an important part of the computing system
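As a rough scale check (not from the slides), one month of continuous data taking at 25Hz corresponds to roughly 65 million events:

    // Rough scale check: one month of continuous data taking at 25 Hz.
    #include <cstdio>

    int main() {
        const double rate_hz = 25.0;
        const double month_s = 30.0 * 86400.0;   // ~2.6e6 seconds
        std::printf("events in one month at 25 Hz: %.1e\n", rate_hz * month_s);  // ~6.5e7
        return 0;
    }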

Data Challenge
DC03 in the UK (starts July '03, runs five months)
  Plan to produce ~50TB of GEANT4 data at T1 and T2 sites, starting July '03
  All data stored at RAL: this means 60Mb/s continuously into the RAL datastore for 4-5 months
  Data digitised at RAL with full background; 30TB of digis shipped to CERN at 1TB/day (>100Mb/s continuously over the WAN)
  New persistency layer (POOL?) used throughout
DC04 in the UK (Feb '04)
  ~30TB transferred to the Tier-1 in one month (100Mb/s continuous; see the check below)
  Data replicated to Tier-2 sites on demand
  Full analysis framework in place at Tier-1, Tier-2 and Tier-3 sites
Some very serious technical challenges here
  The work starts now; CMS milestones are oriented accordingly
  If Grid tools are to be fully used, external projects must deliver
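A quick check of the sustained rates quoted above (not from the slides): both 1TB/day and 30TB in a month correspond to a raw payload rate of about 93 Mbit/s, so the ~100Mb/s figures are of the right order once protocol overhead and downtime are allowed for.

    // Quick check of the sustained rates quoted on the DC03/DC04 slide.
    #include <cstdio>

    int main() {
        const double day_s = 86400.0;

        // DC03: 1 TB/day of digis shipped from RAL to CERN.
        std::printf("1 TB/day    = %.0f Mbit/s sustained\n",
                    1e12 * 8.0 / day_s / 1e6);                 // ~93 Mbit/s

        // DC04: 30 TB into the Tier-1 in one month.
        std::printf("30 TB/month = %.0f Mbit/s sustained\n",
                    30e12 * 8.0 / (30.0 * day_s) / 1e6);       // ~93 Mbit/s
        return 0;
    }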

Network Performance
Networks are a big issue
  All Grid computing relies on high-performance networks
  But data transfer was a bottleneck in previous data challenges
    Not through lack of infrastructure – we just do not yet know how to use the network to its full capability (it is highly non-trivial)
    CMS peak utilisation of the 1Gbit/s+ bandwidth from RAL to CERN is <100Mbit/s (see the illustration below)
  Fast data replication underlies the success of DC04
Some initial progress in this area in 2002
  BaBar, CMS (and others?) using smart(er) transfer tools, with good results
  Contacts made with PPNCG / WP7
    Discussion at the last EB/TB session
    CMS, BaBar and CDF/D0 talks at last week's PPNCG meeting
  Starting to get a feel for where the bottlenecks are
    Most often they are in the local infrastructure
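One well-known reason a single transfer stream stalls far below the capacity of a long, fast path is an untuned TCP window. The illustration below uses an assumed RAL-CERN round-trip time of about 20 ms (an assumption, not a figure from the talk): the bandwidth-delay product gives the window one stream needs to fill a 1Gbit/s path, and a typical default 64 kB window caps a single stream at a few tens of Mbit/s.

    // Why an untuned TCP window caps a single stream well below the link rate.
    // The ~20 ms RAL-CERN round-trip time is an assumed value for illustration.
    #include <cstdio>

    int main() {
        const double link_bit_s  = 1e9;             // 1 Gbit/s path
        const double rtt_s       = 0.020;           // assumed round-trip time
        const double default_win = 64.0 * 1024.0;   // typical default window (bytes)

        // Window needed for one stream to fill the link: bandwidth x delay.
        std::printf("window needed at 1 Gbit/s, 20 ms RTT: %.1f MB\n",
                    link_bit_s * rtt_s / 8.0 / 1e6);            // ~2.5 MB

        // Throughput ceiling imposed by the default window: window / RTT.
        std::printf("ceiling with a 64 kB window: %.0f Mbit/s\n",
                    default_win * 8.0 / rtt_s / 1e6);           // ~26 Mbit/s
        return 0;
    }

Larger windows and/or several parallel streams are the kind of measures that "smart(er) transfer tools" typically rely on.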

Networks – future requirements
Where now?
  CMS needs a substantial improvement in data handling capability and in throughput to CERN and the US by mid-2003
  All experiments will eventually face these problems
  Strong expertise in storage and networks exists within the UK and elsewhere – we should use it
First steps:
  Practical real-world tests on the production network from the UK Tier-1/A to UK, CERN and US sites, with experts in attendance
    Compare with best-case results from a 'tuned' setup (see the sketch below)
  Provide dedicated test servers at UK T1 and T2 sites so that we can find the bottlenecks
    These will need to be highly specified machines, and will need system management support at RAL
  Work to see how this relates to the SE architecture, and test it
    Must balance flexibility and robustness against throughput
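As an indication of what a 'tuned' setup can mean at the application level, the fragment below enlarges the kernel socket buffers on a test machine before a transfer. It is a sketch of one tuning knob only (the 4 MB figure is illustrative and should really follow from the path's bandwidth-delay product), not a description of any particular tool used by CMS.

    // Sketch of one tuning knob on a dedicated test server: enlarge the TCP
    // socket buffers so that a single stream can keep a long, fast path full.
    // The 4 MB value is illustrative only.
    #include <cstdio>
    #include <sys/socket.h>
    #include <unistd.h>

    int main() {
        int sock = socket(AF_INET, SOCK_STREAM, 0);
        if (sock < 0) {
            std::perror("socket");
            return 1;
        }

        int bufsize = 4 * 1024 * 1024;   // 4 MB send/receive buffers
        if (setsockopt(sock, SOL_SOCKET, SO_SNDBUF, &bufsize, sizeof(bufsize)) < 0)
            std::perror("setsockopt(SO_SNDBUF)");
        if (setsockopt(sock, SOL_SOCKET, SO_RCVBUF, &bufsize, sizeof(bufsize)) < 0)
            std::perror("setsockopt(SO_RCVBUF)");

        // Read back what the kernel actually granted (it may cap or adjust the request).
        int granted = 0;
        socklen_t len = sizeof(granted);
        getsockopt(sock, SOL_SOCKET, SO_SNDBUF, &granted, &len);
        std::printf("send buffer granted by the kernel: %d bytes\n", granted);

        close(sock);
        return 0;
    }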

Summary
  All three UK sites are working in a coherent fashion
  Significant progress in all areas
    The UK has made a major contribution to production MC for CMS
    Bristol now hosts the VO replica catalogue
    Contributed to the "Hands On" meeting
  All tasks are currently on target to meet their milestones
BUT
  Major data challenges are coming up in 2003 and 2004
  Technical challenges remain in the efficient use of WAN capacity