GDB, 07/06/06 CMS Centre Roles
- CMS data hierarchy:
  - RAW (1.5/2 MB) -> RECO (0.2/0.4 MB) -> AOD (50 kB) -> TAG (see the volume sketch below)
- Tier-0 role:
  - First-pass reconstruction RAW -> RECO; curation / distribution to T1
- Tier-1 roles:
  - Custodial storage of RAW data (one copy split across all T1)
  - Storage of a RECO fraction and the full AOD for fast access
  - nth-pass reconstruction (three times per year)
  - Physics group bulk analysis / skimming
  - Data serving to and receipt from T2
- Tier-2 roles:
  - End-user analysis; simulation capacity
  - Possibility of special functions (calibration, HI reconstruction)
- CTDR assumed a roughly hierarchical T1-T2 relationship
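
To make the storage implications of these event sizes concrete, here is a minimal Python sketch; the trigger rate and annual live time used are assumptions added for illustration only and are not taken from the slide.

```python
# Illustrative only: yearly data volumes implied by the per-event sizes quoted
# above. The trigger rate and live time are assumptions for this example; they
# do not appear on the slide.

EVENT_SIZE_MB = {      # upper per-event size estimates from the slide
    "RAW": 2.0,
    "RECO": 0.4,
    "AOD": 0.05,       # 50 kB
}

ASSUMED_RATE_HZ = 150        # assumed HLT output rate (illustration only)
ASSUMED_LIVE_SECONDS = 1e7   # assumed effective seconds of data taking per year

events_per_year = ASSUMED_RATE_HZ * ASSUMED_LIVE_SECONDS

for tier, size_mb in EVENT_SIZE_MB.items():
    volume_pb = events_per_year * size_mb / 1e9   # MB -> PB
    print(f"{tier:4s}: {volume_pb:5.2f} PB/year")
```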

GDB, 07/06/06 CMS T1 Sites
- 'Nominal Tier-1' size in 2008:
  - 2.5 MSI2K CPU / 1.2 PB disk / 2.8 PB tape
  - Computing model assumed 6-7 nominal sites (see the worked example below)
- Current CMS T1 sites:
  - France (CCIN2P3): 60% nominal*
  - Germany (GridKa): 50% nominal*
  - Italy (CNAF): 75% nominal*
  - Spain (PIC): 30% nominal*
  - Taiwan (ASCC): 60% nominal*
  - UK (RAL): 20% nominal; 60% nominal requested by 2008
  - US (FNAL): 170% nominal
  - *Note also: 65% total shortfall in tape in non-US centres
  - Look to correct this through rebalancing of centre resources
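
As a worked example of what these fractions mean in absolute terms, the following sketch scales the nominal Tier-1 figures by each site's quoted pledge fraction; the tabulation itself is added here for illustration and is not part of the slide.

```python
# Scale the 2008 'nominal Tier-1' by each site's quoted fraction to estimate
# its absolute resources. Fractions and nominal figures are from the slide;
# the calculation is added for illustration.

NOMINAL_T1 = {"cpu_MSI2K": 2.5, "disk_PB": 1.2, "tape_PB": 2.8}

SITE_FRACTION = {       # fraction of a nominal Tier-1, as quoted on the slide
    "CCIN2P3 (FR)": 0.60,
    "GridKa (DE)":  0.50,
    "CNAF (IT)":    0.75,
    "PIC (ES)":     0.30,
    "ASCC (TW)":    0.60,
    "RAL (UK)":     0.20,
    "FNAL (US)":    1.70,
}

for site, frac in SITE_FRACTION.items():
    scaled = {k: round(v * frac, 2) for k, v in NOMINAL_T1.items()}
    print(f"{site:13s} {scaled}")

total = sum(SITE_FRACTION.values())
print(f"Total: {total:.2f} nominal Tier-1s (computing model assumed 6-7)")
```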

GDB, 07/06/06 CMS T2 Sites and Affiliations

Country       T2 sites       Associated T1
Belgium       1 federated    IN2P3
Brazil        1              FNAL
China         1              -
Croatia       1              -
Estonia       1              RAL
Finland       1              -
France        1              IN2P3
Germany       1 federated    GridKa
Greece        1              CNAF
Hungary       1              CNAF
India         1              ASGC
Italy         4              CNAF
Korea         1              -
Pakistan      1              ASGC
Poland        1              GridKa
Portugal      1              PIC
Russia        1 federated    -
Spain         1              PIC
Switzerland   1              GridKa
Taiwan        1              ASGC
UK            4              RAL
US            8              FNAL

- Overall: >36 T2 sites, varying considerably in size
  - Computing model assumed nominal centres

GDB, 07/06/06 Evolving the Model
- T1-T2 balance a function of location:
  - US: site connections well understood, capable T1 centre
  - Asia: network situation evolving, plans mostly centred around the ASGC T1
  - Europe: fragmented T1 resources, but excellent networking; CMS T1s are often small fractions of highly capable centres
- Consider a more 'mesh'-like T1-T2 model in Europe (see the toy sketch below):
  - T2 centres connect to all T1 centres for data access; connect to an assigned T1 centre for MC output
  - Replicate 'hot' AOD/RECO across T1s according to T2 load
  - Advantages: levels the load across the system, lowers the peak load on a given T1 centre; could lower AOD disk requirements considerably
  - Issues: places strong demands on T2 international connectivity; marginally increases the international data flow w.r.t. a strict hierarchy
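
A toy sketch of why the mesh levels the load follows; the site list, the T2-to-T1 assignment pattern, and the 'least-loaded replica' choice are illustrative assumptions, not details given on the slide.

```python
# Toy comparison: 20 T2s each pull one copy of a hot dataset, either from their
# assigned T1 (strict hierarchy) or from the least-loaded T1 holding a replica
# (mesh). Numbers and the assignment pattern are illustrative assumptions.

from collections import Counter

T1S = ["CCIN2P3", "GridKa", "CNAF", "PIC", "RAL"]

# Assume an uneven assignment: all 20 T2s happen to be attached to two T1s.
ASSIGNED_T1 = {f"T2_{i:02d}": T1S[i % 2] for i in range(20)}

hierarchy_load = Counter(ASSIGNED_T1.values())        # everything hits the assigned T1

mesh_load = Counter({t1: 0 for t1 in T1S})
for t2 in ASSIGNED_T1:
    least_loaded = min(mesh_load, key=mesh_load.get)  # replica assumed at every T1
    mesh_load[least_loaded] += 1

print("hierarchy:", dict(hierarchy_load))   # peaks of 10 transfers on two T1s
print("mesh:     ", dict(mesh_load))        # 4 transfers on each of five T1s
```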

GDB, 07/06/06 Top-Down WAN Requirements
- T1 centre dataflows (nominal T1):
  - T0 -> T1: 375 TB of RAW/RECO in a 100 d LHC run: 1.5 Gb/s; similar requirement outside the running period
  - T1 <-> T1 reproc AOD: 75 TB of AOD in 7 d: 4 Gb/s peak (NB: very low duty cycle, 5%)
  - T1 -> T2: 60 TB typ. RECO/AOD in 20 d to 5 centres: 3 Gb/s; T2 -> T1 is a much smaller dataflow, though reliability is important
- For a 'meshed' T1, add:
  - T1 <-> T1 RECO replication: 30% of RECO within 7 d?: 0.5 Gb/s peak
  - Assumes 30% replication for each re-processing pass, across T1s
  - Low duty cycle: 'in the shadow' of the T1 <-> T1 AOD requirement
- T2 centre dataflows (nominal T2):
  - 60 TB typ. in 20 d: 0.6 Gb/s
  - For 'meshed' centres, needs to be available to any T1
- NB: safety factor of 2 on these bandwidths, as in the CTDR (see the conversion helper below)
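
A small helper, added for illustration, turns "X TB in N days" into a sustained rate. Note that the slide's headline figures sit above these bare averages, presumably because they fold in the factor-of-2 safety margin and additional peak-over-average headroom.

```python
# Convert a data volume moved over a period into the sustained rate it implies.
# Added for illustration; the safety factor is exposed as a parameter.

def required_gbps(terabytes: float, days: float, safety: float = 1.0) -> float:
    """Average network rate in Gb/s needed to move `terabytes` within `days`."""
    bits = terabytes * 1e12 * 8
    seconds = days * 86400
    return safety * bits / seconds / 1e9

# T0 -> T1 example from the slide: 375 TB of RAW/RECO over a 100-day run.
print(f"bare average:          {required_gbps(375, 100):.2f} Gb/s")
print(f"with safety factor 2:  {required_gbps(375, 100, safety=2):.2f} Gb/s")
# The slide quotes 1.5 Gb/s for this flow, i.e. additional peak headroom on top.
```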

GDB, 07/06/06 T2 Support Model
- Support areas for T2 sites:
  - Grid infrastructure / operational issues
  - CMS technical software / database / data management issues
  - Physics application support
  - Production operations and planning
- For CMS issues, a 'flat' mutual support model is in place:
  - T1/T2 sites receive and provide support within the integration subproject
  - Technical software and application support comes from the technical subproject; new T2 sites are 'kickstarted' with concentrated hands-on effort from experts
  - The model appears to scale so far; many sites are taking part in SC4 tests
  - It is unclear that direct T1-T2 support is possible with the very limited effort available at T1s
- Grid operational issues:
  - We assume these are handled through the ROC system
  - Good ongoing liaison will be needed to distinguish application versus infrastructure issues; the coming adoption of FTS will be a new test case

GDB, 07/06/06 Example: PhEDEx load test

GDB, 07/06/06 Moving Forward
- Tier-2 site engagement:
  - Large majority of Tier-2 sites are now active in setup/testing
  - Around 22/36 taking part in data transfer tests at some level
  - The mutual support model appears to be scaling so far
- Network questions:
  - Network provision is not clear at every Tier-2 site; in many cases provision is requested but not yet firm
  - Work urgently with sites to resolve this and judge the viability of a 'mesh' model
  - Work closely with the LCG network group and others to determine the best strategy for use of international networks (OPN versus GEANT2 for T1-T2 international traffic?)
  - Identify any network 'holes' rapidly and work to close them
- The next few months will see substantial further progress:
  - All CMS Tier-2 sites active, and involved in SC4 and/or CMS CSA2006 tests