Maria Girone, CERN

Outline:
- Highlights of Run I
- Challenges of 2015
- Mitigation for Run II
- Improvements in Data Management, Organized Processing and Analysis Tools
- Commissioning Activities for Run II
- Outlook

Even in the first year, Tier-2 utilization was 100% of the pledge. Tier-1s have been fully used from the second year onwards, following the machine ramp-up over its lifetime. Usage has kept pace with the yearly resource increases, with opportunistic access on top at the Tier-2s.

2012 was the busiest of the Run I years:
- 7B physics events were collected (including parked data); 4B were promptly reconstructed, and all 2012 data was reprocessed in 2013
- 6B new simulation events were produced; including the reprocessing of all Run I simulation, there are nearly 13B reconstructed MC events

Looking at the same 2012 data from a storage perspective:
- Close to 6 PB of 2012 data was recorded on tape: 2 PB raw and 4 PB prompt reconstruction (RECO+AOD), plus 2.8 PB of reprocessed data
- MC simulation produced close to 13 PB on tape
- The majority of the 2012 data was also kept on disk

Organized analysis processing in CMS is submitted with the CMS Remote Analysis Builder (CRAB). The number of users has stayed at 300-400 per week since the beginning, while the number of job slots used has grown exponentially.

Full-mesh data transfers have been a big success for CMS, with peaks of ~4 PB/month. Transfers from any Tier-1 to any Tier-2 have been complemented by Tier-2 to Tier-2 transfers; Tier-1 to Tier-2 traffic still accounts for more than 50% of the total.

By the end of 2012, CMS Computing was running reasonably efficiently:
- Data was collected at 1 kHz, but half was parked for later processing
- Prompt reconstruction was good enough for many analyses, though the 2011+2012 data was reprocessed in 2013 for the final publications (10B events)
- Simulation was generally processed once and samples were available in time (MC/data = 1.1)
- Computing was used at 100%, and no large problems were discovered that required recovery
- Many lessons were learned over the three years; 2012 and 2010 looked very different in terms of efficiency

The situation for 2015 is much more challenging: the scale of the computing problem increases by a factor of 6 in 2015.
- A factor of 2.5 in event reconstruction time, due to higher pile-up and event complexity (anticipated improvements of ~20% are already factored in)
- A factor of 2.5 from running prompt reconstruction at 1 kHz
- The foreseen resource increase is at most a factor of 2
- Mitigation assumes less reprocessing, a budgeted amount of simulation, and maximum flexibility in how the experiment uses its resources
- It also assumes a level of organization and efficiency similar to 2012

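The factor of 6 follows from combining the two growth factors quoted above; a minimal worked version of that arithmetic, assuming (as the slide's factor 6 suggests) that the two factors simply multiply:

$$
\underbrace{2.5}_{\text{reco time per event (pile-up)}} \;\times\; \underbrace{2.5}_{\text{prompt-reco rate at 1 kHz}} \;\approx\; 6,
\qquad \text{against a resource increase of at most } \times 2 .
$$
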
CMS requests for 2015 and beyond are based on a flat budget (flat since 2012, with no major acquisitions in 2013 and 2014), which, thanks to technology, still implies some growth: 20% in CPU, 15% in disk, and 15% in tape. The requests rely on the following assumptions:
- A reduction in the number of reprocessing passes: there is enough capacity for one complete pass over the 2015 data during the winter shutdown, as long as both the HLT farm and the Tier-0 can be used
- A reduction in the ratio of Monte Carlo events to real data events: a ratio of 1.3 is assumed, and the bulk of the simulation can be reprocessed only once
- Little contingency for mistakes

Computing has spent LS1 improving flexibility and efficiency to the greatest possible extent, but there are limits to what one can gain. Full flexibility in site capability does not increase the total computing budget, but it allows prioritization of what to complete in a resource-constrained environment. In LS1, Computing has been evolving around two concepts.

Efficiency in how resources are used:
- Dynamic data placement and cache release
- A single global queue for analysis and production with glideinWMS (a toy illustration of the shared-queue idea follows below)
- New distributed analysis tools for users (CRAB3) and a new condensed analysis data format

Flexibility in where resources are used:
- Dedicated cloud resources from the Agile Infrastructure (Tier-0) and the HLT farm
- Opportunistic computing: bootstrapping the CMS environment on any resource using pilots, utilizing allocations on supercomputers and leadership-class machines, and academic cloud-provisioned resources
- The ability to share workflows across sites and access the input from remote storage: data federation and disk/tape separation

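A purely illustrative sketch of the single-global-queue idea (not the actual glideinWMS implementation; the job fields and priority scheme are assumptions): production and analysis work are fed from one priority-ordered pool, so whichever resource becomes free pulls the highest-priority task, independent of where it runs.

```python
import heapq

class GlobalQueue:
    """Toy shared queue: production and analysis tasks in one priority-ordered pool."""
    def __init__(self):
        self._heap = []
        self._seq = 0                      # tie-breaker to keep FIFO order at equal priority

    def submit(self, priority, kind, payload):
        # Lower number = higher priority; 'kind' is "production" or "analysis".
        heapq.heappush(self._heap, (priority, self._seq, kind, payload))
        self._seq += 1

    def next_for(self, slot):
        # Any free slot (Tier-1, Tier-2, HLT, cloud) pulls the top task.
        if not self._heap:
            return None
        priority, _, kind, payload = heapq.heappop(self._heap)
        return {"slot": slot, "kind": kind, "task": payload, "priority": priority}

q = GlobalQueue()
q.submit(1, "production", "2015 data re-reco block")
q.submit(5, "analysis", "user ntuple job")
print(q.next_for("T2 slot"))   # the highest-priority task goes first
```
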
Following the example of the Tier-0, CMS asked all Tier-1 sites to perform a logical decoupling of their disk and archival (tape) resources:
- Provides predictability of storage: datasets are subscribed and remain on disk until deleted
- Better manages data on disk for the data federation (otherwise federation access could trigger unintended staging)
- Custodial data goes to archive upon subscription, allowing several sites to work on a single dataset while subscribing it to one archive service
- Ensures archive resources are only used upon explicit request
- Reduces the functional difference between Tier-1 and Tier-2 sites
A sketch of what the separation looks like operationally follows below.
[Diagram: data flows between the Tier-0, Tier-1 disk and archive, Tier-2 sites, and the CAF]

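A hedged sketch of how disk and tape subscriptions become independent decisions once the resources are decoupled; the `subscribe` helper, the dataset, and the endpoint names are hypothetical and only illustrate the logic, not the actual CMS transfer-system API.

```python
# Hypothetical helper: in reality this would be a request to the CMS transfer
# system; here it only records the intent, to illustrate the disk/tape decoupling.
def subscribe(dataset, endpoint, custodial=False):
    print("subscribe %s -> %s (custodial=%s)" % (dataset, endpoint, custodial))

dataset = "/SomePrimaryDataset/Run2015A-PromptReco-v1/AOD"   # placeholder name

# Disk and archive are now separate, explicit decisions:
subscribe(dataset, "T1_Example_Disk")                   # stays on disk until deleted
subscribe(dataset, "T1_Example_MSS", custodial=True)    # goes to tape only on explicit request
```
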
CMS discovered in Run I that while the storage was full, it was not efficiently used: samples were stored but not accessed. To mitigate this, we are developing routines to dynamically place and replicate data, and tools to release the cache of samples that are no longer accessed. The development is progressing:
- Prototype placement and clean-up algorithms are in place
- The current CSA14 and upgrade samples are placed at Tier-2 sites automatically, and the centrally controlled space is under automatic management
- The next step is to clear the caches of under-utilized samples more aggressively
This will be exercised during the CSA14 challenge, monitoring that the storage utilization efficiency improves. A toy version of the clean-up logic follows below.

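A minimal sketch of the kind of popularity-based cache release described above; the thresholds, the access accounting, and the replica records are assumptions for illustration, not the actual CMS algorithm.

```python
import datetime

def release_cache(replicas, disk_used_fraction, target_fraction=0.85, idle_days=90):
    """Toy clean-up: if the disk is too full, drop replicas not accessed recently,
    least-recently-used first, until usage falls below the target."""
    if disk_used_fraction <= target_fraction:
        return []
    now = datetime.date.today()
    idle = [r for r in replicas
            if (now - r["last_access"]).days > idle_days and not r["pinned"]]
    idle.sort(key=lambda r: r["last_access"])          # oldest access first
    to_delete = []
    for r in idle:
        to_delete.append(r["dataset"])
        disk_used_fraction -= r["size_fraction"]
        if disk_used_fraction <= target_fraction:
            break
    return to_delete

replicas = [
    {"dataset": "/OldSample/v1/AOD", "last_access": datetime.date(2014, 1, 10),
     "size_fraction": 0.05, "pinned": False},
    {"dataset": "/HotSample/v2/MINIAOD", "last_access": datetime.date(2014, 6, 20),
     "size_fraction": 0.02, "pinned": True},
]
print(release_cache(replicas, disk_used_fraction=0.92))
```
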
The data federation (AAA) provides access to all CMS data for analysis over the wide area through Xrootd, with only a small penalty in CPU efficiency for analysis jobs that read data remotely. The targeted scale is about 20% of data read across the wide area: 60k files/day, O(100 TB)/day.
- File-opening test complete: the sustained rate is 200 Hz, much higher than what we expect in production use
- File-reading test: about 400 MB/s (35 TB/day) read over the wide area, with thousands of active transfers
[Plot: CPU efficiency for purely local reading vs. purely federation reading]

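For illustration, opening a file through the federation from PyROOT looks like the sketch below; the redirector hostname and the /store path are placeholders, and the script assumes an environment with ROOT available and a valid grid proxy.

```python
import ROOT

# Placeholder redirector and file path: any file published in the AAA federation
# can be opened this way, without knowing which site hosts it.
url = "root://cms-xrd-global.cern.ch//store/data/Run2012A/SomeDataset/AOD/file.root"

f = ROOT.TFile.Open(url)
if f and not f.IsZombie():
    events = f.Get("Events")                 # CMS event data lives in the "Events" tree
    print("opened remotely: %d events" % events.GetEntries())
    f.Close()
```
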
CMS would like to make use of data federations for reprocessing use cases, allowing shared production workflows to use CPU at multiple sites with the input served over the federation. We had an unintended proof of principle: FNAL accidentally pointed to an obsolete DB and production traffic failed over to Xrootd, with data read from CNAF disk at 400 MB/s for two days. Regular tests have been added to the 2014 schedule, including during CSA14; larger reprocessing campaigns will be executed in Q3-Q4 2014.
[Plot: FNAL production traffic failing over to reads from CNAF]

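The failover concept, local read first with the federation as a fallback, can be sketched as below; this is an illustration only (in CMSSW the fallback is configured at the site level rather than coded in the job), and the prefixes and logical file name are placeholders.

```python
import ROOT

def open_with_fallback(lfn,
                       local_prefix="root://local-se.example.org/",
                       federation_prefix="root://cms-xrd-global.cern.ch/"):
    """Try the local storage element first; fall back to the AAA federation."""
    for prefix in (local_prefix, federation_prefix):
        f = ROOT.TFile.Open(prefix + lfn)
        if f and not f.IsZombie():
            return f                      # success: local if possible, remote otherwise
    raise IOError("could not open %s locally or via the federation" % lfn)

# Placeholder logical file name:
f = open_with_fallback("/store/data/Run2012A/SomeDataset/AOD/file.root")
```
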
The first step in mitigating the deficit is to look for additional computing. The CMS high-level trigger (HLT) farm is a large processing resource next to the detector that is unused during the winter shutdown and largely idle when there is no beam. Adding these resources to the yearly data processing increases the Tier-1 capacity by 50% for the months it is available; interfill use is under investigation for opportunistic access. Commissioning it for offline processing has taken a year:
- Cloud interfaces to virtual machines were chosen to dynamically build up and bring down the farm, introducing a firewall between the VMs used for processing and the physical hosts used for triggering
- The link to the computer centre was upgraded to provide sufficient access to storage
A toy sketch of the build-up and bring-down logic follows below.

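A purely hypothetical sketch of the "build up and bring down" logic driven by the accelerator state; the `ToyCloud` client, the machine states, and the VM counts are invented for illustration and do not describe the real HLT cloud controller.

```python
# Toy controller: grow the HLT cloud when there is no beam, shrink it before data-taking.
TARGET_VMS = {"shutdown": 1000, "interfill": 200, "stable_beams": 0}   # invented numbers

class ToyCloud:
    def __init__(self):
        self.running = 0
    def scale_to(self, n):
        print("scaling HLT cloud from %d to %d VMs" % (self.running, n))
        self.running = n

def reconcile(cloud, lhc_state):
    # Triggering always wins over offline use: unknown states scale the cloud to zero.
    cloud.scale_to(TARGET_VMS.get(lhc_state, 0))

cloud = ToyCloud()
for state in ["shutdown", "stable_beams", "interfill"]:
    reconcile(cloud, state)
```
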
We are planning to provision the Tier-0 resources (at CERN and Wigner) directly on OpenStack through glideinWMS, with the same VM contextualization as for the HLT. The CPU efficiency at Wigner has been measured for both CMS production and analysis jobs: we see only a small decrease in CPU efficiency for I/O-bound applications reading data from EOS at Meyrin (analysis jobs show a 6% drop). CMSSW is optimized for high-latency I/O using advanced TTree secondary caching techniques.

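The TTree caching mentioned here can be illustrated with a minimal PyROOT sketch; the file URL is a placeholder, and enabling the cache this way is a generic ROOT technique rather than the specific CMSSW tuning.

```python
import ROOT

# Placeholder EOS path; any remote ROOT file with an "Events" tree would do.
f = ROOT.TFile.Open("root://eoscms.cern.ch//eos/cms/store/example/file.root")
tree = f.Get("Events")

tree.SetCacheSize(20 * 1024 * 1024)   # 20 MB TTreeCache: reads are grouped into large requests
tree.AddBranchToCache("*", True)      # cache all branches that will be read

for i in range(tree.GetEntries()):
    tree.GetEntry(i)                  # over high-latency links, cached reads hide the round trips
```
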
The commissioning work was thoroughly tested with 6000 virtual cores set up as OpenStack cloud resources. Experience indicates that all virtual machines on the HLT can be started quickly enough to be compatible with use in interfill periods, which can provide some contingency of additional resources during the year. The HLT, with a contribution from the Agile Infrastructure, has been used very successfully for a high-priority reprocessing of the 2011 Heavy Ion data:
- It met a very tight schedule of one month before Quark Matter 2014
- Whole-node cloud provisioning allows higher-memory slots to be provided for Heavy Ion processing
- The upgraded network allows high CPU efficiency for data reprocessing using the whole farm, with no bottleneck reading from EOS
[Plots: events processed for the Heavy Ion 2011 data, and HLT CPU efficiency]

CMS is currently commissioning a mini version of the dominant analysis format. The MiniAOD is more than a factor of 5 smaller than the standard AOD and is intended for end-user analysis:
- Less recalculation of quantities is expected, and the content is closer to the final analysis ntuples
- The usefulness of the format is being evaluated over the summer; it provides the opportunity to store a much larger number of events per site
- The current AOD reads at 1 Hz and is CPU-bound; the MiniAOD is intended to be I/O-bound
- The impact on the computing system will be studied over the summer
A small example of reading the new format interactively is sketched below.

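As an illustration of the end-user orientation of the format, a minimal FWLite-style loop over a MiniAOD file is sketched below; this assumes a CMSSW environment where the FWLite Python bindings are available, the input filename is a placeholder, and the collection label follows the MiniAOD naming ("slimmedMuons") for illustration.

```python
import ROOT
from DataFormats.FWLite import Events, Handle   # available inside a CMSSW environment

events = Events("miniaod.root")                  # placeholder MiniAOD file
muons = Handle("std::vector<pat::Muon>")

for event in events:
    event.getByLabel("slimmedMuons", muons)      # slimmed, pre-computed physics objects
    for mu in muons.product():
        if mu.pt() > 20:
            print("muon pt = %.1f GeV" % mu.pt())
```
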
CRAB3 is currently in integration and validation tests before being handed to users for CSA14. It brings improved elements for job resubmission and task completion, output data handling, and monitoring. 20k running cores have been reached in the validation; the production target is 50% higher. Physics power users are expected to give feedback over the summer. The underlying services have also been updated: FTS3 for output file handling and a central Condor queue for scheduling. A sketch of a user-side task configuration is shown below.

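A hedged sketch of a CRAB3 task configuration; the structure and parameter names follow the CRAB3 tutorials of the period, and the dataset, pset, and site names are placeholders rather than real values.

```python
from WMCore.Configuration import Configuration
config = Configuration()

config.section_("General")
config.General.requestName = 'CSA14_MiniAOD_test'

config.section_("JobType")
config.JobType.pluginName = 'Analysis'
config.JobType.psetName   = 'analysis_cfg.py'        # CMSSW configuration to run

config.section_("Data")
config.Data.inputDataset = '/SomeDataset/CSA14-v1/MINIAODSIM'
config.Data.splitting    = 'FileBased'
config.Data.unitsPerJob  = 10

config.section_("Site")
config.Site.storageSite = 'T2_XX_Example'            # where the job output is staged out
```

Submitting the task is then a single `crab submit` pointing at this file; resubmission of failed jobs and output handling are managed by the new client and server.
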
CMS is preparing for the nominal LHC beam conditions of Run II with a Computing, Software and Analysis challenge (CSA14). Run II is a unique opportunity for early discovery, so all the key components must be in place and working from the very beginning; to ensure this, CMS is performing the challenge in July and August:
- Users will perform analysis exclusively using CRAB3, the data federation, and dynamic data placement
- MiniAOD will be produced and used
- System performance will be monitored
Preparatory work is ongoing, including the preparation of 2B simulated events at 50 ns bunch spacing with pile-up of 40 and 20, and at 25 ns with pile-up of 20.

Data Management milestone (30 April 2014): demonstrate the functionality and scale of the improved data management system
- Disk and tape separated, to manage Tier-1 disk resources, control tape access, and dynamically distribute data
- Data federation and data access (AAA)

Analysis milestone (30 June 2014): demonstrate the full production scale of the new CRAB3 distributed analysis tools
- The main changes are fewer job failures in the handling of user data products, improved job tracking, and automatic resubmission

Organized Production milestone (31 October 2014): exercise the full system for organized production
- Utilize the Agile Infrastructure (IT-CC and Wigner) at the 12k-core scale for the Tier-0
- Run multi-core at Tier-0 and Tier-1 for data reconstruction
- Demonstrate shared workflows to improve latency and speed, and test the use of AAA for production

[Commissioning timeline, March-December 2014, condensed from the schedule chart:]
- CRAB3 scale tests at 40%, 60%, 80% and 100% of the production target, leading into CSA14
- AAA: client hosting test, "total chaos" test, transfer tests, and an integrated challenge for AAA and CRAB
- Tier-0 on the Agile Infrastructure: ramp from 5k to 12k cores; replays of Express repack and reconstruction, Prompt, PCL, tape/disk separation, skims, and CSA-scale testing
- Tier-1: shared re-reconstruction and a tape staging test; Tier-2: MC reprocessing (with pre-mixing)
- Multi-core scale tests at Tier-0/Tier-1 with the pilot infrastructure and CMSSW_7_1
- Opportunistic resources: ramp from 1k to 10k to 20k slots, with production activity and backfill to optimize the infrastructure
- Dynamic data placement; pre-production of 13 TeV MC samples; full-scale end-to-end tests
- Milestones: Data Management (April), Analysis (June), Organized Production (October)

Computing was a major component of the success of Run I; validation, commissioning, and testing were critical. The computing model and techniques for Run II are an evolution, but they move toward much more severe resource constraints: a larger trigger rate of higher-complexity events within an environment of tightly constrained capacity increases. Operational efficiency, careful planning, and validation will be more important than ever.