CMS-CCS Status and Plans
David Stickland
USCMS meeting, May 11, 2002
Slide 2: Outline
- Won't say anything about ORCA, OSCAR, DDD, IGUANA
  - See talks from Darin, Sarah and Jim
  - See tutorials last week at UCSD
  - See Paris's talk to the LHCC this Tuesday
- Production
- LCG and its effect on the CMS program
- Draft of new schedule
Slide 3: CMS - Productions and Computing Data Challenges
- Already completed
  - 2000-01: single-site production challenges with up to 300 nodes
    - ~5 million events, pileup for 10^34
  - 2000-01: GRID-enabled prototypes demonstrated
  - 2001-02: worldwide production infrastructure
    - 11 Regional Centers comprising 21 computing installations
    - Shared production database, job tracking, sample validation, etc.
    - 10M minimum-bias events, simulated, reconstructed and analyzed for calibration studies
- Underway now
  - Worldwide production of 10 million events for the DAQ TDR
    - ~1000 CPUs in use
  - Production and analysis planned at CERN and offsite
- Being scheduled
  - Single-site production challenges
    - Test code performance, computing performance, identify bottlenecks, etc.
  - Multi-site production challenges
    - Test infrastructure, GRID prototypes, networks, replication, ...
  - Single- and multi-site analysis challenges
    - Stress local and GRID prototypes under the quite different conditions of analysis
Slide 4: Production Status

Production cycle per Regional Center (RC):

RC              Simulation   ooHit      No-PU digi         2x10^33 PU digi    10^34 PU digi
CERN            96%          Started
INFN            100%         Started
Imperial Coll.  89%          Started    Test in progress
UCSD            95%          Started    Test successful    Started
Moscow          100%         Started    Test successful
FNAL            89%          Started    -
UFL             100%         Started    Test successful
Wisconsin       97%                     Test successful    Test in progress
Caltech         100%                    Test in progress
IN2P3           100%                    Test in progress
Bristol/RAL     28%                     Test in progress
USMOP           0%
Done (total)    88%          61%        81%                5%                 17%

Estimate as of 11 May 2002: complete in 17 days (June 1 deadline!)
Slide 5: Production 2002, Complexity
- Number of Regional Centers: 11
- Number of Computing Centers: 21
- Number of CPUs: ~1000
- Largest local center: 176 CPUs
- Number of production passes per dataset (including analysis-group processing done by production): 6-8
- Number of files: ~11,000
- Data size (not including fz files from simulation): 15 TB
- File transfer: by GDMP and by perl scripts over scp/bbcp
Slide 6: LCG Status
- Applications Area
  - Persistency framework: roadmap established for new software based on ROOT plus an RDBMS layer (hybrid solution)
    - Project manager appointed, work starting!
  - Parameters of an LCG SW infrastructure group defined
    - But so far no people!
    - And a big decision between SCRAM and CMT still to be made
  - Math library: requirement identified for skilled math-library personnel
    - Investigating use of resources assigned to LCG by India
  - Mass storage (MSS): premature for all but ALICE
  - Detector Description Database
    - About to start; an excellent opportunity for collaboration exists
  - Simulation
    - Waiting for G4-HEPURC (we urgently need this body to start work)
  - Next month, start an RTAG on interactive analysis
    - Urgent requirement to clarify the focus of this activity (interest in using IGUANA also expressed by some other experiments)
Slide 7: CMS - Schedule for Challenge Ramp-Up
- All CMS work to date has used Objectivity, which is now being phased out and replaced with the LCG software
  - Enforced lull in production challenges
    - No point optimizing a solution that is being replaced
    - (But much was learnt in past challenges to influence the new design)
  - Use challenge time in 2002 to benchmark current performance
  - Aim to start testing the new system as it becomes available
    - Target early 2003 for the first realistic tests
    - Thereafter return to a roughly exponential complexity ramp-up, reaching 50% complexity in 2005 for the 20% data challenge (50% complexity in 2005 is approximately 20% capacity)
Slide 8: Objectivity Issues
- Bleak
  - CERN has not renewed the Objectivity maintenance
    - Old licenses are still applicable, but cannot be migrated to new hardware
  - Our understanding is that we can continue to use the product as before, though clearly without support
  - Not clear whether this applies to any Red Hat version, or for that matter to other Linux OSs
    - Recent contradictory statements from IT/DB
    - (Update 24/4/02: now clear we cannot use it on other OS versions)
- It will become increasingly difficult during this year to find sufficient resources correctly configured for our Objectivity usage
- We are preparing for the demise of our Objectivity-based code by the end of this year
  - CMS is already contributing to the new LCG software
  - Aiming to have first prototypes of the catalog layer by July
  - Initial release of CMS prototype ROOT+LCG: September 2002
DPS May 11/2002 USCMS Slide 9 MetaData Catalog DictionarySvcStreamerSvc PersistencyMgr IReflection StreamerSvcDictionarySvcStorageMgr IPReflection FileCatalog ICnv IReadWrite C++ CacheMgr ICache TFile, TDirectory TSocket TClass, etc. TBuffer, TMessage, TRef, TKey TGrid TTree TStreamerInfo IteratorSvc TChain TEventList TDSet IPers IFCatalog SelectorSvc IMCatalog PlacementSvc IPlacement TFile CustomCacheMgr IPers One possible mapping to a ROOT implementation (under discussion)
Slide 10: CMS Action to Support LCG
- We expect >50% of our ATF (Architecture, Frameworks, Toolkits) effort to be directed to LCG in the short/medium term
  - First person assigned full-time to the persistency framework the day the work package started
  - 3-5 more people ready to join the work as the task develops
  - Initial emphasis:
    - Build the catalog layer that is missing from ROOT
    - Remove Objectivity from COBRA/ORCA (OSCAR)
    - Ensure simple ROOT storage of objects is working (see the sketch after this list)
    - Aim to have basic catalog services by July, and basic COBRA/ORCA/OSCAR using the new persistency scheme by September
- Try to get our release tools (SCRAM) adopted by LCG
  - Two candidates: CMT (LHCb, ATLAS) or SCRAM
    - SCRAM is a better product!
  - If adopted, we would expect to put extra effort into supporting a wider community
    - Aim to get some extra manpower from LCG
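For reference, "simple ROOT storage of objects" amounts to writing data through TFile/TTree and reading it back. A minimal sketch, assuming standard ROOT I/O; the file, tree and branch names are purely illustrative.

// Illustrative only: write a few values into a TTree and read them back.
#include "TFile.h"
#include "TTree.h"

void writeAndReadBack() {
  // --- write ---
  TFile* out = TFile::Open("events.root", "RECREATE");
  TTree* tree = new TTree("Events", "toy event tree");
  Float_t pt = 0;
  tree->Branch("pt", &pt, "pt/F");
  for (int i = 0; i < 100; ++i) {
    pt = 0.5f * i;          // fake transverse momentum
    tree->Fill();
  }
  out->Write();
  out->Close();

  // --- read back ---
  TFile* in = TFile::Open("events.root");
  TTree* readTree = (TTree*)in->Get("Events");
  Float_t ptIn = 0;
  readTree->SetBranchAddress("pt", &ptIn);
  for (Long64_t i = 0; i < readTree->GetEntries(); ++i) {
    readTree->GetEntry(i);
    // ... use ptIn ...
  }
  in->Close();
}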
Slide 11: CMS and the GRID
- CMS Grid Implementation Plan for 2002 published
- Close collaboration with EDG and GriPhyN/iVDGL, PPDG
- Upcoming CMS GRID/Production workshop (June CMS week)
  - File transfers, fabrics
    - Production file-transfer software experiences
    - Production file-transfer hardware status & reports
    - Future evolution of file-transfer tools
  - Production tools
    - Monte Carlo production system architecture
    - Experiences with tools
  - Monitoring / deployment planning
    - Experiences with grid monitoring tools
    - Towards a rational system for tool deployment
Slide 12: The Computing Model
- The CMS computing model needs updating:
  1. CMS (and ATLAS) refining Trigger/DAQ rates
  2. PASTA process re-costing hardware and re-extrapolating "Moore's law"
  3. Realistic cost constraints
- With the above in place, optimize the computing model
  - Need continued development and refinement of MONARC-like tools to simulate and validate computing models
- Realistically this will take most of this year
Slide 13: CCS Manpower
- More or less constant over the last year, with a small increase
  - 52 "names" identified working on CCS tasks
  - 13 ~full-time "engineers" (all CERN or USA)
    - Of which 7 in the ATF group
  - ~20 in worldwide production operations or support
  - The last detailed plan called for ~30 "engineers" this year (next year with the LHC delay)
    - Delays are running at ~4 months of extra delay per elapsed year; OK as long as the LHC keeps getting delayed. On the old schedule we would be getting into big trouble by now
  - Use LCG to leverage external manpower
    - But there is no free lunch: CMS probably has the biggest software group and so will need to contribute proportionately more to the work
  - Make an extra effort to use worldwide manpower
    - We do this already
Slide 14: Draft CPT Schedule Change [Schedule chart relative to the LHC beam date fix, showing shifts of 9 months, 12 months, ~15 months and 15 months, with the LCG TDR marked.]
Slide 16: Summary
- Still very hard to make firm plans
- Experiment and LCG schedules are being aligned
  - No big problems so far
  - But we do not yet know how much we will have to contribute and how much we will get
- We have a slow increase in manpower available, but most of it is still from CERN and the USA. Some major parts of the collaboration are still contributing zero to the CCS effort
- If the LHC had not slipped, we would be in trouble defining our baseline (and then getting the Physics TDR underway):
  - Persistency (a transition of 18 months was foreseen, and will be needed)
  - OSCAR validation: the SW product needs restructuring, and PRS is not available for physics/detector validation until after the DAQ TDR
- We are proactively trying to find commonality with other groups to offload work
  - CMS is a major contributor to LHC software, so no free lunch: we still have to do a lot of the work