Slide 1: CMS Computing and Core-Software

CMS Computing and Core-Software
USCMS CB, Riverside, May 19, 2001
David Stickland, Princeton University
CMS Computing and Core-Software Deputy PM
DPS May/15/2001 LHC52

Slide 2: CPT Project

The CPT Project comprises three subprojects and their tasks:
- CCS (Core Computing & Software):
  1. Computing Centres
  2. General CMS Computing Services
  3. Architecture, Frameworks / Toolkits
  4. Software Users and Developers Environment
  5. Software Process and Quality
  6. Production Processing & Data Management
- TriDAS (Online Software):
  7. Online Filter Software Framework
  8. Online Farms
- PRS (Physics Reconstruction and Selection):
  9. Tracker / b-tau
  10. E-gamma / ECAL
  11. Jets, Etmiss / HCAL
  12. Muons

Cross-project bodies:
- SPROM (Simulation Project Management)
- RPROM (Reconstruction Project Management)
- CPROM (Calibration Project Management), to be created
- GPI (Group for Process Improvement), recently created
- Cafe (CMS Architectural Forum and Evaluation)
Slide 3: Developing a CCS Project Plan

- Build a common planning base for all CPT tasks
- Clarify responsibilities
- Coordinate milestones
- March 2001 planning (http://cmsdoc.cern.ch/cms/cpt/april01-rrb):
  - Task breakdown, deliverables, cross-projects
- Next: milestone study
  - Top down: starting from major deliverables
  - Bottom up: starting from current project understanding
  - External constraints: DAQ TDR, Physics TDR, CCS TDR, data challenges, LHC timetable, etc.
- Without this it is impossible to measure performance, assign limited resources effectively, identify conflicting constraints, etc.
Slide 4: Computing and Software: Critical Dates

- Technical Design Reports:
  - End 2002: DAQ TDR (7M events now, +5M in 2001, +10M in 2002)
  - End 2003: CCS TDR, describing the system to be implemented
  - Mid 2004: Physics TDR: GEANT4, all luminosities, 20+M events (?)
    - A primary goal: prepare the collaboration for LHC analysis; shake down the tools, computing systems, and software
  - End 2005: ~20% of computing in place, ready for the pilot run in spring 2006
- Computing milestones:
  - End 2004: 20% data challenge, the final test before purchase of production systems
    - Test of offline, post-DAQ processing (Level-2 trigger? Calibrations? Alignments?)
    - 20 Hz for one month, reconstructed, distributed, and analyzed (40M events)
Slide 5: Currently Reviewing CCS Milestones

As shown at the Nov 2000 LHCC Comprehensive Review:
- Milestone "waves"
- Not easily reviewable
- Need more detail
- Not tied to deliverables
- The work required to satisfy a milestone is typically not described by the milestone itself, so it may not be properly monitored or tracked
Slide 6: TDRs and Challenges (Preliminary)
Slide 7: Current Computing Activity

- Spring 2001:
  - CERN: 200-300 CPUs, new Objectivity version, new tape/MSS system, new data servers
    - Currently 70 MB/s (at best) out of Objectivity; testing to determine where the next bottleneck is: disk access, network, federation locks, ...
    - 1 TB of output data in 3 days, to be used by the ECAL/e-gamma PRS group
    - Currently running calorimeter + tracker digitization at 10^34 luminosity; will write about 6 TB
    - 200 CPU nodes in a single federation
    - Integrated with CASTOR, though not as transparently as we plan for the next round
    - Testing ATA/3ware EIDE disk systems for data servers (input and output); sustained productions achieved
  - FNAL has responsibility for the JetMET datasets, INFN for the muon datasets
  - Continuing to ramp productions, consolidate tools, add more automation, etc.
Slide 8: Common Prototypes: CMS Computing, 2002-2004

- Double the complexity (number of boxes) each year, to reach 50% of the final complexity of a single experiment in 2004, before production-system purchasing
- Match computing challenges with CMS physics and detector milestones: DAQ TDR, CCS TDR, Physics TDR, 20% data challenge
- The CERN prototype is a time-shared facility, available to CMS at full power for ~30% of the time
- Some (~50%) of the current Tier-2 prototypes are primarily for Grid-related R&D
- Prototype and final size/cost document: http://cmsdoc.cern.ch/cms/cpt/april01-rrb
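A minimal sketch of what the doubling rule implies for the earlier prototype years, working backwards from the stated target of 50% of final complexity in 2004 (the per-year percentages below are derived, not quoted from the slide):

```python
# Illustrative only: infer the prototype scale for 2002-2004 from the plan
# that complexity doubles each year and reaches 50% of the final
# single-experiment system in 2004. Each earlier year is half the next.
target_2004 = 0.50  # fraction of final complexity reached in 2004
years = [2002, 2003, 2004]

schedule = {year: target_2004 / 2 ** (2004 - year) for year in years}

for year in years:
    print(f"{year}: {schedule[year]:.1%} of final complexity")
# → 2002: 12.5%, 2003: 25.0%, 2004: 50.0%
```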
Slide 9: Long-Term Plan: Computing Ramp-Up

- Ramp production systems 2005-07 (30%, +30%, +40% of cost each year)
- Match the computing power available with LHC luminosity:
  - 2006: 200M reconstructed events/month, 100M re-reconstructed events/month, 30k events/s analysis
  - 2007: 300M reconstructed events/month, 200M re-reconstructed events/month, 50k events/s analysis
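As a quick check of the ramp arithmetic (the yearly fractions are from the slide; the cumulative totals are computed here, summing to the full production-system cost by end 2007):

```python
# Production-system cost ramp from the slide: 30% of total cost in 2005,
# +30% in 2006, +40% in 2007. Cumulative fractions are derived, not quoted.
from itertools import accumulate

yearly = {2005: 0.30, 2006: 0.30, 2007: 0.40}
cumulative = dict(zip(yearly, accumulate(yearly.values())))

for year, frac in cumulative.items():
    print(f"by end {year}: {frac:.0%} of production-system cost committed")
# → 30% by end 2005, 60% by end 2006, 100% by end 2007
```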
Slide 10: The Current Most Significant Risk to the Project Is Insufficient Software Manpower

- We are making good use of the resources we have, and we are making progress:
  - OO code is deployed and is the standard for CMS
  - Worldwide productions
  - Full use of prototype facilities, leading to improved code and understanding of limitations
  - A solid software infrastructure base is in place
- But there are many things we are unable to cover adequately:
  - No calibration infrastructure
  - No alignment infrastructure
  - Detector Description Database only just getting under way
  - Analysis infrastructure not yet deployed
  - Slow progress with our GEANT4 implementation
  - Unable (for lack of time) to answer all the (good) questions the Grid projects are asking us
  - "Spotty" user support: best effort, when time permits
  - Most of the tasks in software quality assurance and control are unmanned
  - Unacceptably high exposure to loss of key people: no backups in any role
  - Etc.
Slide 11: Next Steps

- We continue to build a project plan for CCS
- We continue to put in place an IMoU for the software manpower
  - In the meantime, we focus action on actually getting the manpower
- We clearly define our prototype requirements
  - Those prototypes may be supplied within an IMoU context, or within a broader context of collaboration towards LHC computing
- We work with CERN to ensure that the experiments and the regional centres are the driving partners in any new projects, and that our real needs are addressed