
CMS Software and Computing
FNAL Internal Review of USCMS Software and Computing
David Stickland, Princeton University
CMS Software and Computing Deputy Project Manager

CMS Software and Computing - Overview
Outline:
- Status of Software / Computing milestones
- CMS software architecture development plans
- CMS computing plans
- CMS reorganization of the Software and Computing Project
- Key issues in 2001

Strategic Software Choices
Modular architecture (flexible and safe):
- Object-oriented framework
- Strongly-typed interfaces
Uniform and coherent software solutions:
- One main programming language
- One main persistent object manager
- One main operating system
Adopt standards:
- Unix, C++, ODMG, OpenGL, ...
Use widely spread, well-supported products (with a healthy future):
- Linux, C++, Objectivity, Qt, ...
Mitigate risks:
- Proper planning with milestones
- Track technology evolution; investigate and prototype alternatives
- Verify and validate migration paths; have a fall-back solution ready

CARF: CMS Analysis & Reconstruction Framework (layered architecture diagram)
- Physics modules: Reconstruction Algorithms, Data Monitoring, Event Filter, Physics Analysis
- Specific framework (CMS adapters and extensions): Calibration Objects, Event Objects, Configuration Objects
- Generic Application Framework
- Utility Toolkit: C++ standard library, extension toolkit
- External components: ODBMS, Geant3/4, CLHEP, PAW replacement
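To make the layering concrete, here is a minimal sketch of how a physics module could plug into a generic application framework through a strongly-typed interface. All class names (PhysicsModule, ApplicationFramework, TrackReconstruction) are hypothetical illustrations of the pattern, not the actual CARF classes, and persistency via the ODBMS is omitted.

```cpp
// Hypothetical sketch of a strongly-typed, modular framework interface;
// these are NOT the real CARF classes.
#include <iostream>
#include <memory>
#include <string>
#include <vector>

// Event objects handled by the specific framework (ODBMS persistency omitted).
struct Event {
  int id;
  std::vector<double> rawHits;
};

// Strongly-typed interface that every physics module implements.
class PhysicsModule {
public:
  virtual ~PhysicsModule() = default;
  virtual std::string name() const = 0;
  virtual void process(Event& event) = 0;  // called once per event by the framework
};

// A concrete reconstruction algorithm supplied by a detector group.
class TrackReconstruction : public PhysicsModule {
public:
  std::string name() const override { return "TrackReconstruction"; }
  void process(Event& event) override {
    // ... reconstruct tracks from event.rawHits ...
    std::cout << name() << ": processed event " << event.id << "\n";
  }
};

// Generic application framework: owns the event loop, knows nothing about detectors.
class ApplicationFramework {
public:
  void registerModule(std::unique_ptr<PhysicsModule> module) {
    modules_.push_back(std::move(module));
  }
  void run(std::vector<Event>& events) {
    for (Event& e : events)
      for (auto& m : modules_) m->process(e);
  }
private:
  std::vector<std::unique_ptr<PhysicsModule>> modules_;
};

int main() {
  ApplicationFramework app;
  app.registerModule(std::make_unique<TrackReconstruction>());
  std::vector<Event> events{{1, {0.1, 0.2}}, {2, {0.3}}};
  app.run(events);
}
```

The point of the typed process(Event&) contract is that detector code and the framework agree on the event model at compile time, which is what the "strongly-typed interface" bullet on the previous slide refers to.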

Milestones: Software
CMS software development strategy: first the transition to C++, then functionality, then performance. (Oct 2000)

Software Life Cycle (timeline diagram)
- Detector: Prototypes, Mass-production, Installation & Commissioning, Maintenance & Operation
- Computing Hardware: Functional Prototype, Fully Functional Production System
- Software & Integration: Design, Prototype, Test & Integrate, Deploy, Cyclic Releases

Software Development Phases
1: Proof of Concept (end of 1998): basic functionality, very loosely integrated.
2: Functional Prototype: more complex functionality, integrated into projects, preparation for the Trigger and DAQ TDRs. Reality check: ~1% data challenge.
3: Fully Functional System: complete functionality, integration across projects, Software/Computing TDR, preparation for the Physics TDR. Reality check: ~5% data challenge.
4: Pre-Production System. Reality check: ~20% data challenge.
5: Production System: online/trigger systems at 75 to 100 Hz; offline systems handling a few PB/year and 10^9 events/year to look for a handful of (correct!) Higgs; highly distributed collaboration and resources; long lifetime.

Significant Requirements from CMS TDRs
or: “it’s not just an evolution to 2005 software”
Major core software milestones = TDRs:
- Trigger TDR: Dec 2000
- DAQ TDR: Dec 2001
- Software & Computing TDR: Dec 2002
- Physics TDR: Dec

Milestones: Data
Raw data: 1 MB/event at 100 Hz gives about 1 PB/year, plus reconstructed data, physics objects, calibration data, simulated data, ...
Now moving towards distributed production and analysis.
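A quick consistency check of the quoted numbers, assuming the canonical ~10^7 seconds of effective data taking per year (an assumption not stated on the slide):

100 Hz x 1 MB/event x 10^7 s/year = 10^15 bytes/year, i.e. about 1 PB/year of raw data.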

Milestones: Hardware
TDR and MoU

Why Worldwide Computing? The Regional Center Concept: Advantages
- Common access for physicists everywhere
- Maximize total funding resources while meeting the total computing need
- Proximity of datasets to appropriate resources (the Tier-n model)
- Efficient use of network bandwidth: local > regional > national > international (see the sketch below)
- Utilizing all intellectual resources: CERN, national labs, universities, remote sites; scientists, students
- Greater flexibility to pursue different physics interests, priorities, and resource allocation strategies by region
- Systems’ complexity: partitioning of facility tasks, to manage and focus resources
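The bandwidth argument can be made concrete with a toy replica-selection rule that always prefers the closest tier. This is purely illustrative: the tier names, the Replica record and the closestReplica function are invented for the sketch and are not part of any CMS design.

```cpp
// Toy replica selection preferring the closest tier; site names, the Replica
// record and closestReplica() are invented for this illustration.
#include <iostream>
#include <optional>
#include <string>
#include <vector>

enum class Tier { LocalTier2, RegionalTier1, NationalTier1, CERNTier0 };

struct Replica { std::string dataset; Tier tier; std::string site; };

// Return the replica at the "closest" tier, to minimise wide-area transfers.
std::optional<Replica> closestReplica(const std::string& dataset,
                                      const std::vector<Replica>& catalogue) {
  const Tier preference[] = {Tier::LocalTier2, Tier::RegionalTier1,
                             Tier::NationalTier1, Tier::CERNTier0};
  for (Tier t : preference)
    for (const Replica& r : catalogue)
      if (r.dataset == dataset && r.tier == t) return r;
  return std::nullopt;  // dataset not published anywhere
}

int main() {
  std::vector<Replica> catalogue{
      {"higgs_sim_v1", Tier::CERNTier0, "CERN"},
      {"higgs_sim_v1", Tier::RegionalTier1, "FNAL"}};
  if (auto r = closestReplica("higgs_sim_v1", catalogue))
    std::cout << "read " << r->dataset << " from " << r->site << "\n";  // prints FNAL
}
```

In practice such a decision would sit inside Grid middleware (replica catalogues and resource brokers) rather than in user code.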

Milestone Review
Regional centres:
- Identify initial candidates: 06/2000
- Turn on functional centres: 12/2002
- Fully operational centres: 06/2005
Central (CERN) systems:
- Functional prototype: 12/2002
- Turn on initial systems: 12/2003
- Fully operational systems: 06/2005
Need to define intermediate working milestones.

CMS Regional Centre Prototypes, 2003 Candidates

Country    Tier1          Tier2
Finland    -              Helsinki
France     CCIN2P3/Lyon   ?
India      -              Mumbai
Italy      INFN           INFN, at least 3 sites
Pakistan   -              Islamabad
Russia     Moscow         Dubna
UK         RAL            ?
US         FNAL           Caltech, UC San Diego, Florida, Iowa, Maryland, Minnesota, Wisconsin and others

The Grid Services Concept
Standard services that:
- Provide uniform, high-level access to a wide range of resources (including networks)
- Address interdomain issues: security, policy
- Permit application-level management and monitoring of end-to-end performance
- Perform resource discovery
- Manage authorization and prioritization
Broadly deployed (like Internet protocols).
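As a rough illustration of what "uniform, high-level access" could mean at the application level, the following sketch defines abstract service interfaces for resource discovery, authorization and job submission, plus trivial stand-ins to make it executable. The interfaces are invented for this example; they do not correspond to Globus, DataGrid or any other real middleware API.

```cpp
// Invented service interfaces sketching the idea of a uniform Grid service layer;
// these do not correspond to any real middleware package.
#include <iostream>
#include <string>
#include <vector>

struct Resource { std::string site; int freeCpus; };

class ResourceDiscovery {                 // "perform resource discovery"
public:
  virtual ~ResourceDiscovery() = default;
  virtual std::vector<Resource> findComputeResources(int minCpus) = 0;
};

class AuthorizationService {              // "manage authorization and prioritization"
public:
  virtual ~AuthorizationService() = default;
  virtual bool isAuthorized(const std::string& user, const std::string& site) = 0;
};

class JobSubmissionService {              // uniform, high-level access to remote resources
public:
  virtual ~JobSubmissionService() = default;
  virtual std::string submit(const std::string& user, const Resource& where,
                             const std::string& executable) = 0;  // returns a job id
};

// An application codes against the interfaces only, so the same logic works
// whichever middleware implements them underneath.
std::string runProductionJob(ResourceDiscovery& rd, AuthorizationService& auth,
                             JobSubmissionService& js, const std::string& user) {
  for (const Resource& r : rd.findComputeResources(50))
    if (auth.isAuthorized(user, r.site))
      return js.submit(user, r, "cms_production");
  return "";  // no usable, authorized resource found
}

// Trivial in-memory stand-ins, just to make the sketch executable.
struct StaticDiscovery : ResourceDiscovery {
  std::vector<Resource> findComputeResources(int minCpus) override {
    std::vector<Resource> all{{"CERN", 200}, {"FNAL", 120}, {"Lyon", 30}};
    std::vector<Resource> ok;
    for (const Resource& r : all) if (r.freeCpus >= minCpus) ok.push_back(r);
    return ok;
  }
};
struct AllowAll : AuthorizationService {
  bool isAuthorized(const std::string&, const std::string&) override { return true; }
};
struct PrintSubmitter : JobSubmissionService {
  std::string submit(const std::string& user, const Resource& where,
                     const std::string& exe) override {
    std::cout << user << " -> " << exe << " @ " << where.site << "\n";
    return where.site + "-job-1";
  }
};

int main() {
  StaticDiscovery rd; AllowAll auth; PrintSubmitter js;
  std::cout << "job id: " << runProductionJob(rd, auth, js, "dstickland") << "\n";
}
```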

Why CMS (HEP) in the Grid?
- Grid middleware provides a route towards effective use of distributed resources and complexity management.
- The Grid design matches the MONARC hierarchical model.
- We (HEP) have some hand-made Grid-like tools, but the scale of CMS computing requires a more professional approach if it is to live for decades.
- CMS already participates in relevant Grid initiatives, e.g.:
  - The Particle Physics Data Grid (PPDG) [US]: distributed data services and Data Grid system prototypes
  - Grid Physics Network (GriPhyN) [US]: production-scale Data Grids
  - DataGrid [EU]: middleware development and real applications test

Grid Data Management Prototype (GDMP)
Distributed job execution and data handling goals: transparency, performance, security, fault tolerance, automation.
Operating model (diagram of Sites A, B and C, with job submission and data replication between them):
- Jobs are executed locally or remotely
- Data is always written locally (the job writes its data at the site where it runs)
- Data is replicated to remote sites
GDMP V1.0: Caltech, EU DataGrid, PPDG, GriPhyN; tests by Caltech, CERN, FNAL and INFN for the CMS “HLT” production in fall 2000.
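A minimal sketch of the "write locally, then replicate" pattern described above, using plain file copies between directories that stand in for sites. The Site type and the functions here are hypothetical and do not represent the real GDMP interface or transport.

```cpp
// Sketch of the GDMP-style pattern "write locally, then replicate"; the Site
// type and the functions below are hypothetical, not the real GDMP interface.
#include <filesystem>
#include <fstream>
#include <iostream>
#include <string>
#include <vector>

namespace fs = std::filesystem;

struct Site { std::string name; fs::path storageDir; };

// A job always writes its output at the site where it runs.
fs::path writeOutputLocally(const Site& local, const std::string& fileName) {
  fs::create_directories(local.storageDir);
  fs::path out = local.storageDir / fileName;
  std::ofstream(out) << "simulated event data\n";
  return out;
}

// Replication then pushes the new file to every subscribed remote site.
void replicate(const fs::path& file, const std::vector<Site>& remoteSites) {
  for (const Site& site : remoteSites) {
    fs::create_directories(site.storageDir);
    fs::copy_file(file, site.storageDir / file.filename(),
                  fs::copy_options::overwrite_existing);
    std::cout << "replicated " << file.filename() << " to " << site.name << "\n";
  }
}

int main() {
  Site siteA{"SiteA", "siteA/data"};
  std::vector<Site> remotes{{"SiteB", "siteB/data"}, {"SiteC", "siteC/data"}};
  fs::path produced = writeOutputLocally(siteA, "run42_events.dat");  // job output
  replicate(produced, remotes);  // push replicas to Site B and Site C
}
```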

Computing Progress
- CMS is progressing towards a coherent distributed system to support production and analysis.
- We need to study the problems, and prototype the solutions, for distributed analysis by hundreds of users in many countries.
- Production, via prototypes, will lead to decisions about the architecture on the basis of measured performance and possibilities.

CMS Software and Computing
Evolve the organization to build complete and consistent physics software. Recognize the cross-project nature of key deliverables:
- Core Software and Computing (CSW&C): more or less what the US calls the SW&C “Project”
- Physics Reconstruction & Selection (PRS): consolidate the physics software work between the detector groups, targeted at CMS deliverables (HLT design, test beams, calibrations, Physics TDR, ...)
- Trigger and Data Acquisition (TRIDAS): online Event Filter Farm

Cross-Project Working Groups (organization diagram)
Working groups such as the Reconstruction Project, Simulation Project, Calibration, etc. cut across Core Software and Computing, Physics Reconstruction & Selection and TRIDAS, under a Joint Technical Board.

Key Issues for 2001
- Choice of baseline for the database
- Ramp-up of Grid R&D and use in production activities
- Perform the HLT data challenge (~500 CPUs at CERN, ~500 offsite, 20 TB)
- Continue work on test beams, and validate the simulation
- Validate OSCAR/Geant4 for detector performance studies
- Overall assessment, formalization and consolidation of SW systems and processes
- Work towards the Computing MoU in the collaboration