Offline Coordinators

- CMSSW_7_1_0 release: 17 June 2014
  - Usage:
    - Generation and simulation samples for Run 2 startup
    - Limited digitization and reconstruction reprocessing, post-CSA14
    - Mid-week global runs starting in July
  - Major new features:
    - Finalized simulation geometry for the 2015 CMS detector
    - Integration of Geant4 10.0
    - Deployment of the multi-threaded CMSSW framework (see the configuration sketch after this slide)
    - Significant progress in moving to ROOT 6; it is not yet thread-safe, however
- CMSSW_7_2_0 release: 24 November 2014
  - Usage:
    - Digitization and reconstruction for Run 2 startup
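For reference, a minimal sketch of how a job can request worker threads from the multi-threaded framework via the Python configuration. This assumes a CMSSW release with the threaded framework and must be run inside a CMSSW environment with cmsRun; the process label and input file are placeholders.

```python
# Minimal sketch: enabling the multi-threaded CMSSW framework from a
# Python configuration. Runs only inside a CMSSW environment; the
# process label and input file below are placeholders.
import FWCore.ParameterSet.Config as cms

process = cms.Process("DEMO")

# Ask for 4 worker threads; numberOfStreams = 0 lets the framework
# create one event stream per thread.
process.options = cms.untracked.PSet(
    numberOfThreads = cms.untracked.uint32(4),
    numberOfStreams = cms.untracked.uint32(0),
)

process.source = cms.Source("PoolSource",
    fileNames = cms.untracked.vstring("file:input.root")  # placeholder
)
```

Launched with cmsRun, such a job shares one copy of geometry, conditions, and code across all streams, which is the source of the memory savings quoted on the next slide.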

Offline Coordinators

- Goal for phase 1:
  - Using 4 threads in a single process, achieve roughly the same throughput as 4 processes running on four cores, BUT with much less memory, resulting in much easier computing operations
  - In a multi-core era where sufficient memory per core is available to process an entire event, this is all we need
  - Recently: major progress in the fraction of thread-compliant reconstruction code
  - In a 4-core job, reconstruction workflows now achieve a % throughput increase relative to a single-threaded application, with only a 30-40% increase in RSS (see the memory estimate after this slide)
- Goal for phase 2:
  - Once many-core solutions are required, we will aim to deploy finer-grained parallelism, which could achieve better throughput than multiple processes and a reduction of long tails
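To make the memory claim concrete, a back-of-the-envelope comparison. The 30-40% overhead is the range quoted above; the 2 GB single-job RSS baseline is an assumed, illustrative figure.

```python
# Back-of-the-envelope memory comparison: 4 independent processes vs one
# 4-thread process. The 2 GB single-job RSS is an assumed, illustrative
# baseline; the 30-40% threaded overhead is the range quoted above.
baseline_rss_gb = 2.0                          # assumed single-threaded RSS
four_processes_gb = 4 * baseline_rss_gb        # memory scales with process count
four_threads_lo_gb = baseline_rss_gb * 1.30    # +30% overhead
four_threads_hi_gb = baseline_rss_gb * 1.40    # +40% overhead

print(f"4 processes: {four_processes_gb:.1f} GB")              # 8.0 GB
print(f"1 process, 4 threads: {four_threads_lo_gb:.1f}-"
      f"{four_threads_hi_gb:.1f} GB")                          # 2.6-2.8 GB
```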

Offline Coordinators

- Adoption of Geant4 10.0
  - At the beginning of March we made a build of Geant4 10.0 for testing
  - A quick validation showed no major problems, so we have made it the default for more intensive validation
  - Depending on the physics process, we get a 17-30% speedup
- New scheme of mixing for pileup (sketched after this slide)
  - Create a pileup dataset that contains the correct pileup distribution (25 ns vs. 50 ns, correct mu) once, and reuse it for multiple signal samples
  - Plan to use this scheme in CSA14 for the most challenging PU scenario: 25 ns bunch spacing with a mu of 40
- Many improvements in the reconstruction
  - MiniAOD, 10 times smaller than the AOD
  - Benefits from the lessons learned from Run 1
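A conceptual sketch of the mix-once, reuse-many-times idea; this is not the actual CMSSW mixing-module API, and the library contents, names, and seed are hypothetical. The point is that the expensive pileup library is built once and every signal sample draws from it.

```python
# Conceptual sketch of premixed pileup reuse -- NOT the CMSSW mixing
# API. A pileup library is built once with the desired distribution and
# then sampled for every signal event of every sample.
import numpy as np

MU = 40                                   # mean pileup, 25 ns scenario above
rng = np.random.default_rng(12345)

# Hypothetical premixed library; in reality, a dataset of pre-digitized
# minimum-bias events stored once on disk.
pileup_library = [f"minbias_event_{i}" for i in range(1000)]

def overlay_pileup(signal_event):
    """Overlay a Poisson(MU)-distributed number of library events."""
    n_pu = rng.poisson(MU)
    picks = rng.choice(len(pileup_library), size=n_pu)
    return {"signal": signal_event,
            "pileup": [pileup_library[i] for i in picks]}

# The same library serves many signal samples without re-simulation:
mixed_events = [overlay_pileup(f"signal_event_{j}") for j in range(3)]
```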

Offline Coordinators

- There is a significant R&D effort
  - More details are available in the backup, in the slides presented at the last meeting
- Many of us in CMS are very interested in collaborating with the HEP Software Foundation and have contributed white papers

Offline Coordinators

Backup

Offline Coordinators

- Issues driving CMSSW
  - Technology evolution is a driver for change in CMSSW in the medium/long term
    - How to improve performance and scalability in throughput/cost for the future
    - How to make continued efficient use of existing resources, and open possibilities to access additional resources (e.g., HPC)
  - Taking first steps to set up a software environment for development
    - Building demonstrators for software components on new architectures (GPU, ARM, Xeon Phi, etc.); a generic example of such a timing demonstrator follows this slide
    - Moving standalone development into the CMS environment to ease the cost of entry for collaboration involvement
    - Incorporating infrastructure into CMSSW: profilers, software development tools, etc., for new architectures
    - Developing reconstruction algorithms for high pileup; tracking algorithms designed for many-core resources
  - This work is focused on the longer term (after Run 2), and is an area in which to build collaborations (e.g., via the upcoming software collaboration workshop)
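As a generic illustration of the kind of small demonstrator used to compare a component across architectures, a minimal timing harness; toy_kernel is a hypothetical stand-in for a real software component under test.

```python
# Generic timing harness for architecture demonstrators (illustrative
# only; toy_kernel stands in for a real software component under test).
import statistics
import time

def toy_kernel(n=200_000):
    # Stand-in floating-point workload.
    s = 0.0
    for i in range(1, n):
        s += 1.0 / (i * i)
    return s

def benchmark(fn, repeats=5):
    """Return (best, median) wall-clock times over several repeats."""
    times = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn()
        times.append(time.perf_counter() - t0)
    return min(times), statistics.median(times)

best_s, median_s = benchmark(toy_kernel)
print(f"best {best_s * 1e3:.1f} ms, median {median_s * 1e3:.1f} ms")
```

Run on each target platform, the same harness gives directly comparable numbers; profilers and development tools, as listed above, then explain the differences.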