The CMS CERN Analysis Facility (CAF) Peter Kreuzer (RWTH Aachen), Stephen Gowdy (CERN), Jose Afonso Sanches (UERJ, Brazil), on behalf of the CMS Offline & Computing project.

Presentation transcript:

CHEP'09 Conference, Prague (Czech Republic), March 2009

CAF in Prompt Data Flow
- The CAF is dedicated to high-priority workflows (not usable by every CMS user):
  - Alignment and calibration
  - Trigger/detector diagnostics, monitoring and performance analysis
  - Physics monitoring, analysis of express streams, fast-turnaround high-priority analysis
- Express reconstruction is run quickly on O(10%) of the data, followed by a full pass after 24 h
- AlCaReco: data produced for alignment and calibration
- [Diagram: the CAF in the prompt data flow, between the CMS detector (HLT + Storage Manager), the T0 store, the Tier-1 sites and the Tier-2s, with CMS Centre monitoring; quoted rates of 450 MB/s and 600 MB/s; CAF access is for high-priority users only]

Computing Infrastructure
- 700 cores, accessed via LSF; mostly 8-core worker nodes with 2 GB memory per core
- cmscaf LSF queue: 635 job slots
- Multiple queues with fair share and priorities; express queue for extreme-priority work; memory-intensive jobs use slots with 4 GB per job
- Interactive access: special batch queue allowing dedicated interactive sessions (up to 20 cores per user)
- 1.2 PB of storage disk: disk-only CASTOR pool of 216 nodes, with manual space management
- Distributed analysis: the standard CMS analysis tool is available for CAF usage
- User list managed by the stakeholders via a web interface

User storage
- Group share of 2 TB of AFS space
- Commissioning of a dedicated CASTOR user pool (50 TB)

Resource Ramp up 2008
- Ramp-up and commissioning in Spring 2008
- Cosmic data taking in Fall 2008
- Factor 1.8 additional CPU in 2009
- [Plot: available job slots per month in 2008]

CAF pool data transfers
- Peak data transfer rate: input rates above 2.5 GB/s (disk to disk) regularly sustained for 1 hour
- Average data transfer rate: the CAF can receive transfers from the T0 and from T1 sites; during the Fall 2008 run it reached an average input rate of 112 MB/s
- Max rate-in: 3.5 GB/s; plateau rate-out: ~2 GB/s

CAF Utilisation
- Free space on the CAF CASTOR pool:
  - Disk-only CASTOR pool for fast data access
  - Dynamic disk-space monitoring and alarming is needed
  - Data deletions are triggered by the CAF Data Managers, using the central CMS Data Management tools
- Running/pending jobs in the batch queue (job statistics from the cmscaf LSF queue during Fall 2008 data taking):
  - Maximum jobs running: 635
  - Average job-slot usage: 67%

CAF Jobs and Users 2008
- Reached more than 500k jobs/month during Fall 2008 data taking [plot: number of jobs per month in 2008]
- Today nearly 300 active CAF users [plot: number of users per month in 2008, reaching 268 users]
- Monitoring and controlling user activity is non-trivial

Alignment & Calibration results
- In Fall 2008 CMS ran for 4 weeks continuously and acquired ~300M cosmic events with the magnetic field at B = 3.8 T
- This was a good opportunity to test CAF workflows
- Example: mean of the residual distributions in the Tracker [plot; a value of 26 μm is quoted]

An Example of CAF Workflow (CMS Tracker Alignment)
- Workflow chain on the CAF: AlCaReco data + misaligned geometry → Primary Alignment Producer → condensed track data → global Millepede fit → aligned geometry constants
- Step 1: track-level analysis producing the track-by-track matrix elements, parallelised across many CPUs
- Step 2: global fit of the alignment parameters, with a dedicated Millepede server to support the memory-intensive fit
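The poster does not reproduce the Millepede algebra, so the following is only a minimal sketch of the standard Millepede least-squares formulation behind the two steps; all symbols (p, q_j, m_ij, f_ij, sigma_ij, and the per-track blocks) are introduced here for illustration and are not taken from the poster. Step 1 linearises the hit residuals track by track and accumulates the per-track matrix blocks; step 2 eliminates the track parameters and solves the reduced system for the global alignment parameters.

\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Sketch of the two-step Millepede fit (standard formulation; notation introduced here).
% p    : global alignment parameters (common to all tracks)
% q_j  : local parameters of track j
% m_ij, f_ij, sigma_ij : measured hit position, track-model prediction, hit resolution
% C^{gg}_j, C^{ll}_j, G_j, b^{g}_j, b^{l}_j : per-track blocks of the linearised normal equations
\begin{align}
  \chi^2(\mathbf{p},\mathbf{q})
    &= \sum_{\text{tracks }j}\;\sum_{\text{hits }i}
       \left(\frac{m_{ij}-f_{ij}(\mathbf{p},\mathbf{q}_j)}{\sigma_{ij}}\right)^{2}
       && \text{(Step 1: linearise per track, run in parallel)} \\
  \Bigl(\textstyle\sum_j\bigl[\mathbf{C}^{gg}_j-\mathbf{G}_j(\mathbf{C}^{ll}_j)^{-1}\mathbf{G}_j^{\top}\bigr]\Bigr)\,\Delta\mathbf{p}
    &= \textstyle\sum_j\bigl[\mathbf{b}^{g}_j-\mathbf{G}_j(\mathbf{C}^{ll}_j)^{-1}\mathbf{b}^{l}_j\bigr]
       && \text{(Step 2: reduced global fit)}
\end{align}
\end{document}

Because the reduced matrix couples all global alignment parameters, the step-2 fit is memory-intensive, which is why the poster mentions a dedicated Millepede server for it.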