The ATLAS Computing Model
Roger Jones, Lancaster University
CHEP06, Mumbai, 13 Feb 2006

Overview
- Brief summary of ATLAS facilities and their roles
- Growth of resources: CPU, disk, mass storage
- Network requirements: CERN, Tier 1, Tier 2
- Operational issues and hot topics

Computing Resources
- Computing Model fairly well evolved, documented in the C-TDR
  - Externally reviewed (C-TDR: http://doc.cern.ch//archive/electronic/cern/preprints/lhcc/public/)
- There are (and will remain for some time) many unknowns
  - Calibration and alignment strategy is still evolving
  - Physics data access patterns may be exercised from June; unlikely to know the real patterns until 2007/2008!
  - Still uncertainties on the event sizes and reconstruction time
- Lesson from the previous round of experiments at CERN (LEP): reviews in 1988 underestimated the computing requirements by an order of magnitude!

ATLAS Facilities
- Event Filter farm at CERN
  - Located near the experiment; assembles data into a stream to the Tier 0 center
- Tier 0 center at CERN
  - Raw data to mass storage at CERN and to Tier 1 centers
  - Swift production of Event Summary Data (ESD) and Analysis Object Data (AOD)
  - Ship ESD and AOD to Tier 1 centers; mass storage at CERN
- Tier 1 centers distributed worldwide (10 centers)
  - Re-reconstruction of raw data, producing new ESD and AOD
  - Scheduled group access to full ESD and AOD
- Tier 2 centers distributed worldwide (approximately 30 centers)
  - Monte Carlo simulation, producing ESD and AOD that are shipped to Tier 1 centers
  - On-demand user physics analysis
- CERN Analysis Facility
  - Analysis, with heightened access to ESD and RAW/calibration data on demand
- Tier 3 centers distributed worldwide
  - Physics analysis
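As a quick illustration of the slide above, here is a minimal Python sketch (not ATLAS software) that captures the tiered facilities and the roles just listed as a plain data structure; the dictionary layout and wording are illustrative only.

```python
# Illustrative sketch (not ATLAS software): the tiered facilities and the
# roles listed on this slide, captured as a simple Python data structure.

FACILITIES = {
    "Event Filter farm (CERN)": ["assemble data into a stream to the Tier 0 center"],
    "Tier 0 (CERN)": [
        "mass storage of raw data and shipping of raw data to Tier 1 centers",
        "prompt production of ESD and AOD, shipped to Tier 1 centers",
    ],
    "Tier 1 centers (~10)": [
        "re-reconstruction of raw data into new ESD and AOD",
        "scheduled group access to full ESD and AOD",
    ],
    "Tier 2 centers (~30)": [
        "Monte Carlo simulation (ESD/AOD shipped to Tier 1s)",
        "on-demand user physics analysis",
    ],
    "CERN Analysis Facility": ["analysis with heightened access to ESD and RAW/calibration data"],
    "Tier 3 centers": ["physics analysis"],
}

for site, roles in FACILITIES.items():
    print(site)
    for role in roles:
        print("  -", role)
```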

Processing
- Tier-0:
  - Prompt first-pass processing on the express/calibration physics stream
  - Hours later, process the full physics data stream with reasonable calibrations
  - Implies large data movement from T0 to T1s
- Tier-1:
  - Reprocess 1-2 months after arrival with better calibrations
  - Reprocess all resident RAW at year end with improved calibration and software
  - Implies large data movement from T1 to T1 and from T1 to T2

Analysis model
- Analysis model broken into two components:
  - Scheduled central production of augmented AOD, tuples and TAG collections from ESD
    - Derived files moved to other T1s and to T2s
  - Chaotic user analysis of augmented AOD streams, tuples, new selections etc., and individual user simulation and CPU-bound tasks matching the official MC production
    - Modest job traffic between T2s

Inputs to the ATLAS Computing Model (1)

Inputs to the ATLAS Computing Model (2)

Data Flow
- EF farm to T0: 320 MB/s continuous
- T0: raw data to mass storage at CERN
- T0 to Tier 1 centers: raw data
- T0 to Tier 1 centers: ESD, AOD, TAG (2 copies of ESD distributed worldwide)
- T1 to T2: some RAW/ESD, all AOD, all TAG; some group-derived datasets
- T2 to T1: simulated RAW, ESD, AOD, TAG
- T0 to T2: calibration processing?
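A back-of-envelope sketch of what the ESD replication above implies for an average Tier-1, assuming the 10 Tier-1 centers quoted on the facilities slide and the 2 worldwide ESD copies from this slide; the calculation is illustrative, not an official ATLAS figure.

```python
# Back-of-envelope sketch: with ~10 Tier-1s sharing 2 complete copies of the
# ESD, an average Tier-1 holds about one fifth of the full ESD sample.

N_TIER1 = 10      # Tier-1 centers (from the "ATLAS Facilities" slide)
ESD_COPIES = 2    # complete ESD copies distributed worldwide (this slide)

esd_fraction_per_t1 = ESD_COPIES / N_TIER1
print(f"Average ESD fraction resident at one Tier-1: {esd_fraction_per_t1:.0%}")  # 20%
```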

ATLAS partial & average T1 Data Flow (2008)
[Diagram: data flows between the Tier-0 CPU farm, the Tier-1 disk buffer, tape and disk storage, the other Tier-1s, and each associated Tier-2; per-product figures from the diagram below.]
- RAW: 1.6 GB/file, 0.02 Hz, 1.7K files/day, 32 MB/s, 2.7 TB/day
- ESD1: 0.5 GB/file, 0.02 Hz, 1.7K files/day, 10 MB/s, 0.8 TB/day
- ESD2: 0.5 GB/file, 0.02 Hz, 1.7K files/day, 10 MB/s, 0.8 TB/day
- AOD2: 10 MB/file, 0.2 Hz, 17K files/day, 2 MB/s, 0.16 TB/day
- AODm1: 500 MB/file, 0.04 Hz, 3.4K files/day, 20 MB/s, 1.6 TB/day
- AODm2: 500 MB/file; 0.34K files/day, 2 MB/s, 0.16 TB/day on the Tier-0 link; 3.1K files/day, 18 MB/s, 1.44 TB/day on the Tier-1/Tier-2 links; 3.4K files/day, 20 MB/s, 1.6 TB/day to disk
- RAW + ESD2 + AODm2 combined: 3.74K files/day, 44 MB/s, 3.66 TB/day
- RAW + ESD (2x) + AODm (10x): 1 Hz, 85K files/day, 720 MB/s
- Plus simulation and analysis data flow
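The per-product bandwidths and daily volumes in the diagram follow from file size times file rate; the short Python sketch below reproduces that arithmetic for a few of the quoted inputs (RAW, ESD2, AOD2). The input numbers are taken from this slide; small differences from the quoted values are rounding.

```python
# Sketch of the arithmetic behind the diagram:
#   bandwidth    = file size x file rate
#   daily volume = bandwidth x 86400 s
# Inputs are the per-product figures quoted on this slide (2008 estimates).

SECONDS_PER_DAY = 86400

flows = {
    # product: (file size in MB, file rate in Hz)
    "RAW":  (1600, 0.02),
    "ESD2": (500, 0.02),
    "AOD2": (10, 0.2),
}

for product, (size_mb, rate_hz) in flows.items():
    bandwidth_mb_s = size_mb * rate_hz
    files_per_day = rate_hz * SECONDS_PER_DAY
    volume_tb_day = bandwidth_mb_s * SECONDS_PER_DAY / 1e6
    print(f"{product:5s}: {bandwidth_mb_s:5.1f} MB/s, "
          f"{files_per_day / 1000:.1f}K files/day, {volume_tb_day:.2f} TB/day")
```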

Total ATLAS Requirements for 2008

Important points
- Discussion on disk vs tape storage at Tier-1s
  - Tape in this discussion means low-access secure storage
  - No disk buffers included except input to Tier 0
- Storage of simulation data from Tier 2s
  - Assumed to be at T1s; need partnerships to plan networking
  - Must have fail-over to other sites
- Commissioning
  - Requirement of flexibility in the early stages
- Simulation is a tunable parameter in T2 numbers!
- Heavy Ion running still under discussion

ATLAS T0 Resources

ATLAS T1 Resources

ATLAS T2 Resources

Required Network Bandwidth
- Caveats:
  - No safety factors
  - No headroom
  - Just sustained average numbers
  - Assumes no years/datasets are junked
  - Physics analysis pattern still under study…
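To make the caveat concrete, here is a hedged sketch of how a nominal sustained rate would be scaled up for provisioning; the safety and headroom factors are illustrative assumptions, not ATLAS numbers.

```python
# Hedged sketch: the slide quotes only sustained averages.  The factors below
# are illustrative assumptions (not ATLAS figures) showing how a nominal rate
# would be inflated when provisioning real links.

def provisioned_bandwidth(nominal_mb_s, safety_factor=1.5, peak_to_average=2.0):
    """Scale a sustained average rate by assumed safety and peak/average factors."""
    return nominal_mb_s * safety_factor * peak_to_average

# Example: the 320 MB/s continuous EF-to-T0 rate quoted on the Data Flow slide.
print(provisioned_bandwidth(320))  # 960 MB/s with the assumed factors
```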

T1 to/from CERN Bandwidth (I+O)
- Mainly outward data movement
- [Plot: the projected time profile of the nominal bandwidth required between CERN and the Tier-1 cloud.]

T1 to T1 Bandwidth (I+O)
- About half is scheduled analysis
- [Plot: the projected time profile of the nominal bandwidth required between a Tier-1 and the rest of the Tier-1 cloud.]

T1 to T2 Bandwidth (I+O)
- Dominated by AOD
- [Plot: the projected time profile of the nominal aggregate bandwidth expected for an average ATLAS Tier-1 and its three associated Tier-2s.]

Issues 1: T1 Reprocessing
- Reprocessing at Tier 1s is understood in principle
- In practice, requires efficient recall of data from archive and processing
  - Pinning, pre-staging, DAGs all required?
- Requires the different storage roles to be well understood
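A minimal sketch of the stage-pin-process pattern hinted at above; the function names and file names are hypothetical placeholders, not any real mass-storage or workload-management API.

```python
# Illustrative sketch only: the recall-pin-reprocess chain referred to on this
# slide.  prestage/pin/reprocess/unpin are hypothetical placeholders standing
# in for whatever storage and workload tools a Tier-1 actually uses.

def prestage(files):
    print(f"requesting {len(files)} RAW files back from the tape archive")

def pin(files):
    print("pinning the files on the disk buffer so they are not evicted")

def reprocess(files):
    print("running re-reconstruction with improved calibrations and software")

def unpin(files):
    print("releasing the pins once the new ESD/AOD have been written out")

def reprocessing_task(raw_files):
    # The dependency chain (a trivial DAG): stage -> pin -> process -> unpin.
    prestage(raw_files)
    pin(raw_files)
    reprocess(raw_files)
    unpin(raw_files)

# Hypothetical file names, for illustration only.
reprocessing_task(["raw_run_A_001.data", "raw_run_A_002.data"])
```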

Issues 2: Streaming
- This is *not* a theological issue: all discussions are about optimisation of data access
- TDR has 4 streams from the event filter: primary physics, calibration, express, problem events
  - The calibration stream has split at least once since!
- At AOD, envisage ~10 streams
- ESD streaming?
- Straw-man streaming schemes (trigger based) being agreed
  - Will explore the access improvements in large-scale exercises
  - Will also look at overlaps, bookkeeping etc.
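A minimal sketch of trigger-based streaming of the kind discussed above; the four stream names come from the slide, while the trigger flags and routing rules are invented for illustration.

```python
# Toy sketch of trigger-based streaming.  Stream names follow the slide
# (primary physics, calibration, express, problem events); the event flags
# and routing logic are invented for this example.

def assign_stream(event):
    """Route one event to an event-filter stream based on its trigger flags."""
    if event.get("corrupt"):
        return "problem"
    if event.get("express_trigger"):
        return "express"
    if event.get("calibration_trigger"):
        return "calibration"
    return "physics"

events = [
    {"express_trigger": True},
    {"calibration_trigger": True},
    {"corrupt": True},
    {},  # ordinary physics event
]
for evt in events:
    print(assign_stream(evt))
```

Overlaps (an event satisfying several triggers) and the associated bookkeeping are exactly the points the large-scale exercises mentioned above are meant to quantify.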

TAG Access
- TAG is a keyed list of variables per event
- Two roles:
  - Direct access to an event in a file via a pointer
  - Data collection definition function
- Two formats: file and database
- Now believe large queries require the full database
  - Restricts it to Tier 1s and large Tier 2s/CAF
  - Ordinary Tier 2s hold file-based TAG corresponding to locally held datasets
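A toy sketch of the two TAG roles described above, namely event-level selection variables plus a pointer back to the event in its file; the variable names, file names and schema are invented for illustration and are not the ATLAS TAG format.

```python
# Toy sketch (not the ATLAS TAG schema): a TAG as a keyed list of per-event
# selection variables plus a (file, entry) pointer back to the event.

tag_table = [
    {"run": 1, "event": 101, "n_jets": 4, "met": 55.0, "file": "AOD_0001.pool.root", "entry": 17},
    {"run": 1, "event": 102, "n_jets": 1, "met": 12.0, "file": "AOD_0001.pool.root", "entry": 18},
    {"run": 1, "event": 103, "n_jets": 3, "met": 80.0, "file": "AOD_0002.pool.root", "entry": 2},
]

def define_collection(tags, predicate):
    """Role 2: define a data collection; returns the (file, entry) pointers (role 1)."""
    return [(t["file"], t["entry"]) for t in tags if predicate(t)]

# Example query: events with at least 3 jets and missing ET above 50.
pointers = define_collection(tag_table, lambda t: t["n_jets"] >= 3 and t["met"] > 50.0)
print(pointers)  # [('AOD_0001.pool.root', 17), ('AOD_0002.pool.root', 2)]
```

A database-backed TAG would serve the same kind of query server-side, which is why large queries are expected to live at Tier 1s and large Tier 2s/CAF, while the file-based form suffices for locally held datasets.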

Conclusions
- Computing Model data flow understood for placing RAW, ESD and AOD at tiered centers
  - Still need to understand the data flow implications of physics analysis
- SC4/Computing System Commissioning in 2006 is vital
- Some issues will only be resolved with real data

Backup Slides

Heavy Ion Running