The ATLAS Computing & Analysis Model Roger Jones Lancaster University ATLAS UK 06 IPPP, 20/9/2006.

Slide 2: ATLAS Facilities (Steady State)
- Tier 0 Center at CERN
  - Raw data → mass storage at CERN and to Tier 1 centers
  - Swift production of Event Summary Data (ESD) and Analysis Object Data (AOD)
  - Ship ESD, AOD to Tier 1 centers → mass storage at CERN
- Tier 1 Centers distributed worldwide (10 centers)
  - Re-reconstruction of raw data, producing new ESD, AOD (~2 months after arrival and at year end)
  - Scheduled, group access to full ESD and AOD
- Tier 2 Centers distributed worldwide (approximately 30 centers)
  - On-demand user physics analysis of shared datasets
  - Monte Carlo simulation, producing ESD, AOD → Tier 1 centers
- CERN Analysis Facility
  - Heightened access to ESD and RAW/calibration data on demand
  - Calibration, detector optimisation, some analysis; vital in early stages
- Tier 3 Centers distributed worldwide
  - Physics analysis
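
The facility roles above can be summarised compactly. The sketch below is purely illustrative (the dictionary and its field names are invented for this sketch, not an ATLAS tool); it simply encodes which data products and activities the model assigns to each tier:

```python
# Illustrative summary of the steady-state facility roles described above.
# The structure and field names are made up for this sketch; not ATLAS software.
TIER_ROLES = {
    "Tier 0 (CERN)": {
        "stores": ["RAW (mass storage)"],
        "produces": ["ESD", "AOD (swift production)"],
        "ships": "ESD, AOD to Tier 1 centers",
    },
    "Tier 1 (10 centers)": {
        "stores": ["RAW", "ESD", "AOD"],
        "produces": ["re-reconstructed ESD, AOD (~2 months after arrival, year end)"],
        "access": "scheduled, group-level access to full ESD and AOD",
    },
    "Tier 2 (~30 centers)": {
        "stores": ["shared analysis datasets"],
        "produces": ["Monte Carlo ESD, AOD shipped to Tier 1"],
        "access": "on-demand user physics analysis",
    },
    "CERN Analysis Facility": {
        "stores": ["ESD", "RAW/calibration data on demand"],
        "access": "calibration, detector optimisation, some analysis",
    },
    "Tier 3": {
        "access": "physics analysis",
    },
}

for tier, roles in TIER_ROLES.items():
    print(f"{tier}: {roles}")
```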

Slide 3: New Straw Man Profile (year / energy / luminosity / physics beam time)
- 2007: … GeV, 5×10^30; protons: 26 days at 30% overall efficiency → 0.7×10^6 seconds
- 2008: … TeV, 0.5×10^33; protons: starting at the beginning of July → 4×10^6 seconds; ions: end of run, 5 days at 50% overall efficiency → 0.2×10^6 seconds
- 2009: … TeV, 1×10^33; protons: 50% better than 2008 → 6×10^6 seconds; ions: 20 days of beam at 50% efficiency → 10^6 seconds
- 2010: … TeV, 1×10^34; TDR targets: protons → 10^7 seconds; ions → 2×10^6 seconds
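
As a rough illustration of what these beam-time figures imply for data volumes, the back-of-the-envelope sketch below multiplies the proton live time by an assumed event-filter output rate and assumed per-event sizes. The 200 Hz rate and the RAW/ESD/AOD sizes are nominal planning numbers of that era and are assumptions of this sketch, not figures quoted in the talk:

```python
# Back-of-the-envelope data volumes from the straw-man beam-time profile above.
# ASSUMED inputs (not quoted in this talk): 200 Hz event-filter output rate,
# per-event sizes of roughly RAW 1.6 MB, ESD 0.5 MB, AOD 0.1 MB.
EVENT_RATE_HZ = 200
SIZE_MB = {"RAW": 1.6, "ESD": 0.5, "AOD": 0.1}

# Proton physics seconds per year, taken from the profile above
LIVE_SECONDS = {2007: 0.7e6, 2008: 4e6, 2009: 6e6, 2010: 1e7}

for year, live_s in sorted(LIVE_SECONDS.items()):
    n_events = EVENT_RATE_HZ * live_s
    volumes_pb = {fmt: n_events * mb / 1e9 for fmt, mb in SIZE_MB.items()}  # 1 PB = 1e9 MB
    summary = ", ".join(f"{fmt} ~{v:.2f} PB" for fmt, v in volumes_pb.items())
    print(f"{year}: {n_events:.1e} events -> {summary}")
```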

Slide 4: Evolution

Slide 5: Observations
- The T2s tend to have too high a CPU/disk ratio
  - Optimal use of the T2 resources delivers lots of simulation, with network and T1 disk consequences (although the higher CPU time per event will reduce this)
  - The T2 disk only allows about 60% of the required analysis
  - Other models would seriously increase network traffic
- The GridPP planned disk/CPU balance is right, of course
  - But not the current values
  - And plans are plans until funded!
- Simulation time is crippling; we need a real assessment of what is *needed*
- A bigger ESD means fewer ESD events can be accessed

Slide 6: Streaming
- This is an optimisation issue
  - All discussions are about optimisation of data access
- The TDR had 4 streams from the event filter
  - Primary physics, calibration, express, problem events
  - The calibration stream has split at least once since!
- Now envisage ~10 streams of RAW, ESD, AOD
  - Based on trigger bits (immutable)
  - Optimises access for detector optimisation
  - Straw-man streaming schemes to be tested in large-scale exercises
- Debates between inclusive and exclusive streams (access vs data management); inclusive may add ~10% to data volumes
- (Some of) All streams to all Tier 1s
  - RAW to archive blocked by stream and time for efficient reprocessing
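
The inclusive-versus-exclusive trade-off mentioned above can be illustrated with a toy example. In the sketch below (stream names, trigger bits and events are invented; this is not ATLAS software) an event carrying trigger bits from several streams is written once per matching stream in the inclusive scheme, so the volume overhead is roughly the fraction of extra event copies:

```python
# Toy comparison of exclusive vs inclusive streaming by trigger bits.
# Stream definitions and events are invented for illustration only.
from collections import defaultdict

STREAMS = {"egamma": {0, 1}, "muon": {2, 3}, "jettau": {4, 5}}

def exclusive_stream(fired_bits):
    """Each event goes to exactly one stream (first match wins)."""
    for name, bits in STREAMS.items():
        if fired_bits & bits:
            return name
    return "other"

def inclusive_streams(fired_bits):
    """Each event is copied to every stream whose trigger bits fired."""
    matched = [name for name, bits in STREAMS.items() if fired_bits & bits]
    return matched or ["other"]

# Toy events, each represented by its set of fired trigger bits
events = [{0}, {2}, {0, 2}, {4}, {1, 5}, {3}]

exclusive_counts = defaultdict(int)
inclusive_copies = 0
for fired in events:
    exclusive_counts[exclusive_stream(fired)] += 1
    inclusive_copies += len(inclusive_streams(fired))

overhead = inclusive_copies / len(events) - 1.0
print("exclusive stream counts:", dict(exclusive_counts))
print(f"inclusive streaming writes {overhead:.0%} extra event copies")
```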

Slide 7: TAG in Tiers
- File-based TAGs allow you to access events within files directly
- A full relational database TAG is for selections over large datasets
  - The full relational database is too demanding for most Tier 2s
- Expect Tier 2s to hold a file-based TAG for every local dataset
  - Supports event access and limited dataset definition
- Tier 1s will be expected to hold the full database TAG as well as the file formats (for distribution)
- Tentative plans for queued access to the full database version
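
To make the file-based versus relational distinction concrete, here is a minimal sketch of a relational event TAG: one row of event-level metadata per event, queried to get back references to the files (and entries within them) that hold the selected events. The schema and attribute names are hypothetical, not the real ATLAS TAG schema:

```python
# Minimal sketch of a relational event TAG. Schema and attribute names are
# hypothetical; the real ATLAS TAG schema is different.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE event_tag (
        run INTEGER, event INTEGER,
        n_muons INTEGER, missing_et REAL,
        aod_file TEXT, aod_entry INTEGER
    )
""")
conn.executemany(
    "INSERT INTO event_tag VALUES (?, ?, ?, ?, ?, ?)",
    [
        (1234, 1, 2, 55.0, "AOD.pool.root.1", 0),
        (1234, 2, 0, 12.5, "AOD.pool.root.1", 1),
        (1234, 3, 1, 80.2, "AOD.pool.root.2", 0),
    ],
)

# A physics selection expressed against TAG attributes; the result tells the
# analysis job which files, and which entries in them, to read.
selected = conn.execute(
    "SELECT aod_file, aod_entry FROM event_tag "
    "WHERE n_muons >= 1 AND missing_et > 50.0"
).fetchall()
print(selected)   # [('AOD.pool.root.1', 0), ('AOD.pool.root.2', 0)]
```

A file-based TAG plays the same role for the files of a single local dataset without needing a database server, which is why it is the natural choice at Tier 2s.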

Slide 8: Getting Going
- Every group should have a Grid User Interface
  - Ideally one on every desktop
  - This was presented about a year ago to HEP SYSMAN
  - But many groups do not seem to have one
  - Pressure needed from the grass roots?
- Users need:
  - A Grid certificate
  - To join the ATLAS Virtual Organisation
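
Once the certificate is installed and ATLAS VO membership is in place, day-to-day Grid use starts from a short-lived VOMS proxy. The sketch below assumes the standard VOMS client tools (voms-proxy-init, voms-proxy-info) are available on the Grid User Interface; exact option spellings can vary between client versions, so treat the flags as indicative:

```python
# Check for a valid ATLAS VOMS proxy before attempting any Grid work.
# Assumes the standard VOMS client tools are installed on the UI; exact
# option spellings may differ between client versions.
import subprocess

def proxy_seconds_left():
    """Return the remaining proxy lifetime in seconds, or 0 if no proxy is found."""
    try:
        result = subprocess.run(
            ["voms-proxy-info", "--timeleft"],
            capture_output=True, text=True, check=True,
        )
        return int(result.stdout.strip() or 0)
    except (subprocess.CalledProcessError, FileNotFoundError, ValueError):
        return 0

if proxy_seconds_left() < 3600:
    # A fresh proxy is normally obtained interactively, e.g.
    #   voms-proxy-init -voms atlas
    print("No valid ATLAS proxy (or less than 1 hour left); run voms-proxy-init")
else:
    print("Valid ATLAS VOMS proxy found")
```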

Slide 9: Analysis Resources
- In terms of non-local UK resources, we are already in the Grid era
  - UK resources are requested centrally via GridPP
  - These are dominated by production tasks for ATLAS
  - Some additional capacity for analysis and group activity
  - All of this is Grid based: no NFS disk, no local submission
- If UK groups have identified needs that are not in the ATLAS central planning, please justify them and send them to me
  - We need to know >3 months in advance
- Quota and fair-share technologies are being rolled out, but at present people must be responsible
  - This is not an infinite resource
  - Using large amounts of storage can block production

Slide 10: Computing System Commissioning
- This is a staged series of computing exercises
- Analysis is a vital component
  - Need people doing realistic analysis by the spring
  - If we don't find the bugs then, physics will suffer
  - Interesting events mixed with background
  - Data dispersed across sites

Slide 11: Conclusions
- The Computing Model data is well evolved for placing RAW, ESD and AOD at the tiered centers
  - Still need to understand all the implications of physics analysis
- Distributed Analysis and the Analysis Model are progressing well
  - But at present, data access is not fit for purpose (action underway)
  - A large ESD blows up the model
  - CPU/disk imbalances really distort the model
  - The large simulation time per event is crippling in the long term
- SC4/Computing System Commissioning in 2006 is vital
- Some issues will only be resolved with real data